Mutualism (economic theory) The origin of a natural right! Good God! who ever inquired into the origin of the rights of liberty, security, or equality? In "What Is Mutualism?", Clarence Lee Swartz says: It is, therefore, one of the purposes of Mutualists, not only to awaken in the people the appreciation of and desire for freedom, but also to arouse in them a determination to abolish the legal restrictions now placed upon non-invasive human activities and to institute, through purely voluntary associations, such measures as will liberate all of us from the exactions of privilege and the power of concentrated capital. Swartz also states that mutualism differs from anarcho-communism and other collectivist philosophies by its support of private property, arguing: "One of the tests of any reform movement with regard to personal liberty is this: Will the movement prohibit or abolish private property? If it does, it is an enemy of liberty. For one of the most important criteria of freedom is the right to private property in the products of one's labor. State Socialists, Communists, Syndicalists and Communist-Anarchists deny private property". However, Proudhon warned that a society with private property would lead to statist relations between people, arguing: The purchaser draws boundaries, fences himself in, and says, 'This is mine; each one by himself, each one for himself | https://en.wikipedia.org/wiki?curid=1799997 |
Mutualism (economic theory) ' Here, then, is a piece of land upon which, henceforth, no one has a right to step, save the proprietor and his friends; which can benefit nobody, save the proprietor and his servants. Let these multiply, and soon the people [...] will have nowhere to rest, no place of shelter, no ground to till. They will die of hunger at the proprietor's door, on the edge of that property which was their birth-right; and the proprietor, watching them die, will exclaim, 'So perish idlers and vagrants.' Unlike capitalist private-property supporters, Proudhon stressed equality. He thought that all workers should own property and have access to capital, stressing that in every cooperative "every worker employed in the association [must have] an undivided share in the property of the company". This distinction Proudhon made between different kinds of property has been articulated by some later anarchist and socialist theorists as one of the first distinctions between private property and personal property, with the latter having direct use-value to the individual possessing it. Mutualists believe that land should not be a commodity to be bought and sold, advocating for conditional titles to land based on occupancy and use norms. Mutualists disagree about whether an individual has a legitimate claim to ownership of land if he is not currently using it but has already incorporated his labor into it. All mutualists agree that everything which is produced by human labor and machines can be owned as personal property | https://en.wikipedia.org/wiki?curid=1799997 |
Mutualism (economic theory) Mutualists reject the idea of non-personal property and non-proviso Lockean sticky property. Any property that is obtained through the use of violence, bought with money that was gained through exploitation, or bought with money that was gained by violating usufruct property norms is considered illegitimate. According to mutualist theory, the main problem with capitalism is that it allows for non-personal property ownership. Under these conditions, a person can buy property that they do not physically use themselves with the only goal of owning said property in order to prevent others from using it, putting those others in an economically weak position, vulnerable enough to be controlled and exploited. Mutualists argue that this is historically how certain people were able to become capitalists. According to mutualism, a capitalist is someone who makes money by exercising power rather than by laboring. Over time, under these conditions there emerged a minority class of individuals who owned all the means of production as non-personal property (the capitalist class) and a large class of individuals with no access to the means of production (the laboring class). The laboring class does not have direct access to the means of production and therefore is forced to sell the only thing they can in order to survive, i.e. their labor power, giving up their freedom to someone who owns the means of production in exchange for a wage. The wage a worker receives is always less than the value of the goods and services he produces | https://en.wikipedia.org/wiki?curid=1799997 |
Mutualism (economic theory) If an employer paid a laborer the full value of the goods and services that he produced, the capitalist would at most break even. In reality, the capitalist pays his worker less, and after subtracting overhead, the remaining difference is exploited profit which the capitalist has gained without working. Mutualists point out that the money capitalists use to buy new means of production is the surplus value that they exploited from laborers. Mutualists also argue that capitalists maintain ownership of their non-personal properties because they support state violence through the funding of election campaigns. The state protects capitalist non-personal property ownership against direct occupation and use by the public in exchange for money and election support. Capitalists are then able to continue buying labor-power and the means of production as non-personal property and are able to continue extracting more surplus-value from more laborers, continuing the cycle. Mutualist theory states that by establishing usufruct property norms, exclusive non-personal ownership of the means of production by the capitalist class would be eliminated. The laboring classes would then have direct access to the means of production, enabling them to work and produce freely in worker-owned enterprises while retaining the full value of whatever they sell | https://en.wikipedia.org/wiki?curid=1799997 |
Mutualism (economic theory) Wage labor would be eliminated and it would be impossible to become a capitalist, because the widespread labor market would no longer exist and no one would be able to own the means of production in the form of non-personal property, two ingredients which are necessary for the exploitation of labor. This would leave the capitalist class to labor along with the rest of society. In Europe, a contemporary critic of Proudhon was the early libertarian communist Joseph Déjacque, who was able to serialise his book "L'Humanisphère, Utopie anarchique" ("The Humanisphere: Anarchic Utopia") in his periodical "Le Libertaire, Journal du Mouvement Social" ("Libertarian: Journal of Social Movement"), published in 27 issues from 9 June 1858 to 4 February 1861 while living in New York. Against Proudhon, he argued that "it is not the product of his or her labor that the worker has a right to, but to the satisfaction of his or her needs, whatever may be their nature". In his critique of Proudhon, Déjacque also coined the word libertarian and argued that Proudhon was merely a liberal, a moderate, suggesting that he instead become "frankly and completely an anarchist" by giving up all forms of authority and property. Since then, the word libertarian has been used to describe this consistent anarchism which rejected private and public hierarchies along with property in the products of labour as well as the means of production. Libertarianism is frequently used as a synonym for anarchism and libertarian socialism | https://en.wikipedia.org/wiki?curid=1799997 |
Mutualism (economic theory) One area of disagreement between anarcho-communists and mutualists stems from Proudhon's alleged advocacy of labour vouchers to compensate individuals for their labor as well as markets or artificial markets for goods and services. Peter Kropotkin, like other anarcho-communists, advocated the abolition of labor remuneration and questioned "how can this new form of wages, the labor note, be sanctioned by those who admit that houses, fields, mills are no longer private property, that they belong to the commune or the nation?". According to George Woodcock, Kropotkin believed that a wage system in any form, whether "administered by Banks of the People or by workers' associations through labor cheques", is a form of compulsion. Collectivist anarchist Mikhail Bakunin was an adamant critic of Proudhonian mutualism as well, stating: "How ridiculous are the ideas of the individualists of the Jean Jacques Rousseau school and of the Proudhonian mutualists who conceive society as the result of the free contract of individuals absolutely independent of one another and entering into mutual relations only because of the convention drawn up among men. As if these men had dropped from the skies, bringing with them speech, will, original thought, and as if they were alien to anything of the earth, that is, anything having social origin". Criticism from pro-capitalist market sectors has been common as well | https://en.wikipedia.org/wiki?curid=1799997 |
Mutualism (economic theory) Some critics object to the use of the term capitalism in reference to historical or actually existing economic arrangements, which they term mixed economies. They reserve the term for the abstract ideal or future possibility of a genuinely free market. This sort of free-market capitalism may closely follow contemporary mutualist Kevin Carson's free-market anti-capitalism in its practical details, except that Carson does not recognize an individual's right to have land that he has transformed through labor, or purchased, protected when he is not using it. Like other mutualists, Carson recognizes only occupancy and use as the standard for retaining legitimate control over something. As a result, Austrian School economist and Objectivist George Reisman charges that mutualism supports exploitation when it does not recognize a right of an individual to protect land that he has mixed his labor with if he happens to not be using it. Reisman sees the seizure of such land as the theft of the product of labor and has said: "Mutualism claims to oppose the exploitation of labor, i.e. the theft of any part of its product. But when it comes to labor that has been mixed with land, it turns a blind eye and comes out foursquare on the side of the exploiter" | https://en.wikipedia.org/wiki?curid=1799997 |
Mutualism (economic theory) This is due to the different conception of property rights between capitalism and mutualism, with the latter supporting free access to capital, the means of production and natural resources, arguing that permanent private ownership of land and capital leads to monopolization if there is not equal liberty of access, and that a society with capitalist private property inevitably leads to statist relations between people. For mutualists, occupancy and use is "the only legitimate standard for establishing ownership of land, regardless of how many times it has changed hands". According to Carson, "[a]n existing owner may transfer ownership by sale or gift; but the new owner may establish legitimate title to the land only by his own occupancy and use. A change in occupancy will amount to a change in ownership. Absentee landlord rent, and exclusion of homesteaders from vacant land by an absentee landlord, are both considered illegitimate by mutualists. The actual occupant is considered the owner of a tract of land, and any attempt to collect rent by a self-styled landlord is regarded as a violent invasion of the possessor's absolute right of property". | https://en.wikipedia.org/wiki?curid=1799997 |
Direct labour cost variance is the difference between the standard cost for actual production and the actual cost in production. There are two kinds of labour variances. "Labour rate variance" is the difference between the standard cost and the actual cost paid for the actual number of hours. "Labour efficiency variance" is the difference between the standard labour hours that should have been worked for the actual number of units produced and the actual number of hours worked, when the labour hours are valued at the standard rate: that is, the difference between the amount of labor time that should have been used and the labor time that was actually used, multiplied by the standard rate. For example, assume that the standard cost of direct labor per unit of product A is 2.5 hours x $14 = $35. Assume further that during the month of March the company recorded 4500 hours of direct labor time. The actual cost of this labor time was $64,800, or an average of $14.40 per hour. The company produced 2000 units of product A during the month. The labor efficiency variance is (4500 - 5000) x $14 = -$7000, i.e. $7000 favorable, where 5000 hours = 2.5 hours x 2000 units of output. This variance is favorable since the actual hours used are less than the standard hours allowed. This may be the result of efficient use of labor time due to automation or the use of improved production methods. (In the same example, the labour rate variance is ($14.40 - $14.00) x 4500 hours = $1,800, which is unfavorable because the actual rate exceeds the standard rate.) | https://en.wikipedia.org/wiki?curid=1803535 |
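A minimal Python sketch of the two variances for the worked example above; the sign convention (positive = unfavourable) and all variable names are illustrative choices, not a standard library API:

```python
# Direct labour cost variances for the product A example.
standard_rate = 14.00          # $ per direct labour hour
standard_hours_per_unit = 2.5  # hours allowed per unit of product A
units_produced = 2000
actual_hours = 4500
actual_cost = 64800.00         # total actual direct labour cost

actual_rate = actual_cost / actual_hours                            # $14.40 per hour
standard_hours_allowed = standard_hours_per_unit * units_produced   # 5000 hours

# Rate variance: (actual rate - standard rate) x actual hours
rate_variance = (actual_rate - standard_rate) * actual_hours        # +1800 (unfavourable)

# Efficiency variance: (actual hours - standard hours allowed) x standard rate
efficiency_variance = (actual_hours - standard_hours_allowed) * standard_rate  # -7000 (favourable)

for name, v in [("rate", rate_variance), ("efficiency", efficiency_variance)]:
    print(f"Labour {name} variance: ${v:,.2f} ({'unfavourable' if v > 0 else 'favourable'})")
```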
Flipover A flip-over is one of five types of poison pill, in which current shareholders of a targeted firm are given the option to purchase discounted stock after a potential takeover. Introduced in late 1984 and adopted by many firms, the strategy gave a common stock dividend in the form of rights to acquire the firm's common stock or preferred stock under market value. Following a takeover, the rights would "flip over" and allow the current shareholder to purchase the unfriendly competitor's shares at a discount. If this tool is exercised, the unfriendly competitor's holdings are diluted and its shares devalued. | https://en.wikipedia.org/wiki?curid=1805899 |
Killer bees (business) Killer bees are firms or individuals that are employed by a target company to fend off a takeover bid. These include investment bankers (primary), accountants, attorneys, tax specialists, etc. They aid by devising various anti-takeover strategies, thereby making the target company economically unattractive and acquisition more costly. The anti-takeover measures they devise are known collectively as 'shark repellents.' Examples of strategies implemented by these third parties are poison pills, people pills, white knights, white squires, the Pac-Man defense, lobster traps, sandbagging, whitemail, and greenmail. | https://en.wikipedia.org/wiki?curid=1808848 |
Lobster trap (finance) A lobster trap, in corporate finance, is an anti-takeover strategy used by target firms. In a lobster trap, the target firm issues a charter that prevents individuals with more than 10% ownership of convertible securities (including convertible bonds, convertible preferred stock, and warrants) from converting these securities into voting stock. The term derives from the fact that lobster traps are designed to catch large lobsters but allow small lobsters to escape. | https://en.wikipedia.org/wiki?curid=1808976 |
Instant payment notification (IPN) is a method for online retailers to automatically track purchases and other server-to-server communication in real time. This allows E-commerce systems to store payment transactions, order information and other sales data internally. IPN messages can represent payment successes or failures, order transaction status changes, accounting ledger information and many other things, depending on the payment gateway. The payments industry is an evolving market; technologies like IPN and instant payment are now used in the retail market and in the domestic sphere, but they are expected to evolve into the corporate, B2B segment and cross-border space. IPN is used by merchants to automate back-end functions related to end-user account creation, order tracking, and customer and merchant notifications for acquired services. When an E-commerce system requests a resource from a payment gateway, like a new invoice or bill for goods, the request must contain a URL endpoint representing a script or program to handle returning notifications. IPN messages are then sent to the retailer's E-commerce system by HTTP POST as the resource is updated by the gateway. The IPN handler usually performs standard actions like validating the message, updating inventory levels in the E-commerce system, notifying customers of successful or failed payments, etc. Depending on the retailer's business requirements and the level of sophistication of the E-commerce software, some or all of the IPN messages can be handled or ignored | https://en.wikipedia.org/wiki?curid=1809995 |
Instant payment notification Server-side scripting languages such as PHP and ASP that power most E-commerce systems are event driven and make no distinction between a user-generated event and a machine-generated event. Utilizing this fact, IPN messages facilitate the coordination of order state changes between the E-commerce system and the payment gateway handling the order. | https://en.wikipedia.org/wiki?curid=1809995 |
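As a rough illustration, an IPN endpoint might look like the following Python/Flask sketch. The gateway's field names ("payment_status", "order_id"), the "VERIFIED" echo-back check, the verification URL, and the update_order() helper are hypothetical stand-ins, not any particular gateway's actual API:

```python
# Minimal sketch of an IPN handler: validate, act, acknowledge.
from flask import Flask, request
import requests

app = Flask(__name__)
GATEWAY_VERIFY_URL = "https://gateway.example.com/ipn/verify"  # hypothetical endpoint

@app.route("/ipn-handler", methods=["POST"])
def ipn_handler():
    message = request.form.to_dict()

    # 1. Validate: echo the message back to the gateway to confirm it is genuine.
    resp = requests.post(GATEWAY_VERIFY_URL, data=message, timeout=10)
    if resp.text != "VERIFIED":            # assumed gateway response token
        return "", 400

    # 2. Act on the notification type (payment success/failure, status change, ...).
    if message.get("payment_status") == "Completed":
        update_order(message["order_id"], status="paid")     # illustrative helper
    elif message.get("payment_status") == "Failed":
        update_order(message["order_id"], status="failed")

    # 3. Acknowledge receipt so the gateway stops retrying the POST.
    return "", 200

def update_order(order_id, status):
    ...  # update inventory, notify the customer, write the ledger entry, etc.
```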
Common law of business balance The common law of business balance, usually expressed as "you get what you pay for", is the principle that one cannot pay a little and get a lot. In addition, paying a cheap price will not guarantee the buyer will receive a product of high quality. In other words, a low price of a good may indicate that the producer compromised quality. The statement is often displayed or published in a one-sentence version: "There is hardly anything in the world that someone cannot make a little worse and sell a little cheaper, and the people who consider price alone are that person's lawful prey." Or simply, "you get what you pay for." This statement is also found in this lengthier version: "There is hardly anything in the world that someone cannot make a little worse and sell a little cheaper, and the people who consider price alone are that person's lawful prey. It's unwise to pay too much, but it's worse to pay too little. When you pay too much, you lose a little money – that is all. When you pay too little, you sometimes lose everything, because the thing you bought was incapable of doing the thing it was bought to do. The common law of business balance prohibits paying a little and getting a lot – it can't be done. If you deal with the lowest bidder, it is well to add something for the risk you run, and if you do that you will have enough to pay for something better | https://en.wikipedia.org/wiki?curid=1811075 |
Common law of business balance " The statement has frequently been attributed to 19th-century art critic and social thinker John Ruskin, although there is little evidence to support Ruskin's authorship. In the "Yale Book of Quotations", editor Fred R. Shapiro states that this statement was "Attributed in "Chicago Daily Tribune", 29 Jan. 1928. This quotation, repeated in many commercial advertisements, has not been found anywhere in Ruskin's works. An earlier unattributed occurrence appeared in the "Washington Post", 1 Nov. 1914: "There is absolutely nothing in the world that some man cannot make a little worse and sell a little cheaper; and the people who consider price only are this man's lawful prey." Shapiro maintains that the statement does not appear anywhere in Ruskin's works, George Landow, a professor of English and art history at Brown University and a specialist on Victorian literature is also skeptical of Ruskin's authorship of this statement. In a posting of the "Ruskin Library News", a blog associated with the Ruskin Library (a major collection of Ruskiniana located at Lancaster University), an anonymous library staff member briefly mentions the statement and its widespread use, saying that, "This is one of many quotations ascribed to Ruskin, without there being any trace of them in his writings – although someone, somewhere, thought they sounded like Ruskin | https://en.wikipedia.org/wiki?curid=1811075 |
Common law of business balance " Ruth Hutchison, who maintains the website for the Ruskin Centre at Lancaster University, stated that, "we have been asked many times about this quote, or similar versions of it, and have never been able to identify it as being by Ruskin. We suspect that it has been wrongly attributed to him in the past and found its way into a book of quotations or something like that." In an issue of the journal, "Heat Transfer Engineering", Bell quotes the statement and mentions that it has been attributed to Ruskin. While Bell believes in the veracity of the content of the statement, he adds that the statement does not appear in Ruskin's published works. In the 20th century, this statement appeared—without any authorship attribution—in magazine advertisements, business catalogs, student publications, and, occasionally, in editorial columns. Also in the 20th century and continuing into the 21st century, newspaper advertisements, magazine advertisements, trade publications, student publications, business books, technical publications, business catalogs, and other publications often included the statement with attribution to Ruskin. For many years, various Baskin Robbins ice cream parlors prominently displayed a section of the statement in framed signs. ("There is hardly anything in the world that someone cannot make a little worse and sell a little cheaper, and the people who consider price alone are that man's lawful prey.") | https://en.wikipedia.org/wiki?curid=1811075 |
Iron law of prohibition The iron law of prohibition is a term coined by Richard Cowan in 1986 which posits that as law enforcement becomes more intense, the potency of prohibited substances increases. Cowan put it this way: "the harder the enforcement, the harder the drugs." This law is an application of the Alchian–Allen effect; Libertarian judge Jim Gray calls the law the "cardinal rule of prohibition", and notes that it is a powerful argument for the legalization of drugs. It is based on the premise that when drugs or alcohol are prohibited, they will be produced in black markets in more concentrated and powerful forms, because these more potent forms offer better efficiency in the business model—they take up less space in storage, less weight in transportation, and they sell for more money. Economist Mark Thornton writes that the iron law of prohibition undermines the argument in favor of prohibition, because the higher potency forms are less safe for the consumer. Thornton published research showing that the potency of marijuana increased in response to higher enforcement budgets. He later expanded this research in his dissertation to include other illegal drugs and alcohol during Prohibition in the United States (1920–1933). The basic approach is based on the Alchian and Allen Theorem. This argument says that a fixed cost (e.g. a transportation fee) added to the price of two varieties of the same product (e.g. a high-quality red apple and a low-quality red apple) results in relatively greater sales of the more expensive variety | https://en.wikipedia.org/wiki?curid=1811508 |
Iron law of prohibition When applied to rum-running, drug smuggling, and blockade running, the more potent products become the sole focus of the suppliers. Thornton notes that the greatest added cost in illegal sales is the avoidance of detection. Thornton says that if drugs are legalized, then consumers will begin to wean themselves off the higher potency forms, for instance with cocaine users buying coca leaves, and heroin users switching to opium. The popular shift from beer to wine to hard liquor during the US Prohibition era has a parallel in the narcotics trade in the late 20th century. Bulky opium was illegal, so refined heroin became more prevalent, albeit with significant risk from blood-borne disease because of injection by needle, and far greater risk of death from overdose. Marijuana was also found to be too bulky and troublesome to smuggle across borders, so smugglers turned to refined cocaine with its much higher potency and profit per pound. Cowan wrote in 1986 that crack cocaine was entirely a product of the prohibition of drugs. Clinical psychiatrist Michael J. Reznicek adds crystal meth to this list. In the 2010s the iron law has been invoked to explain why heroin is displaced by fentanyl and other, even stronger, synthetic opioids. With underage drinking by teens in the U.S., one of the impacts of laws against possession of alcohol by minors is that teens tend to prefer distilled spirits, because they are easier to conceal than beer | https://en.wikipedia.org/wiki?curid=1811508 |
Iron law of prohibition Consider the situation where there are two goods $x_1$ and $x_2$, which are the higher quality and lower quality goods, with $p_1 > p_2$ - the higher quality good has a higher price. Each of these goods has a compensated demand curve - a demand curve which holds utility constant - of the form $x_i = x_i(p_1, p_2, U)$. We will also assume that income is held constant, because income effects are indeterminate in forecasting changes in demand. Suppose that there is an associated cost $t$ per item that is added to each good due to transport costs. We want to know how the ratio of demand $x_1/x_2$ changes for the two goods based on $t$. Taking the derivative with respect to $t$ gives us $\frac{\partial}{\partial t}\left(\frac{x_1}{x_2}\right) = \frac{1}{x_2}\frac{\partial x_1}{\partial t} - \frac{x_1}{x_2^2}\frac{\partial x_2}{\partial t}$. From our assumptions, we have that the total price for each item is $p_i + t$. Therefore, we may compute $\frac{\partial x_i}{\partial t}$ to be $\frac{\partial x_i}{\partial t} = \frac{\partial x_i}{\partial p_1} + \frac{\partial x_i}{\partial p_2}$. Define $x_{ij} = \frac{\partial x_i}{\partial p_j}$. We may rewrite the last equation to be $\frac{\partial}{\partial t}\left(\frac{x_1}{x_2}\right) = \frac{1}{x_2}\left(x_{11} + x_{12}\right) - \frac{x_1}{x_2^2}\left(x_{21} + x_{22}\right)$. Finally, let's define the elasticity $\epsilon_{ij}$ to be $\epsilon_{ij} = \frac{p_j}{x_i}\,x_{ij}$. Now we may rewrite the change in the demand ratio with respect to the cost in its final form: $\frac{\partial}{\partial t}\left(\frac{x_1}{x_2}\right) = \frac{x_1}{x_2}\left[\frac{\epsilon_{11} - \epsilon_{21}}{p_1} + \frac{\epsilon_{12} - \epsilon_{22}}{p_2}\right]$. We want to show that $\frac{\partial}{\partial t}\left(\frac{x_1}{x_2}\right) > 0$, but seem to be stuck with elasticities that are indeterminate. However, Hicks' third law of demand gives us that $\sum_{j} \epsilon_{ij} = 0$ for each good $i$. To see why this is, suppose that we take a more general version of the compensated demand function with $n$ goods and compensated demand curve $x_i = x_i(p_1, \ldots, p_n, U)$ | https://en.wikipedia.org/wiki?curid=1811508 |
Iron law of prohibition For a homogeneous function $f$ of degree $k$, defined as $f(\lambda p_1, \ldots, \lambda p_n) = \lambda^k f(p_1, \ldots, p_n)$, Euler's homogeneous function theorem states that $\sum_{j=1}^{n} p_j \frac{\partial f}{\partial p_j} = k f$. Demand functions are homogeneous of degree 0 - if all prices and income are multiplied by $\lambda$, then the consumer's demand for goods remains the same - which implies that in the general $n$-good case $\sum_{j=1}^{n} p_j \frac{\partial x_i}{\partial p_j} = 0$. Dividing by the good stock $x_i$ then gives us $\sum_{j=1}^{n} \epsilon_{ij} = 0$, thereby establishing Hicks' third law of demand. And from this law, in the two-good case we may use $\epsilon_{12} = -\epsilon_{11}$ and $\epsilon_{21} = -\epsilon_{22}$ to show that $\frac{\partial}{\partial t}\left(\frac{x_1}{x_2}\right) = \frac{x_1}{x_2}\left(\epsilon_{11} + \epsilon_{22}\right)\left(\frac{1}{p_1} - \frac{1}{p_2}\right)$. Therefore, since own-price compensated elasticities are negative, we may conclude using the earlier identities that $\frac{\partial}{\partial t}\left(\frac{x_1}{x_2}\right) > 0 \iff p_1 > p_2$. But this last inequality was just our starting assumption! This implies that as the transport costs increase, the higher quality good will become more prevalent than the lower quality good. In the drug-specific context, as costs associated with drug enforcement increase, the more potent drug will become more prevalent in the illegal drug market. | https://en.wikipedia.org/wiki?curid=1811508 |
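The effect can be checked numerically. The sketch below assumes CES preferences with equal weights, for which the first-order condition pins the compensated demand ratio at $x_1/x_2 = (p_2/p_1)^{\sigma}$; the prices and the elasticity of substitution are made-up values for illustration, not estimates from Thornton's work:

```python
# Numeric illustration of the Alchian-Allen effect under assumed CES preferences.
# With equal weights, the demand ratio is x1/x2 = (p2/p1)^sigma; a fixed per-unit
# cost t added to both goods raises the relative share of the pricier good x1.

p1, p2 = 10.0, 5.0   # p1 > p2: high- and low-quality variants (assumed)
sigma = 2.0          # assumed elasticity of substitution

for t in [0.0, 2.0, 5.0, 10.0]:
    ratio = ((p2 + t) / (p1 + t)) ** sigma
    print(f"t = {t:5.1f}  ->  x1/x2 = {ratio:.3f}")
# Output rises monotonically (0.250, 0.340, 0.444, 0.562): the fixed cost lowers
# the *relative* price of the high-quality good, so its demand share grows.
```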
Jim Merkel (born 1957) is an American author and engineer, who moved from involvement in the military industry to advocating simple living. Since 1989, Merkel has dedicated himself to trying to reduce his personal impact on the environment and to encourage others to do the same. Initially trained as an electrical engineer, Merkel spent twelve years designing industrial and military systems. After witnessing the devastation following the 1989 Exxon Valdez oil spill, however, he concluded that global problems had become so urgent as to require immediate action. He consequently quit his job and began a new career as an environmental activist and spokesman. He claims to have lived on $5,000 a year (close to the global median income) for 16 years (ca. 1989 – 2005), later increasing to $10,000 per year. He founded the Alternative Transportation Task Force in San Luis Obispo, California and served briefly as an elected officer of the Sierra Club; he conducts approximately 60 workshops each year on sustainable living and "radical simplicity" in the United States, Canada, and Spain. In 1994 he received an Earthwatch Gaia Fellowship, allowing him to visit Kerala, India, and parts of the Himalayas to research sustainable living. In 1995, he founded the Global Living Project and continues to serve as its co-director. In April 2005, Dartmouth College appointed him its first Sustainability Director. He lives in Belfast, Maine with his partner, Susan, and his son, Walden | https://en.wikipedia.org/wiki?curid=1812162 |
Jim Merkel He is now working on "Saving Walden's World", a film about how having small families and small ecological footprints can save the planet. | https://en.wikipedia.org/wiki?curid=1812162 |
Discounted utility In economics, discounted utility is the utility (desirability) of some future event, such as consuming a certain amount of a good, as perceived at the present time as opposed to at the time of its occurrence. It is calculated as the present discounted value of future utility, and for people with time preference for sooner rather than later gratification, it is less than the future utility. The utility of an event "x" occurring at future time "t" under utility function "u", discounted back to the present (time 0) using discount factor $\beta$, is $\beta^t u(x)$. Since more distant events are less liked, $0 < \beta < 1$. Calculations made for events at various points in the future as well as at the present take the form $U = \sum_{t=0}^{T} \beta^t u(x_t)$, where $u(x_t)$ is the utility of some choice $x_t$ at time $t$ and "T" is the time of the most distant future satisfaction event. Here, since utility comparisons are being made across time when the utilities are combined in a single evaluation, the utility function is necessarily cardinal in nature. In a typical intertemporal consumption model, the above summation of utilities discounted from various future times would be maximized with respect to the amounts "x" consumed in each period, subject to an intertemporal budget constraint that says that the present value of current and future expenditures does not exceed the present value of financial resources available for spending. The interpretation of $\beta$ is not straightforward. Sometimes it is explained as the degree of a person's patience | https://en.wikipedia.org/wiki?curid=1812304 |
Discounted utility Given the interpretation of economic agents as rational, this exempts time-valuations from rationality judgments, so that someone who spends and borrows voraciously is just as rational as someone who spends and saves moderately, or as someone who hoards his wealth and never spends it: different people have different rates of time preference. Some formulations treat $\beta$ not as a constant, but as a function $\beta(t)$ that itself varies over time, for example in models which use the concept of hyperbolic discounting. This view is consistent with empirical observations that humans display inconsistent time preferences. For example, experiments by Tversky and Kahneman showed that the same people who would choose 1 candy bar now over 2 candy bars tomorrow, would choose 2 candy bars 101 days from now over 1 candy bar 100 days from now. (This is inconsistent because if the same question were posed 100 days from now, the person would ostensibly again choose 1 candy bar immediately instead of 2 candy bars the next day.) Despite arguments about how $\beta$ should be interpreted, the basic idea is that all other things equal, the agent prefers to have something now as opposed to later (hence $\beta < 1$). | https://en.wikipedia.org/wiki?curid=1812304 |
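The candy-bar reversal can be reproduced with a short sketch. The hyperbolic discount function D(t) = 1/(1 + kt) and the parameter values below are illustrative assumptions:

```python
# Preference reversal under hyperbolic vs. exponential discounting.
def hyperbolic(t, k=1.5):        # D(t) = 1 / (1 + k*t), t in days (k assumed)
    return 1.0 / (1.0 + k * t)

def exponential(t, beta=0.99):   # D(t) = beta**t (beta assumed)
    return beta ** t

for D in (hyperbolic, exponential):
    pick = lambda a, b: "1 bar" if a > b else "2 bars"
    near = pick(1 * D(0), 2 * D(1))       # 1 candy bar now vs 2 tomorrow
    far = pick(1 * D(100), 2 * D(101))    # same choice pushed out 100 days
    print(f"{D.__name__:11s}  now: {near}   at 100 days: {far}")

# hyperbolic:  "1 bar" now but "2 bars" at the far horizon -- a reversal.
# exponential: the comparison is 1 vs 2*beta at every horizon, so no reversal.
```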
Financial secretary is an administrative and executive government position within the governance of a state, corporation, private or public organization, small group or other body with financial assets. A financial secretary oversees policy concerning the flow of financial resources like money in and out of an organization. The officer sometimes determines policy concerning the purchase or sale of goods and services, collection of dues and employment. The officer implements policy with the cooperation of other executives. The term can also be the title of a cabinet member in a number of former and current British dependencies. This is the case in Hong Kong (see Financial Secretary (Hong Kong)), Jamaica, Montserrat, Saint Helena, etc. In the United Kingdom, the Financial Secretary to the Treasury is a junior minister position but the office holder attends the meetings of the cabinet. | https://en.wikipedia.org/wiki?curid=1813766 |
Policy-ineffectiveness proposition The policy-ineffectiveness proposition (PIP) is a new classical theory proposed in 1975 by Thomas J. Sargent and Neil Wallace based upon the theory of rational expectations, which posits that monetary policy cannot systematically manage the levels of output and employment in the economy. Prior to the work of Sargent and Wallace, macroeconomic models were largely based on the adaptive expectations assumption. Many economists found this unsatisfactory since it assumes that agents may repeatedly make systematic errors and can only revise their expectations in a backward-looking way. Under adaptive expectations, agents do not revise their expectations even if the government announces a policy that involves increasing money supply beyond its expected growth level. Revisions would only be made after the increase in the money supply has occurred, and even then agents would react only gradually. In each period that agents found their expectations of inflation to be wrong, a certain proportion of agents' forecasting error would be incorporated into their initial expectations. Therefore, equilibrium in the economy would only ever be approached, never reached. The government would be able to maintain employment above its natural level and easily manipulate the economy. This behavior by agents is contrary to that which is assumed by much of economics | https://en.wikipedia.org/wiki?curid=1814486 |
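The gradual, backward-looking revision process can be sketched as follows; the adjustment share and the inflation path are assumed numbers for illustration, not figures from Sargent and Wallace:

```python
# Adaptive expectations: pi_e[t+1] = pi_e[t] + lam * (pi[t] - pi_e[t]), 0 < lam < 1.
lam = 0.4                          # assumed share of each period's error incorporated
pi_e = 2.0                         # expected inflation (%), initially correct
actual = [2.0] * 3 + [6.0] * 9     # announced policy lifts inflation to 6% at t = 3

for t, pi in enumerate(actual):
    error = pi - pi_e
    print(f"t={t:2d}  actual={pi:.1f}%  expected={pi_e:.2f}%  error={error:+.2f}")
    pi_e += lam * error            # revision happens only after the error is observed

# Expectations approach 6% only asymptotically; under rational expectations the
# announcement itself would move expected inflation to 6% immediately.
```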
Policy-ineffectiveness proposition Economics has firm foundations in the assumption of rationality, so the systematic errors made by agents in macroeconomic theory were considered unsatisfactory by Sargent and Wallace. More importantly, this behavior seemed inconsistent with the stagflation of the 1970s, when high inflation coincided with high unemployment, and attempts by policymakers to actively manage the economy in a Keynesian manner were largely counterproductive. When applying rational expectations within a macroeconomic framework, Sargent and Wallace produced the policy-ineffectiveness proposition, according to which the government could not successfully intervene in the economy if attempting to manipulate output. If the government employed monetary expansion in order to increase output, agents would foresee the effects, and wage and price expectations would be revised upwards accordingly. Real wages would remain constant and therefore so would output; no money illusion occurs. Only stochastic shocks to the economy can cause deviations in employment from its natural level. Taken at face value, the theory appeared to be a major blow to a substantial proportion of macroeconomics, particularly Keynesian economics. However, criticisms of the theory were quick to follow its publication. The Sargent and Wallace model has been criticised by a wide range of economists. Some, like Milton Friedman, have questioned the validity of the rational expectations assumption | https://en.wikipedia.org/wiki?curid=1814486 |
Policy-ineffectiveness proposition Sanford Grossman and Joseph Stiglitz argued that even if agents had the cognitive ability to form rational expectations, they would be unable to profit from the resultant information since their actions would then reveal their information to others. Therefore, agents would not expend the effort or money required to become informed and government policy would remain effective. The New Keynesian economists Stanley Fischer (1977) and Edmund Phelps and John B. Taylor (1977) assumed that workers sign nominal wage contracts that last for more than one period, making wages "sticky". With this assumption the model shows government policy is fully effective since, although workers rationally expect the outcome of a change in policy, they are unable to respond to it as they are locked into expectations formed when they signed their wage contract. Not only is it possible for government policy to be used effectively, but its use is also desirable. The government is able to respond to stochastic shocks in the economy which agents are unable to react to, and so stabilise output and employment. The Barro–Gordon model showed how the ability of government to manipulate output would lead to inflationary bias. The government would be able to cheat agents and force unemployment below its natural level but would not wish to do so. The role of government would therefore be limited to output stabilisation | https://en.wikipedia.org/wiki?curid=1814486 |
Policy-ineffectiveness proposition Since it was possible to incorporate the rational expectations hypothesis into macroeconomic models whilst avoiding the stark conclusions that Sargent and Wallace reached, the policy-ineffectiveness proposition has had less of a lasting impact on macroeconomic reality than first may have been expected. In fact, Sargent himself admitted, in the preface to the 1987 edition of his textbook "Dynamic Macroeconomic Theory", that macroeconomic policy could have nontrivial effects, even under the rational expectations assumption. Despite the criticisms, Anatole Kaletsky has described Sargent and Wallace's proposition as a significant contributor to the displacement of Keynesianism from its role as the leading economic theory guiding the governments of advanced nations. While the policy-ineffectiveness proposition has been debated, its validity can be defended on methodological grounds. To do so, one has to realize its conditional character. For new classicals, countercyclical stimulation of aggregate demand through monetary policy instruments is neither possible nor beneficial "if the assumptions of the theory hold". "If" expectations are rational and "if" markets are characterized by completely flexible nominal quantities and "if" shocks are unforeseeable white noises, then macroeconomic systems can deviate from the equilibrium level only under contingencies (i.e. random shocks) | https://en.wikipedia.org/wiki?curid=1814486 |
Policy-ineffectiveness proposition However, no systematic countercyclical monetary policy can be built on these conditions, since even monetary policy makers cannot foresee these shocks hitting economies, so no planned response is possible. According to the common and traditional judgement, new classical macroeconomics brought the inefficiency of economic policy into the limelight. Moreover, such statements are always undermined by the fact that new classical assumptions are too far from life-world conditions to plausibly underlie the theorems. So, it has to be realized that the precise design of its underlying assumptions makes the policy-ineffectiveness proposition the most influential, though highly ignored and misunderstood, scientific development of new classical macroeconomics. New classicals did not assert simply that activist economic policy (in a narrow sense: monetary policy) is ineffective. Robert Lucas and his followers drew the attention to "the conditions under which this inefficiency probably emerges". | https://en.wikipedia.org/wiki?curid=1814486 |
Samuel Hollander Samuel Hollander, (born April 6, 1937) is a British/Canadian/Israeli economist. Born in London, he received a B.Sc. in economics from the London School of Economics in 1959. In 1961 he received an AM and in 1963 a Ph.D. from Princeton University. He started at the University of Toronto, becoming an Assistant Professor (1963–1966), Associate Professor (1966–1970), Professor (1970–1984), University Professor (1984–1998), and upon his retirement in 1998, University Professor Emeritus. Since 2000 he has been a professor at Ben-Gurion University of the Negev. He became a citizen of Canada in 1967 and of Israel in 2000. Hollander is one of the most influential and controversial living authors on the history of economic thought, especially on classical economics. His monumental studies of Adam Smith, David Ricardo, Thomas Malthus and John Stuart Mill have provoked some sharp reactions. Especially his "new view" of David Ricardo as a direct predecessor of later neo-classical economists such as Marshall and Walras has triggered heated debates. Apart from many critics he has also enjoyed the support of a considerable number of prominent fellow economists. His work was highly recommended by the late Lord Robbins, who wrote: "... he really surpasses all previous historians of economic thought, especially on Ricardo" (Robbins, 1998, p. 143). | https://en.wikipedia.org/wiki?curid=1815696 |
Top-ups In business, a top-up is a variation of a company’s stock repurchase program for common shareholders. Although such a buyback can reduce a shareholder's voting interest, the shareholder may subsequently increase its holdings; this subsequent purchase is called a top-up. For example, if company A holds 20% of the voting power in company B, and company B reduces this power to 10%, company A may increase its voting power to 15% within 6 months. In the event of a hostile takeover attempt, a target company can use a top-up to gain time for enhancing takeover defenses. | https://en.wikipedia.org/wiki?curid=1817668 |
Location model A location (spatial) model refers to any monopolistic competition model in economics that demonstrates consumer preference for particular brands of goods and their locations. Examples of location models include Hotelling’s Location Model, Salop’s Circle Model, and hybrid variations. In traditional economic models, consumers display preference given the constraints of a product characteristic space. Consumers perceive certain brands with common characteristics to be close substitutes, and differentiate these products from their unique characteristics. For example, there are many brands of chocolate with nuts and others without them. Hence, the presence or absence of nuts is one dimension of the chocolate’s product characteristic space. On the other hand, consumers in location models display preference for both the utility gained from a particular brand’s characteristics as well as its geographic location; these two factors form an enhanced “product characteristic space”. Consumers are now willing to sacrifice pleasure from products for a closer geographic location, and vice versa. For example, consumers realize high costs for products that are located far from their spatial point (e.g. transportation costs, time, etc.) and also for products that deviate from their ideal features. Firms have greater market power when they satisfy the consumer’s demand for products at closer distance or preferred products. In 1929, Hotelling developed a location model that demonstrates the relationship between location and pricing behavior of firms | https://en.wikipedia.org/wiki?curid=1820751 |
Location model He represented this notion through a line of fixed length. Assuming all consumers are identical (except for location) and consumers are evenly dispersed along the line, both the firms and consumers respond to changes in demand and the economic environment. In Hotelling’s Location Model, firms do not exercise variations in product characteristics; firms compete and price their products in only one dimension, geographic location. Therefore, traditional usage of this model should be for consumers who perceive products to be perfect substitutes or as a foundation for modern location models. Assume that the line in Hotelling’s location model is actually a street with fixed length. All consumers are identical, except they are uniformly located in two equal quadrants $a$ and $b$, which are divided at the center by point $m$. Consumers face a transportation/time cost for reaching a firm, denoted by $t$; they have no preferences for the firms. There are two firms in this scenario, Firm x and Firm y; each one is located at a different end of the street, is fixed in location and sells an identical product. Given the assumptions of the Hotelling model, consumers will choose either firm as long as the combined price $p$ and transportation cost $t$ of the product is less than the competitive firm’s. For example, if both firms sell the product at the same price $p$, consumers in quadrants $a$ and $b$ will pick the firm closest to them | https://en.wikipedia.org/wiki?curid=1820751 |
Location model The price realized by the consumer is $p^* = p + t$, where $p^*$ is the price of the product including the cost of transportation. As long as $t$ for Firm x is greater than that for Firm y, consumers will travel to Firm y to purchase their product; this minimizes $p^*$. Only the consumers who live at point $m$, the halfway point between the two firms, will be indifferent between the two product locations. Assume that the line in Hotelling’s location model is actually a street with fixed length. All consumers are identical, except they are uniformly located in four quadrants $a$, $b$, $c$, and $d$; the halfway point between the endpoints is point $m$. Consumers face an equal transportation/time cost for reaching a firm, denoted by $t$; they have no preferences for the firms. There are two firms in this scenario, Firm x and Firm y; each one is located at a different end of the street, is able to relocate at no cost, and sells an identical product. In this example, Firm x and Firm y will maximize their profit by increasing their consumer pool. Firm x will move slightly toward Firm y, in order to gain Firm y’s customers. In response, Firm y will move slightly toward Firm x to re-establish its loss, and increase the pool from its competitor. The cycle repeats until both firms are at point $m$, the halfway point of the street, where each firm has the same number of customers. This result is known as Hotelling's law | https://en.wikipedia.org/wiki?curid=1820751 |
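A small simulation of this relocation dynamic; the unit street length, the step size, and the stopping rule (meeting in the middle once adjacent) are modeling shortcuts assumed for illustration:

```python
# Two firms edge toward each other on a street of unit length until neither
# can gain customers by a further unilateral move -- Hotelling's law.
x, y = 0.0, 1.0   # Firm x and Firm y start at opposite endpoints
step = 0.05       # assumed size of each small relocation

for _ in range(100):
    if y - x > 2 * step:
        x, y = x + step, y - step   # each firm moves toward its rival
    else:
        x = y = (x + y) / 2.0       # adjacent: settle together at the midpoint
        break

print(f"Firm x at {x:.2f}, Firm y at {y:.2f}")   # both end up at 0.50
```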
Location model If only Firm x can relocate without costs and Firm y is fixed, Firm x will move to the side of Firm y where the consumer pool is maximized. Consequently, the profits of Firm x significantly increase, while Firm y incurs a significant loss. One of the most famous variations of Hotelling’s location model is Salop’s circle model. Similar to the previous spatial representations, the circle model examines consumer preference with regard to geographic location. However, Salop introduces two significant factors: 1) firms are located around a circle with no end-points, and 2) it allows the consumer to choose a second, heterogeneous good. Assume that the consumers are equidistant from one another around the circle. The model will occur for one time period, in which only one product is purchased. The consumer will have a choice of purchasing variations of Product A (a differentiated product) or Product B (an outside good; undifferentiated product). There are two firms also located equidistant around the circle. Each firm offers a variation of Product A, and an outside firm offers a good, Product B. In this example, the consumer wants to purchase their ideal variation of Product A. They are willing to purchase the product, given that it is within the constraint of their utility, transportation/distance costs, and price | https://en.wikipedia.org/wiki?curid=1820751 |
Location model The utility $u$ for a particular product at distance $|l - x|$ is represented in the following equation: $u = v - k\,|l - x|$, where $v$ is the utility from the superior brand, $k$ denotes the rate at which an inferior brand lowers the utility from the superior brand, $l$ is the location of the superior brand, and $x$ is the location of the consumer. The distance between the brand and the consumer is thereby given by $|l - x|$. The consumer’s primary goal is to maximize consumer surplus, i.e. purchase the product that best satisfies any combination of price and quality. Although the consumer may receive more pleasure from their superior brand, the inferior brand may maximize the surplus $s$, which is given by $s = u - p$, where the difference is between the utility of a product at location $l$ and the price $p$. Now suppose the consumer also has the option to purchase an outside, undifferentiated Product B. The consumer surplus gained from Product B is denoted by $s_B$. Therefore, for a given amount of money, the consumer will purchase the superior variation of Product A over Product B as long as $s_A > s_B$, where the consumer surplus from the superior variation of Product A is greater than the consumer surplus gained from Product B. Alternatively, the consumer only purchases the superior variation of Product A as long as $s_A - s_B > 0$, where the difference between the surplus of the superior variation of Product A and the surplus gained from Product B is positive. | https://en.wikipedia.org/wiki?curid=1820751 |
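Putting the pieces together, the consumer's choice rule in the circle model can be sketched as below; the utility, mismatch penalty, prices, and locations are all assumed numbers for illustration:

```python
# Salop circle: the consumer buys the variant of Product A with the highest
# surplus s = v - k*d - p, unless the outside good B offers a higher surplus.
v, k = 10.0, 4.0    # assumed ideal-brand utility and mismatch penalty rate
s_b = 3.0           # assumed consumer surplus from the outside Product B

def circle_distance(l, x):
    """Shortest arc between brand location l and consumer x on a unit circle."""
    d = abs(l - x) % 1.0
    return min(d, 1.0 - d)

def surplus(l, x, p):
    return v - k * circle_distance(l, x) - p

consumer = 0.10
firms = {"A1": (0.0, 5.0), "A2": (0.5, 4.0)}    # (location, price) pairs

name, (loc, price) = max(firms.items(),
                         key=lambda f: surplus(f[1][0], consumer, f[1][1]))
s_a = surplus(loc, consumer, price)              # best Product A surplus: 4.6
choice = name if s_a > s_b else "outside good B" # buys A1, since 4.6 > 3.0
print(f"best variant {name} gives surplus {s_a:.2f}; consumer buys {choice}")
```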
Cointegration is a statistical property of a collection of time series variables. First, all of the series must be integrated of order "d" (see Order of integration). Next, if a linear combination of this collection is integrated of order less than "d", then the collection is said to be co-integrated. Formally, if ("X","Y","Z") are each integrated of order "d", and there exist coefficients "a","b","c" such that "aX" + "bY" + "cZ" is integrated of order less than "d", then "X", "Y", and "Z" are cointegrated. Cointegration has become an important property in contemporary time series analysis. Time series often have trends—either deterministic or stochastic. In an influential paper, Charles Nelson and Charles Plosser (1982) provided statistical evidence that many US macroeconomic time series (like GNP, wages, employment, etc.) have stochastic trends. If two or more series are individually integrated (in the time series sense) but some linear combination of them has a lower order of integration, then the series are said to be cointegrated. A common example is where the individual series are first-order integrated (I(1)) but some (cointegrating) vector of coefficients exists to form a stationary linear combination of them. For instance, a stock market index and the price of its associated futures contract move through time, each roughly following a random walk | https://en.wikipedia.org/wiki?curid=1822603 |
Cointegration Testing the hypothesis that there is a statistically significant connection between the futures price and the spot price could now be done by testing for the existence of a cointegrated combination of the two series. The first to introduce and analyse the concept of spurious—or nonsense—regression was Udny Yule in 1926. Before the 1980s, many economists used linear regressions on non-stationary time series data, which Nobel laureate Clive Granger and Paul Newbold showed to be a dangerous approach that could produce spurious correlation, since standard detrending techniques can result in data that are still non-stationary. Granger's 1987 paper with Robert Engle formalized the cointegrating vector approach, and coined the term. For integrated processes, Granger and Newbold showed that de-trending does not work to eliminate the problem of spurious correlation, and that the superior alternative is to check for co-integration. Two series with trends can be co-integrated only if there is a genuine relationship between the two. Thus the standard current methodology for time series regressions is to check all time series involved for integration. If there are integrated series on both sides of the regression relationship, then it is possible for regressions to give misleading results. The possible presence of cointegration must be taken into account when choosing a technique to test hypotheses concerning the relationship between two variables having unit roots (i.e. integrated of at least order one) | https://en.wikipedia.org/wiki?curid=1822603 |
Cointegration The usual procedure for testing hypotheses concerning the relationship between non-stationary variables was to run ordinary least squares (OLS) regressions on data which had been differenced. This method is biased if the non-stationary variables are cointegrated. For example, regressing the consumption series for any country (e.g. Fiji) against the GNP for a randomly selected dissimilar country (e.g. Afghanistan) might give a high R-squared relationship (suggesting high explanatory power on Fiji's consumption from Afghanistan's GNP). This is called spurious regression: two integrated series which are not directly causally related may nonetheless show a significant correlation; this phenomenon is called spurious correlation. The three main methods for testing for cointegration are the Engle–Granger two-step method, the Johansen test, and the Phillips–Ouliaris test. If $y_t$ and $x_t$ are non-stationary and of order of integration d = 1, then a linear combination of them must be stationary for some value of $\beta$ and $u_t$. In other words: $y_t - \beta x_t = u_t$, where $u_t$ is stationary. If we knew $\beta$, we could just test it for stationarity with something like a Dickey–Fuller test or Phillips–Perron test and be done. But because we don't know $\beta$, we must estimate this first, generally by using ordinary least squares, and then run our stationarity test on the estimated $u_t$ series, often denoted $\hat{u}_t$. A second regression is then run on the first differenced variables from the first regression, and the lagged residuals $\hat{u}_{t-1}$ are included as a regressor | https://en.wikipedia.org/wiki?curid=1822603 |
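The two-step procedure can be sketched with simulated data; sm.OLS and adfuller are real statsmodels APIs, but the series here are artificial, and (as the comments note) the nominal Dickey–Fuller p-value is not strictly valid when applied to estimated residuals:

```python
# Engle-Granger two-step cointegration check on simulated I(1) data.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
n = 500
x = np.cumsum(rng.normal(size=n))     # x_t: a random walk, I(1)
y = 0.8 * x + rng.normal(size=n)      # y_t = beta*x_t + stationary noise

# Step 1: estimate beta by OLS, since the cointegrating vector is unknown.
ols = sm.OLS(y, sm.add_constant(x)).fit()
u_hat = ols.resid                     # estimated residual series

# Step 2: test the residuals for stationarity. NB: because u_hat is estimated,
# the proper critical values are the Engle-Granger / Phillips-Ouliaris ones,
# not the standard Dickey-Fuller table that adfuller reports.
stat, pvalue = adfuller(u_hat)[:2]
print(f"ADF statistic on residuals: {stat:.2f} (nominal p = {pvalue:.4f})")
# statsmodels.tsa.stattools.coint() wraps both steps with appropriate critical values.
```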
Cointegration The Johansen test is a test for cointegration that allows for more than one cointegrating relationship, unlike the Engle–Granger method, but this test relies on asymptotic properties, i.e. large samples. If the sample size is too small then the results will not be reliable and one should use Auto Regressive Distributed Lags (ARDL). Peter C. B. Phillips and Sam Ouliaris (1990) show that residual-based unit root tests applied to the estimated cointegrating residuals do not have the usual Dickey–Fuller distributions under the null hypothesis of no-cointegration. Because of the spurious regression phenomenon under the null hypothesis, these tests have asymptotic distributions that depend on (1) the number of deterministic trend terms and (2) the number of variables with which co-integration is being tested. These distributions are known as Phillips–Ouliaris distributions and critical values have been tabulated. In finite samples, a superior alternative to the use of these asymptotic critical values is to generate critical values from simulations. In practice, cointegration is often used for two series, but it is more generally applicable and can be used for variables integrated of higher order (to detect correlated accelerations or other second-difference effects). Multicointegration extends the cointegration technique beyond two variables, and occasionally to variables integrated at different orders | https://en.wikipedia.org/wiki?curid=1822603 |
Cointegration Tests for cointegration assume that the cointegrating vector is constant during the period of study. In reality, it is possible that the long-run relationship between the underlying variables changes (shifts in the cointegrating vector can occur). The reason for this might be technological progress, economic crises, changes in people's preferences and behaviour, policy or regime alteration, and organizational or institutional developments. This is especially likely to be the case if the sample period is long. To take this issue into account, tests have been introduced for cointegration with one unknown structural break, and tests for cointegration with two unknown breaks are also available. | https://en.wikipedia.org/wiki?curid=1822603 |
Solow residual The Solow residual is a number describing empirical productivity growth in an economy from year to year and decade to decade. Robert Solow, the Nobel Memorial Prize in Economic Sciences-winning economist, defined rising productivity as rising output with constant capital and labor input. It is a "residual" because it is the part of growth that is not accounted for by measures of capital accumulation or increased labor input. Increased physical throughput – i.e. environmental resources – is specifically excluded from the calculation; thus some portion of the residual can be ascribed to increased physical throughput. The example used is the substitution within capital of aluminium fixtures for steel, during which the measured inputs do not alter. This contrasts with almost every other economic circumstance, in which there are many other variables. The Solow residual is procyclical, and measures of it are now called the rate of growth of multifactor productivity or total factor productivity, though Solow (1957) did not use these terms. In the 1950s, many economists undertook comparative studies of economic growth following World War II reconstruction. Some said that the path to long-term growth was achieved through investment in industry and infrastructure and in moving further and further into capital intensive automated production. Although there was always a concern about diminishing returns to this approach because of equipment depreciation, it was a widespread view of the correct industrial policy to adopt | https://en.wikipedia.org/wiki?curid=1827716 |
Solow residual Many economists pointed to the Soviet command economy as a model of high growth through tireless re-investment of output in further industrial construction. However, some economists took a different view: they said that greater capital concentrations would yield diminishing returns once the marginal return to capital had equalized with that of labour, and that the apparently rapid growth of economies with high savings rates would be a short-term phenomenon. This analysis suggested that improved labour productivity or total factor technology was the long-run determinant of national growth, and that only under-capitalized countries could grow per-capita income substantially by investing in infrastructure; some of these undercapitalized countries were still recovering from the war and were expected to rapidly develop in this way on a path of convergence with developed nations. The Solow residual is defined as per-capita economic growth above the rate of per-capita capital stock growth, so its detection indicates that there must be some contribution to output other than advances in industrializing the economy. The fact that the measured growth in the standard of living, also known as the ratio of output to labour input, could not be explained entirely by the growth in the capital/labour ratio was a significant finding, and pointed to innovation rather than capital accumulation as a potential path to growth | https://en.wikipedia.org/wiki?curid=1827716 |
Solow residual The 'Solow growth model' is not intended to explain or derive the empirical residual, but rather to demonstrate how it will affect the economy in the long run when imposed on an aggregate model of the macroeconomy exogenously. This model was really a tool for demonstrating the impact of "technology" growth as against "industrial" growth, rather than an attempt to understand where either type of growth was coming from. The Solow residual is primarily an observation to be explained, rather than the predicted outcome of a theoretical analysis. It is a question rather than an answer, and the following equations should not obscure that fact. Solow assumed a very basic model of annual aggregate output over a year ("t"). He said that the output quantity would be governed by the amount of capital (the infrastructure), the amount of labour (the number of people in the workforce), and the productivity of that labour. He thought that the productivity of labour was the factor driving long-run GDP increases | https://en.wikipedia.org/wiki?curid=1827716 |
Solow residual An example economic model of this form is given below: $Y(t) = A(t)\,K(t)^{\alpha}L(t)^{1-\alpha}$, where $Y(t)$ is aggregate output, $K(t)$ is the capital stock, $L(t)$ is labour input, and $A(t)$ is the productivity of labour. To measure or predict the change in output within this model, the equation above is differentiated in time ("t"), giving a formula in partial derivatives of the relationships labour-to-output, capital-to-output, and productivity-to-output: $\dot{Y} = \alpha\frac{Y}{K}\dot{K} + (1-\alpha)\frac{Y}{L}\dot{L} + \frac{Y}{A}\dot{A}$. The growth factor in the economy is a proportion of the output last year, which is given (assuming small changes year-on-year) by dividing both sides of this equation by the output, "Y": $\frac{\dot{Y}}{Y} = \alpha\frac{\dot{K}}{K} + (1-\alpha)\frac{\dot{L}}{L} + \frac{\dot{A}}{A}$. The first two terms on the right-hand side of this equation are the proportional changes in capital and labour year-on-year, and the left-hand side is the proportional output change. The remaining term on the right, giving the effect of productivity improvements on GDP, is defined as the Solow residual: $SR(t) = \frac{\dot{A}}{A} = \frac{\dot{Y}}{Y} - \alpha\frac{\dot{K}}{K} - (1-\alpha)\frac{\dot{L}}{L}$. The residual, "SR"("t"), is that part of growth not explicable by measurable changes in the amount of capital, "K", and the number of workers, "L". If output, capital, and labour all double every twenty years the residual will be zero, but in general it is higher than this: output goes up faster than growth in the input factors. The residual varies between periods and countries, but is almost always positive in peace-time capitalist countries. Some estimates of the post-war U.S. residual credited the country with a 3% productivity increase per annum until the early 1970s, when productivity growth appeared to stagnate | https://en.wikipedia.org/wiki?curid=1827716 |
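To make the growth accounting concrete, here is a toy calculation of the residual from assumed growth rates; the figures and the capital share alpha are invented for illustration.

```python
alpha = 0.3   # capital's share of income (assumed)
g_Y = 0.04    # output growth, 4% per year (assumed)
g_K = 0.05    # capital stock growth, 5% per year (assumed)
g_L = 0.01    # labour force growth, 1% per year (assumed)

# Solow residual: output growth not explained by input growth.
sr = g_Y - alpha * g_K - (1 - alpha) * g_L
print(f"Solow residual: {sr:.2%} per year")  # 1.80% per year
```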
Solow residual The growth-accounting relation above gives a very simplified picture of the economy in a single year; what growth theory econometrics does is to look at a sequence of years to find a statistically significant pattern in the changes of the variables, and perhaps identify the existence and value of the Solow residual. The most basic technique for doing this is to assume constant rates of change in all the variables (obscured by noise), and regress on the data to find the best estimate of these rates in the historical data available (using an ordinary least squares regression). Economists always do this by first taking the natural log of their equation (to separate out the variables on the right-hand side of the equation); logging both sides of this production function produces a simple linear regression with an error term, $\varepsilon$: $\ln Y = \ln A + \alpha \ln K + (1-\alpha)\ln L + \varepsilon$. A constant growth factor implies exponential growth in the above variables, so differentiating gives a linear relationship between the growth factors which can be deduced in a simple regression. In a regression analysis, the equation one would estimate is: $y = C + \alpha k + (1-\alpha)\,\ell + \varepsilon$, where "y" is (log) output, ln(Y); "k" is (log) capital, ln(K); "ℓ" is (log) labour, ln(L); and the constant "C" can be interpreted as log("A"), whose growth rate is the rate of technological change. Given the form of the regression equation, we can interpret the coefficients $\alpha$ and $(1-\alpha)$ as elasticities. For calculation of the actual quantity/level of technology $A$ we simply refer back to our equation in levels | https://en.wikipedia.org/wiki?curid=1827716 |
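A minimal sketch of this estimation, assuming Python's statsmodels library, is given below; the data are simulated under the Cobb–Douglas form above, with a time trend standing in for the growth of log("A"). All parameter values are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T, alpha, g = 200, 0.3, 0.015

t = np.arange(T, dtype=float)
K = 100 * np.exp(np.cumsum(rng.normal(0.05, 0.02, T)))  # noisy capital growth
L = 50 * np.exp(np.cumsum(rng.normal(0.01, 0.01, T)))   # noisy labour growth
A = np.exp(g * t)                                       # technology trend
Y = A * K**alpha * L**(1 - alpha) * np.exp(rng.normal(0, 0.01, T))

# Regress log Y on a constant, a time trend (capturing the growth of A),
# and log K and log L; the slopes recover alpha and 1 - alpha.
X = sm.add_constant(np.column_stack([t, np.log(K), np.log(L)]))
fit = sm.OLS(np.log(Y), X).fit()
print(fit.params)  # approximately [0, g, alpha, 1 - alpha]
```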
Solow residual Knowing quantities of output $Y$, capital $K$, labour $L$ and estimates of $\alpha$, $(1-\alpha)$ and $C$, we can solve for $A$ as: $A = \frac{Y}{K^{\alpha}L^{1-\alpha}}$. Mankiw, Romer, and Weil augmented the Solow–Swan model with a human capital term. The explicit inclusion of this term in the model transfers the effect of changes in human capital from the Solow residual to capital accumulation. As a consequence, the Solow residual is smaller in the augmented Solow model: $Y = K^{\alpha}H^{\beta}(A L)^{1-\alpha-\beta}$, where $H$ is the stock of human capital. The associated regression to estimate this model is the log-linear analogue of the one above: $y = C + \alpha k + \beta h + (1-\alpha-\beta)\,\ell + \varepsilon$, where "h" is (log) human capital, ln(H). Breton estimates the Solow residual for the human capital-augmented version of the Solow–Swan model over the 20th century. He finds that from 1910 to 2000, in 42 of the world's leading economies, the residual in the standard model increased at an average rate of 1%/year, while the residual in the augmented model increased at 0.3%/year. The Solow residual measures total factor productivity, but the productivity variable is normally attached to the labor variable in the Solow–Swan model to make technological growth labor-augmenting. This type of productivity growth is required mathematically to keep the shares of national income accruing to the factors of production constant over time. These shares appear to have been stable historically in both developing and developed nations. However, Thomas Piketty's famous study of inequality in 2014, using a version of the Solow model, argued that a stable, relatively low profit share of national income was largely a twentieth-century phenomenon | https://en.wikipedia.org/wiki?curid=1827716 |
Solow residual Rapidly expanding countries (catching up after a crisis or trade liberalization) tend to have a rapid turnover in technologies as they accumulate capital. It has been suggested that this will tend to make it harder to gain experience with the available technologies, and that a Solow residual of zero in these cases actually indicates rising labour productivity. In this theory, the fact that "A" (labour output productivity) is not falling as new skills become essential indicates that the labour force is capable of adapting, and is likely to have its productivity growth underestimated by the residual. This idea is linked to "learning-by-doing". | https://en.wikipedia.org/wiki?curid=1827716 |
An Austrian Perspective on the History of Economic Thought is a two-volume non-fiction work written by Murray N. Rothbard. Rothbard said he originally intended to write a "standard Adam Smith-to-the-present moderately sized book", but expanded the scope of the project to include economists who preceded Smith and to comprise a multi-volume series. Rothbard completed only the first two volumes, "Economic Thought Before Adam Smith" and "Classical Economics". | https://en.wikipedia.org/wiki?curid=1835603 |
Slutsky equation The Slutsky equation (or Slutsky identity) in economics, named after Eugen Slutsky, relates changes in Marshallian (uncompensated) demand to changes in Hicksian (compensated) demand, so called because the latter compensates the consumer so as to maintain a fixed level of utility. There are two parts of the Slutsky equation, namely the substitution effect and the income effect. In general, the substitution effect is negative. Slutsky designed this formula to explore a consumer's response as the price changes: when the price increases, the budget set moves inward, which causes the quantity demanded to decrease; when the price decreases, the budget set moves outward, which leads to an increase in the quantity demanded. The equation demonstrates that the change in the demand for a good, caused by a price change, is the result of two effects: a substitution effect and an income effect. The Slutsky equation decomposes the change in demand for good "i" in response to a change in the price of good "j": $\frac{\partial x_i^m(\mathbf{p}, w)}{\partial p_j} = \frac{\partial x_i^h(\mathbf{p}, u)}{\partial p_j} - \frac{\partial x_i^m(\mathbf{p}, w)}{\partial w}\, x_j^m(\mathbf{p}, w)$, where $x^h(\mathbf{p}, u)$ is the Hicksian demand and $x^m(\mathbf{p}, w)$ is the Marshallian demand, at the vector of price levels $\mathbf{p}$, wealth level (or, alternatively, income level) $w$, and fixed utility level $u$ given by maximizing utility at the original price and income, formally given by the indirect utility function $v(\mathbf{p}, w)$. The right-hand side of the equation is equal to the change in demand for good "i" holding utility fixed at "u" minus the quantity of good "j" demanded, multiplied by the change in demand for good "i" when wealth changes | https://en.wikipedia.org/wiki?curid=1838280 |
Slutsky equation The first term on the right-hand side represents the substitution effect, and the second term represents the income effect. Note that since utility is not observable, the substitution effect is not directly observable, but it can be calculated by reference to the other two terms in the Slutsky equation, which are observable. This process is sometimes known as the Hicks decomposition of a demand change. The equation can be rewritten in terms of elasticity: $\epsilon_{p,ij} = \epsilon_{p,ij}^{h} - \epsilon_{w,i}\, b_j$, where $\epsilon_{p,ij}$ is the (uncompensated) price elasticity, $\epsilon_{p,ij}^{h}$ is the compensated price elasticity, $\epsilon_{w,i}$ the income elasticity of good "i", and $b_j$ the budget share of good "j". The same equation can be rewritten in matrix form to allow multiple price changes at once: $\mathbf{D_p}\,\mathbf{x}(\mathbf{p}, w) = \mathbf{D_p}\,\mathbf{h}(\mathbf{p}, u) - \mathbf{D_w}\,\mathbf{x}(\mathbf{p}, w)\,\mathbf{x}(\mathbf{p}, w)^{\top}$, where $\mathbf{D_p}$ is the derivative operator with respect to price and $\mathbf{D_w}$ is the derivative operator with respect to wealth. The matrix $\mathbf{D_p}\,\mathbf{h}(\mathbf{p}, u)$ is known as the Slutsky matrix, and given sufficient smoothness conditions on the utility function, it is symmetric, negative semidefinite, and the Hessian of the expenditure function. While there are several ways to derive the Slutsky equation, the following method is likely the simplest. Begin by noting the identity $x_i^h(\mathbf{p}, u) = x_i^m(\mathbf{p}, e(\mathbf{p}, u))$, where $e(\mathbf{p}, u)$ is the expenditure function, and "u" is the utility obtained by maximizing utility given $\mathbf{p}$ and "w". Totally differentiating with respect to $p_j$ yields the following: $\frac{\partial x_i^h}{\partial p_j} = \frac{\partial x_i^m}{\partial p_j} + \frac{\partial x_i^m}{\partial w}\frac{\partial e}{\partial p_j}$. Making use of the fact that $\frac{\partial e(\mathbf{p}, u)}{\partial p_j} = x_j^h(\mathbf{p}, u)$ by Shephard's lemma, and that at the optimum $x_j^h(\mathbf{p}, u) = x_j^m(\mathbf{p}, w)$ where $w = e(\mathbf{p}, u)$, one can substitute and rewrite the derivation above as the Slutsky equation | https://en.wikipedia.org/wiki?curid=1838280 |
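The decomposition can also be checked numerically. The sketch below assumes Cobb–Douglas utility u = x1^a * x2^(1-a), for which the Marshallian demand, expenditure function and (via Shephard's lemma) Hicksian demand have closed forms; the parameter values are arbitrary.

```python
a, p1_0, p2, w, h = 0.4, 2.0, 3.0, 100.0, 1e-6

def marshallian_x1(p1, w):
    return a * w / p1                       # x1^m(p, w) for Cobb-Douglas

def expenditure(p1, u):
    return u * (p1 / a)**a * (p2 / (1 - a))**(1 - a)   # e(p, u)

def hicksian_x1(p1, u):
    return a * expenditure(p1, u) / p1      # x1^h(p, u), by Shephard's lemma

# Demands and utility at the original prices and wealth.
x1 = marshallian_x1(p1_0, w)
x2 = (1 - a) * w / p2
u = x1**a * x2**(1 - a)

# Central finite-difference derivatives.
dxm_dp1 = (marshallian_x1(p1_0 + h, w) - marshallian_x1(p1_0 - h, w)) / (2 * h)
dxh_dp1 = (hicksian_x1(p1_0 + h, u) - hicksian_x1(p1_0 - h, u)) / (2 * h)
dxm_dw = (marshallian_x1(p1_0, w + h) - marshallian_x1(p1_0, w - h)) / (2 * h)

print(dxm_dp1)                # total own-price effect: -10.0
print(dxh_dp1 - dxm_dw * x1)  # substitution minus income term: -10.0
```

Both sides of the Slutsky equation agree: the total effect (-10) splits into a substitution effect (-6) and an income term (-4).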
Slutsky equation A Giffen good is a product that is in greater demand when the price increases; such goods are special cases of inferior goods. In the extreme case of income inferiority, the size of the income effect overpowers the size of the substitution effect, leading to a positive overall change in demand in response to an increase in the price. Slutsky's decomposition of the change in demand into a pure substitution effect and an income effect explains why the law of demand doesn't hold for Giffen goods. | https://en.wikipedia.org/wiki?curid=1838280 |
Shared services is the provision of a service by one part of an organization or group, where that service had previously been found in more than one part of the organization or group. Thus the funding and resourcing of the service is shared and the providing department effectively becomes an internal service provider. The key here is the idea of 'sharing' within an organization or group. This sharing needs to fundamentally include shared accountability for results by the unit from which the work is migrated to the provider. The provider, on the other hand, needs to ensure that the agreed results are delivered based on defined measures (KPIs, cost, quality etc.). Shared services is similar to collaboration that might take place between different organizations such as a Hospital Trust or a Police Force. For example, adjacent Trusts might decide to collaborate by merging their HR or IT functions. There are two arguments for sharing services: the 'less of a common resource' argument and the 'efficiency through industrialization' argument. The former is 'obvious': if you have fewer managers, IT systems, buildings etc. – if you use less of some resource – it will reduce costs. The second argument is 'efficiency through industrialization'. This argument assumes that efficiencies follow from specialization and standardization – resulting in the creation of 'front' and 'back' offices. The typical method is to simplify, standardize and then centralize, using an IT 'solution' as the means | https://en.wikipedia.org/wiki?curid=1838287 |
Shared services is different from the model of outsourcing, which is where an external third party is paid to provide a service that was previously internal to the buying organization, typically leading to redundancies and re-organization. There is an ongoing debate about the advantages of shared services over outsourcing. It is sometimes assumed that a joint venture between a government department and a commercial organization is an example of shared services; however, the joint venture involves the creation of a separate legal commercial entity (jointly owned), which provides profit to its shareholders. Traditionally, the development of a shared-service organization (SSO) or shared-service centre (SSC) within an organization is an attempt to reduce costs (often attempted through economies of scale) and to standardize processes (through centralization). A global service centre benchmark study carried out by the Shared Services & Outsourcing Network (SSON) and the Hackett Group, which surveyed more than 250 companies, found that only about a third of all participants were able to generate cost savings of 20% or greater from their SSOs. At NASA, the 2006 switch to a shared services model is realizing nearly $20 million of savings annually. Further, by the end of 2015, the NASA Shared Services Center is expected to save the organization a total of over $200 million, according to NASA's Director of Service Delivery | https://en.wikipedia.org/wiki?curid=1838287 |
Shared services A large-scale cultural and process transformation can be a key component of a move to shared services and may include redundancies and changes of work practices. It is claimed that transformation often results in a better quality of work life for employees, although there are few case studies to back this up. Shared services are more than just the centralization or consolidation of similar activities in one location: shared services can mean running these service activities like a business and delivering services to internal customers at a cost, quality, and timeliness that is competitive with alternatives. A shared service can take a variety of different commercial structures. It is sometimes argued that there are three basic location variations for a shared service (broadly, on-shore, near-shore and off-shore): the aim is not just to take advantage of wage arbitrage but to appreciate the talents of particular economies in delivering specific service offerings. The difficulty with this argument is that near-shore and off-shore are normally associated with the outsourcing model and are difficult to reconcile with the notion of an internally shared service as distinct from an externally purchased service. Clearly, the use of off-shore facilities by a government department is not an example of shared services. In establishing and running a shared service, benchmarking and measurement are considered by some as a necessity. Benchmarking is the comparison of the service provision, usually against the best in class | https://en.wikipedia.org/wiki?curid=1838287 |
Shared services The measurement occurs by using agreed key performance indicators (KPIs). Although the number of KPIs chosen differs greatly, it is generally accepted that fewer than 10 carefully chosen KPIs will deliver the best results. Organizations do attempt to define benchmarks for processes and business operations. Benchmarking can be used to achieve different goals, including: (1) to drive performance improvements, using benchmarks as a means for setting performance targets that are met either through incremental performance improvements or transformational change – strategic, with a focus on a long-term horizon, or tactical, with a focus on the short and medium term; and (2) to focus an organization on becoming world class, with processes that deliver the highest levels of performance, better than those of its peer group. The private sector has been moving towards shared services since the beginning of the 1980s. Large organizations such as the BBC, BP, Bristol Myers Squibb, Ford, GE, HP, Pfizer, Rolls-Royce, ArcelorMittal, and SAP are operating them with great success. According to the English Institute of Chartered Accountants, more than 30% of U.S. Fortune 500 companies have implemented a shared-service center, and are reporting cost savings in their general accounting functions of up to 46% | https://en.wikipedia.org/wiki?curid=1838287 |
Shared services The conventional accounting practice used to generate these figures is disputed, however, by the management thinker Professor John Seddon, who argues that the measurement known as 'unit cost' tells you nothing about overall costs. Overall costs include 'failure demand', defined as demand caused by a failure to do something or to do something right for the customer. The public sector has taken note of the benefits derived in the private sector and continues to strive for best practice. The United States and Australia, among others, have had shared services in government since the late 1990s. However, the failures of these projects are increasingly being reported by the press and exposed by opposition politicians. The UK government, under a central drive to efficiency following from the Gershon Review, is working to an overall plan for realizing the benefits of shared services. The Cabinet Office has established a team specifically tasked with the role of accelerating the take-up and developing the strategy for all government departments to converge and consolidate. The savings potential of this transformation in the UK public sector was initially estimated by the Cabinet Office at £1.4bn per annum (20% of the estimated cost of HR and finance functions). The National Audit Office (United Kingdom), in its November 2007 report, pointed out that this £1.4bn figure lacked a clear baseline of costs and contained several uncertainties, such as the initial expenditure required and the time frame for the savings | https://en.wikipedia.org/wiki?curid=1838287 |
Shared services There are reports of UK government shared service centres failing to realise savings, such as the Department for Transport's project, described as 'stupendous incompetence' by Parliament's Public Accounts Committee. The Northern Ireland Civil Service (NICS) has implemented shared services for a number of departments and functions. For example, IT Assist (the ICT shared service centre) provides common infrastructure and desktop services to NICS staff in the office, at home or when mobile working. The government of Canada instituted Shared Services Canada on August 4, 2011, with the objective of consolidating its data centres, networks and email systems. This follows a tendency to centralize IT services that has been followed by the provinces of British Columbia, Québec, and Ontario, as well as the federal government of the United States of America and some states such as Texas. PricewaterhouseCoopers recommended integrating government data centres in a report ordered by Public Works and Government Services Canada, revealed in December 2011. In the Republic of Ireland, the health service nationally has been reorganized from a set of regional health boards to a unified national structure, the Health Service Executive. Within this structure there will be a National Shared Services Organisation, based on the model developed at the former Eastern Health Shared Services, where procurement, HR, finance and ICT services were provided to health agencies in the eastern region of Ireland on a business-to-business basis | https://en.wikipedia.org/wiki?curid=1838287 |
Shared services Organizations that have centralized their IT functions have now begun to take a close look at the technology services that their IT departments provide to internal customers, evaluating where it makes sense to provide specific technology components as a shared service. E-mail and scanning operations were obvious early candidates; many organizations with document-intensive operations are deploying scanning centers as a shared service. Many large organizations, in both the public and private sectors, are now considering deploying enterprise content management (ECM) technology as a shared service. | https://en.wikipedia.org/wiki?curid=1838287 |
Operating budget The operating budget contains the expenditure and revenue generated by the daily business functions of the company. The operating budget concentrates on operating expenditures, including the cost of goods sold (COGS), and on revenue or income. COGS is the cost of the direct labor and direct materials that are tied to production. The operating budget also depicts the overhead and administration costs tied directly to manufacturing the goods and providing services. However, the operating budget will not contain capital expenditures or long-term loans. | https://en.wikipedia.org/wiki?curid=1845959 |
Wealth effect The wealth effect is the change in spending that accompanies a change in perceived wealth. Usually the wealth effect is positive: spending changes in the same direction as perceived wealth. Changes in a consumer's wealth cause changes in the amounts and distribution of his or her consumption. People typically spend more overall when one of two things is true: when people "actually are" richer, objectively, or when people "perceive themselves" to be richer—for example, the assessed value of their home increases, or a stock they own goes up in price. Demand for some goods (called inferior goods) decreases with increasing wealth. For example, consider consumption of cheap fast food versus steak. As someone becomes wealthier, their demand for cheap fast food is likely to decrease, and their demand for more expensive steak may increase. Consumption may be tied to relative wealth. Particularly when supply is highly inelastic, or when the seller is a monopoly, one's ability to purchase a good may be highly related to one's relative wealth in the economy. Consider for example the cost of real estate in a city with high average wealth (for example New York or London), in comparison to a city with a low average wealth | https://en.wikipedia.org/wiki?curid=1854161 |
Wealth effect Supply is fairly inelastic, so if a helicopter drop (or gold rush) were to suddenly create large amounts of wealth in the low-wealth city, those who did not receive this new wealth would rapidly find themselves crowded out of such markets, and materially worse off in terms of their ability to consume/purchase real estate (despite having participated in a weak Pareto improvement). In such situations, one cannot dismiss the relative effect of wealth on demand and supply, and cannot assume that these are static (see also General equilibrium). However, according to David Backus the wealth effect is not observable in economic data, at least in regard to increases or decreases in home or stock equity. For example, while the stock market boom in the late 1990s (caused by the dot-com bubble) increased the wealth of Americans, it did not produce a significant change in consumption, and after the crash, consumption did not decrease. Economist Dean Baker disagrees, saying that the "housing wealth effect" is well known and a standard part of economic theory and modeling, and that economists expect households to consume based on their wealth. He approvingly cites research by Carroll and Zhou estimating that households increase their annual consumption by 6 cents for every additional dollar of home equity. In macroeconomics, a rise in real wealth increases consumption, shifting the IS curve out to the right, thus pushing up interest rates and increasing aggregate demand. A decrease in real wealth does the opposite. | https://en.wikipedia.org/wiki?curid=1854161 |
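As a back-of-the-envelope illustration of that estimate, the household figures below are invented; only the 6-cents-per-dollar propensity comes from the Carroll and Zhou result cited above.

```python
mpc_housing = 0.06          # consumption per extra dollar of home equity
delta_home_equity = 50_000  # assumed rise in a household's home equity ($)

delta_annual_consumption = mpc_housing * delta_home_equity
print(f"Implied rise in annual consumption: ${delta_annual_consumption:,.0f}")
# -> Implied rise in annual consumption: $3,000
```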
Drawdown (economics) The drawdown is the measure of the decline from a historical peak in some variable (typically the cumulative profit or total open equity of a financial trading strategy). Somewhat more formally, if $X = (X(t), t \ge 0)$ is a stochastic process with $X(0) = 0$, the drawdown at time $T$, denoted $D(T)$, is defined as: $D(T) = \max\{0,\; \max_{t \in (0,T)} X(t) - X(T)\}$. The average drawdown (AvDD) up to time $T$ is the time average of drawdowns that have occurred up to time $T$: $\operatorname{AvDD}(T) = \frac{1}{T}\int_0^T D(t)\,dt$. The maximum drawdown (MDD) up to time $T$ is the maximum of the drawdown over the history of the variable. More formally, the MDD is defined as: $\operatorname{MDD}(T) = \max_{\tau \in (0,T)}\left[\max_{t \in (0,\tau)} X(t) - X(\tau)\right]$. There are two main dimensions of a drawdown: its magnitude (how far the value falls from the peak) and its duration (how long the decline and recovery last). In finance, the use of the maximum drawdown as an indicator of risk is particularly popular in the world of commodity trading advisors through the widespread use of three performance measures: the Calmar ratio, the Sterling ratio and the Burke ratio. These measures can be considered as a modification of the Sharpe ratio in the sense that the numerator is always the excess of mean returns over the risk-free rate while the standard deviation of returns in the denominator is replaced by some function of the drawdown. The drawdown ("DD") and the maximum drawdown ("MDD") of the variable "NAV", the net asset value of an investment, can be computed as percentages, as in the sketch below | https://en.wikipedia.org/wiki?curid=1854458 |
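The article's original pseudocode is not preserved in this extract; the following Python sketch is one way to perform the computation, expressing each drawdown as a percentage of the running peak of the NAV series.

```python
def drawdowns(nav):
    """Return (per-period drawdown %, maximum drawdown %) of a NAV series."""
    peak = nav[0]
    dd, mdd = [], 0.0
    for value in nav:
        peak = max(peak, value)              # running historical peak
        d = 100.0 * (peak - value) / peak    # decline from peak, in percent
        dd.append(d)
        mdd = max(mdd, d)
    return dd, mdd

nav = [100, 110, 105, 120, 90, 95, 130]
dd, mdd = drawdowns(nav)
print([round(x, 2) for x in dd])  # [0.0, 0.0, 4.55, 0.0, 25.0, 20.83, 0.0]
print(mdd)                        # 25.0
```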
Drawdown (economics) Many assume the Max DD duration is the length of time between new highs during which the Max DD (magnitude) occurred, but that isn't always the case. The Max DD duration is the longest time between peaks, period. So it could be the time when the program also had its biggest peak-to-valley loss (and usually is, because the program needs a long time to recover from the largest loss), but it doesn't have to be. When $X(t)$ is Brownian motion with drift, the expected behavior of the MDD as a function of time is known. If $X(t)$ is represented as: $X(t) = \mu t + \sigma W(t)$, where $W(t)$ is a standard Wiener process, then there are three possible outcomes based on the behavior of the drift $\mu$: when $\mu > 0$ the expected MDD grows logarithmically with time; when $\mu = 0$ it grows with the square root of time; and when $\mu < 0$ it grows linearly with time. Where an amount of credit is offered, a drawdown against the line of credit results in a debt (which may have associated interest terms if the debt is not cleared according to an agreement). Where funds are made available, such as for a specific purpose, drawdowns occur if the funds – or a portion of the funds – are released when conditions are met. A passing glance at the mathematical definition of drawdown suggests significant difficulty in using an optimization framework to minimize the quantity, subject to other constraints; this is due to the non-convex nature of the problem. However, there is a way to turn the drawdown minimization problem into a linear program. The authors start by proposing an auxiliary function $\Delta_{\alpha}(x)$, where $x$ is a vector of portfolio returns, that is defined by: $\Delta_{\alpha}(x) = \min_{\zeta}\left\{\zeta + \frac{1}{(1-\alpha)T}\int_0^T \left[D(x,t) - \zeta\right]^{+} dt\right\}$. They call this the "conditional drawdown-at-risk" (CDaR); this is a nod to conditional value-at-risk (CVaR), which may also be optimized using linear programming | https://en.wikipedia.org/wiki?curid=1854458 |
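For intuition, a discrete stand-in for CDaR simply averages the worst (1 - alpha) fraction of observed drawdowns, rather than solving the linear program; the NAV series below is invented.

```python
import numpy as np

def cdar(nav, alpha=0.9):
    nav = np.asarray(nav, dtype=float)
    peaks = np.maximum.accumulate(nav)
    dd = (peaks - nav) / peaks                        # fractional drawdowns
    k = max(1, int(np.ceil((1 - alpha) * len(dd))))   # size of the worst tail
    return np.sort(dd)[-k:].mean()                    # mean of the k largest

nav = [100, 110, 105, 120, 90, 95, 130, 125, 128, 140]
print(f"CDaR(90%): {cdar(nav, alpha=0.9):.4f}")       # here, the max drawdown
```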
Drawdown (economics) There are two limiting cases to be aware of: as $\alpha \to 0$, CDaR tends to the average drawdown, and as $\alpha \to 1$, CDaR tends to the maximum drawdown. | https://en.wikipedia.org/wiki?curid=1854458 |
Voronezhavia was an airline based in Voronezh, Russia. Its flight operations had been taken over by Polet Airlines, which itself manages Voronezh Airport. | https://en.wikipedia.org/wiki?curid=1855109 |
SONIA (interest rate) SONIA (Sterling Overnight Index Average) is the effective reference rate for overnight indexed swaps for unsecured transactions in the sterling market. The SONIA rate itself is a risk-free rate. It was launched in March 1997 by the WMBA, and is endorsed by the British Bankers' Association (BBA). The Bank of England took on administration of the rate in April 2016. Two years later, in April 2018, the rate underwent a number of reforms. In the same year, efforts to promote SONIA as the standard sterling interest rate benchmark for loans, derivatives and bonds were stepped up. On each London business day, SONIA is measured as the trimmed mean, rounded to four decimal places, of interest rates paid on eligible sterling-denominated deposit transactions. The trimmed mean is calculated as the volume-weighted mean rate, based on the central 50% of the volume-weighted distribution of rates. Only transactions meeting the Bank's eligibility criteria are included. The rate conventions are: annualised rate, act/365, four decimal places. In 2018, SONIA (floating rate) bonds accounted for a 20.7 per cent share of UK issuance, compared to a 48.1 per cent share for IBOR (floating rate) bonds. | https://en.wikipedia.org/wiki?curid=1859819 |
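A sketch of a volume-weighted trimmed mean in the spirit of the methodology described above (keeping the central 50% of the volume-weighted distribution of rates) is given below; the transactions are invented, and this is not the Bank of England's production algorithm.

```python
import numpy as np

def volume_weighted_trimmed_mean(rates, volumes, trim=0.25):
    order = np.argsort(rates)
    rates = np.asarray(rates, dtype=float)[order]
    volumes = np.asarray(volumes, dtype=float)[order]
    total = volumes.sum()
    cum = np.cumsum(volumes) / total            # cumulative volume share
    # Volume of each transaction falling inside the central band.
    upper = np.minimum(cum, 1.0 - trim)
    lower = np.maximum(cum - volumes / total, trim)
    kept = np.clip(upper - lower, 0.0, None) * total
    return round(np.average(rates, weights=kept), 4)

rates = [0.70, 0.71, 0.72, 0.73, 0.75]                # percent
volumes = [30, 50, 100, 60, 10]                       # £ millions
print(volume_weighted_trimmed_mean(rates, volumes))   # 0.7192
```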
Infant industry argument The infant industry argument is an economic rationale for trade protectionism. The core of the argument is that nascent industries often do not have the economies of scale that their older competitors from other countries may have, and thus need to be protected until they can attain similar economies of scale. The argument was first fully articulated by Alexander Hamilton in his 1791 Report on Manufactures, was systematically developed by Daniel Raymond, and was later picked up by Friedrich List in his 1841 work "The National System of Political Economy", following his exposure to the idea during his residence in the United States in the 1820s. Infant industry protection is controversial as a policy recommendation. As with the other economic rationales for protectionism, it is often abused by rent-seeking interests. Even when infant industry protection is well-intentioned, it is difficult for governments to know which industries they should protect; "infant" industries may never "grow up" relative to "adult" foreign competitors. For example, during the 1980s Brazil enforced strict controls on the import of foreign computers in an effort to nurture its own "infant" computer industry. This industry never matured; the technological gap between Brazil and the rest of the world actually widened, while the protected industries merely copied low-end foreign computers and sold them at inflated prices | https://en.wikipedia.org/wiki?curid=1859820 |
Infant industry argument In addition, countries that put up barriers to imports will often face retaliatory barriers to their exports, potentially hurting the same industries that infant industry protection is intended to help. Ernesto Zedillo, in his 2000 report to the UN Secretary-General, recommended "legitimising limited, time-bound protection for certain industries by countries in the early stages of industrialisation", arguing that "however misguided the old model of blanket protection intended to nurture import substitute industries, it would be a mistake to go to the other extreme and deny developing countries the opportunity of actively nurturing the development of an industrial sector". Many countries have successfully industrialized behind tariff barriers. For example, from 1816 through 1945, tariffs in the United States were among the highest in the world. According to Ha-Joon Chang, "almost all of today's rich countries used tariff protection and subsidies to develop their industries". | https://en.wikipedia.org/wiki?curid=1859820 |
Moody's Aaa Bond Moody's Aaa Corporate Bond, also known as "Moody's Aaa" for short, is an investment bond that acts as an index of the performance of all bonds given an Aaa rating by Moody's Investors Service. This corporate bond is often used in macroeconomics as an alternative to the federal ten-year Treasury note as an indicator of the interest rate. Moody's and other investment companies have other, less common investment bonds that are also used. Moody's Seasoned Aaa Corporate Bond Yield data are available from the St. Louis Fed's FRED database. | https://en.wikipedia.org/wiki?curid=1860839 |
Branch plant economy It is not entirely evident who first used the branch plant economy concept; however, it has been extensively used in Canadian and UK literature since the 1970s. The concept broadly describes the negative consequences for the growth of regions whose economies are primarily composed of branch plants that belong to multi-plant firms. Since the position of branch plants within the command chain is low, the regions that host these branch plants tend to be remotely controlled by the plant headquarters, which are usually located far away. Authors at that time thought that branch plants might create a short-term boom in the regional economies when initial investments were deployed, or when they performed well owing to external factors such as the sector's expansion (e.g., the oil industry boom led to an economic boom in Aberdeen). That boom, however, did not sustain itself over the long term. In Scotland, it was mainly Scottish journalists and political leaders who warned of the danger of Scotland's dependence on English firms' branches in Scotland. In Canada, an upsurge of Canadian nationalism in the 1960s and early 1970s led the Liberal governments of Lester Pearson and Pierre Trudeau to implement policies aimed at regulating foreign investment. The views of Walter L. Gordon were especially influential in the 1960s. Further left, the Waffle emerged in the New Democratic Party on a program based on Canadian economic nationalism and independence | https://en.wikipedia.org/wiki?curid=1860973 |
Branch plant economy These developments led to measures such as the creation of Petro-Canada, a government-owned oil and gas company, implemented by the Trudeau government in the mid-1970s to increase Canadian control over the oil industry. The crown corporation was created as one of the demands of the NDP in exchange for their support of Trudeau's minority government. Trudeau also established the Foreign Investment Review Agency to regulate foreign investment in the economy and limit the takeover of Canadian-owned companies by foreign multinational corporations. The election of Brian Mulroney's Progressive Conservative government in the 1984 election brought this period of economic nationalism to an end. Mulroney's government dismantled the Foreign Investment Review Agency and moved to privatize Petro-Canada. The Mulroney government's negotiation and implementation of the Canada-US Free Trade Agreement resulted in increased economic integration between the US and Canada, and was opposed by economic nationalists in the 1988 election. The Canada-US FTA, the North American Free Trade Agreement and the World Trade Organization may bring branch plants to an end, as the elimination of many tariffs and trade controls makes it much easier for a foreign supplier to sell in the Canadian market without having a branch plant in the country. Numerous plants, particularly in the textile and manufacturing sectors, have shut down and moved to Mexico or other countries with lower wages and costs of production. | https://en.wikipedia.org/wiki?curid=1860973 |
Bradespar is a Brazilian holding company headquartered in São Paulo. The company was formed in 2000 by Banco Bradesco in order to allow the bank to spin off some of its industrial investments. In 2005, the company began to hold large stakes in the mining company Vale and in the utility company CPFL Energia, which is one of the largest companies in the Brazilian electric sector. Bradespar's stock is traded on the São Paulo and Madrid stock exchanges, and it is part of São Paulo's Ibovespa index. Currently the company's single investment is in the multinational mining company Vale, of which it is one of the largest shareholders. | https://en.wikipedia.org/wiki?curid=1873231 |
Decommodification In political economics, decommodification is the strength of social entitlements and citizens' degree of immunization from market dependency. In regard to the labor force, decommodification describes the "degree to which individuals, or families, can uphold a socially acceptable standard of living independently of market participation." While commodification is the transformation of goods, services, ideas and people into commodities or objects of trade, decommodification would be the "extent that workers can leave the labor market through choice." The idea of decommodification as an egalitarian concept, as set forth by Esping-Andersen, sparked contemporary research efforts focusing on perceived inequities. In 2008, a research journal pointed out a feminist critique that "the absolute focus on the welfare of individuals who are already working" leaves a central bias in the pursuit of decommodification. Rather, the objective of women is often to be commodified in the first place so that they can enter the labor market. Decommodification has been identified by ecological economists as a strategy for sustainable consumption that acts one level up on the institutional context of consumption in Western societies as compared to strategies such as eco-efficiency and eco-sufficiency. Thus, while the eco-efficiency strategy targets the product and the eco-sufficiency strategy targets the person (the consumer as decision-maker), the decommodification strategy targets the institutional context in which consumption takes place | https://en.wikipedia.org/wiki?curid=1874375 |
Decommodification It aims to decrease the influence of commodities and to limit the effect of commercialization. Esping-Andersen's fundamental study of decommodification sparked contemporary academic research efforts hoping to resolve "paradoxes" in its application. Exiting the labor market with little or no loss of income clashed with the idea that social democracy has the goal of high labor force participation. Research efforts to resolve this paradox showed that "employment impeding policies" came out of Christian democratic institutions, not social democratic institutions. This research suggests that decommodification in the social democratic model is viable. Scandinavian countries are the closest to decommodification according to the scale created by Esping-Andersen's research, which places Sweden as the most decommodified country in the 1980s. Sweden's levels of pensions, sickness entitlements and unemployment insurance are the highest among many other leading industrial countries. Sweden's social welfare programs are mandated by the government, which also offers a de facto guarantee of citizens' wages rather than taking averages and creating regulations through a means-based test on citizens' wages, level of education and past history with the law. | https://en.wikipedia.org/wiki?curid=1874375 |
The Natural Economic Order (published in Bern in 1916) is considered Silvio Gesell's most important book. It is a work on monetary reform and land reform. It attempts to provide a solid basis for economic liberalism in contrast to the 20th-century trend of collectivism and planned economy. The work was translated into English by Philip Pye in 1929. | https://en.wikipedia.org/wiki?curid=1876294 |
Intermediate good Intermediate goods, producer goods or semi-finished products are goods, such as partly finished goods, used as inputs in the production of other goods, including final goods. A firm may make and then use intermediate goods, make and then sell them, or buy and then use them. In the production process, intermediate goods either become part of the final product or are changed beyond recognition in the process. This means intermediate goods are resold among industries. Intermediate goods are not counted in a country's GDP, as that would mean double counting: only the final product should be counted, and the value of the intermediate good is already included in the value of the final good. The value-added method can be used to calculate the amount of intermediate goods incorporated into GDP; this approach counts every phase of processing included in production of final goods, as the sketch below illustrates. Characterization of intermediate goods as physical goods can be misleading, since, in advanced economies, about half of the value of intermediate inputs consists of services. Intermediate goods generally can be made and used in three different ways. First, a company can make and use its own intermediate goods. Second, a company can manufacture intermediate goods and sell them to others. Third, a company can buy intermediate goods to produce either secondary intermediate goods or final goods. | https://en.wikipedia.org/wiki?curid=1876560 |
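As a toy illustration of the value-added method, consider three invented production stages; summing the value added at each stage equals the price of the final good, so intermediate sales are not double-counted.

```python
stages = [
    ("farmer sells wheat to miller", 1.00),
    ("miller sells flour to baker", 1.50),
    ("baker sells bread to consumer", 3.00),   # the final good
]

value_added, previous_price = [], 0.0
for stage, price in stages:
    value_added.append((stage, price - previous_price))
    previous_price = price

total = sum(v for _, v in value_added)
print(value_added)  # value added at each stage: 1.00, 0.50, 1.50
print(total)        # 3.0 -- equals the final good's price, counted once
```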
Machine orders data (also known as machine tool order data) is a figure issued by the Japan Machine Tool Builders' Association (JMTBA) every month. It serves as one indicator of the Japanese economy. In the forex market, the release of such data is often followed by sharp changes in currency exchange rates. | https://en.wikipedia.org/wiki?curid=1876938 |
Vivo Minas (formerly known as Telemig Celular) was a regional Brazilian telecommunications company headquartered in Belo Horizonte. The company used to be one of eight wireless telephone companies that emerged from the break-up of Brazil's government-owned telephone monopoly Telebras. The name Telemig comes from the fact that it was the wireless company for the state of Minas Gerais. A group led by Canada's Telesystem International Wireless, together with Brazilian bank Opportunity and six Brazilian pension funds, paid 756 million reais for Telemig Celular when it was sold by the Brazilian government in June 1998. The company operates AMPS and IS-136 networks on the 800/850 MHz band and a GSM network on the 1800 MHz (DCS) band in its licensed coverage area, the state of Minas Gerais. It started a 3G (HSDPA) deployment on 850 MHz in Belo Horizonte and has acquired 2100 MHz spectrum in its coverage area to offer statewide 3G services by the end of 2008. | https://en.wikipedia.org/wiki?curid=1878831 |
Market anomaly A market anomaly in a financial market is predictability that seems to be inconsistent with (typically risk-based) theories of asset prices. Standard theories include the capital asset pricing model and the Fama–French three-factor model, but a lack of agreement among academics about the proper theory leads many to refer to anomalies without a reference to a benchmark theory (Daniel and Hirshleifer 2015 and Barberis 2018, for example). Indeed, many academics simply refer to anomalies as "return predictors", avoiding the problem of defining a benchmark theory. Academics have documented more than 150 return predictors (see "List of Anomalies Documented in Academic Journals"). These "anomalies", however, come with many caveats. Almost all documented anomalies focus on illiquid, small stocks. Moreover, the studies do not account for trading costs. As a result, many anomalies do not offer profits, despite the presence of predictability. Additionally, return predictability declines substantially after the publication of a predictor, and thus may not offer profits in the future. Finally, return predictability may be due to cross-sectional or time variation in risk, and thus does not necessarily provide a good investment opportunity. Relatedly, return predictability by itself does not disprove the efficient market hypothesis, as one needs to show predictability over and above that implied by a particular model of risk | https://en.wikipedia.org/wiki?curid=1881005 |
Market anomaly The four primary explanations for market anomalies are (1) mispricing, (2) unmeasured risk, (3) limits to arbitrage, and (4) selection bias. Academics have not reached a consensus on the underlying cause, with prominent academics continuing to advocate for selection bias, mispricing, and risk-based theories. Anomalies can be broadly categorized into time-series and cross-sectional anomalies. Time-series anomalies refer to predictability in the aggregate stock market, such as the often-discussed cyclically adjusted price-earnings (CAPE) predictor. These time-series predictors indicate times in which it is better to be invested in stocks versus a safe asset (such as Treasury bills). Cross-sectional anomalies refer to the predictable out-performance of particular stocks relative to others. For example, the well-known size anomaly refers to the fact that stocks with lower market capitalization tend to out-perform stocks with higher market capitalization in the future. Many, if not most, of the papers which document anomalies attribute them to mispricing (Lakonishok, Shleifer, and Vishny 1994, for example). The mispricing explanation is natural, as anomalies are by definition deviations from a benchmark theory of asset prices. "Mispricing" is then defined as the deviation relative to the benchmark. The most common benchmark is the CAPM (capital asset pricing model). The deviation from this theory is measured by a non-zero intercept in an estimated security market line | https://en.wikipedia.org/wiki?curid=1881005 |
Market anomaly This intercept is commonly denoted by the Greek letter alpha: $R_t - R_{f,t} = \alpha + \beta\,(R_{m,t} - R_{f,t}) + \varepsilon_t$, where $R_t$ is the return on the anomaly, $R_{f,t}$ is the return on the risk-free asset, $\beta$ is the slope from regressing the anomaly's return on the market's return, and $R_{m,t}$ is the return on the "market", often proxied by the return on the CRSP index (an index of all publicly traded U.S. stocks). The mispricing explanations are often contentious within academic finance, as academics do not agree on the proper benchmark theory (see Unmeasured Risk, below). This disagreement is closely related to the "joint-hypothesis problem" of the efficient market hypothesis. Among academics, a common response to claims of mispricing was the idea that the anomaly captures a dimension of risk that is missing from the benchmark theory. For example, the anomaly may generate expected returns beyond those measured using the CAPM regression because the time-series of its returns are correlated with labor income, which is not captured by standard proxies for the market return. Perhaps the most well-known example of this unmeasured risk explanation is found in Fama and French's seminal paper on their 3-factor model: "if assets are priced rationally, variables that are related to average returns ... must proxy for sensitivity to common (shared and thus undiversifiable) risk factors in returns. The [3-factor model] time-series regressions give direct evidence on this issue." | https://en.wikipedia.org/wiki?curid=1881005 |
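A minimal sketch of estimating such an alpha by ordinary least squares, assuming Python's statsmodels, is shown below; the excess-return series are simulated, not actual anomaly or CRSP data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 240                                       # 20 years of monthly returns
mkt_excess = rng.normal(0.006, 0.045, n)      # market return minus risk-free
true_alpha, true_beta = 0.003, 0.8
anomaly_excess = true_alpha + true_beta * mkt_excess + rng.normal(0, 0.02, n)

# Regress the anomaly's excess return on the market's excess return;
# the intercept is the CAPM alpha.
fit = sm.OLS(anomaly_excess, sm.add_constant(mkt_excess)).fit()
alpha, beta = fit.params
print(f"alpha = {alpha:.4f} (t = {fit.tvalues[0]:.2f}), beta = {beta:.3f}")
```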
Market anomaly " The unmeasured risk explanation is closely related to the shortcomings of the CAPM as a theory of risk as well as shortcomings of empirical tests of the CAPM and related models. Perhaps the most common critique of the CAPM is that it is derived in a single period setting, and thus is missing dynamic features like periods of high uncertainty. In a more general setting, the CAPM typically implies multiple risk factors, as shown in Merton's Intertemporal CAPM theory. Moreover, the ICAPM generally implies the expected returns vary over time, and thus time-series predictability is not clear evidence of mispricing. Indeed, since the CAPM cannot at all capture dynamic expected returns, evidence of time-series predictability is less often regarded as mispricing as compared to cross-sectional predictability. Empirical shortcomings primarily regard the difficulty in measuring wealth or marginal utility. Theoretically, wealth includes not only stock market wealth, but also non-tradable wealth like private assets and future labor income. In the consumption CAPM, (which is theoretically equivalent to Merton's ICAPM), the proper proxy for wealth is consumption, which is difficult to measure (Savov 2011, for example). Despite the theoretical soundness of the unmeasured risk explanation, there is little consensus among academics about the proper risk model over and above the CAPM | https://en.wikipedia.org/wiki?curid=1881005 |
Market anomaly Propositions include the well-known Fama–French 3-factor model, the Fama–French–Carhart 4-factor model, the Fama–French 5-factor model, and Stambaugh and Yuan's 4-factor model. These models are all empirically oriented, rather than derived from a formal theory of equilibrium like Merton's ICAPM. Anomalies are almost always documented using closing prices from the CRSP dataset. These prices do not reflect trading costs, which can prevent arbitrage and thus the elimination of predictability. Moreover, almost all anomalies are documented using equally weighted portfolios, and thus require trading of illiquid (costly-to-trade) stocks. The limits to arbitrage explanation can be thought of as a refinement of the mispricing framework. A return pattern only offers profits if the returns it offers survive trading costs, and thus should not be considered mispricing unless trading costs are accounted for. A large literature documents that trading costs greatly reduce anomaly returns. This literature goes back to Stoll and Whaley (1983) and Ball, Kothari, and Shanken (1995). A recent paper that studies dozens of anomalies finds that trading costs have a massive effect on the average anomaly (Novy-Marx and Velikov 2015). The documented anomalies are likely the best performers from a much larger set of potential return predictors. This selection creates a bias and implies that estimates of the profitability of anomalies are overstated | https://en.wikipedia.org/wiki?curid=1881005 |
Market anomaly This explanation for anomalies is also known as data snooping, p-hacking, data mining, and data dredging, and is closely related to the multiple comparisons problem. Concerns about selection bias in anomalies go back at least to Jensen and Bennington (1970). Most research on selection bias in market anomalies focuses on particular subsets of predictors. For example, Sullivan, Timmermann, and White (2001) show that calendar-based anomalies are no longer significant after adjusting for selection bias. A recent meta-analysis of the size premium shows that the reported estimates of the size premium are exaggerated twofold because of selection bias. Research on selection bias for anomalies more generally is relatively limited and inconclusive. McLean and Pontiff (2016) use an out-of-sample test to show that selection bias accounts for at most 26% of the typical anomaly's mean return during the sample period of the original publication. To show this, they replicate almost 100 anomalies, and show that the average anomaly's return is only 26% smaller in the few years immediately after the end of the original samples. As some of this decline may be due to investor learning effects, the 26% is an upper bound. In contrast, Harvey, Liu, and Zhu (2016) adapt multiple-testing adjustments from statistics, such as the false discovery rate, to asset pricing "factors". They refer to a factor as any variable that helps explain the cross-section of expected returns, and thus include many anomalies in their study | https://en.wikipedia.org/wiki?curid=1881005 |
Market anomaly They find that multiple-testing statistics imply that factors with t-stats < 3.0 should not be considered statistically significant, and conclude that most published findings are likely false. | https://en.wikipedia.org/wiki?curid=1881005 |
Wealth elasticity of demand The wealth elasticity of demand, in microeconomics and macroeconomics, is the proportional change in the consumption of a good relative to a change in consumers' wealth (as distinct from changes in personal income). Measuring and accounting for the variability in this elasticity is a continuing problem in behavioral finance and consumer theory. The wealth elasticity of consumption quantity for some good will determine the size of the expenditure shift due to "unexpected" changes in net personal wealth, "ceteris paribus" (i.e. the size of the so-called "wealth effect" for a given good). It is calculated as "the ratio of the percent change in consumption to the percent change in wealth that caused it." This is analogous to the definition of the income effect from the income elasticity of demand, or the substitution effect from the price elasticity. The measure of "wealth" is mostly taken to be total personal realizable wealth at market prices, liquid or not. Some economists say that bonds are simply a loan to the government and that they are not considered (in the aggregate) to be part of net wealth. Generally, the wealth change is measured in real terms. It may seem obvious that an unanticipated windfall will lead to greater consumption and that a financial loss will have the opposite effect. However, when the stock markets crashed in April 2000 (wiping out $2.1 trillion in nominal investor wealth), U.S. household consumption did not drop substantially | https://en.wikipedia.org/wiki?curid=1883907 |
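Setting that puzzle aside for a moment, the definition just quoted is straightforward to apply; the consumption and wealth figures below are invented for illustration.

```python
c0, c1 = 40_000, 41_000      # annual consumption before and after ($)
w0, w1 = 500_000, 550_000    # perceived wealth before and after ($)

# Ratio of the percent change in consumption to the percent change in wealth.
elasticity = ((c1 - c0) / c0) / ((w1 - w0) / w0)
print(f"Wealth elasticity of demand: {elasticity:.3f}")  # 0.025/0.10 = 0.250
```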
Wealth elasticity of demand Some researchers have tried to resolve the difficulty posed by episodes like the 2000 crash by redefining wealth as the "stable underlying value" of assets, which does not change with asset values, although this raises other questions of consumer rationality. Most researchers calculate the wealth effect in real terms, so a deflation in price levels will increase personal wealth on average (because total wealth in society is positive: the difference between saving and debt consists of tangible assets, such as land). The increase in private real wealth may give rise to a wealth effect of increased consumption. The macroeconomic effect of this on employment is called the Pigou effect, but whether or not this acts as a significant brake on a deflationary spiral is controversial. Pigou's reasoning for a positive wealth elasticity was that richer people feel more secure in the future and hence save less from current income. (So wealth is not redistributed by the effect.) The elasticity has important implications for monetary policy: investments with a fixed yield (such as a bond paying coupons at 5%) will increase in net present value as interest rates fall; since fixed-income bond-holders' personal wealth (at market rates) has increased, this may stimulate expenditure in a wealth effect, as the sketch below illustrates. Working the other way, central banks often need to guess the wealth elasticity for asset price changes that have already happened in order to adjust the interest rate | https://en.wikipedia.org/wiki?curid=1883907 |
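The sketch below prices a 5%-coupon bond at two discount rates, showing that its present value (and hence the holder's measured wealth) rises as rates fall; the figures are invented.

```python
def bond_pv(face, coupon_rate, market_rate, years):
    """Present value of a bond paying annual coupons plus face at maturity."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_rate) ** t
                     for t in range(1, years + 1))
    return pv_coupons + face / (1 + market_rate) ** years

print(round(bond_pv(1000, 0.05, 0.05, 10), 2))  # 1000.0: priced at par
print(round(bond_pv(1000, 0.05, 0.03, 10), 2))  # ~1170.6: rates fall, PV rises
```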
Wealth elasticity of demand The extent to which house price increases affect the rest of the economy is a particularly critical question. A naïve assumption (or first approximation) links the wealth and income elasticities of demand directly, treating a change in wealth as if it were an equivalent change in income. However, this approach overlooks the fact that people typically treat income and capital differently. (Behavioural economics hypothesises different "mental accounts" for income and assets, and points to empirical studies showing that the marginal propensity to consume extra income is one, but is lower for windfall asset increases.) Econometric research is ongoing to find good wealth elasticity parameters, especially in areas like house-price-related wealth effects; however, some patterns are widely believed to hold. If 'leisure time' is a superior good, the income effect will partially cancel itself out, since people will work less as their hourly pay goes up. A change in net wealth doesn't require economic labour to produce, and has a different impact on the labour market. | https://en.wikipedia.org/wiki?curid=1883907 |
Dual economy A dual economy is the existence of two separate economic sectors within one country, divided by different levels of development, technology, and different patterns of demand. The concept was originally created by Julius Herman Boeke to describe the coexistence of modern and traditional economic sectors in a colonial economy. Dual economies are common in less developed countries, where one sector is geared to local needs and another to the global export market. Dual economies may exist within the same sector, for example a modern plantation or other commercial agricultural entity operating in the midst of traditional cropping systems. Sir Arthur Lewis used the concept of a dualistic economy as the basis of his labour supply theory of rural-urban migration. Lewis distinguished between a rural low-income subsistence sector with surplus population, and an expanding urban capitalist sector (see Dual-sector model). The urban economy absorbed labour from rural areas (holding down urban wages) until the rural surplus was exhausted. A World Bank comparison of sectoral growth in Côte d'Ivoire, Ghana and Zimbabwe since 1965 provided evidence against the existence of a basic dual economy model. The research implied that a positive link existed between growth in industry and growth in agriculture. The authors argued that for maximum economic growth, policymakers should have focused on agriculture and services as well as industrial development. | https://en.wikipedia.org/wiki?curid=1884317 |
Business valuation is a process and a set of procedures used to estimate the economic value of an owner's interest in a business. Valuation is used by financial market participants to determine the price they are willing to pay or receive to effect a sale of a business. In addition to estimating the selling price of a business, the same valuation tools are often used by business appraisers to resolve disputes related to estate and gift taxation, to allocate a business purchase price among business assets, to establish a formula for estimating the value of partners' ownership interests for buy-sell agreements, and for many other business and legal purposes such as shareholder deadlock, divorce litigation and estate contests. In some cases, a court may appoint a forensic accountant as the joint expert to perform the business valuation. Before the value of a business can be measured, the valuation assignment must specify the reason for and circumstances surrounding the business valuation. These are formally known as the standard of value and the premise of value. The standard of value comprises the hypothetical conditions under which the business will be valued. The premise of value relates to the assumptions adopted, such as the assumption that the business will continue in its current form indefinitely (going concern), or that its value lies in the proceeds from the sale of all of its assets minus the related debt (sum of the parts or assemblage of business assets) | https://en.wikipedia.org/wiki?curid=1885799 |
Business valuation Calculation results can vary considerably depending upon the choice of both the standard and the premise of value. In an actual business sale, it would be expected that the buyer and seller, each with an incentive to achieve an optimal outcome, would determine the fair market value of a business asset that would compete in the market for such an acquisition. If the synergies are specific to the company being valued, they may not be considered. Fair value also does not incorporate discounts for lack of control or marketability. Note, however, that it is possible to achieve the fair market value for a business asset that is being liquidated in its secondary market. This underscores the difference between the standard and the premise of value. These assumptions might not, and probably do not, reflect the actual conditions of the market in which the subject business might be sold. However, these conditions are assumed because they yield a uniform standard of value which, after generally accepted valuation techniques are applied, allows meaningful comparison between similarly situated businesses. A business valuation report generally begins with a summary of the purpose and scope of the business appraisal, as well as its date and stated audience. There follows a description of the national, regional and local economic conditions existing as of the valuation date, as well as the conditions of the industry in which the subject business operates | https://en.wikipedia.org/wiki?curid=1885799 |
Business valuation A common source of economic information for the first section of the business valuation report is the Federal Reserve Board's Beige Book, published eight times a year by the Federal Reserve. State governments and industry associations also publish useful statistics describing regional and industry conditions. The financial statement analysis generally involves common size analysis, ratio analysis (liquidity, turnover, profitability, etc.), trend analysis and industry comparative analysis. This permits the valuation analyst to compare the subject company to other businesses in the same or a similar industry, and to discover trends affecting the company and/or the industry over time. By comparing a company's financial statements across different time periods, the valuation expert can identify growth or decline in revenues or expenses, changes in capital structure, or other financial trends. How the subject company compares to its industry helps with the risk assessment and ultimately helps determine the discount rate and the selection of market multiples. Among the financial statements, the primary statement showing the company's liquidity is the cash flow statement, which reports the company's cash inflows and outflows. The key objective of normalization is to identify the ability of the business to generate income for its owners. A measure of this income is the amount of cash flow that the owners can remove from the business without adversely affecting its operations | https://en.wikipedia.org/wiki?curid=1885799 |
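As a hedged illustration of the ratio analysis step, the sketch below computes a few common ratios for a hypothetical subject company and sets them against invented industry benchmarks. The specific ratios and all figures are assumptions for the example, not prescriptions from the source.

```python
# Minimal sketch of ratio analysis with industry comparison.
# All statement figures and benchmarks are invented for illustration.

def key_ratios(current_assets, current_liabilities,
               net_income, revenue, total_assets):
    return {
        "current_ratio": current_assets / current_liabilities,  # liquidity
        "net_margin": net_income / revenue,                     # profitability
        "asset_turnover": revenue / total_assets,               # efficiency
    }

subject = key_ratios(500_000, 250_000, 120_000, 1_500_000, 900_000)
industry = {"current_ratio": 1.6, "net_margin": 0.06, "asset_turnover": 1.4}

for name, value in subject.items():
    print(f"{name}: {value:.2f} (industry {industry[name]:.2f})")
```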
Business valuation The most common normalization adjustments fall into four broad categories. Three different approaches are commonly used in business valuation: the income approach, the asset-based approach, and the market approach. Within each of these approaches there are various techniques for determining the value of a business, using the definition of value appropriate to the appraisal assignment. Generally, the income approaches determine value by calculating the net present value of the benefit stream generated by the business (discounted cash flow); the asset-based approaches determine value by adding up the parts of the business (net asset value); and the market approaches determine value by comparing the subject company to other companies in the same industry, of the same size, and/or within the same region. A number of business valuation models can be constructed that utilize various methods under the three business valuation approaches. Venture capitalists and private equity professionals have long used the First Chicago method, which essentially combines the income approach with the market approach. In certain cases equity may also be valued by applying the techniques and frameworks developed for financial options via a real options framework, as discussed below. In determining which of these approaches to use, the valuation professional must exercise discretion. Each technique has advantages and drawbacks, which must be considered when applying those techniques to a particular subject company | https://en.wikipedia.org/wiki?curid=1885799 |
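To make the contrast between the three approaches concrete, here is a minimal sketch; the cash flows, asset values and peer multiple are invented inputs, and a real engagement would reconcile the three results rather than pick one mechanically.

```python
# Schematic contrast of the three valuation approaches.
# All inputs are invented for illustration.

# Income approach: net present value of a projected benefit stream.
def income_approach(cash_flows, discount_rate):
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Asset-based approach: net asset value.
def asset_approach(total_assets, total_liabilities):
    return total_assets - total_liabilities

# Market approach: apply a peer-derived multiple to the subject's earnings.
def market_approach(ebitda, peer_multiple):
    return ebitda * peer_multiple

print(round(income_approach([120_000] * 5, 0.15)))  # ~402,259
print(asset_approach(900_000, 550_000))             # 350,000
print(market_approach(150_000, 4.0))                # 600,000.0
```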
Business valuation Most treatises and court decisions encourage the valuator to consider more than one technique, and the results must be reconciled with each other to arrive at a conclusion of value. A measure of common sense and a good grasp of mathematics are helpful. The income approach relies upon the economic principle of expectation: the value of a business is based on the expected economic benefit and the level of risk associated with the investment. Income-based valuation methods determine fair market value by dividing the benefit stream generated by the subject or target company by a discount or capitalization rate. The discount or capitalization rate converts the stream of benefits into present value. There are several different income methods, including capitalization of earnings or cash flows, discounted future cash flows ("DCF"), and the excess earnings method (which is a hybrid of the asset and income approaches). The result of a value calculation under the income approach is generally the fair market value of a controlling, marketable interest in the subject company, since the entire benefit stream of the subject company is most often valued and the capitalization and discount rates are derived from statistics concerning public companies. IRS Revenue Ruling 59-60 states that earnings are preeminent in the valuation of closely held operating companies. However, income valuation methods can also be used to establish the value of a severable business asset, as long as an income stream can be attributed to it | https://en.wikipedia.org/wiki?curid=1885799 |
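A minimal sketch of the simplest income method, capitalization of earnings, follows; the normalized earnings figure and the capitalization rate are assumptions for illustration.

```python
# Capitalization of earnings: value = benefit stream / capitalization rate.
# Earnings and cap rate below are assumed figures, not from the source.

def capitalize_earnings(normalized_earnings: float, cap_rate: float) -> float:
    """Convert a single normalized benefit stream into a value estimate."""
    return normalized_earnings / cap_rate

print(capitalize_earnings(200_000, 0.20))  # 1,000,000.0
```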
Business valuation An example is licensable intellectual property, whose value needs to be established to arrive at a supportable royalty structure. A discount rate or capitalization rate is used to determine the present value of the expected returns of a business. The discount rate and capitalization rate are closely related to each other but distinguishable. Generally speaking, the discount rate or capitalization rate may be defined as the yield necessary to attract investors to a particular investment, given the risks associated with that investment; see Required rate of return. There are several different methods of determining the appropriate discount rate. The discount rate is composed of two elements: (1) the risk-free rate, which is the return an investor would expect from a secure, practically risk-free investment such as a high-quality government bond; plus (2) a risk premium that compensates the investor for the level of risk associated with the particular investment in excess of the risk-free rate. Most importantly, the selected discount or capitalization rate must be consistent with the stream of benefits to which it is applied. Capitalization and discounting calculations become mathematically equivalent under the assumption that the business income grows at a constant rate. The capital asset pricing model (CAPM) provides one method of determining a discount rate in business valuation | https://en.wikipedia.org/wiki?curid=1885799 |
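The sketch below ties together the two points just made: a discount rate built as risk-free rate plus risk premium, and the constant-growth equivalence under which the capitalization rate equals the discount rate minus the growth rate. All rates and cash flows are assumed figures.

```python
# Discount rate = risk-free rate + risk premium; under constant growth,
# cap rate = discount rate - growth rate. All inputs are assumptions.

risk_free = 0.04      # e.g. a high-quality government bond yield
risk_premium = 0.11   # compensation for investment-specific risk
growth = 0.03         # assumed constant growth of the benefit stream

discount_rate = risk_free + risk_premium   # 0.15
cap_rate = discount_rate - growth          # 0.12

next_year_cash_flow = 120_000
value = next_year_cash_flow / cap_rate     # Gordon-growth style value
print(f"discount {discount_rate:.0%}, cap {cap_rate:.0%}, value {value:,.0f}")
# -> discount 15%, cap 12%, value 1,000,000
```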
Business valuation The CAPM originated from the Nobel Prize-winning studies of Harry Markowitz, James Tobin, and William Sharpe. The method derives the discount rate by adding a risk premium to the risk-free rate. The risk premium is derived by multiplying the equity risk premium by "beta", a measure of stock price volatility. Beta is compiled by various researchers for particular industries and companies, and measures the systematic risk of an investment. One criticism of the CAPM is that beta is derived from the price volatility of publicly traded companies, which differ from non-publicly traded companies in liquidity, marketability, capital structure and control. Other aspects, such as access to credit markets, size, and management depth, are generally different too. Where a privately held company can be shown to be sufficiently similar to a public company, the CAPM may be suitable. However, it requires market stock price data for its calculation, and for private companies that do not sell stock on the public capital markets this information is not readily available. Calculation of beta for private firms is therefore problematic, and the build-up cost of capital model, which likewise requires an assessment of the subject company's risk, is the typical choice in such cases. With regard to capital market-oriented valuation there are numerous approaches besides the traditional CAPM | https://en.wikipedia.org/wiki?curid=1885799 |
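A one-line illustration of the CAPM build-up described above; the risk-free rate, beta and equity risk premium are assumed values.

```python
# CAPM: required return = risk-free rate + beta * equity risk premium.
# The inputs are assumed values for illustration.

def capm_cost_of_equity(risk_free: float, beta: float,
                        equity_risk_premium: float) -> float:
    return risk_free + beta * equity_risk_premium

print(f"{capm_cost_of_equity(0.04, 1.2, 0.06):.1%}")  # 11.2%
```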
Business valuation These include, for example, the arbitrage pricing theory (APT) and the consumption-based capital asset pricing model (CCAPM). Furthermore, alternative capital market models have been developed which have in common that the expected return depends on multiple sources of risk, and which are thus less restrictive. Nevertheless, even these models are not wholly consistent, as they too exhibit market anomalies. The methods of incomplete replication and risk covering, however, manage without capital market data and are thus more robust. Also notable are investment-based approaches, which consider different investment opportunities and determine an investment programme by means of linear optimization; among them is the approximative decomposition valuation approach. The cost of equity ($K_e$) is computed using the modified capital asset pricing model (Mod. CAPM):

$$K_e = R_f + \beta (R_m - R_f) + \text{SCRP} + \text{CSRP}$$

where $R_f$ is the risk-free rate of return (generally taken as the 10-year government bond yield), $\beta$ is the beta value (the sensitivity of the stock's returns to market returns), $R_m$ is the market rate of return, SCRP is the small company risk premium, and CSRP is the company-specific risk premium. The weighted average cost of capital (WACC) is another approach to determining a discount rate. The WACC method determines the subject company's actual cost of capital by calculating the weighted average of the company's cost of debt and cost of equity | https://en.wikipedia.org/wiki?curid=1885799 |
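Finally, a sketch combining the Mod. CAPM cost of equity with the WACC calculation just described; the capital-structure weights, tax rate, premiums and all other figures are illustrative assumptions.

```python
# Mod. CAPM cost of equity fed into a WACC calculation.
# Every input below is an assumed figure for illustration.

def mod_capm(rf, beta, rm, scrp, csrp):
    """K_e = R_f + beta * (R_m - R_f) + SCRP + CSRP."""
    return rf + beta * (rm - rf) + scrp + csrp

def wacc(equity_value, debt_value, cost_of_equity, cost_of_debt, tax_rate):
    """Weighted average of the equity cost and the after-tax debt cost."""
    total = equity_value + debt_value
    return (equity_value / total) * cost_of_equity \
         + (debt_value / total) * cost_of_debt * (1 - tax_rate)

ke = mod_capm(rf=0.045, beta=1.2, rm=0.11, scrp=0.03, csrp=0.02)
print(f"cost of equity: {ke:.1%}")                        # 17.3%
print(f"WACC: {wacc(600_000, 400_000, ke, 0.07, 0.25):.1%}")  # about 12.5%
```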