HSAB is an acronym for "hard and soft (Lewis) acids and bases". HSAB is widely used in chemistry for explaining the stability of compounds, reaction mechanisms and pathways. It assigns the terms 'hard' or 'soft', and 'acid' or 'base', to chemical species. 'Hard' applies to species which are small, have high charge states (the charge criterion applies mainly to acids, to a lesser extent to bases), and are weakly polarizable. 'Soft' applies to species which are big, have low charge states and are strongly polarizable. The theory is used in contexts where a qualitative, rather than quantitative, description would help in understanding the predominant factors which drive chemical properties and reactions. This is especially so in transition metal chemistry, where numerous experiments have been done to determine the relative ordering of ligands and transition metal ions in terms of their hardness and softness. HSAB theory is also useful in predicting the products of metathesis reactions. In 2005 it was shown that even the sensitivity and performance of explosive materials can be explained on the basis of HSAB theory. Ralph Pearson introduced the HSAB principle in the early 1960s as an attempt to unify inorganic and organic reaction chemistry.
== Theory ==
Essentially, the theory states that soft acids prefer to form bonds with soft bases, whereas hard acids prefer to form bonds with hard bases, all other factors being equal. It can also be said that hard acids bind strongly to hard bases and soft acids bind strongly to soft bases. The HSAB classification in the original work was largely based on equilibrium constants of Lewis acid/base reactions with a reference base for comparison. Borderline cases are also identified: borderline acids are trimethylborane, sulfur dioxide and the ferrous Fe2+, cobalt Co2+, caesium Cs+ and lead Pb2+ cations. Borderline bases are: aniline, pyridine, nitrogen N2 and the azide, chloride, bromide, nitrate and sulfate anions. Generally speaking, acids and bases interact, and the most stable interactions are hard–hard (ionogenic character) and soft–soft (covalent character). An attempt to quantify the 'softness' of a base consists in determining the equilibrium constant for the following equilibrium:
BH + CH3Hg+ ⇌ H+ + CH3HgB
where CH3Hg+ (methylmercury ion) is a very soft acid and H+ (proton) is a hard acid, which compete for B (the base to be classified). Some examples illustrating the effectiveness of the theory: Bulk metals are soft acids and are poisoned by soft bases such as phosphines and sulfides. Hard solvents such as hydrogen fluoride, water and the protic solvents tend to dissolve strong solute bases such as fluoride and oxide anions. On the other hand, dipolar aprotic solvents such as dimethyl sulfoxide and acetone are soft solvents with a preference for solvating large anions and soft bases. In coordination chemistry soft–soft and hard–hard interactions exist between ligands and metal centers.
== Chemical hardness ==
In 1983 Pearson together with Robert Parr extended the qualitative HSAB theory with a quantitative definition of the chemical hardness (η) as being proportional to the second derivative of the total energy of a chemical system with respect to changes in the number of electrons at a fixed nuclear environment: {\displaystyle \eta ={\frac {1}{2}}\left({\frac {\partial ^{2}E}{\partial N^{2}}}\right)_{Z}} The factor of one-half is arbitrary and often dropped, as Pearson has noted.
An operational definition for the chemical hardness is obtained by applying a three-point finite difference approximation to the second derivative: {\displaystyle {\begin{aligned}\eta &\approx {\frac {E(N+1)-2E(N)+E(N-1)}{2}}\\&={\frac {(E(N-1)-E(N))-(E(N)-E(N+1))}{2}}\\&={\frac {1}{2}}(I-A)\end{aligned}}} where I is the ionization potential and A the electron affinity. This expression implies that the chemical hardness is proportional to the band gap of a chemical system, when a gap exists. The first derivative of the energy with respect to the number of electrons is equal to the chemical potential, μ, of the system, {\displaystyle \mu =\left({\frac {\partial E}{\partial N}}\right)_{Z}}, from which an operational definition for the chemical potential is obtained from a finite difference approximation to the first order derivative as {\displaystyle {\begin{aligned}\mu &\approx {\frac {E(N+1)-E(N-1)}{2}}\\&={\frac {-(E(N-1)-E(N))-(E(N)-E(N+1))}{2}}\\&=-{\frac {1}{2}}(I+A)\end{aligned}}} which is equal to the negative of the electronegativity (χ) definition on the Mulliken scale: μ = −χ. The hardness and Mulliken electronegativity are related as {\displaystyle 2\eta =\left({\frac {\partial \mu }{\partial N}}\right)_{Z}\approx -\left({\frac {\partial \chi }{\partial N}}\right)_{Z}}, and in this sense hardness is a measure of resistance to deformation or change. Likewise a value of zero denotes maximum softness, where softness is defined as the reciprocal of hardness. In a compilation of hardness values only that of the hydride anion deviates. Another discrepancy noted in the original 1983 article is the apparently higher hardness of Tl3+ compared to Tl+.
== Modifications ==
If the interaction between acid and base in solution results in an equilibrium mixture, the strength of the interaction can be quantified in terms of an equilibrium constant. An alternative quantitative measure is the heat (enthalpy) of formation of the Lewis acid–base adduct in a non-coordinating solvent. The ECW model is a quantitative model that describes and predicts the strength of Lewis acid–base interactions, −ΔH. The model assigns E and C parameters to many Lewis acids and bases. Each acid is characterized by an EA and a CA. Each base is likewise characterized by its own EB and CB. The E and C parameters refer, respectively, to the electrostatic and covalent contributions to the strength of the bonds that the acid and base will form. The equation is
−ΔH = EAEB + CACB + W
The W term represents a constant energy contribution for acid–base reactions such as the cleavage of a dimeric acid or base. The equation predicts reversal of acid and base strengths. The graphical presentations of the equation show that there is no single order of Lewis base strengths or Lewis acid strengths. The ECW model accommodates the failure of single-parameter descriptions of acid–base interactions. A related method adopting the E and C formalism of Drago and co-workers quantitatively predicts the formation constants for complexes of many metal ions plus the proton with a wide range of unidentate Lewis acids in aqueous solution, and also offers insights into factors governing HSAB behavior in solution.
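As a small illustration of the finite-difference definitions above, the following Python sketch computes the chemical hardness, Mulliken electronegativity and softness of a few species from their ionization potentials and electron affinities. The atomic I and A values used are approximate literature figures and are included only for illustration.

```python
# Minimal sketch: finite-difference chemical hardness and Mulliken
# electronegativity from an ionization potential I and electron affinity A.
# The atomic I/A values below are approximate and purely illustrative.

def hardness(ionization, affinity):
    """eta ~ (I - A) / 2, in the same units as I and A (here eV)."""
    return 0.5 * (ionization - affinity)

def mulliken_electronegativity(ionization, affinity):
    """chi = -mu ~ (I + A) / 2."""
    return 0.5 * (ionization + affinity)

def softness(ionization, affinity):
    """Softness is defined as the reciprocal of hardness."""
    return 1.0 / hardness(ionization, affinity)

species = {        # (I, A) in eV, approximate
    "F":  (17.42, 3.40),
    "Cl": (12.97, 3.62),
    "Na": (5.14, 0.55),
}

for name, (I, A) in species.items():
    print(f"{name}: eta = {hardness(I, A):.2f} eV, "
          f"chi = {mulliken_electronegativity(I, A):.2f} eV, "
          f"softness = {softness(I, A):.3f} eV^-1")
```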
Another quantitative system has been proposed, in which Lewis acid strength toward the Lewis base fluoride is based on the gas-phase fluoride-ion affinity. Additional one-parameter base strength scales have been presented. However, it has been shown that to define the order of Lewis base strength (or Lewis acid strength) at least two properties must be considered. For Pearson's qualitative HSAB theory the two properties are hardness and strength, while for Drago's quantitative ECW model the two properties are electrostatic and covalent.
== Kornblum's rule ==
An application of HSAB theory is the so-called Kornblum's rule (after Nathan Kornblum), which states that in reactions with ambident nucleophiles (nucleophiles that can attack from two or more places), the more electronegative atom reacts when the reaction mechanism is SN1 and the less electronegative one in an SN2 reaction. This rule (established in 1954) predates HSAB theory, but in HSAB terms its explanation is that in an SN1 reaction the carbocation (a hard acid) reacts with a hard base (high electronegativity) and that in an SN2 reaction tetravalent carbon (a soft acid) reacts with soft bases. Experimental findings have shown, however, that electrophilic alkylations at free CN− occur preferentially at carbon, regardless of whether the SN1 or SN2 mechanism is involved and whether hard or soft electrophiles are employed. Preferred N attack, as postulated for hard electrophiles by the HSAB principle, could not be observed with any alkylating agent. Isocyano compounds are only formed with highly reactive electrophiles that react without an activation barrier because the diffusion limit is approached. It is claimed that knowledge of absolute rate constants, and not of the hardness of the reaction partners, is needed to predict the outcome of alkylations of the cyanide ion.
=== Criticism ===
Reanalysis of a large number of the most typical ambident organic systems reveals that thermodynamic/kinetic control describes the reactivity of organic compounds perfectly, whereas the HSAB principle fails and should be abandoned in the rationalization of ambident reactivity of organic compounds.
== See also ==
Acid-base reaction
Oxophilicity
== References ==
Wikipedia/HSAB_theory
Macroeconomics is a branch of economics that deals with the performance, structure, behavior, and decision-making of an economy as a whole. This includes regional, national, and global economies. Macroeconomists study topics such as output/GDP (gross domestic product) and national income, unemployment (including unemployment rates), price indices and inflation, consumption, saving, investment, energy, international trade, and international finance. Macroeconomics and microeconomics are the two most general fields in economics. The focus of macroeconomics is often on a country (or larger entities like the whole world) and how its markets interact to produce large-scale phenomena that economists refer to as aggregate variables. In microeconomics the focus of analysis is often a single market, such as whether changes in supply or demand are to blame for price increases in the oil and automotive sectors. From introductory classes in "principles of economics" through doctoral studies, the macro/micro divide is institutionalized in the field of economics. Most economists identify as either macro- or micro-economists. Macroeconomics is traditionally divided into topics along different time frames: the analysis of short-term fluctuations over the business cycle, the determination of structural levels of variables like inflation and unemployment in the medium (i.e. unaffected by short-term deviations) term, and the study of long-term economic growth. It also studies the consequences of policies targeted at mitigating fluctuations like fiscal or monetary policy, using taxation and government expenditure or interest rates, respectively, and of policies that can affect living standards in the long term, e.g. by affecting growth rates. Macroeconomics as a separate field of research and study is generally recognized to start in 1936, when John Maynard Keynes published his The General Theory of Employment, Interest and Money, but its intellectual predecessors are much older. Since World War II, various macroeconomic schools of thought like Keynesians, monetarists, new classical and new Keynesian economists have made contributions to the development of the macroeconomic research mainstream. == Basic macroeconomic concepts == Macroeconomics encompasses a variety of concepts and variables, but above all the three central macroeconomic variables are output, unemployment, and inflation.: 39  Besides, the time horizon varies for different types of macroeconomic topics, and this distinction is crucial for many research and policy debates.: 54  A further important dimension is that of an economy's openness, economic theory distinguishing sharply between closed economies and open economies.: 373  === Time frame === It is usual to distinguish between three time horizons in macroeconomics, each having its own focus on e.g. the determination of output:: 54  the short run (e.g. a few years): Focus is on business cycle fluctuations and changes in aggregate demand which often drive them. Stabilization policies like monetary policy or fiscal policy are relevant in this time frame the medium run (e.g. a decade): Over the medium run, the economy tends to an output level determined by supply factors like the capital stock, the technology level and the labor force, and unemployment tends to revert to its structural (or "natural") level. 
These factors move slowly, so that it is a reasonable approximation to take them as given in a medium-term time scale, though labour market policies and competition policy are instruments that may influence the economy's structures and hence also the medium-run equilibrium the long run (e.g. a couple of decades or more): On this time scale, emphasis is on the determinants of long-run economic growth like accumulation of human and physical capital, technological innovations and demographic changes. Potential policies to influence these developments are education reforms, incentives to change saving rates or to increase R&D activities. === Output and income === National output is the total amount of everything a country produces in a given period of time. Everything that is produced and sold generates an equal amount of income. The total net output of the economy is usually measured as gross domestic product (GDP). Adding net factor incomes from abroad to GDP produces gross national income (GNI), which measures total income of all residents in the economy. In most countries, the difference between GDP and GNI are modest so that GDP can approximately be treated as total income of all the inhabitants as well, but in some countries, e.g. countries with very large net foreign assets (or debt), the difference may be considerable.: 385  Economists interested in long-run increases in output study economic growth. Advances in technology, accumulation of machinery and other capital, and better education and human capital, are all factors that lead to increased economic output over time. However, output does not always increase consistently over time. Business cycles can cause short-term drops in output called recessions. Economists look for macroeconomic policies that prevent economies from slipping into either recessions or overheating and that lead to higher productivity levels and standards of living. === Unemployment === The amount of unemployment in an economy is measured by the unemployment rate, i.e. the percentage of persons in the labor force who do not have a job, but who are actively looking for one. People who are retired, pursuing education, or discouraged from seeking work by a lack of job prospects are not part of the labor force and consequently not counted as unemployed, either.: 156  Unemployment has a short-run cyclical component which depends on the business cycle, and a more permanent structural component, which can be loosely thought of as the average unemployment rate in an economy over extended periods, and which is often termed the natural or structural: 167  rate of unemployment. Cyclical unemployment occurs when growth stagnates. Okun's law represents the empirical relationship between unemployment and short-run GDP growth. The original version of Okun's law states that a 3% increase in output would lead to a 1% decrease in unemployment. The structural or natural rate of unemployment is the level of unemployment that will occur in a medium-run equilibrium, i.e. a situation with a cyclical unemployment rate of zero. 
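The original Okun's law rule of thumb quoted above (a 3% increase in output going with roughly a 1 percentage-point fall in unemployment) can be illustrated with a small calculation. The Python sketch below is only an illustration of that rule of thumb; the starting unemployment rate and growth figures are invented.

```python
# Minimal sketch of the Okun's law rule of thumb quoted above:
# a 3% increase in output goes with roughly a 1 percentage-point
# fall in the unemployment rate. All values are invented for illustration.

def okun_unemployment_change(output_growth_pct):
    """Approximate change in the unemployment rate, in percentage points."""
    return -output_growth_pct / 3.0

unemployment = 6.0  # percent, illustrative starting point
for growth in (1.5, 3.0, 6.0):  # output growth in percent
    new_rate = unemployment + okun_unemployment_change(growth)
    print(f"output growth {growth}% -> unemployment roughly {new_rate:.1f}%")
```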
There may be several reasons why there is some positive unemployment level even in a cyclically neutral situation, which all have their foundation in some kind of market failure: Search unemployment (also called frictional unemployment) occurs when workers and firms are heterogeneous and there is imperfect information, generally causing a time-consuming search and matching process when filling a job vacancy in a firm, during which the prospective worker will often be unemployed. Sectoral shifts and other reasons for a changed demand from firms for workers with particular skills and characteristics, which occur continually in a changing economy, may also cause more search unemployment because of increased mismatch. Efficiency wage models are labor market models in which firms choose not to lower wages to the level where supply equals demand because the lower wages would lower employees' efficiency levels Trade unions, which are important actors in the labor market in some countries, may exercise market power in order to keep wages over the market-clearing level for the benefice of their members even at the cost of some unemployment Legal minimum wages may prevent the wage from falling to a market-clearing level, causing unemployment among low-skilled (and low-paid) workers. In the case of employers having some monopsony power, however, employment effects may have the opposite sign. === Inflation and deflation === A general price increase across the entire economy is called inflation. When prices decrease, there is deflation. Economists measure these changes in prices with price indexes. Inflation will increase when an economy becomes overheated and grows too quickly. Similarly, a declining economy can lead to decreasing inflation and even in some cases deflation. Central bankers conducting monetary policy usually have as a main priority to avoid too high inflation, typically by adjusting interest rates. High inflation as well as deflation can lead to increased uncertainty and other negative consequences, in particular when the inflation (or deflation) is unexpected. Consequently, most central banks aim for a positive, but stable and not very high inflation level. Changes in the inflation level may be the result of several factors. Too much aggregate demand in the economy will cause an overheating, raising inflation rates via the Phillips curve because of a tight labor market leading to large wage increases which will be transmitted to increases in the price of the products of employers. Too little aggregate demand will have the opposite effect of creating more unemployment and lower wages, thereby decreasing inflation. Aggregate supply shocks will also affect inflation, e.g. the oil crises of the 1970s and the 2021–2023 global energy crisis. Changes in inflation may also impact the formation of inflation expectations, creating a self-fulfilling inflationary or deflationary spiral. The monetarist quantity theory of money holds that changes in the price level are directly caused by changes in the money supply. Whereas there is empirical evidence that there is a long-run positive correlation between the growth rate of the money stock and the rate of inflation, the quantity theory has proved unreliable in the short- and medium-run time horizon relevant to monetary policy and is abandoned as a practical guideline by most central banks today. 
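As a rough illustration of the long-run logic behind the quantity theory of money discussed above, the sketch below uses the standard equation of exchange MV = PY with velocity V held constant, so that inflation is approximately money growth minus real output growth. The numbers are invented, and the constant-velocity assumption is exactly what breaks down over the short and medium run.

```python
# Rough illustration of the quantity theory's long-run logic via the
# equation of exchange M*V = P*Y with velocity V held constant.
# With constant V, inflation ~ money growth - real output growth.
# All numbers are invented for illustration.

def implied_inflation(money_growth, output_growth):
    """Approximate inflation rate (percent) implied by constant velocity."""
    return money_growth - output_growth

scenarios = [
    (5.0, 3.0),   # moderate money growth, normal real growth
    (10.0, 2.0),  # rapid money growth, slow real growth
    (2.0, 2.0),   # money grows in line with output
]

for money_g, output_g in scenarios:
    print(f"money +{money_g}% / output +{output_g}% "
          f"-> inflation ~{implied_inflation(money_g, output_g):.1f}%")
```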
=== Open economy macroeconomics ===
Open economy macroeconomics deals with the consequences of international trade in goods, financial assets and possibly factor markets like labor migration and international relocation of firms (physical capital). It explores what determines imports, exports, the balance of trade and, over longer horizons, the accumulation of net foreign assets. An important topic is the role of exchange rates and the pros and cons of maintaining a fixed exchange rate system or even a currency union like the Economic and Monetary Union of the European Union, drawing on the research literature on optimum currency areas.
=== GDP Equation Using Expenditure Approach ===
One way to calculate Gross Domestic Product, or total net output, is the expenditure method. The GDP essentially tells you how big the economy is: the larger the GDP value, the bigger the economy. The expenditure approach involves looking at four main components: Consumer Spending, Government Spending, Investment Spending, and Net Exports. Consumer Spending is made up of ordinary consumers spending money on different kinds of products and also investing their money in residential markets. Government Spending covers the government spending money on goods and services; the government may also assist consumers or businesses with spending, for instance by purchasing physical capital for businesses. Transfer payments, which include things like welfare or social security payments, are paid by the government but are not included in the expenditure calculation because they do not pay for any final goods and services. Investment Spending involves businesses spending money on physical capital/equipment to help with producing goods and services. Lastly, Net Exports is just exports minus imports. Exports are goods and services that a country sells to people abroad, and imports are goods and services that people in a country receive from abroad. Hence, the equation for the expenditure approach to calculating the Gross Domestic Product is GDP = Consumer Spending (CS) + Government Spending (GS) + Investment Spending (IS) + Net Exports (EXP − IMP).
=== GDP Deflator Equation & Explanation ===
Another concern with measuring a country's economic growth is that even though we see the GDP growing, that does not inherently mean the economy is growing: most of the increase in GDP may just be due to inflation. To know whether this is the case, we have to calculate the GDP Deflator, which adjusts the GDP for inflation.
GDP Deflator = (Nominal GDP / Real GDP) × 100
Nominal GDP is GDP that includes inflation and Real GDP is GDP adjusted for inflation; to adjust for inflation means that the effect of inflation on the value has been removed. A GDP Deflator of 100 indicates that there is neither inflation nor deflation. A GDP Deflator value that is greater than 100 indicates that there is inflation. A GDP Deflator value that is less than 100 indicates that there is deflation.
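The two relationships above can be turned directly into a short calculation. The following Python sketch computes GDP by the expenditure approach and the corresponding GDP deflator; all numerical figures are invented purely for illustration.

```python
# Minimal sketch: GDP by the expenditure approach and the GDP deflator.
# All figures are invented for illustration (units: trillions of dollars).

def gdp_expenditure(consumer, government, investment, exports, imports):
    """GDP = CS + GS + IS + (EXP - IMP)."""
    return consumer + government + investment + (exports - imports)

def gdp_deflator(nominal_gdp, real_gdp):
    """GDP Deflator = (Nominal GDP / Real GDP) * 100."""
    return nominal_gdp / real_gdp * 100

nominal = gdp_expenditure(consumer=14.0, government=4.0,
                          investment=4.5, exports=3.0, imports=3.5)
real = 20.5   # the same year's output valued at base-year prices (illustrative)

print(f"Nominal GDP: {nominal:.1f}")                       # 22.0
print(f"GDP Deflator: {gdp_deflator(nominal, real):.1f}")  # about 107.3
```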
=== Money Supply & Money Multiplier: Equation & Explanations ===
Two common ways of determining the total money supply in an economy are M1 and M2. M2 consists of M1 plus a few other things. M1 is money that is liquid; liquid refers to a financial asset being able to be converted into cash quickly and without losing a significant amount of value. This includes cash but also things like coins, checking account deposits, etc. M2, however, also includes time deposits, saving accounts, and money market mutual funds, which are not as liquid, in its measurement. It is important to know about the money supply as it affects interest rates and can also play a central role in monetary policy. The Money Multiplier equation shows how banks can expand the money supply through taking in deposits and lending money. The Money Multiplier equation is:
Money Multiplier = 1 / Reserve Requirement Ratio
The reserve requirement in this equation represents the proportion of money that the bank is required to keep in case it needs to deal with withdrawals from customers. That proportion is based on the deposits of money made at the bank. So, if the reserve requirement is 0.20 (20%), then the money multiplier is 5. This means that a $5 deposit would lead to a $25 increase in the money supply, because of the cycle of the bank keeping part of each deposit (in our example, 20%) and lending out the rest every time. These new spendable bank deposits are counted in the money supply even though the amount of physical currency did not change. So, while the physical amount of currency would still be $5, the amount of spendable money would be $25.
== Development ==
Macroeconomics as a separate field of research and study is generally recognized to start with the publication of John Maynard Keynes' The General Theory of Employment, Interest, and Money in 1936.: 526  The terms "macrodynamics" and "macroanalysis" were introduced by Ragnar Frisch in 1933, and Lawrence Klein used the word "macroeconomics" itself in a journal title in 1946, but naturally several of the themes which are central to macroeconomic research had been discussed by thoughtful economists and other writers long before 1936.
=== Before Keynes ===
In particular, macroeconomic questions before Keynes were the topic of the two long-standing traditions of business cycle theory and monetary theory. William Stanley Jevons was one of the pioneers of the first tradition, whereas the quantity theory of money, an example of the second and labelled the oldest surviving theory in economics, was described as early as the 16th century by Martín de Azpilcueta and later discussed by personalities like John Locke and David Hume. In the first decades of the 20th century monetary theory was dominated by the eminent economists Alfred Marshall, Knut Wicksell and Irving Fisher.
=== Keynes and Keynesian economics ===
When the Great Depression struck, the reigning economists had difficulty explaining how goods could go unsold and workers could be left unemployed. In the prevailing neoclassical economics paradigm, prices and wages would drop until the market cleared, and all goods and labor were sold. Keynes in his main work, the General Theory, initiated what is known as the Keynesian Revolution. He offered a new interpretation of events and a whole intellectual framework - a novel theory of economics that explained why markets might not clear, which would evolve into a school of thought known as Keynesian economics, also called Keynesianism or Keynesian theory.: 526  In Keynes' theory, aggregate demand - by Keynes called "effective demand" - was key to determining output. Even if Keynes conceded that output might eventually return to a medium-run equilibrium (or "potential") level, the process would be slow at best.
Keynes coined the term liquidity preference (his preferred name for what is also known as money demand) and explained how monetary policy might affect aggregate demand, at the same time offering clear policy recommendations for an active role of fiscal policy in stabilizing aggregate demand and hence output and employment. In addition, he explained how the multiplier effect would magnify a small decrease in consumption or investment and cause declines throughout the economy, and noted the role that uncertainty and animal spirits can play in the economy.: 526  The generation following Keynes combined the macroeconomics of the General Theory with neoclassical microeconomics to create the neoclassical synthesis. By the 1950s, most economists had accepted the synthesis view of the macroeconomy.: 526  Economists like Paul Samuelson, Franco Modigliani, James Tobin, and Robert Solow developed formal Keynesian models and contributed formal theories of consumption, investment, and money demand that fleshed out the Keynesian framework.: 527  === Monetarism === Milton Friedman updated the quantity theory of money to include a role for money demand. He argued that the role of money in the economy was sufficient to explain the Great Depression, and that aggregate demand oriented explanations were not necessary. Friedman also argued that monetary policy was more effective than fiscal policy; however, Friedman doubted the government's ability to "fine-tune" the economy with monetary policy. He generally favored a policy of steady growth in money supply instead of frequent intervention.: 528  Friedman also challenged the original simple Phillips curve relationship between inflation and unemployment. Friedman and Edmund Phelps (who was not a monetarist) proposed an "augmented" version of the Phillips curve that excluded the possibility of a stable, long-run tradeoff between inflation and unemployment. When the oil shocks of the 1970s created a high unemployment and high inflation, Friedman and Phelps were vindicated. Monetarism was particularly influential in the early 1980s, but fell out of favor when central banks found the results disappointing when trying to target money supply instead of interest rates as monetarists recommended, concluding that the relationships between money growth, inflation and real GDP growth are too unstable to be useful in practical monetary policy making. === New classical economics === New classical macroeconomics further challenged the Keynesian school. A central development in new classical thought came when Robert Lucas introduced rational expectations to macroeconomics. Prior to Lucas, economists had generally used adaptive expectations where agents were assumed to look at the recent past to make expectations about the future. Under rational expectations, agents are assumed to be more sophisticated.: 530  Consumers will not simply assume a 2% inflation rate just because that has been the average the past few years; they will look at current monetary policy and economic conditions to make an informed forecast. In the new classical models with rational expectations, monetary policy only had a limited impact. Lucas also made an influential critique of Keynesian empirical models. He argued that forecasting models based on empirical relationships would keep producing the same predictions even as the underlying model generating the data changed. He advocated models based on fundamental economic theory (i.e. 
having an explicit microeconomic foundation) that would, in principle, be structurally accurate as economies changed.: 530  Following Lucas's critique, new classical economists, led by Edward C. Prescott and Finn E. Kydland, created real business cycle (RBC) models of the macro economy. RBC models were created by combining fundamental equations from neo-classical microeconomics to make quantitative models. In order to generate macroeconomic fluctuations, RBC models explained recessions and unemployment with changes in technology instead of changes in the markets for goods or money. Critics of RBC models argue that technological changes, which typically diffuse slowly throughout the economy, could hardly generate the large short-run output fluctuations that we observe. In addition, there is strong empirical evidence that monetary policy does affect real economic activity, and the idea that technological regress can explain recent recessions seems implausible.: 533 : 195  Despite criticism of the realism in the RBC models, they have been very influential in economic methodology by providing the first examples of general equilibrium models based on microeconomic foundations and a specification of underlying shocks that aim to explain the main features of macroeconomic fluctuations, not only qualitatively, but also quantitatively. In this way, they were forerunners of the later DSGE models.: 194 
=== New Keynesian response ===
New Keynesian economists responded to the new classical school by adopting rational expectations and focusing on developing micro-founded models that were immune to the Lucas critique. Like classical models, new classical models had assumed that prices would be able to adjust perfectly and monetary policy would only lead to price changes. New Keynesian models investigated sources of sticky prices and wages due to imperfect competition, which would not adjust, allowing monetary policy to impact quantities instead of prices. Stanley Fischer and John B. Taylor produced early work in this area by showing that monetary policy could be effective even in models with rational expectations when contracts locked in wages for workers. Other new Keynesian economists, including Olivier Blanchard, Janet Yellen, Julio Rotemberg, Greg Mankiw, David Romer, and Michael Woodford, expanded on this work and demonstrated other cases where various market imperfections caused inflexible prices and wages, leading in turn to monetary and fiscal policy having real effects. Other researchers focused on imperfections in labor markets, developing models of efficiency wages or search and matching (SAM) models, or, like Ben Bernanke, on imperfections in credit markets.: 532–36  By the late 1990s, economists had reached a rough consensus. The market imperfections and nominal rigidities of new Keynesian theory were combined with rational expectations and the RBC methodology to produce a new and popular type of model called dynamic stochastic general equilibrium (DSGE) models. The fusion of elements from different schools of thought has been dubbed the new neoclassical synthesis. These models are now used by many central banks and are a core part of contemporary macroeconomics.: 535–36 
=== 2008 financial crisis ===
The 2008 financial crisis, which led to the Great Recession, prompted a major reassessment of macroeconomics, which as a field generally had neglected the potential role of financial institutions in the economy.
After the crisis, macroeconomic researchers have turned their attention in several new directions: the financial system and the nature of macrofinancial linkages and frictions, studying leverage, liquidity and complexity problems in the financial sector, the use of macroprudential tools and the dangers of an unsustainable public debt: 537  increased emphasis on empirical work as part of the so-called credibility revolution in economics, using improved methods to distinguish between correlation and causality to improve future policy discussions interest in understanding the importance of heterogeneity among the economic agents, leading among other examples to the construction of heterogeneous agent new Keynesian models (HANK models), which may potentially also improve understanding of the impact of macroeconomics on the income distribution understanding the implications of integrating the findings of the increasingly useful behavioral economics literature into macroeconomics and behavioral finance === Growth models === Research in the economics of the determinants behind long-run economic growth has followed its own course. The Harrod-Domar model from the 1940s attempted to build a long-run growth model inspired by Keynesian demand-driven considerations. The Solow–Swan model worked out by Robert Solow and, independently, Trevor Swan in the 1950s achieved more long-lasting success, however, and is still today a common textbook model for explaining economic growth in the long-run. The model operates with a production function where national output is the product of two inputs: capital and labor. The Solow model assumes that labor and capital are used at constant rates without the fluctuations in unemployment and capital utilization commonly seen in business cycles. In this model, increases in output, i.e. economic growth, can only occur because of an increase in the capital stock, a larger population, or technological advancements that lead to higher productivity (total factor productivity). An increase in the savings rate leads to a temporary increase as the economy creates more capital, which adds to output. However, eventually the depreciation rate will limit the expansion of capital: savings will be used up replacing depreciated capital, and no savings will remain to pay for an additional expansion in capital. Solow's model suggests that economic growth in terms of output per capita depends solely on technological advances that enhance productivity. The Solow model can be interpreted as a special case of the more general Ramsey growth model, where households' savings rates are not constant as in the Solow model, but derived from an explicit intertemporal utility function. In the 1980s and 1990s endogenous growth theory arose to challenge the neoclassical growth theory of Ramsey and Solow. This group of models explains economic growth through factors such as increasing returns to scale for capital and learning-by-doing that are endogenously determined instead of the exogenous technological improvement used to explain growth in Solow's model. Another type of endogenous growth models endogenized the process of technological progress by modelling research and development activities by profit-maximizing firms explicitly within the growth models themselves.: 280–308  ==== Environmental and climate issues ==== Since the 1970s, various environmental problems have been integrated into growth and other macroeconomic models to study their implications more thoroughly. 
During the oil crises of the 1970s, when scarcity problems of natural resources were high on the public agenda, economists like Joseph Stiglitz and Robert Solow introduced non-renewable resources into neoclassical growth models to study the possibilities of maintaining growth in living standards under these conditions.: 201–39  More recently, the issue of climate change and the possibilities of sustainable development are examined in so-called integrated assessment models, pioneered by William Nordhaus. In macroeconomic models in environmental economics, the economic system is dependent upon the environment. In this case, the circular flow of income diagram may be replaced by a more complex flow diagram reflecting the input of solar energy, which sustains natural inputs and environmental services which are then used as units of production. Once consumed, natural inputs pass out of the economy as pollution and waste. The potential of an environment to provide services and materials is referred to as an "environment's source function", and this function is depleted as resources are consumed or pollution contaminates the resources. The "sink function" describes an environment's ability to absorb and render harmless waste and pollution: when waste output exceeds the limit of the sink function, long-term damage occurs.: 8  In 2024 a new approach was proposed which would institutionalize Inclusion, Sustainability and Resilience in Domestic Economic Governance.
== Macroeconomic policy ==
The division into various time frames of macroeconomic research leads to a parallel division of macroeconomic policies into short-run policies aimed at mitigating the harmful consequences of business cycles (known as stabilization policy) and medium- and long-run policies targeted at improving the structural levels of macroeconomic variables.: 18  Stabilization policy is usually implemented through two sets of tools: fiscal and monetary policy. Both forms of policy are used to stabilize the economy, i.e. to limit the effects of the business cycle by conducting expansive policy when the economy is in a recession or contractive policy in the case of overheating. Structural policies may be labor market policies which aim to change the structural unemployment rate or policies which affect long-run propensities to save, invest, or engage in education or research and development.: 19 
=== Monetary policy ===
Central banks conduct monetary policy mainly by adjusting short-term interest rates. The actual method through which the interest rate is changed differs from central bank to central bank, but typically the implementation happens either directly via administratively changing the central bank's own offered interest rates or indirectly via open market operations. Via the monetary transmission mechanism, interest rate changes affect investment, consumption, asset prices like listed companies' share prices and house prices, and, through exchange rate reactions, exports and imports. In this way aggregate demand, employment and ultimately inflation are affected. Expansionary monetary policy lowers interest rates, increasing economic activity, whereas contractionary monetary policy raises interest rates. In the case of a fixed exchange rate system, interest rate decisions together with direct intervention by central banks on exchange rate dynamics are major tools to control the exchange rate.
In developed countries, most central banks follow inflation targeting, focusing on keeping medium-term inflation close to an explicit target, say 2%, or within an explicit range. This includes the Federal Reserve and the European Central Bank, which are generally considered to follow a strategy very close to inflation targeting, even though they do not officially label themselves as inflation targeters. In practice, official inflation targeting often leaves room for the central bank to also help stabilize output and employment, a strategy known as "flexible inflation targeting". Most emerging economies focus their monetary policy on maintaining a fixed exchange rate regime, aligning their currency with one or more foreign currencies, typically the US dollar or the euro. Conventional monetary policy can be ineffective in situations such as a liquidity trap. When nominal interest rates are near zero, central banks cannot loosen monetary policy through conventional means. In that situation, they may use unconventional monetary policy such as quantitative easing to help stabilize output. Quantitative easing can be implemented by buying not only government bonds, but also other assets such as corporate bonds, stocks, and other securities. This allows lower interest rates for a broader class of assets beyond government bonds. A similar strategy is to lower long-term interest rates by buying long-term bonds and selling short-term bonds to create a flat yield curve, known in the US as Operation Twist.
=== Fiscal policy ===
Fiscal policy is the use of a government's revenue (taxes) and expenditure as instruments to influence the economy. For example, if the economy is producing less than potential output, government spending can be used to employ idle resources and boost output, or taxes could be lowered to boost private consumption, which has a similar effect. Government spending or tax cuts do not have to make up for the entire output gap. There is a multiplier effect that affects the impact of government spending. For instance, when the government pays for a bridge, the project not only adds the value of the bridge to output, but also allows the bridge workers to increase their consumption and investment, which helps to close the output gap. The effects of fiscal policy can be limited by partial or full crowding out. When the government takes on spending projects, it limits the amount of resources available for the private sector to use. Full crowding out occurs in the extreme case when government spending simply replaces private sector output instead of adding additional output to the economy. A crowding out effect may also occur if government spending should lead to higher interest rates, which would limit investment. Some fiscal policy is implemented through automatic stabilizers without any active decisions by politicians. Automatic stabilizers do not suffer from the policy lags of discretionary fiscal policy. Automatic stabilizers use conventional fiscal mechanisms, but take effect as soon as the economy takes a downturn: spending on unemployment benefits automatically increases when unemployment rises, and tax revenues decrease, which shelters private income and consumption from part of the fall in market income.: 657 
=== Comparison of fiscal and monetary policy ===
There is a general consensus that both monetary and fiscal instruments may affect demand and activity in the short run (i.e.
over the business cycle).: 657  Economists usually favor monetary over fiscal policy to mitigate moderate fluctuations, however, because it has two major advantages. First, monetary policy is generally implemented by independent central banks instead of the political institutions that control fiscal policy. Independent central banks are less likely to be subject to political pressures for overly expansionary policies. Second, monetary policy may suffer shorter inside lags and outside lags than fiscal policy. There are some exceptions, however: Firstly, in the case of a major shock, monetary stabilization policy may not be sufficient and should be supplemented by active fiscal stabilization.: 659  Secondly, in the case of a very low interest level, the economy may be in a liquidity trap in which monetary policy becomes ineffective, which makes fiscal policy the more potent tool to stabilize the economy. Thirdly, in regimes where monetary policy is tied to fulfilling other targets, in particular fixed exchange rate regimes, the central bank cannot simultaneously adjust its interest rates to mitigate domestic business cycle fluctuations, making fiscal policy the only usable tool for such countries. == Macroeconomic models == Macroeconomic teaching, research and informed debates normally evolve around formal (diagrammatic or equational) macroeconomic models to clarify assumptions and show their consequences in a precise way. Models include simple theoretical models, often containing only a few equations, used in teaching and research to highlight key basic principles, and larger applied quantitative models used by e.g. governments, central banks, think tanks and international organisations to predict effects of changes in economic policy or other exogenous factors or as a basis for making economic forecasting. Well-known specific theoretical models include short-term models like the Keynesian cross, the IS–LM model and the Mundell–Fleming model, medium-term models like the AD–AS model, building upon a Phillips curve, and long-term growth models like the Solow–Swan model, the Ramsey–Cass–Koopmans model and Peter Diamond's overlapping generations model. Quantitative models include early large-scale macroeconometric model, the new classical real business cycle models, microfounded computable general equilibrium (CGE) models used for medium-term (structural) questions like international trade or tax reforms, Dynamic stochastic general equilibrium (DSGE) models used to analyze business cycles, not least in many central banks, or integrated assessment models like DICE. === Specific models === ==== IS–LM model ==== The IS–LM model, invented by John Hicks in 1936, gives the underpinnings of aggregate demand (itself discussed below). It answers the question "At any given price level, what is the quantity of goods demanded?" The graphic model shows combinations of interest rates and output that ensure equilibrium in both the goods and money markets under the model's assumptions. The goods market is modeled as giving equality between investment and public and private saving (IS), and the money market is modeled as giving equilibrium between the money supply and liquidity preference (equivalent to money demand). The IS curve consists of the points (combinations of income and interest rate) where investment, given the interest rate, is equal to public and private saving, given output. 
The IS curve is downward sloping because output and the interest rate have an inverse relationship in the goods market: as output increases, more income is saved, which means interest rates must be lower to spur enough investment to match saving. The traditional LM curve is upward sloping because the interest rate and output have a positive relationship in the money market: as income (identically equal to output in a closed economy) increases, the demand for money increases, resulting in a rise in the interest rate in order to just offset the incipient rise in money demand. The IS-LM model is often used in elementary textbooks to demonstrate the effects of monetary and fiscal policy, though it ignores many complexities of most modern macroeconomic models. A problem related to the LM curve is that modern central banks largely ignore the money supply in determining policy, contrary to the model's basic assumptions.: 262  In some modern textbooks, consequently, the traditional IS-LM model has been modified by replacing the traditional LM curve with an assumption that the central bank simply determines the interest rate of the economy directly.: 194 : 113  ==== AD-AS model ==== The AD–AS model is a common textbook model for explaining the macroeconomy. The original version of the model shows the price level and level of real output given the equilibrium in aggregate demand and aggregate supply. The aggregate demand curve's downward slope means that more output is demanded at lower price levels. The downward slope can be explained as the result of three effects: the Pigou or real balance effect, which states that as real prices fall, real wealth increases, resulting in higher consumer demand of goods; the Keynes or interest rate effect, which states that as prices fall, the demand for money decreases, causing interest rates to decline and borrowing for investment and consumption to increase; and the net export effect, which states that as prices rise, domestic goods become comparatively more expensive to foreign consumers, leading to a decline in exports. In many representations of the AD–AS model, the aggregate supply curve is horizontal at low levels of output and becomes inelastic near the point of potential output, which corresponds with full employment. Since the economy cannot produce beyond the potential output, any AD expansion will lead to higher price levels instead of higher output. In modern textbooks, the AD–AS model is often presented slightly differently, however, in a diagram showing not the price level, but the inflation rate along the vertical axis,: 263 : 399–428 : 595  making it easier to relate the diagram to real-world policy discussions.: vii  In this framework, the AD curve is downward sloping because higher inflation will cause the central bank, which is assumed to follow an inflation target, to raise the interest rate which will dampen economic activity, hence reducing output. The AS curve is upward sloping following a standard modern Phillips curve thought, in which a higher level of economic activity lowers unemployment, leading to higher wage growth and in turn higher inflation.: 263  == Real-life applications and data == === Trump's proposed tariff policy in Feb 2025 === In early February 2025, United States of America President Donald Trump stated that he would be imposing a 25% tariff on imported goods from Mexico and Canada and a 10% tariff on imported goods from China for US consumers. A tariff in this case is a tax on imported goods and services. 
US consumers will be less likely to buy imports from those three countries due to the higher price they would have to pay. This was projected to reduce US imports by 15% and generate federal revenue of $100 billion. While imports from Mexico and Canada are important to the US, the US does not rely as much on Canadian and Mexican imports compared to Mexico's and Canada's economies being highly reliant on their exports to the USA. There would be higher production and grocery costs for the US but Mexico would have its economy reduced by 16% as the US takes in 80% of their car exports and 60% of their petroleum exports. In addition, Canada would have its economy reduced by similar amounts as the US takes in 70% of all of their exports. In relation to the expenditure approach to calculating GDP, (Exports - Imports) would reduce significantly due to reduced exports, which means a negative net exports and a lower GDP. === GDP deflator data === When looking at data presented by the Bureau of Economic Analysis, with a base year of 2017, we see that the GDP deflator has been trending upwards since then. The base year serves as the standard year to which we can compare whether GDP increased or decreased. The base year's prices are used when calculating Real GDP for a specific year. For instance, calculating 2020's GDP Deflator would be equivalent to 2020's Nominal GDP/2020's Real GDP (using 2017 prices). The GDP Deflator has risen from 100 to 126.22 in 2024 Q4. == See also == Microeconomics Business cycle accounting Economic development Growth accounting == Notes == == References == Blanchard, Olivier. (2009). "The State of Macro." Annual Review of Economics 1(1): 209–228. Blanchard, Olivier (2021). Macroeconomics (Eighth, global ed.). Harlow, England: Pearson. ISBN 978-0-134-89789-9. Blaug, Mark (2002). "Endogenous growth theory". In Snowdon, Brian; Vane, Howard (eds.). An Encyclopedia of Macroeconomics. Northampton, Massachusetts: Edward Elgar Publishing. ISBN 978-1-84542-180-9. Dimand, Robert W. (2008). "Macroeconomics, origins and history of". In Durlauf, Steven N.; Blume, Lawrence E. (eds.). The New Palgrave Dictionary of Economics. Palgrave Macmillan UK. pp. 236–44. doi:10.1057/9780230226203.1009. ISBN 978-0-333-78676-5. Durlauf, Steven N.; Hester, Donald D. (2008). "IS–LM". In Durlauf, Steven N.; Blume, Lawrence E. (eds.). The New Palgrave Dictionary of Economics (2nd ed.). Palgrave Macmillan. pp. 585–91. doi:10.1057/9780230226203.0855. ISBN 978-0-333-78676-5. Dwivedi, D.N. (2001). Macroeconomics: theory and policy. New Delhi: Tata McGraw-Hill. ISBN 978-0-07-058841-7. Gärtner, Manfred (2006). Macroeconomics. Pearson Education Limited. ISBN 978-0-273-70460-7. Healey, Nigel M. (2002). "AD-AS model". In Snowdon, Brian; Vane, Howard (eds.). An Encyclopedia of Macroeconomics. Northampton, Massachusetts: Edward Elgar Publishing. pp. 11–18. ISBN 978-1-84542-180-9. Levi, Maurice (2014). The Macroeconomic Environment of Business (Core Concepts and Curious Connections). New Jersey: World Scientific Publishing. ISBN 978-981-4304-34-4. Mankiw, Nicholas Gregory (2022). Macroeconomics (Eleventh, international ed.). New York, NY: Worth Publishers, Macmillan Learning. ISBN 978-1-319-26390-4. Mayer, Thomas (2002). "Monetary policy: role of". In Snowdon, Brian; Vane, Howard R. (eds.). An Encyclopedia of Macroeconomics. Northampton, Massachusetts: Edward Elgar Publishing. pp. 495–99. ISBN 978-1-84542-180-9. Nakamura, Emi and Jón Steinsson. (2018). "Identification in Macroeconomics." 
Journal of Economic Perspectives 32(3): 59–86. Peston, Maurice (2002). "IS-LM model: closed economy". In Snowdon, Brian; Vane, Howard R. (eds.). An Encyclopedia of Macroeconomics. Edward Elgar. ISBN 9781840643879. Romer, David (2019). Advanced macroeconomics (Fifth ed.). New York, NY: McGraw-Hill. ISBN 978-1-260-18521-8. Solow, Robert (2002). "Neoclassical growth model". In Snowdon, Brian; Vane, Howard (eds.). An Encyclopedia of Macroeconomics. Northampton, Massachusetts: Edward Elgar Publishing. ISBN 1840643870. Snowdon, Brian, and Howard R. Vane, ed. (2002). An Encyclopedia of Macroeconomics, Description & scroll to Contents-preview links. Snowdon, Brian; Vane, Howard R. (2005). Modern Macroeconomics: Its Origins, Development And Current State. Edward Elgar Publishing. ISBN 1845421809. Sørensen, Peter Birch; Whitta-Jacobsen, Hans Jørgen (2022). Introducing advanced macroeconomics: growth and business cycles (Third ed.). Oxford, United Kingdom New York, NY: Oxford University Press. ISBN 978-0-19-885049-6. Warsh, David (2006). Knowledge and the Wealth of Nations. Norton. ISBN 978-0-393-05996-0. == Further reading == Macroeconomic Modeling: The Cowles Commission Approach by Ray C. Fair
Wikipedia/Macroeconomic_theory
The Debye–Hückel theory was proposed by Peter Debye and Erich Hückel as a theoretical explanation for departures from ideality in solutions of electrolytes and plasmas. It is a linearized Poisson–Boltzmann model, which assumes an extremely simplified model of the electrolyte solution but nevertheless gives accurate predictions of mean activity coefficients for ions in dilute solution. The Debye–Hückel equation provides a starting point for modern treatments of non-ideality of electrolyte solutions.
== Overview ==
In the chemistry of electrolyte solutions, an ideal solution is a solution whose colligative properties are proportional to the concentration of the solute. Real solutions may show departures from this kind of ideality. In order to accommodate these effects in the thermodynamics of solutions, the concept of activity was introduced: the properties are then proportional to the activities of the ions. Activity a is proportional to concentration c, with the proportionality constant known as an activity coefficient {\displaystyle \gamma }: {\displaystyle a=\gamma c/c^{0}.} In an ideal electrolyte solution the activity coefficients for all the ions are equal to one. Ideality of an electrolyte solution can be achieved only in very dilute solutions. Non-ideality of more concentrated solutions arises principally (but not exclusively) because ions of opposite charge attract each other due to electrostatic forces, while ions of the same charge repel each other. In consequence, ions are not randomly distributed throughout the solution, as they would be in an ideal solution. Activity coefficients of single ions cannot be measured experimentally because an electrolyte solution must contain both positively charged ions and negatively charged ions. Instead, a mean activity coefficient {\displaystyle \gamma _{\pm }} is defined. For example, with the electrolyte NaCl, {\displaystyle \gamma _{\pm }={\left(\gamma _{{\ce {Na+}}}\gamma _{{\ce {Cl-}}}\right)}^{1/2}.} In general, the mean activity coefficient of a fully dissociated electrolyte of formula AnBm is given by {\displaystyle \gamma _{\pm }={\left({\gamma _{A}}^{n}{\gamma _{B}}^{m}\right)}^{1/(n+m)}.} Activity coefficients are themselves functions of concentration, since the amount of inter-ionic interaction increases as the concentration of the electrolyte increases. Debye and Hückel developed a theory with which single-ion activity coefficients could be calculated. By calculating the mean activity coefficients from them, the theory could be tested against experimental data. It was found to give excellent agreement for "dilute" solutions.
== The model ==
A description of Debye–Hückel theory includes a very detailed discussion of the assumptions and their limitations as well as the mathematical development and applications. In an idealized snapshot of a two-dimensional section of an electrolyte solution, the ions can be pictured as spheres with unit electrical charge, and the solvent as a uniform medium without structure. On average, each ion is surrounded more closely by ions of opposite charge than by ions of like charge. These concepts were developed into a quantitative theory involving ions of charge z1e+ and z2e−, where z can be any integer.
The principal assumption is that departure from ideality is due to electrostatic interactions between ions, mediated by Coulomb's law: the force of interaction between two electric charges, separated by a distance, r in a medium of relative permittivity εr is given by force = z 1 z 2 e 2 4 π ε 0 ε r r 2 {\displaystyle {\text{force}}={\frac {z_{1}z_{2}e^{2}}{4\pi \varepsilon _{0}\varepsilon _{r}r^{2}}}} It is also assumed that The solute is completely dissociated; it is a strong electrolyte. Ions are spherical and are not polarized by the surrounding electric field. Solvation of ions is ignored except insofar as it determines the effective sizes of the ions. The solvent plays no role other than providing a medium of constant relative permittivity (dielectric constant). There is no electrostriction. Individual ions surrounding a "central" ion can be represented by a statistically averaged cloud of continuous charge density, with a minimum distance of closest approach. The last assumption means that each cation is surrounded by a spherically symmetric cloud of other ions. The cloud has a net negative charge. Similarly each anion is surrounded by a cloud with net positive charge. == Mathematical development == The deviation from ideality is taken to be a function of the potential energy resulting from the electrostatic interactions between ions and their surrounding clouds. To calculate this energy two steps are needed. The first step is to specify the electrostatic potential for ion j by means of Poisson's equation ∇ 2 ψ j ( r ) = − 1 ε 0 ε r ρ j ( r ) {\displaystyle \nabla ^{2}\psi _{j}(r)=-{\frac {1}{\varepsilon _{0}\varepsilon _{r}}}\rho _{j}(r)} ψ(r) is the total potential at a distance, r, from the central ion and ρ(r) is the averaged charge density of the surrounding cloud at that distance. To apply this formula it is essential that the cloud has spherical symmetry, that is, the charge density is a function only of distance from the central ion as this allows the Poisson equation to be cast in terms of spherical coordinates with no angular dependence. The second step is to calculate the charge density by means of a Boltzmann distribution. n i ′ = n i exp ⁡ ( − z i e ψ j ( r ) k B T ) {\displaystyle n'_{i}=n_{i}\exp \left({\frac {-z_{i}e\psi _{j}(r)}{k_{\text{B}}T}}\right)} where kB is Boltzmann constant and T is the temperature. This distribution also depends on the potential ψ(r) and this introduces a serious difficulty in terms of the superposition principle. Nevertheless, the two equations can be combined to produce the Poisson–Boltzmann equation. ∇ 2 ψ j ( r ) = − 1 ε 0 ε r ∑ i [ n i ( z i e ) exp ⁡ ( − z i e ψ j ( r ) k B T ) ] {\displaystyle \nabla ^{2}\psi _{j}(r)=-{\frac {1}{\varepsilon _{0}\varepsilon _{r}}}\sum _{i}\left[n_{i}(z_{i}e)\exp \left({\frac {-z_{i}e\psi _{j}(r)}{k_{\text{B}}T}}\right)\right]} Solution of this equation is far from straightforward. Debye and Hückel expanded the exponential as a truncated Taylor series to first order. The zeroth order term vanishes because the solution is on average electrically neutral (so that ∑ n i z i = 0 {\textstyle \sum n_{i}z_{i}=0} ), which leaves us with only the first order term. The result has the form of the Helmholtz equation ∇ 2 ψ j ( r ) = κ 2 ψ j ( r ) with κ 2 = e 2 ε 0 ε r k B T ∑ i n i z i 2 , {\displaystyle \nabla ^{2}\psi _{j}(r)=\kappa ^{2}\psi _{j}(r)\qquad {\text{with}}\qquad \kappa ^{2}={\frac {e^{2}}{\varepsilon _{0}\varepsilon _{r}k_{\text{B}}T}}\sum _{i}n_{i}z_{i}^{2},} which has an analytical solution. 
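The screening parameter κ defined above is easy to evaluate numerically. The sketch below is a minimal illustration rather than part of the original derivation; it assumes an aqueous 1:1 electrolyte at 25 °C with relative permittivity εr ≈ 78.4, and converts a molar concentration into the number densities ni that appear in the formula for κ²:

from math import sqrt

e    = 1.602176634e-19     # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
kB   = 1.380649e-23        # Boltzmann constant, J/K
NA   = 6.02214076e23       # Avogadro constant, 1/mol

def debye_length(c_molar, ions=((1, 1), (-1, 1)), eps_r=78.4, T=298.15):
    # kappa^2 = e^2 / (eps0 * eps_r * kB * T) * sum_i n_i z_i^2, with n_i in 1/m^3
    # ions lists (charge number z, stoichiometric number nu) for each ionic species
    sum_nz2 = sum(nu * c_molar * 1000.0 * NA * z ** 2 for z, nu in ions)
    kappa2 = e ** 2 * sum_nz2 / (eps0 * eps_r * kB * T)
    return 1.0 / sqrt(kappa2)   # the Debye length 1/kappa, in metres

print(debye_length(0.01))   # ~3.0e-9 m, i.e. about 3 nm for a 0.01 M 1:1 electrolyte

With this value of κ, the linearized (Helmholtz-form) equation above is the equation discussed next.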
This equation applies to electrolytes with equal numbers of ions of each charge. Nonsymmetrical electrolytes require another term with ψ2. For symmetrical electrolytes, this reduces to the modified spherical Bessel equation ( ∂ 2 ∂ r 2 + 2 r ∂ ∂ r − κ 2 ) ψ j = 0 , {\displaystyle \left({\frac {\partial ^{2}}{\partial r^{2}}}+{\frac {2}{r}}{\frac {\partial }{\partial r}}-\kappa ^{2}\right)\psi _{j}=0,} with solutions ψ j ( r ) = A ′ e − κ r r + A ″ e κ r r . {\displaystyle \psi _{j}(r)=A'{\frac {e^{-\kappa r}}{r}}+A''{\frac {e^{\kappa r}}{r}}.} The coefficients A ′ {\displaystyle A'} and A ″ {\displaystyle A''} are fixed by the boundary conditions. As r → ∞ {\displaystyle r\to \infty } , ψ {\displaystyle \psi } must not diverge, so A ″ = 0 {\displaystyle A''=0} . At r = a 0 {\displaystyle r=a_{0}} , which is the distance of the closest approach of ions, the force exerted by the charge should be balanced by the force of other ions, imposing ∂ r ψ j ( a 0 ) = − z j e / ( 4 π ε 0 ε r a 0 2 ) {\displaystyle \partial _{r}\psi _{j}(a_{0})=-z_{j}e/(4\pi \varepsilon _{0}\varepsilon _{r}a_{0}^{2})} , from which A ′ {\displaystyle A'} is found, yielding ψ j ( r ) = z j e 4 π ε 0 ε r e κ a 0 1 + κ a 0 e − κ r r {\displaystyle \psi _{j}(r)={\frac {z_{j}e}{4\pi \varepsilon _{0}\varepsilon _{r}}}{\frac {e^{\kappa a_{0}}}{1+\kappa a_{0}}}{\frac {e^{-\kappa r}}{r}}} The electrostatic potential energy, u j {\displaystyle u_{j}} , of the ion at r = 0 {\displaystyle r=0} is u j = z j e ( ψ j ( a 0 ) − z j e 4 π ε 0 ε r 1 a 0 ) = − z j 2 e 2 4 π ε 0 ε r κ 1 + κ a 0 {\displaystyle u_{j}=z_{j}e\left(\psi _{j}(a_{0})-{\frac {z_{j}e}{4\pi \varepsilon _{0}\varepsilon _{r}}}{\frac {1}{a_{0}}}\right)=-{\frac {z_{j}^{2}e^{2}}{4\pi \varepsilon _{0}\varepsilon _{r}}}{\frac {\kappa }{1+\kappa a_{0}}}} This is the potential energy of a single ion in a solution. The multiple-charge generalization from electrostatics gives an expression for the potential energy of the entire solution. The mean activity coefficient is given by the logarithm of this quantity as follows log 10 ⁡ γ ± = − A z j 2 I 1 + B a 0 I {\displaystyle \log _{10}\gamma _{\pm }=-Az_{j}^{2}{\frac {\sqrt {I}}{1+Ba_{0}{\sqrt {I}}}}} A = e 2 B 2.303 × 8 π ε 0 ε r k B T {\displaystyle A={\frac {e^{2}B}{2.303\times 8\pi \varepsilon _{0}\varepsilon _{r}k_{\text{B}}T}}} B = ( 2 e 2 N ε 0 ε r k B T ) 1 / 2 {\displaystyle B=\left({\frac {2e^{2}N}{\varepsilon _{0}\varepsilon _{r}k_{\text{B}}T}}\right)^{1/2}} where I is the ionic strength and a0 is a parameter that represents the distance of closest approach of ions. For aqueous solutions at 25 °C A = 0.51 mol−1/2dm3/2 and B = 3.29 nm−1mol−1/2dm3/2 A {\displaystyle A} is a constant that depends on temperature. If I {\displaystyle I} is expressed in terms of molality, instead of molarity (as in the equation above and in the rest of this article), then an experimental value for A {\displaystyle A} of water is 1.172 mol − 1 / 2 kg 1 / 2 {\displaystyle 1.172{\text{ mol}}^{-1/2}{\text{kg}}^{1/2}} at 25 °C. It is common to use a base-10 logarithm, in which case we factor ln 10, so A is 0.509 mol − 1 / 2 kg 1 / 2 {\displaystyle 0.509{\text{ mol}}^{-1/2}{\text{kg}}^{1/2}} . The multiplier 103 before I / 2 {\displaystyle I/2} in the equation is for the case when the dimensions of I {\displaystyle I} are mol / dm 3 {\displaystyle {\text{mol}}/{\text{dm}}^{3}} . 
When the dimensions of I {\displaystyle I} are mole / m 3 {\displaystyle {\text{mole}}/{\text{m}}^{3}} , the multiplier 103 must be dropped from the equation : section 2.5.2  The most significant aspect of this result is the prediction that the mean activity coefficient is a function of ionic strength rather than the electrolyte concentration. For very low values of the ionic strength the value of the denominator in the expression above becomes nearly equal to one. In this situation the mean activity coefficient is proportional to the square root of the ionic strength. This is known as the Debye–Hückel limiting law. In this limit the equation is given as follows: section 2.5.2  ln ⁡ ( γ i ) = − z i 2 q 2 κ 8 π ε r ε 0 k B T = − z i 2 q 3 N A 1 / 2 4 π ( ε r ε 0 k B T ) 3 / 2 10 3 I 2 = − A z i 2 I , {\displaystyle \ln(\gamma _{i})=-{\frac {z_{i}^{2}q^{2}\kappa }{8\pi \varepsilon _{r}\varepsilon _{0}k_{\text{B}}T}}=-{\frac {z_{i}^{2}q^{3}N_{\text{A}}^{1/2}}{4\pi (\varepsilon _{r}\varepsilon _{0}k_{\text{B}}T)^{3/2}}}{\sqrt {10^{3}{\frac {I}{2}}}}=-Az_{i}^{2}{\sqrt {I}},} The excess osmotic pressure obtained from Debye–Hückel theory is in cgs units: P ex = − k B T κ cgs 3 24 π = − k B T 24 π ( 4 π ∑ j c j q j ε 0 ε r k B T ) 3 / 2 . {\displaystyle P^{\text{ex}}=-{\frac {k_{\text{B}}T\kappa _{\text{cgs}}^{3}}{24\pi }}=-{\frac {k_{\text{B}}T}{24\pi }}{\left({\frac {4\pi \sum _{j}c_{j}q_{j}}{\varepsilon _{0}\varepsilon _{r}k_{\text{B}}T}}\right)}^{3/2}.} Therefore, the total pressure is the sum of the excess osmotic pressure and the ideal pressure P id = k B T ∑ i c i {\textstyle P^{\text{id}}=k_{\text{B}}T\sum _{i}c_{i}} . The osmotic coefficient is then given by ϕ = P id + P ex P id = 1 + P ex P id . {\displaystyle \phi ={\frac {P^{\text{id}}+P^{\text{ex}}}{P^{\text{id}}}}=1+{\frac {P^{\text{ex}}}{P^{\text{id}}}}.} == Nondimensionalization == Taking the differential equation from earlier (as stated above, the equation only holds for low concentrations): ∂ 2 ∂ r 2 φ ( r ) + 2 r ∂ ∂ r φ ( r ) = I q φ ( r ) ε r ε 0 k B T = κ 2 φ ( r ) . {\displaystyle {\frac {\partial ^{2}}{\partial r^{2}}}\varphi (r)+{\frac {2}{r}}{\frac {\partial }{\partial r}}\varphi (r)={\frac {Iq\varphi (r)}{\varepsilon _{r}\varepsilon _{0}k_{\text{B}}T}}=\kappa ^{2}\varphi (r).} Using the Buckingham π theorem on this problem results in the following dimensionless groups: π 1 = q φ ( r ) k B T = Φ ( R ( r ) ) , π 2 = ε r , π 3 = a k B T ε 0 q 2 , π 4 = a 3 I , π 5 = z 0 , π 6 = r a = R ( r ) . {\displaystyle {\begin{aligned}\pi _{1}&={\frac {q\varphi (r)}{k_{\text{B}}T}}=\Phi (R(r)),&\pi _{2}&=\varepsilon _{r},\\[1ex]\pi _{3}&={\frac {ak_{\text{B}}T\varepsilon _{0}}{q^{2}}},&\pi _{4}&=a^{3}I,\\[1ex]\pi _{5}&=z_{0},&\pi _{6}&={\frac {r}{a}}=R(r).\end{aligned}}} Φ {\displaystyle \Phi } is called the reduced scalar electric potential field. R {\displaystyle R} is called the reduced radius. The existing groups may be recombined to form two other dimensionless groups for substitution into the differential equation. The first is what could be called the square of the reduced inverse screening length, ( κ a ) 2 {\displaystyle (\kappa a)^{2}} . The second could be called the reduced central ion charge, Z 0 {\displaystyle Z_{0}} (with a capital Z). Note that, though z 0 {\displaystyle z_{0}} is already dimensionless, without the substitution given below, the differential equation would still be dimensional. 
π 4 π 2 π 3 = a 2 q 2 I ε r ε 0 k B T = ( κ a ) 2 {\displaystyle {\frac {\pi _{4}}{\pi _{2}\pi _{3}}}={\frac {a^{2}q^{2}I}{\varepsilon _{r}\varepsilon _{0}k_{\text{B}}T}}=(\kappa a)^{2}} π 5 π 2 π 3 = z 0 q 2 4 π a ε r ε 0 k B T = Z 0 {\displaystyle {\frac {\pi _{5}}{\pi _{2}\pi _{3}}}={\frac {z_{0}q^{2}}{4\pi a\varepsilon _{r}\varepsilon _{0}k_{\text{B}}T}}=Z_{0}} To obtain the nondimensionalized differential equation and initial conditions, use the π {\displaystyle \pi } groups to eliminate φ ( r ) {\displaystyle \varphi (r)} in favor of Φ ( R ( r ) ) {\displaystyle \Phi (R(r))} , then eliminate R ( r ) {\displaystyle R(r)} in favor of r {\displaystyle r} while carrying out the chain rule and substituting R ′ ( r ) = a {\displaystyle {R^{\prime }}(r)=a} , then eliminate r {\displaystyle r} in favor of R {\displaystyle R} (no chain rule needed), then eliminate I {\displaystyle I} in favor of ( κ a ) 2 {\displaystyle (\kappa a)^{2}} , then eliminate z 0 {\displaystyle z_{0}} in favor of Z 0 {\displaystyle Z_{0}} . The resulting equations are as follows: ∂ Φ ( R ) ∂ R | R = 1 = − Z 0 {\displaystyle \left.{\frac {\partial \Phi (R)}{\partial R}}\right|_{R=1}=-Z_{0}} Φ ( ∞ ) = 0 {\displaystyle \Phi (\infty )=0} ∂ 2 Φ ( R ) ∂ R 2 + 2 R ∂ Φ ( R ) ∂ R = ( κ a ) 2 Φ ( R ) . {\displaystyle {\frac {\partial ^{2}\Phi (R)}{\partial R^{2}}}+{\frac {2}{R}}{\frac {\partial \Phi (R)}{\partial R}}=(\kappa a)^{2}\Phi (R).} For table salt in 0.01 M solution at 25 °C, a typical value of ( κ a ) 2 {\displaystyle (\kappa a)^{2}} is 0.0005636, while a typical value of Z 0 {\displaystyle Z_{0}} is 7.017, highlighting the fact that, in low concentrations, ( κ a ) 2 {\displaystyle (\kappa a)^{2}} is a target for a zero order of magnitude approximation such as perturbation analysis. Unfortunately, because of the boundary condition at infinity, regular perturbation does not work. The same boundary condition prevents us from finding the exact solution to the equations. Singular perturbation may work, however. == Limitations and extensions == This equation for log ⁡ γ ± {\displaystyle \log \gamma _{\pm }} gives satisfactory agreement with experimental measurements for low electrolyte concentrations, typically less than 10−3 mol/L. Deviations from the theory occur at higher concentrations and with electrolytes that produce ions of higher charges, particularly unsymmetrical electrolytes. Essentially these deviations occur because the model is oversimplified, so there is little to be gained making small adjustments to the model. The individual assumptions can be challenged in turn. Complete dissociation. Ion association may take place, particularly with ions of higher charge. This was followed up in detail by Niels Bjerrum. The Bjerrum length is the separation at which the electrostatic interaction between two ions is comparable in magnitude to kBT. Weak electrolytes. A weak electrolyte is one that is not fully dissociated. As such it has a dissociation constant. The dissociation constant can be used to calculate the extent of dissociation and hence, make the necessary correction needed to calculate activity coefficients. Ions are spherical, not point charges and are not polarized. Many ions such as the nitrate ion, NO3−, are not spherical. Polyatomic ions are also polarizable. Role of the solvent. The solvent is not a structureless medium but is made up of molecules. The water molecules in aqueous solution are both dipolar and polarizable. 
Both cations and anions have a strong primary solvation shell and a weaker secondary solvation shell. Ion–solvent interactions are ignored in Debye–Hückel theory. Moreover, the ionic radius is assumed to be negligible, but at higher concentrations the ionic radius becomes comparable to the radius of the ionic atmosphere. Most extensions to Debye–Hückel theory are empirical in nature. They usually allow the Debye–Hückel equation to be followed at low concentration and add further terms in some power of the ionic strength to fit experimental observations. The main extensions are the Davies equation, Pitzer equations and specific ion interaction theory. One such extended Debye–Hückel equation is given by: − log 10 ⁡ ( γ ) = A | z + z − | I 1 + B a I {\displaystyle -\log _{10}(\gamma )={\frac {A|z_{+}z_{-}|{\sqrt {I}}}{1+Ba{\sqrt {I}}}}} where γ {\displaystyle \gamma } is the activity coefficient (expressed here through its common logarithm), z {\displaystyle z} is the integer charge of the ion (1 for H+, 2 for Mg2+ etc.), I {\displaystyle I} is the ionic strength of the aqueous solution, and a {\displaystyle a} is the size or effective diameter of the ion in ångströms. The effective hydrated radius of the ion, a, is the radius of the ion and its closely bound water molecules. Large ions and less highly charged ions bind water less tightly and have smaller hydrated radii than smaller, more highly charged ions. Typical values are 3 Å for ions such as H+, Cl−, CN−, and HCOO−. The effective diameter for the hydronium ion is 9 Å. A {\displaystyle A} and B {\displaystyle B} are constants with values of 0.5085 and 0.3281, respectively, at 25 °C in water [1]. The extended Debye–Hückel equation provides accurate results for ionic strengths μ ≤ 0.1. For solutions of greater ionic strength, the Pitzer equations should be used. In these solutions the activity coefficient may actually increase with ionic strength. The Debye–Hückel equation cannot be used in solutions of surfactants, where the presence of micelles influences the electrochemical properties of the system (even a rough estimate overestimates γ by about 50%). === Electrolyte mixtures === The theory can also be applied to dilute solutions of mixed electrolytes. Freezing point depression measurements have been used for this purpose. == Conductivity == The treatment given so far is for a system not subject to an external electric field. When conductivity is measured the system is subject to an oscillating external field due to the application of an AC voltage to electrodes immersed in the solution. Debye and Hückel modified their theory in 1926 and their theory was further modified by Lars Onsager in 1927. All the postulates of the original theory were retained. In addition it was assumed that the electric field causes the charge cloud to be distorted away from spherical symmetry. After taking this into account, together with the specific requirements of moving ions, such as viscosity and electrophoretic effects, Onsager was able to derive a theoretical expression to account for the empirical relation known as Kohlrausch's law for the molar conductivity, Λm. Λ m = Λ m 0 − K c {\displaystyle \Lambda _{m}=\Lambda _{m}^{0}-K{\sqrt {c}}} Λ m 0 {\displaystyle \Lambda _{m}^{0}} is known as the limiting molar conductivity, K is an empirical constant and c is the electrolyte concentration. Limiting here means "at the limit of infinite dilution".
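Returning briefly to the activity coefficients, the extended Debye–Hückel expression quoted earlier in this section is straightforward to evaluate. The sketch below is a minimal illustration rather than part of the theory; it uses the 25 °C aqueous values A = 0.5085 and B = 0.3281 quoted above (ion size a in ångströms, ionic strength I in mol/L), applies the same A to the limiting law for simplicity, and the charge, ion size and ionic strength in the example are illustrative:

from math import sqrt

def log10_gamma_extended(z_plus, z_minus, I, a, A=0.5085, B=0.3281):
    # extended Debye-Hueckel equation: log10(gamma) = -A*|z+ z-|*sqrt(I) / (1 + B*a*sqrt(I))
    return -A * abs(z_plus * z_minus) * sqrt(I) / (1.0 + B * a * sqrt(I))

def log10_gamma_limiting(z_plus, z_minus, I, A=0.5085):
    # Debye-Hueckel limiting law, valid only at very low ionic strength
    return -A * abs(z_plus * z_minus) * sqrt(I)

# 1:1 electrolyte, I = 0.01 mol/L, ion-size parameter a = 4 angstroms (illustrative)
print(10 ** log10_gamma_extended(1, -1, 0.01, 4.0))   # ~0.90
print(10 ** log10_gamma_limiting(1, -1, 0.01))        # ~0.89

The discussion now returns to the conductivity of the solution.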
Onsager's expression is Λ m = Λ m 0 − ( A + B Λ m 0 ) c {\displaystyle \Lambda _{m}=\Lambda _{m}^{0}-(A+B\Lambda _{m}^{0}){\sqrt {c}}} where A and B are constants that depend only on known quantities such as temperature, the charges on the ions and the dielectric constant and viscosity of the solvent. This is known as the Debye–Hückel–Onsager equation. However, this equation only applies to very dilute solutions and has been largely superseded by other equations due to Fuoss and Onsager, 1932 and 1957 and later. == Summary of Debye and Hückel's first article on the theory of dilute electrolytes == The English title of the article is "On the Theory of Electrolytes. I. Freezing Point Depression and Related Phenomena". It was originally published in 1923 in volume 24 of a German-language journal Physikalische Zeitschrift. An English translation: 217–63  of the article is included in a book of collected papers presented to Debye by "his pupils, friends, and the publishers on the occasion of his seventieth birthday on March 24, 1954".: xv  Another English translation was completed in 2019. The article deals with the calculation of properties of electrolyte solutions that are under the influence of ion-induced electric fields, thus it deals with electrostatics. In the same year they first published this article, Debye and Hückel, hereinafter D&H, also released an article that covered their initial characterization of solutions under the influence of electric fields called "On the Theory of Electrolytes. II. Limiting Law for Electric Conductivity", but that subsequent article is not (yet) covered here. In the following summary (as yet incomplete and unchecked), modern notation and terminology are used, from both chemistry and mathematics, in order to prevent confusion. Also, with a few exceptions to improve clarity, the subsections in this summary are (very) condensed versions of the same subsections of the original article. === Introduction === D&H note that the Guldberg–Waage formula for electrolyte species in chemical reaction equilibrium in classical form is: 221  ∏ i = 1 s x i ν i = K , {\displaystyle \prod _{i=1}^{s}x_{i}^{\nu _{i}}=K,} where ∏ {\textstyle \prod } is a notation for multiplication, i {\displaystyle i} is a dummy variable indicating the species, s {\displaystyle s} is the number of species participating in the reaction, x i {\displaystyle x_{i}} is the mole fraction of species i {\displaystyle i} , ν i {\displaystyle \nu _{i}} is the stoichiometric coefficient of species i {\displaystyle i} , K is the equilibrium constant. D&H say that, due to the "mutual electrostatic forces between the ions", it is necessary to modify the Guldberg–Waage equation by replacing K {\displaystyle K} with γ K {\displaystyle \gamma K} , where γ {\displaystyle \gamma } is an overall activity coefficient, not a "special" activity coefficient (a separate activity coefficient associated with each species)—which is what is used in modern chemistry as of 2007. The relationship between γ {\displaystyle \gamma } and the special activity coefficients γ i {\displaystyle \gamma _{i}} is: 248  log ⁡ ( γ ) = ∑ i = 1 s ν i log ⁡ ( γ i ) . {\displaystyle \log(\gamma )=\sum _{i=1}^{s}\nu _{i}\log(\gamma _{i}).} === Fundamentals === D&H use the Helmholtz and Gibbs free entropies Φ {\displaystyle \Phi } and Ξ {\displaystyle \Xi } to express the effect of electrostatic forces in an electrolyte on its thermodynamic state. 
Specifically, they split most of the thermodynamic potentials into classical and electrostatic terms: Φ = S − U T = − A T , {\displaystyle \Phi =S-{\frac {U}{T}}=-{\frac {A}{T}},} where Φ {\displaystyle \Phi } is Helmholtz free entropy, S {\displaystyle S} is entropy, U {\displaystyle U} is internal energy, T {\displaystyle T} is temperature, A {\displaystyle A} is Helmholtz free energy. D&H give the total differential of Φ {\displaystyle \Phi } as: 222  d Φ = P T d V + U T 2 d T , {\displaystyle d\Phi ={\frac {P}{T}}\,dV+{\frac {U}{T^{2}}}\,dT,} where P {\displaystyle P} is pressure, V {\displaystyle V} is volume. By the definition of the total differential, this means that P T = ∂ Φ ∂ V , {\displaystyle {\frac {P}{T}}={\frac {\partial \Phi }{\partial V}},} U T 2 = ∂ Φ ∂ T , {\displaystyle {\frac {U}{T^{2}}}={\frac {\partial \Phi }{\partial T}},} which are useful further on. As stated previously, the internal energy is divided into two parts:: 222  U = U k + U e {\displaystyle U=U_{k}+U_{e}} where k {\displaystyle k} indicates the classical part, e {\displaystyle e} indicates the electric part. Similarly, the Helmholtz free entropy is also divided into two parts: Φ = Φ k + Φ e . {\displaystyle \Phi =\Phi _{k}+\Phi _{e}.} D&H state, without giving the logic, that: 222  Φ e = ∫ U e T 2 d T . {\displaystyle \Phi _{e}=\int {\frac {U_{e}}{T^{2}}}\,dT.} It would seem that, without some justification, Φ e = ∫ P e T d V + ∫ U e T 2 d T . {\displaystyle \Phi _{e}=\int {\frac {P_{e}}{T}}\,dV+\int {\frac {U_{e}}{T^{2}}}\,dT.} Without mentioning it specifically, D&H later give what might be the required (above) justification while arguing that Φ e = Ξ e {\displaystyle \Phi _{e}=\Xi _{e}} , an assumption that the solvent is incompressible. The definition of the Gibbs free entropy Ξ {\displaystyle \Xi } is: 222–3  Ξ = S − U + P V T = Φ − P V T = − G T , {\displaystyle \Xi =S-{\frac {U+PV}{T}}=\Phi -{\frac {PV}{T}}=-{\frac {G}{T}},} where G {\displaystyle G} is Gibbs free energy. D&H give the total differential of Ξ {\displaystyle \Xi } as: 222  d Ξ = − V T d P + U + P V T 2 d T . {\displaystyle d\Xi =-{\frac {V}{T}}\,dP+{\frac {U+PV}{T^{2}}}\,dT.} At this point D&H note that, for water containing 1 mole per liter of potassium chloride (nominal pressure and temperature aren't given), the electric pressure P e {\displaystyle P_{e}} amounts to 20 atmospheres. Furthermore, they note that this level of pressure gives a relative volume change of 0.001. Therefore, they neglect change in volume of water due to electric pressure, writing: 223  Ξ = Ξ k + Ξ e , {\displaystyle \Xi =\Xi _{k}+\Xi _{e},} and put Ξ e = Φ e = ∫ U e T 2 d T . {\displaystyle \Xi _{e}=\Phi _{e}=\int {\frac {U_{e}}{T^{2}}}\,dT.} D&H say that, according to Planck, the classical part of the Gibbs free entropy is: 223  Ξ k = ∑ i = 0 s N i ( ξ i − k B ln ⁡ ( x i ) ) , {\displaystyle \Xi _{k}=\sum _{i=0}^{s}N_{i}(\xi _{i}-k_{\text{B}}\ln(x_{i})),} where i {\displaystyle i} is a species, s {\displaystyle s} is the number of different particle types in solution, N i {\displaystyle N_{i}} is the number of particles of species i, ξ i {\displaystyle \xi _{i}} is the particle specific Gibbs free entropy of species i, k B {\displaystyle k_{\text{B}}} is the Boltzmann constant, x i {\displaystyle x_{i}} is the mole fraction of species i. Species zero is the solvent. 
The definition of ξ i {\displaystyle \xi _{i}} is as follows, where lower-case letters indicate the particle specific versions of the corresponding extensive properties:: 223  ξ i = s i − u i + P v i T . {\displaystyle \xi _{i}=s_{i}-{\frac {u_{i}+Pv_{i}}{T}}.} D&H don't say so, but the functional form for Ξ k {\displaystyle \Xi _{k}} may be derived from the functional dependence of the chemical potential of a component of an ideal mixture upon its mole fraction. D&H note that the internal energy U {\displaystyle U} of a solution is lowered by the electrical interaction of its ions, but that this effect can't be determined by using the crystallographic approximation for distances between dissimilar atoms (the cube root of the ratio of total volume to the number of particles in the volume). This is because there is more thermal motion in a liquid solution than in a crystal. The thermal motion tends to smear out the natural lattice that would otherwise be constructed by the ions. Instead, D&H introduce the concept of an ionic atmosphere or cloud. Like the crystal lattice, each ion still attempts to surround itself with oppositely charged ions, but in a more free-form manner; at small distances away from positive ions, one is more likely to find negative ions and vice versa.: 225  === The potential energy of an arbitrary ion solution === Electroneutrality of a solution requires that: 233  ∑ i = 1 s N i z i = 0 , {\displaystyle \sum _{i=1}^{s}N_{i}z_{i}=0,} where N i {\displaystyle N_{i}} is the total number of ions of species i in the solution, z i {\displaystyle z_{i}} is the charge number of species i. To bring an ion of species i, initially far away, to a point P {\displaystyle P} within the ion cloud requires interaction energy in the amount of z i q φ {\displaystyle z_{i}q\varphi } , where q {\displaystyle q} is the elementary charge, and φ {\displaystyle \varphi } is the value of the scalar electric potential field at P {\displaystyle P} . If electric forces were the only factor in play, the minimal-energy configuration of all the ions would be achieved in a close-packed lattice configuration. However, the ions are in thermal equilibrium with each other and are relatively free to move. Thus they obey Boltzmann statistics and form a Boltzmann distribution. All species' number densities n i {\displaystyle n_{i}} are altered from their bulk (overall average) values n i 0 {\displaystyle n_{i}^{0}} by the corresponding Boltzmann factor e − z i q φ k B T {\displaystyle e^{-{\frac {z_{i}q\varphi }{k_{\text{B}}T}}}} , where k B {\displaystyle k_{\text{B}}} is the Boltzmann constant, and T {\displaystyle T} is the temperature. Thus at every point in the cloud: 233  n i = N i V e − z i q φ k B T = n i 0 e − z i q φ k B T . {\displaystyle n_{i}={\frac {N_{i}}{V}}e^{-{\frac {z_{i}q\varphi }{k_{\text{B}}T}}}=n_{i}^{0}e^{-{\frac {z_{i}q\varphi }{k_{\text{B}}T}}}.} Note that in the infinite temperature limit, all ions are distributed uniformly, with no regard for their electrostatic interactions.: 227  The charge density is related to the number density:: 233  ρ = ∑ i z i q n i = ∑ i z i q n i 0 e − z i q φ k B T . {\displaystyle \rho =\sum _{i}z_{i}qn_{i}=\sum _{i}z_{i}qn_{i}^{0}e^{-{\frac {z_{i}q\varphi }{k_{\text{B}}T}}}.} When combining this result for the charge density with the Poisson equation from electrostatics, a form of the Poisson–Boltzmann equation results:: 233  ∇ 2 φ = − ρ ε r ε 0 = − ∑ i z i q n i 0 ε r ε 0 e − z i q φ k B T . 
{\displaystyle \nabla ^{2}\varphi =-{\frac {\rho }{\varepsilon _{r}\varepsilon _{0}}}=-\sum _{i}{\frac {z_{i}qn_{i}^{0}}{\varepsilon _{r}\varepsilon _{0}}}e^{-{\frac {z_{i}q\varphi }{k_{\text{B}}T}}}.} This equation is difficult to solve and does not follow the principle of linear superposition for the relationship between the number of charges and the strength of the potential field. It has been solved analytically by the Swedish mathematician Thomas Hakon Gronwall and his collaborators, the physical chemists V. K. La Mer and Karl Sandved, in a 1928 article in Physikalische Zeitschrift dealing with extensions to Debye–Hückel theory. However, for sufficiently low concentrations of ions, a first-order Taylor series expansion approximation for the exponential function may be used ( e x ≈ 1 + x {\displaystyle e^{x}\approx 1+x} for 0 < x ≪ 1 {\displaystyle 0<x\ll 1} ) to create a linear differential equation.: Section 2.4.2  D&H say that this approximation holds at large distances between ions,: 227  which is the same as saying that the concentration is low. Lastly, they claim without proof that the addition of more terms in the expansion has little effect on the final solution.: 227  Thus − ∑ i z i q n i 0 ε r ε 0 e − z i q φ k B T ≈ − ∑ i z i q n i 0 ε r ε 0 ( 1 − z i q φ k B T ) = − ( ∑ i z i q n i 0 ε r ε 0 − ∑ i z i 2 q 2 n i 0 φ ε r ε 0 k B T ) . {\displaystyle -\sum _{i}{\frac {z_{i}qn_{i}^{0}}{\varepsilon _{r}\varepsilon _{0}}}e^{-{\frac {z_{i}q\varphi }{k_{\text{B}}T}}}\approx -\sum _{i}{\frac {z_{i}qn_{i}^{0}}{\varepsilon _{r}\varepsilon _{0}}}\left(1-{\frac {z_{i}q\varphi }{k_{\text{B}}T}}\right)=-\left(\sum _{i}{\frac {z_{i}qn_{i}^{0}}{\varepsilon _{r}\varepsilon _{0}}}-\sum _{i}{\frac {z_{i}^{2}q^{2}n_{i}^{0}\varphi }{\varepsilon _{r}\varepsilon _{0}k_{\text{B}}T}}\right).} The Poisson–Boltzmann equation is transformed to: 233  ∇ 2 φ = ∑ i z i 2 q 2 n i 0 φ ε r ε 0 k B T , {\displaystyle \nabla ^{2}\varphi =\sum _{i}{\frac {z_{i}^{2}q^{2}n_{i}^{0}\varphi }{\varepsilon _{r}\varepsilon _{0}k_{\text{B}}T}},} because the first summation is zero due to electroneutrality.: 234  Factor out the scalar potential and assign the leftovers, which are constant, to κ 2 {\displaystyle \kappa ^{2}} . Also, let I {\displaystyle I} be the ionic strength of the solution:: 234  κ 2 = ∑ i z i 2 q 2 n i 0 ε r ε 0 k B T = 2 I q 2 ε r ε 0 k B T , {\displaystyle \kappa ^{2}=\sum _{i}{\frac {z_{i}^{2}q^{2}n_{i}^{0}}{\varepsilon _{r}\varepsilon _{0}k_{\text{B}}T}}={\frac {2Iq^{2}}{\varepsilon _{r}\varepsilon _{0}k_{\text{B}}T}},} I = 1 2 ∑ i z i 2 n i 0 . {\displaystyle I={\frac {1}{2}}\sum _{i}z_{i}^{2}n_{i}^{0}.} So, the fundamental equation is reduced to a form of the Helmholtz equation: ∇ 2 φ = κ 2 φ . {\displaystyle \nabla ^{2}\varphi =\kappa ^{2}\varphi .} Today, κ − 1 {\displaystyle \kappa ^{-1}} is called the Debye screening length. D&H recognize the importance of the parameter in their article and characterize it as a measure of the thickness of the ion atmosphere, which is an electrical double layer of the Gouy–Chapman type.: 229  The equation may be expressed in spherical coordinates by taking r = 0 {\displaystyle r=0} at some arbitrary ion:: 229  ∇ 2 φ = 1 r 2 ∂ ∂ r ( r 2 ∂ φ ( r ) ∂ r ) = ∂ 2 φ ( r ) ∂ r 2 + 2 r ∂ φ ( r ) ∂ r = κ 2 φ ( r ) .
{\displaystyle \nabla ^{2}\varphi ={\frac {1}{r^{2}}}{\frac {\partial }{\partial r}}\left(r^{2}{\frac {\partial \varphi (r)}{\partial r}}\right)={\frac {\partial ^{2}\varphi (r)}{\partial r^{2}}}+{\frac {2}{r}}{\frac {\partial \varphi (r)}{\partial r}}=\kappa ^{2}\varphi (r).} The equation has the following general solution (keep in mind that κ {\displaystyle \kappa } is a positive constant):: 229  φ ( r ) = A e − κ 2 r r + A ′ e κ 2 r 2 r κ 2 = A e − κ r r + A ″ e κ r r = A e − κ r r , {\displaystyle \varphi (r)=A{\frac {e^{-{\sqrt {\kappa ^{2}}}r}}{r}}+A'{\frac {e^{{\sqrt {\kappa ^{2}}}r}}{2r{\sqrt {\kappa ^{2}}}}}=A{\frac {e^{-\kappa r}}{r}}+A''{\frac {e^{\kappa r}}{r}}=A{\frac {e^{-\kappa r}}{r}},} where A {\displaystyle A} , A ′ {\displaystyle A'} , and A ″ {\displaystyle A''} are undetermined constants The electric potential is zero at infinity by definition, so A ″ {\displaystyle A''} must be zero.: 229  In the next step, D&H assume that there is a certain radius a i {\displaystyle a_{i}} , beyond which no ions in the atmosphere may approach the (charge) center of the singled out ion. This radius may be due to the physical size of the ion itself, the sizes of the ions in the cloud, and any water molecules that surround the ions. Mathematically, they treat the singled out ion as a point charge to which one may not approach within the radius a i {\displaystyle a_{i}} .: 231  The potential of a point charge by itself is φ pc ( r ) = 1 4 π ε r ε 0 z i q r . {\displaystyle \varphi _{\text{pc}}(r)={\frac {1}{4\pi \varepsilon _{r}\varepsilon _{0}}}{\frac {z_{i}q}{r}}.} D&H say that the total potential inside the sphere is: 232  φ sp ( r ) = φ pc ( r ) + B i = 1 4 π ε r ε 0 z i q r + B i , {\displaystyle \varphi _{\text{sp}}(r)=\varphi _{\text{pc}}(r)+B_{i}={\frac {1}{4\pi \varepsilon _{r}\varepsilon _{0}}}{\frac {z_{i}q}{r}}+B_{i},} where B i {\displaystyle B_{i}} is a constant that represents the potential added by the ionic atmosphere. No justification for B i {\displaystyle B_{i}} being a constant is given. However, one can see that this is the case by considering that any spherical static charge distribution is subject to the mathematics of the shell theorem. The shell theorem says that no force is exerted on charged particles inside a sphere (of arbitrary charge). Since the ion atmosphere is assumed to be (time-averaged) spherically symmetric, with charge varying as a function of radius r {\displaystyle r} , it may be represented as an infinite series of concentric charge shells. Therefore, inside the radius a i {\displaystyle a_{i}} , the ion atmosphere exerts no force. If the force is zero, then the potential is a constant (by definition). In a combination of the continuously distributed model which gave the Poisson–Boltzmann equation and the model of the point charge, it is assumed that at the radius a i {\displaystyle a_{i}} , there is a continuity of φ ( r ) {\displaystyle \varphi (r)} and its first derivative. 
Thus: 232  φ ( a i ) = A i e − κ a i a i = 1 4 π ε r ε 0 z i q a i + B i = φ sp ( a i ) , {\displaystyle \varphi (a_{i})=A_{i}{\frac {e^{-\kappa a_{i}}}{a_{i}}}={\frac {1}{4\pi \varepsilon _{r}\varepsilon _{0}}}{\frac {z_{i}q}{a_{i}}}+B_{i}=\varphi _{\text{sp}}(a_{i}),} φ ′ ( a i ) = − A i e − κ a i ( 1 + κ a i ) a i 2 = − 1 4 π ε r ε 0 z i q a i 2 = φ sp ′ ( a i ) , {\displaystyle \varphi '(a_{i})=-{\frac {A_{i}e^{-\kappa a_{i}}(1+\kappa a_{i})}{a_{i}^{2}}}=-{\frac {1}{4\pi \varepsilon _{r}\varepsilon _{0}}}{\frac {z_{i}q}{a_{i}^{2}}}=\varphi _{\text{sp}}'(a_{i}),} A i = z i q 4 π ε r ε 0 e κ a i 1 + κ a i , {\displaystyle A_{i}={\frac {z_{i}q}{4\pi \varepsilon _{r}\varepsilon _{0}}}{\frac {e^{\kappa a_{i}}}{1+\kappa a_{i}}},} B i = − z i q κ 4 π ε r ε 0 1 1 + κ a i . {\displaystyle B_{i}=-{\frac {z_{i}q\kappa }{4\pi \varepsilon _{r}\varepsilon _{0}}}{\frac {1}{1+\kappa a_{i}}}.} By the definition of electric potential energy, the potential energy associated with the singled out ion in the ion atmosphere is: 230, 232  u i = z i q B i = − z i 2 q 2 κ 4 π ε r ε 0 1 1 + κ a i . {\displaystyle u_{i}=z_{i}qB_{i}=-{\frac {z_{i}^{2}q^{2}\kappa }{4\pi \varepsilon _{r}\varepsilon _{0}}}{\frac {1}{1+\kappa a_{i}}}.} Notice that this only requires knowledge of the charge of the singled out ion and the potential of all the other ions. To calculate the potential energy of the entire electrolyte solution, one must use the multiple-charge generalization for electric potential energy:: 230, 232  U e = 1 2 ∑ i = 1 s N i u i = − ∑ i = 1 s N i z i 2 2 q 2 κ 4 π ε r ε 0 1 1 + κ a i . {\displaystyle U_{e}={\frac {1}{2}}\sum _{i=1}^{s}N_{i}u_{i}=-\sum _{i=1}^{s}{\frac {N_{i}z_{i}^{2}}{2}}{\frac {q^{2}\kappa }{4\pi \varepsilon _{r}\varepsilon _{0}}}{\frac {1}{1+\kappa a_{i}}}.} === The additional electric term to the thermodynamic potential === == Experimental verification of the theory == To verify the validity of the Debye–Hückel theory, activity coefficients have been measured in many experimental ways; the difficulty is that the measurements must be made at very high dilution. Typical examples are measurements of vapour pressure, freezing point and osmotic pressure (indirect methods) and measurement of electric potential in cells (direct method). At high dilution, good results have been obtained using liquid membrane cells: aqueous media as dilute as 10−4 M can be investigated, and it has been found that for 1:1 electrolytes (such as NaCl or KCl) the Debye–Hückel equation is fully satisfactory, whereas for 2:2 or 3:2 electrolytes negative deviations from the Debye–Hückel limiting law are observed. This behavior appears only in the very dilute region; in more concentrated regions the deviation becomes positive. It is possible that the Debye–Hückel equation fails to predict this behavior because of the linearization of the Poisson–Boltzmann equation, but this is not certain: systematic study began only in the last years of the 20th century, when it first became possible to investigate the 10−4 M region, so new theories may yet emerge. == See also == Electrolyte Chemical activity Ionic strength Poisson-Boltzmann equation Debye length Bjerrum length Bates-Guggenheim Convention Ionic atmosphere Electrical double layer Ion association Davies equation Pitzer equation Specific ion interaction theory == References ==
Wikipedia/Debye–Hückel_theory
Benson group-increment theory (BGIT), group-increment theory, or Benson group additivity uses experimentally determined heats of formation for individual groups of atoms to calculate the entire heat of formation for a molecule under investigation. This can be a quick and convenient way to determine theoretical heats of formation without conducting tedious experiments. The technique was developed by Professor Sidney William Benson of the University of Southern California. It is further described in Heat of formation group additivity. Heats of formation are intimately related to bond-dissociation energies and thus are important in understanding chemical structure and reactivity. Furthermore, although the theory is old, it is still practically useful as one of the best group-contribution methods aside from computational methods such as molecular mechanics. However, the BGIT has its limitations, and thus cannot always predict the precise heat of formation. == Origin == Benson and Buss originated the BGIT in a 1958 article. Within this manuscript, Benson and Buss proposed four approximations: A limiting law for additivity rules. Zero-order approximation. Additivity of atomic properties. First-order approximation. Additivity of bond properties. Second-order approximation. Additivity of group properties. These approximations account for the atomic, bond, and group contributions to heat capacity (Cp), enthalpy (ΔH°), and entropy (ΔS°). The most important of these approximations to the group-increment theory is the second-order approximation, because this approximation "leads to the direct method of writing the properties of a compound as the sum of the properties of its group". The second-order approximation accounts for two atoms or structural elements that are in relative proximity to one another (approximately 3–5 ångstroms as proposed in the article). By using a series of disproportionation reactions with symmetrical and asymmetrical frameworks, Benson and Buss concluded that neighboring atoms within the disproportionation reaction under study are not affected by the change. Symmetrical Cl − CH 2 CH 2 − Cl + H − CH 2 CH 2 − H ⟶ 2 Cl − CH 2 CH 2 − H {\displaystyle {\ce {Cl-CH2CH2-Cl + H-CH2CH2-H -> 2 Cl-CH2CH2-H}}} Asymmetrical H − CH 2 O − H + CH 3 − CH 2 O − CH 3 ⟶ CH 3 − CH 2 O − H + H − CH 2 O − CH 3 {\displaystyle {\ce {H-CH2O-H + CH3-CH2O-CH3 -> CH3-CH2O-H + H-CH2O-CH3}}} In the symmetrical reaction the cleavage between the CH2 groups in both reactants leads to the formation of a single product. Though difficult to see at first, the neighboring carbons are not changed as the rearrangement occurs. In the asymmetrical reaction the hydroxyl–methyl bond is cleaved and rearranged on the ethyl moiety of the methoxyethane. The methoxy and hydroxyl rearrangement displays clear evidence that the neighboring groups are not affected in the disproportionation reaction. The "disproportionation" reactions that Benson and Buss refer to are termed loosely as "radical disproportionation" reactions. From this they defined a "group" as a polyvalent atom together with its ligands. However, they noted that under all approximations ringed systems and unsaturated centers do not follow additivity rules, due to their preservation under disproportionation reactions. A ring must be broken at more than one site to actually undergo a disproportionation reaction. This holds true with double and triple bonds, as they must break multiple times to break their structure.
They concluded that these atoms must be considered as distinct entities. Hence we see Cd and CB groups, which treat these centers as individual entities. Furthermore, this leaves a source of error for ring strain, as discussed under the limitations below. From this Benson and Buss concluded that the ΔfH of any saturated hydrocarbon can be precisely calculated because only two groups are needed: the methylene group [C−(C)2(H)2] and the terminating methyl group [C−(C)(H)3]. Benson later began to compile actual functional groups from the second-order approximation. Anslyn and Dougherty explained in simple terms how the group increments, or Benson increments, are derived from experimental data. By calculating the ΔΔfH between extended saturated alkyl chains (which is just the difference between two ΔfH values), as shown in the table, one can approximate the value of the C−(C)2(H)2 group by averaging the ΔΔfH values. Once this is determined, all one needs to do is take the total value of ΔfH, subtract the ΔfH caused by the C−(C)2(H)2 group(s), and then divide that number by two (due to two C−(C)(H)3 groups), obtaining the value of the C−(C)(H)3 group. From the knowledge of these two groups, Benson moved forward to obtain and list group values derived from numerous experiments from many sources, some of which are displayed below. == Applications == As stated above, BGIT can be used to calculate heats of formation, which are important in understanding the strengths of bonds and entire molecules. Furthermore, the method can be used to quickly estimate whether a reaction is endothermic or exothermic. These values are for gas-phase thermodynamics and typically at 298 K. Benson and coworkers have continued collecting data since their 1958 publication and have since published even more group increments, including strained rings, radicals, halogens, and more. Even though BGIT was introduced in 1958 and would seem to be antiquated in the modern age of advanced computing, the theory still finds practical applications. In a 2006 article, Gronert states: "Aside from molecular mechanics computer packages, the best known additivity scheme is Benson's." Fishtik and Datta also give credit to BGIT: "Despite their empirical character, GA methods continue to remain a powerful and relatively accurate technique for the estimation of thermodynamic properties of the chemical species, even in the era of supercomputers." When calculating the heat of formation, all the atoms in the molecule must be accounted for (hydrogen atoms are not included as specific groups). A simple application is predicting the standard enthalpy of formation of isobutylbenzene. First, it is usually very helpful to start by numbering the atoms. It is then much easier to list the specific groups along with the corresponding values from the table. Each individual atom is accounted for: CB−(H) accounts for one benzene carbon bound to a hydrogen atom, and this is multiplied by five, since there are five CB−(H) carbons. The CB−(C) group accounts for the other benzene carbon, the one attached to the butyl group. The C−(CB)(C)(H)2 group accounts for the carbon linked to the benzene ring on the butyl moiety. The 2' carbon of the butyl group is C−(C)3(H) because it is a tertiary carbon (connecting to three other carbon atoms). The final contribution comes from the CH3 groups connected to the 2' carbon: C−(C)(H)3.
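As a concrete illustration of this bookkeeping, the short sketch below sums group increments for isobutylbenzene. The numerical increments are representative Benson values in kcal/mol taken from standard compilations, not from the table reproduced in the original article, and published tables may differ slightly in the last digit:

# Benson group additivity for isobutylbenzene (values in kcal/mol; representative, not authoritative)
group_values = {
    "CB-(H)":        3.30,    # aromatic carbon bearing a hydrogen
    "CB-(C)":        5.51,    # aromatic carbon bearing the alkyl substituent
    "C-(CB)(C)(H)2": -4.86,   # benzylic CH2
    "C-(C)3(H)":     -1.90,   # tertiary CH
    "C-(C)(H)3":    -10.20,   # terminal methyl
}
counts = {"CB-(H)": 5, "CB-(C)": 1, "C-(CB)(C)(H)2": 1, "C-(C)3(H)": 1, "C-(C)(H)3": 2}

dHf = sum(group_values[g] * n for g, n in counts.items())
print(round(dHf, 2))   # -5.15 kcal/mol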
The group contributions total −5.15 kcal/mol (−21.6 kJ/mol), which is identical to the experimental value, which can be found in the National Institute of Standards and Technology Chemistry WebBook. Another example from the literature is the use of the BGIT to corroborate the experimental enthalpy of formation of benzo[k]fluoranthene. The experimental value was determined to be 296.6 kJ/mol with a standard deviation of 6.4 kJ/mol. This is within the error of the BGIT and is in good agreement with the calculated value. Notice that the carbons at the fused rings are treated differently from regular benzene carbons. Not only can the BGIT be used to confirm experimental values, it can also be used to check theoretical values. The BGIT can also be used to compare the thermodynamics of simplified hydrogenation reactions of an alkene (2-methyl-1-butene) and a ketone (2-butanone). This is a thermodynamic argument, and kinetics are ignored. As determined from the enthalpies of the corresponding molecules, the enthalpy of reaction for 2-methyl-1-butene going to 2-methylbutane is −29.07 kcal/mol, in good agreement with the value calculated from NIST data, −28.31 kcal/mol. For 2-butanone going to 2-butanol, the enthalpy of reaction is −13.75 kcal/mol, again in excellent agreement with −14.02 kcal/mol. While both reactions are thermodynamically favored, the hydrogenation of the alkene is far more exothermic than that of the corresponding ketone. == Limitations == As powerful as it is, BGIT does have several limitations that restrict its usage. === Inaccuracy === There is an overall 2–3 kcal/mol error when using the Benson group-increment theory to calculate ΔfH. The value of each group is estimated on the basis of the average ΔΔfH° described above, and there is a dispersion around that average. The method can also be no more accurate than the underlying experimental data. These are the sources of the error, and there is little that can be done to reduce it. === Group availability === The BGIT is based on empirical heat-of-formation data. Some groups are too hard to measure, so not all possible groups are available in the tables. Approximations must be made when such unavailable groups are encountered; for example, C must be approximated as Ct and N as NI in C≡N, which introduces further inaccuracy. === Ring strain, intermolecular and intramolecular interactions === The BGIT assumes that a CH2 group always makes a constant contribution to ΔfH° of a molecule. However, a small ring such as cyclobutane leads to a substantial failure of the BGIT because of its strain energy. A series of correction terms for common ring systems has been developed, with the goal of obtaining accurate ΔfH° values for cyclic systems. Note that these are not identically equal to the accepted strain energies of the parent ring systems, although they are quite close. The group-increment correction for cyclobutane is based on ΔfH° values for a number of structures and represents an average value that gives the best agreement with the range of experimental data. In contrast, the strain energy of cyclobutane is specific to the parent compound. With these corrections, it is now possible to predict ΔfH° values for strained ring systems by first adding up all the basic group increments and then adding the appropriate ring-strain correction values.
As with ring systems, corrections have been developed for other situations, such as a gauche-alkane correction of 0.8 kcal/mol and a cis-alkene correction of 1.0 kcal/mol. The BGIT also fails when conjugation and interactions between functional groups are present, such as intermolecular and intramolecular hydrogen bonding, which limits its accuracy and usage in some cases. == References ==
Wikipedia/Benson_group_increment_theory
Medical research (or biomedical research), also known as health research, refers to the process of using scientific methods with the aim of producing knowledge about human diseases, the prevention and treatment of illness, and the promotion of health. Medical research encompasses a wide array of research, extending from "basic research" (also called bench science or bench research) – involving fundamental scientific principles that may apply to a preclinical understanding – to clinical research, which involves studies of people who may be subjects in clinical trials. Within this spectrum is applied research, or translational research, conducted to expand knowledge in the field of medicine. Both clinical and preclinical research phases exist in the pharmaceutical industry's drug development pipelines, where the clinical phase is denoted by the term clinical trial. However, only part of clinical or preclinical research is oriented towards a specific pharmaceutical purpose. The need for fundamental and mechanism-based understanding, diagnostics, medical devices, and non-pharmaceutical therapies means that pharmaceutical research is only a small part of medical research. Most of the research in the field is pursued by biomedical scientists, but significant contributions are made by other types of biologists. Medical research on humans must strictly follow the medical ethics sanctioned in the Declaration of Helsinki and the requirements of the institutional review board of the institution where the research is conducted. In all cases, research ethics are expected. == Impact == The increased longevity of humans over the past century can be significantly attributed to advances resulting from medical research. Among the major benefits of medical research have been vaccines for measles and polio, insulin treatment for diabetes, classes of antibiotics for treating a host of maladies, medication for high blood pressure, improved treatments for AIDS, statins and other treatments for atherosclerosis, new surgical techniques such as microsurgery, and increasingly successful treatments for cancer. New, beneficial tests and treatments are expected as a result of the Human Genome Project. Many challenges remain, however, including the appearance of antibiotic resistance and the obesity epidemic. == Phases of medical research == === Basic medical research === Example areas in basic medical research include: cellular and molecular biology, medical genetics, immunology, neuroscience, and psychology. Researchers, mainly in universities or government-funded research institutes, aim to establish an understanding of the cellular, molecular and physiological mechanisms of human health and disease. === Pre-clinical research === Pre-clinical research covers understanding of mechanisms that may lead to clinical research with people. Typically, the work requires no ethical approval, is supervised by scientists rather than physicians, and is carried out in a university or company rather than a hospital. === Clinical research === Clinical research is carried out with people as the experimental subjects. It is generally supervised by physicians and conducted by nurses in a medical setting, such as a hospital or research clinic, and requires ethical approval. == Role of patients and the public == Besides being participants in clinical trials, members of the public can actively collaborate with researchers in designing and conducting medical research. This is known as patient and public involvement (PPI).
Public involvement is a working partnership between patients, caregivers, people with lived experience, and researchers to shape and influence what is researched and how. PPI can improve the quality of research and make it more relevant and accessible. People with current or past experience of illness can provide a different perspective than professionals and complement their knowledge. Through their personal knowledge they can identify research topics that are relevant and important to those living with an illness or using a service. They can also help to make the research more grounded in the needs of the specific communities they are part of. Public contributors can also ensure that the research is presented in plain language that is clear to the wider society and the specific groups it is most relevant for. == Funding == Research funding in many countries derives from research bodies and private organizations which distribute money for equipment, salaries, and research expenses. The United States, Europe, Asia, Canada, and Australia combined spent $265.0 billion in 2011, which reflected growth of 3.5% annually from $208.8 billion in 2004. The United States contributed 49% of governmental funding from these regions in 2011, compared to 57% in 2004. In the United Kingdom, funding bodies such as the National Institute for Health and Care Research (NIHR) and the Medical Research Council derive their assets from UK taxpayers, and distribute revenues to institutions by competitive research grants. The Wellcome Trust is the UK's largest non-governmental source of funds for biomedical research and provides over £600 million per year in grants to scientists and funds for research centres. In the United States, data from ongoing surveys by the National Science Foundation (NSF) show that federal agencies provided only 44% of the $86 billion spent on basic research in 2015. The National Institutes of Health and pharmaceutical companies contribute $26.4 billion and $27 billion, respectively, which constitute 28% and 29% of the total. Other significant contributors include biotechnology companies ($17.9 billion, 19% of total), medical device companies ($9.2 billion, 10% of total), other federal sources, and state and local governments. Foundations and charities, led by the Bill and Melinda Gates Foundation, contributed about 3% of the funding. These funders are attempting to maximize their return on investment in public health. One method proposed to maximize the return on investment in medicine is to fund the development of open source hardware for medical research and treatment. The enactment of orphan drug legislation in some countries has increased funding available to develop drugs meant to treat rare conditions, resulting in breakthroughs that previously were uneconomical to pursue. === Government-funded biomedical research === Since the establishment of the National Institutes of Health (NIH) in the mid-1940s, the main source of U.S. federal support of biomedical research, investment priorities and levels of funding have fluctuated. From 1995 to 2010, NIH support of biomedical research increased from $11 billion to $27 billion. Despite the jump in federal spending, advancements measured by citations to publications and the number of drugs approved by the FDA remained stagnant over the same time span. Financial projections indicate that federal spending will remain constant in the near future.
=== US federal funding trends === The National Institutes of Health (NIH) is the agency responsible for managing the lion's share of federal funding of biomedical research. It funds over 280 areas directly related to health. Over the past century there were two notable periods of NIH support. From 1995 to 1996 funding increased from $8.877 billion to $9.366 billion, years which represented the start of what is considered the "doubling period" of rapid NIH support. The second notable period started in 1997 and ended in 2010, a period in which the NIH moved to organize research spending for engagement with the scientific community. === Privately (industry) funded biomedical research === Since 1980 the share of biomedical research funding from industry sources has grown from 32% to 62%, which has resulted in the development of numerous life-saving medical advances. The relationship between industry and government-funded research in the US has seen great movement over the years. The 1980 Bayh–Dole Act was passed by Congress to foster a more constructive collaboration between government- and industry-funded biomedical research. The Bayh–Dole Act gave private corporations the option of applying for government-funded grants for biomedical research, which in turn allowed the private corporations to license the technology. Both government and industry research funding increased rapidly between 1994 and 2003; industry saw a compound average annual growth rate of 8.1% a year, which slowed only slightly to a compound average annual growth rate of 5.8% from 2003 to 2008. === Conflicts of interest === "Conflict of interest" in the field of medical research has been defined as "a set of conditions in which professional judgment concerning a primary interest (such as a person's welfare or the validity of research) tends to be unduly influenced by a secondary interest (such as financial gain)." Regulation of industry-funded biomedical research has seen great changes since Samuel Hopkins Adams' declaration. In 1906 Congress passed the Pure Food and Drugs Act. In 1912 Congress passed the Sherley Amendment to prohibit the wide dissemination of false information on pharmaceuticals. The Food and Drug Administration was formally created in 1930 under the McNary-Mapes Amendment to oversee the regulation of food and drugs in the United States. In 1962 the Kefauver-Harris Amendments to the Food, Drug, and Cosmetic Act required that, before a drug could be marketed in the United States, the FDA first approve it as safe. The Kefauver-Harris amendments also mandated that more stringent clinical trials be performed before a drug is brought to the market. The Kefauver-Harris amendments were met with opposition from industry because the requirement of lengthier clinical trial periods would lessen the period of time in which the investor is able to see a return on their money. In the pharmaceutical industry patents are typically granted for a 20-year period, and most patent applications are submitted during the early stages of product development. According to Ariel Katz, on average it takes an additional 8 years after a patent application is submitted before the FDA approves a drug for marketing. As such, this would leave a company with only 12 years to market the drug and see a return on its investment.
After a sharp decline in the number of new drugs entering the US market following the 1962 Kefauver-Harris amendments, economist Sam Peltzman concluded that the cost of lost innovation was greater than the savings realized by consumers no longer purchasing ineffective drugs. In 1984 Congress passed the Hatch-Waxman Act, the Drug Price Competition and Patent Term Restoration Act of 1984. The Hatch-Waxman Act was passed with the idea that giving brand manufacturers the ability to extend their patent by an additional 5 years would create greater incentives for innovation and private-sector funding for investment. In industry-funded biomedical research, industry acts as the financier for academic institutions, which in turn employ scientific investigators to conduct the research. A concern when a project is funded by industry is that firms might neglect to inform the public of negative effects in order to better promote their product. A list of studies shows that public fear of the conflicts of interest that exist when biomedical research is funded by industry can be considered valid, as documented in the 2003 publication "Scope and Impact of Financial Conflicts of Interest in Biomedical Research" in The Journal of the American Medical Association. This publication reviewed 37 different studies that met specific criteria to determine whether or not an academic institution or scientific investigator funded by industry had engaged in behavior that could be deduced to be a conflict of interest in the field of biomedical research. Survey results from one study concluded that 43% of scientific investigators employed by a participating academic institution had received research-related gifts and discretionary funds from industry sponsors. Another participating institution surveyed showed that 7.6% of investigators were financially tied to research sponsors, including paid speaking engagements (34%), consulting arrangements (33%), advisory board positions (32%) and equity (14%). A 1994 study concluded that 58% of 210 life science companies indicated that investigators were required to withhold information pertaining to their research in order to extend the life of the interested companies' patents. Rules and regulations regarding conflict of interest disclosures are being studied by experts in the biomedical research field to eliminate conflicts of interest that could possibly affect the outcomes of biomedical research. 
Aside from the main source, usaspending.gov, other reporting mechanisms exist: data specifically on biomedical research funding from federal sources is made publicly available by the National Health Expenditure Accounts (NHEA), and data on health services research, approximately 0.1% of federal funding on biomedical research, is available through the Coalition of Health Services Research, the Agency for Healthcare Research and Quality, the Centers for Disease Control and Prevention, the Centers for Medicare & Medicaid Services, and the Veterans Health Administration. Currently, there are no funding reporting requirements for industry-sponsored research, but there has been voluntary movement toward this goal. In 2014, major pharmaceutical stakeholders such as Roche and Johnson and Johnson made financial information publicly available, and Pharmaceutical Research and Manufacturers of America (PhRMA), the most prominent professional association for biomedical research companies, has recently begun to provide limited public funding reports. == History == === Ancient to 20th century in other regions === The earliest narrative describing a medical trial is found in the Book of Daniel, which says that Babylonian king Nebuchadnezzar ordered youths of royal blood to eat only red meat and wine for three years, while another group of youths ate only beans and water. The experiment was intended to determine if a diet of vegetables and water was healthier than a diet of wine and red meat. At the experiment's endpoint, the trial achieved its objective: the youths who ate only beans and water were noticeably healthier. Scientific curiosity to understand health outcomes from varying treatments has been present for centuries, but it was not until the mid-19th century that an organizational platform was created to support and regulate this curiosity. In 1945, Vannevar Bush said that biomedical scientific research was "the pacemaker of technological progress", an idea which contributed to the initiative to found the National Institutes of Health (NIH) in 1948, a historical benchmark that marked the beginning of nearly a century of substantial investment in biomedical research. === 20th and 21st century in the United States === The NIH provides more financial support for medical research than any other agency in the world to date and claims responsibility for numerous innovations that have improved global health. The historical funding of biomedical research has undergone many changes over the past century. Innovations such as the polio vaccine, antibiotics and antipsychotic agents, developed in the early years of the NIH, led to social and political support of the agency. Political initiatives in the early 1990s led to a doubling of NIH funding, spurring an era of great scientific progress. There have been dramatic changes in the era since the turn of the 21st century: roughly around the start of the century, the cost of trials dramatically increased while the rate of scientific discovery did not keep pace. Biomedical research spending increased substantially faster than GDP growth over the past decade in the US; between 2003 and 2007, spending increased 14% per year, while GDP grew 1% per year over the same period (both measures adjusted for inflation). Industry, not-for-profit entities, and state and federal sources combined accounted for an increase in funding from $75.5 billion in 2003 to $101.1 billion in 2007. 
Due to the immediacy of federal financing priorities and stagnant corporate spending during the recession, biomedical research spending decreased 2% in real terms in 2008. Despite an overall increase of investment in biomedical research, there has been stagnation, and in some areas a marked decline, in the number of drug and device approvals over the same time period. As of 2010, industry-sponsored research accounted for 58% of expenditures, the NIH for 27%, state governments for 5%, non-NIH federal sources for 5%, and not-for-profit entities for 4% of support. Federally funded biomedical research expenditures increased nominally, 0.7% (adjusted for inflation), from 2003 to 2007. Previous reports showed a stark contrast in federal investment: from 1994 to 2003, federal funding increased 100% (adjusted for inflation). The NIH manages the majority, over 85%, of federal biomedical research expenditures. NIH support for biomedical research decreased from $31.8 billion in 2003 to $29.0 billion in 2007, a 25% decline in real terms (adjusted for inflation), while non-NIH federal funding allowed for the maintenance of government financial support levels through the era (the 0.7% four-year increase). Spending on industry-initiated research increased 25% (adjusted for inflation) over the same period, from 2003 to 2007, an increase from $40 billion in 2003 to $58.6 billion in 2007. Industry-sourced expenditures from 1994 to 2003 showed industry-sponsored research funding increasing 8.1% per year, a stark contrast to the 25% total increase in recent years. Of industry-sponsored research, pharmaceutical firm spending was the greatest contributor to overall industry-sponsored biomedical research spending, but it increased only 15% (adjusted for inflation) from 2003 to 2007, while device and biotechnology firms accounted for the majority of the growth. Stock performance, a measure that can be an indication of future firm growth or technological direction, has substantially increased for both predominantly medical device and biotechnology producers. Contributing factors to this growth are thought to be less rigorous FDA approval requirements for devices as opposed to drugs, the lower cost of trials, the pricing and profitability of products, and the predictable influence of new technology due to a limited number of competitors. Another visible shift during the era was a move in focus toward late-stage research trials; formerly dispersed across phases, industry-sponsored research has since 1994 gone increasingly to late-phase trials rather than early, experimental phases, and these now account for the majority of industry-sponsored research. This shift is attributable to lower-risk investment and a shorter development-to-market schedule. The low-risk preference is also reflected in the trend of large pharmaceutical firms acquiring smaller companies that hold patents to newly developed drug or device discoveries which have not yet passed federal regulation (large companies mitigate their risk by purchasing technology created by smaller companies in early-phase, high-risk studies). Medical research support from universities increased from $22 billion in 2003 to $27.7 billion in 2007, a 7.8% increase (adjusted for inflation). 
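The growth figures above mix nominal dollar totals with inflation adjustments and compound annual growth rates. The short Python sketch below shows how those quantities relate; the dollar figures are the ones quoted in the text, but the price deflator of 1.17 is a hypothetical illustrative value, not a number taken from the underlying surveys.

def real_change(start, end, deflator):
    # Fractional change after expressing the end-year total in start-year dollars.
    return (end / deflator) / start - 1.0

def cagr(start, end, years):
    # Compound average annual growth rate over the given number of years.
    return (end / start) ** (1.0 / years) - 1.0

industry_2003, industry_2007 = 40.0, 58.6   # $ billions, from the text
assumed_deflator = 1.17                     # hypothetical 2003-to-2007 price deflator

print(f"nominal change 2003-2007: {industry_2007 / industry_2003 - 1:.1%}")
print(f"real change 2003-2007:    {real_change(industry_2003, industry_2007, assumed_deflator):.1%}")
print(f"nominal CAGR 2003-2007:   {cagr(industry_2003, industry_2007, 4):.1%}")

With the assumed deflator, the nominal rise of roughly 47% corresponds to a real increase of roughly 25%, in line with the inflation-adjusted figure quoted above.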
In 2007 the most heavily funded institutions received 20% of NIH medical research funding, and the top 50 institutions received 58% of NIH medical research funding; the share of funding allocated to the largest institutions has increased only slightly since 1994. Relative to federal and private funding, health policy and service research accounted for a nominal amount of sponsored research; health policy and service research was funded at $1.8 billion in 2003, which increased to $2.2 billion in 2008. Stagnant rates of investment from the US government over the past decade may be in part attributable to challenges that plague the field. To date, only two-thirds of published drug trial findings have results that can be reproduced, which raises concerns from a US regulatory standpoint where great investment has been made in research ethics and standards, yet trial results remain inconsistent. Federal agencies have called for greater regulation to address these problems; a spokesman from the National Institute of Neurological Disorders and Stroke, an agency of the NIH, stated that there is "widespread poor reporting of experimental design in articles and grant applications, that animal research should follow a core set of research parameters, and that a concerted effort by all stakeholders is needed to disseminate best reporting practices and put them into practice". == Regulations and guidelines == Medical research is highly regulated. National regulatory authorities are appointed in most countries to oversee and monitor medical research, such as for the development and distribution of new drugs. In the United States, the Food and Drug Administration oversees new drug development; in Europe, the European Medicines Agency (see also EudraLex); and in Japan, the Ministry of Health, Labour and Welfare. The World Medical Association develops the ethical standards for medical professionals involved in medical research. The most fundamental of these is the Declaration of Helsinki. The International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) works on the creation of rules and guidelines for the development of new medication, such as the guidelines for Good Clinical Practice (GCP). All ideas of regulation are based on a country's ethical standards code, which is why treatment of a particular disease may not be allowed in one country but is in another. == Flaws and vulnerabilities == A major flaw and vulnerability in biomedical research appears to be the hypercompetition for the resources and positions that are required to conduct science. The competition seems to suppress the creativity, cooperation, risk-taking, and original thinking required to make fundamental discoveries. Other consequences of today's highly pressured environment for research appear to be a substantial number of research publications whose results cannot be replicated, and perverse incentives in research funding that encourage grantee institutions to grow without making sufficient investments in their own faculty and facilities. Other risky trends include a decline in the share of key research grants going to younger scientists, as well as a steady rise in the age at which investigators receive their first funding. A significant flaw in biomedical research is the toxic culture that particularly impacts medical students and early career researchers, who face challenges such as bullying, harassment, and unethical authorship practices. 
Intense competition for funding, combined with pressure to publish, fosters a climate of secrecy and self-protection, stifling creativity and collaboration. The power imbalance in academic hierarchies exacerbates these issues, with junior researchers often subjected to exploitative practices and denied proper recognition for their contributions. == Commercialization == After clinical research, medical therapies are typically commercialized by private companies such as pharmaceutical companies or medical device companies. In the United States, one estimate found that in 2011, one-third of Medicare physician and outpatient hospital spending was on new technologies unavailable in the prior decade. Medical therapies are constantly being researched, so the difference between a therapy which is investigational versus standard of care is not always clear, particularly given cost-effectiveness considerations. Payers have utilization management clinical guidelines under which they do not pay for "experimental or investigational" therapies, or may require that the therapy is medically necessary or superior to cheaper treatments. For example, proton therapy was approved by the FDA, but private health insurers in the United States considered it unproven or unnecessary given its high cost, although it was ultimately covered for certain cancers. == Fields of research == Fields of biomedical research include: == See also == == References ==
Wikipedia/Medical_theory
Theory is a type of abstract or generalizing thinking, or its result. Theory may also refer to: Scientific theory, a well-substantiated explanation of some aspect of the natural world Social theory, an analytical framework or paradigm that is used to study and interpret social phenomena Philosophical theory, a position that explains or accounts for a general philosophy or specific branch of philosophy Literary theory, the systematic study of the nature of literature, or any of a variety of scholarly approaches to reading texts Mathematical theory, an area of mathematical research that is relatively self-contained Theory (mathematical logic), a set of sentences (theorems) in a formal language Theory, a type of argument in policy debate and Lincoln–Douglas debate Theory (chess), consensus and literature on how the game should be played Theory (clothing retailer), a New York-based fashion label Theory Eatery, an American cuisine restaurant in Portland, Oregon Theory (poem), a poem from Wallace Stevens's first book of poetry, Harmonium, published in 1917 Ki:Theory, the American recording artist and producer Joel Burleson Theory of a Deadman, also known as Theory, a rock band The former stage name of hip hop and smooth jazz artist Dax Reynosa A computer algebra system software, predecessor to LiveMath. Austin Theory, American professional wrestler == See also == All pages with titles beginning with theories All pages with titles beginning with theorem All pages with titles beginning with theory All pages with titles containing theories All pages with titles containing theorem All pages with titles containing theory Theorem List of notable theories Theoria, theological contemplation Mathematical theory (disambiguation) Thiery (surname) Thierry, given name and surname Theorema (disambiguation) Teorema (disambiguation)
Wikipedia/Theory_(disambiguation)
A philosophical theory or philosophical position is a view that attempts to explain or account for a particular problem in philosophy. The use of the term "theory" is a statement of colloquial English and not a technical term. While any sort of thesis or opinion may be termed a position, in analytic philosophy it is thought best to reserve the word "theory" for systematic, comprehensive attempts to solve problems. == Overview == The elements that comprise a philosophical position consist of statements which are believed to be true by the thinkers who accept them, and which may or may not be empirical. The sciences have a very clear idea of what a theory is; however in the arts such as philosophy, the definition is more hazy. Philosophical positions are not necessarily scientific theories, although they may consist of both empirical and non-empirical statements. The collective statements of all philosophical movements, schools of thought, and belief systems consist of philosophical positions. Also included among philosophical positions are many principles, dogmas, doctrines, hypotheses, rules, paradoxes, laws, as well as 'ologies, 'isms, 'sis's, and effects. Some examples of philosophical positions include: Metatheory; positions about the formation and content of theorems, such as Kurt Gödel's incompleteness theorem. Political theory; positions that underlie a political philosophy, such as John Rawls' theory of justice. Ethical theory and meta-ethics; positions about the nature and purpose of ethical statements, such as the ethical theory of Immanuel Kant. Critical theory; in its narrow sense, a Western European body of Frankfurt School Marxist thought that aims at criticizing and transforming, rather than merely explaining, social structures. In a broader sense, "critical theory" relates to a wide variety of political, literary, and philosophical positions that take at least some of their inspiration from the Frankfurt School and its dialectic, and that typically contest the possibility of objectivity or aloofness from political positions and privileges. Philosophical positions may also take the form of a religion, philosophy of life, ideology, world view, or life stance. == See also == Glossary of philosophy List of philosophies Metaphilosophy == References ==
Wikipedia/Philosophical_theory
In chemistry, transition state theory (TST) explains the reaction rates of elementary chemical reactions. The theory assumes a special type of chemical equilibrium (quasi-equilibrium) between reactants and activated transition state complexes. TST is used primarily to understand qualitatively how chemical reactions take place. TST has been less successful in its original goal of calculating absolute reaction rate constants because the calculation of absolute reaction rates requires precise knowledge of potential energy surfaces, but it has been successful in calculating the standard enthalpy of activation (ΔH‡, also written Δ‡Hɵ), the standard entropy of activation (ΔS‡ or Δ‡Sɵ), and the standard Gibbs energy of activation (ΔG‡ or Δ‡Gɵ) for a particular reaction if its rate constant has been experimentally determined (the ‡ notation refers to the value of interest at the transition state; ΔH‡ is the difference between the enthalpy of the transition state and that of the reactants). This theory was developed simultaneously in 1935 by Henry Eyring, then at Princeton University, and by Meredith Gwynne Evans and Michael Polanyi of the University of Manchester. TST is also referred to as "activated-complex theory", "absolute-rate theory", and "theory of absolute reaction rates". Before the development of TST, the Arrhenius rate law was widely used to determine energies for the reaction barrier. The Arrhenius equation derives from empirical observations and ignores any mechanistic considerations, such as whether one or more reactive intermediates are involved in the conversion of a reactant to a product. Therefore, further development was necessary to understand the two parameters associated with this law, the pre-exponential factor (A) and the activation energy (Ea). TST, which led to the Eyring equation, successfully addresses these two issues; however, 46 years elapsed between the publication of the Arrhenius rate law, in 1889, and the Eyring equation derived from TST, in 1935. During that period, many scientists and researchers contributed significantly to the development of the theory. == Theory == The basic ideas behind transition state theory are as follows: Rates of reaction can be studied by examining activated complexes near the saddle point of a potential energy surface. The details of how these complexes are formed are not important. The saddle point itself is called the transition state. The activated complexes are in a special equilibrium (quasi-equilibrium) with the reactant molecules. The activated complexes can convert into products, and kinetic theory can be used to calculate the rate of this conversion. == Development == In the development of TST, three approaches were taken as summarized below. === Thermodynamic treatment === In 1884, Jacobus van 't Hoff proposed the Van 't Hoff equation describing the temperature dependence of the equilibrium constant for a reversible reaction: A ↽ − − ⇀ B {\displaystyle {\ce {{A}<=> {B}}}} d ln ⁡ K d T = Δ U R T 2 {\displaystyle {\frac {d\ln K}{dT}}={\frac {\Delta U}{RT^{2}}}} where ΔU is the change in internal energy, K is the equilibrium constant of the reaction, R is the universal gas constant, and T is thermodynamic temperature. 
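Assuming ΔU is approximately constant over the temperature range of interest, the Van 't Hoff equation can be integrated to relate the equilibrium constant at two temperatures. The Python sketch below, with purely hypothetical numbers, illustrates that integrated form; it is meant only as a numerical illustration of the equation above.

import math

R = 8.314  # gas constant, J mol^-1 K^-1

def vant_hoff_K2(K1, T1, T2, dU):
    # Integrated Van 't Hoff relation, assuming dU is temperature-independent:
    # ln(K2/K1) = -(dU/R) * (1/T2 - 1/T1)
    return K1 * math.exp(-(dU / R) * (1.0 / T2 - 1.0 / T1))

# Hypothetical example: K = 1.0 at 298 K, dU = +50 kJ/mol (endothermic)
print(vant_hoff_K2(1.0, 298.0, 350.0, 50_000.0))   # K grows with temperature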
Based on experimental work, in 1889, Svante Arrhenius proposed a similar expression for the rate constant of a reaction, given as follows: d ln ⁡ k d T = Δ E R T 2 {\displaystyle {\frac {d\ln k}{dT}}={\frac {\Delta E}{RT^{2}}}} Integration of this expression leads to the Arrhenius equation k = A e − E a / R T {\displaystyle k=Ae^{-E_{a}/RT}} where k is the rate constant. A was referred to as the frequency factor (now called the pre-exponential coefficient), and Ea is regarded as the activation energy. By the early 20th century many had accepted the Arrhenius equation, but the physical interpretation of A and Ea remained vague. This led many researchers in chemical kinetics to offer different theories of how chemical reactions occurred in an attempt to relate A and Ea to the molecular dynamics directly responsible for chemical reactions. In 1910, French chemist René Marcelin introduced the concept of standard Gibbs energy of activation. His relation can be written as k ∝ exp ⁡ ( − Δ ‡ G ⊖ R T ) {\displaystyle k\propto \exp \left({\frac {-\Delta ^{\ddagger }G^{\ominus }}{RT}}\right)} At about the same time as Marcelin was working on his formulation, Dutch chemists Philip Abraham Kohnstamm, Frans Eppo Cornelis Scheffer, and Wiedold Frans Brandsma introduced standard entropy of activation and the standard enthalpy of activation. They proposed the following rate constant equation k ∝ exp ⁡ ( Δ ‡ S ⊖ R ) exp ⁡ ( − Δ ‡ H ⊖ R T ) {\displaystyle k\propto \exp \left({\frac {\Delta ^{\ddagger }S^{\ominus }}{R}}\right)\exp \left({\frac {-\Delta ^{\ddagger }H^{\ominus }}{RT}}\right)} However, the nature of the constant was still unclear. === Kinetic-theory treatment === In the early 1900s, Max Trautz and William Lewis studied the rate of the reaction using collision theory, based on the kinetic theory of gases. Collision theory treats reacting molecules as hard spheres colliding with one another; this theory neglects entropy changes, since it assumes that the collisions between molecules are completely elastic. Lewis applied his treatment to the following reaction and obtained good agreement with experimental results. 2 HI → H2 + I2 However, when the same treatment was later applied to other reactions, there were large discrepancies between theoretical and experimental results. === Statistical-mechanical treatment === Statistical mechanics played a significant role in the development of TST. However, the application of statistical mechanics to TST developed very slowly, even though in the mid-19th century James Clerk Maxwell, Ludwig Boltzmann, and Leopold Pfaundler had published several papers discussing reaction equilibrium and rates in terms of molecular motions and the statistical distribution of molecular speeds. It was not until 1912 that the French chemist A. Berthoud used the Maxwell–Boltzmann distribution law to obtain an expression for the rate constant. d ln ⁡ k d T = a − b T R T 2 {\displaystyle {\frac {d\ln k}{dT}}={\frac {a-bT}{RT^{2}}}} where a and b are constants related to energy terms. Two years later, René Marcelin made an essential contribution by treating the progress of a chemical reaction as a motion of a point in phase space. He then applied Gibbs' statistical-mechanical procedures and obtained an expression similar to the one he had obtained earlier from thermodynamic consideration. In 1915, another important contribution came from British physicist James Rice. Based on his statistical analysis, he concluded that the rate constant is proportional to the "critical increment". 
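As a purely numerical illustration of the Arrhenius form above, the Python sketch below evaluates k = A·exp(−Ea/RT) at a few temperatures; the pre-exponential factor and activation energy are assumed values chosen for illustration, not data from the text.

import math

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius_k(A, Ea, T):
    # Arrhenius rate constant k = A * exp(-Ea / (R*T))
    return A * math.exp(-Ea / (R * T))

A_assumed  = 1.0e13    # s^-1, hypothetical pre-exponential factor
Ea_assumed = 80_000.0  # J mol^-1, hypothetical activation energy

for T in (298.0, 350.0, 400.0):
    print(f"T = {T:5.1f} K  ->  k = {arrhenius_k(A_assumed, Ea_assumed, T):.3e} s^-1")

The strong temperature dependence comes entirely from the exponential term; the physical meaning of A and Ea, as noted above, is exactly what the later treatments set out to explain.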
Rice's ideas were further developed by Richard Chace Tolman. In 1919, Austrian physicist Karl Ferdinand Herzfeld applied statistical mechanics to the equilibrium constant and kinetic theory to the rate constant of the reverse reaction, k−1, for the reversible dissociation of a diatomic molecule. AB ⇌ k − 1 k 1 A + B {\displaystyle {\ce {AB <=>[k_1][k_{-1}] {A}+ {B}}}} He obtained the following equation for the rate constant of the forward reaction k 1 = k B T h ( 1 − e − h ν k B T ) exp ⁡ ( − E ⊖ R T ) {\displaystyle k_{1}={\frac {k_{\mathrm {B} }T}{h}}\left(1-e^{-{\frac {h\nu }{k_{\text{B}}T}}}\right)\exp \left({\frac {-E^{\ominus }}{RT}}\right)} where E ⊖ {\displaystyle \textstyle E^{\ominus }} is the dissociation energy at absolute zero, kB is the Boltzmann constant, h is the Planck constant, T is thermodynamic temperature, and ν {\displaystyle \nu } is the vibrational frequency of the bond. This expression is very important since it is the first time that the factor kBT/h, which is a critical component of TST, has appeared in a rate equation. In 1920, the American chemist Richard Chace Tolman further developed Rice's idea of the critical increment. He concluded that the critical increment (now referred to as activation energy) of a reaction is equal to the average energy of all molecules undergoing reaction minus the average energy of all reactant molecules. === Potential energy surfaces === The concept of potential energy surface was very important in the development of TST. The foundation of this concept was laid by René Marcelin in 1913. He theorized that the progress of a chemical reaction could be described as a point in a potential energy surface with coordinates in atomic momenta and distances. In 1931, Henry Eyring and Michael Polanyi constructed a potential energy surface for the reaction below. This surface is a three-dimensional diagram based on quantum-mechanical principles as well as experimental data on vibrational frequencies and energies of dissociation. H + H2 → H2 + H A year after the Eyring and Polanyi construction, Hans Pelzer and Eugene Wigner made an important contribution by following the progress of a reaction on a potential energy surface. The importance of this work was that it was the first time that the concept of col or saddle point in the potential energy surface was discussed. They concluded that the rate of a reaction is determined by the motion of the system through that col. === Kramers theory of reaction rates === By modeling reactions as Langevin motion along a one-dimensional reaction coordinate, Hendrik Kramers was able to derive a relationship between the shape of the potential energy surface along the reaction coordinate and the transition rates of the system. The formulation relies on approximating the potential energy landscape as a series of harmonic wells. In a two-state system, there will be three wells: a well for state A, an upside-down well representing the potential energy barrier, and a well for state B. 
In the overdamped (or "diffusive") regime, the transition rate from state A to B is related to the resonant frequency of the wells via k A → B = ω a ω H 2 π γ exp ⁡ ( − E H − E A k B T ) {\displaystyle k^{A\rightarrow B}={\frac {\omega _{a}\omega _{H}}{2\pi \gamma }}\exp \left(-{\frac {E_{H}-E_{A}}{k_{\text{B}}T}}\right)} where ω a {\displaystyle \omega _{a}} is the frequency of the well for state A, ω H {\displaystyle \omega _{H}} is the frequency of the barrier well, γ {\displaystyle \gamma } is the viscous damping, E H {\displaystyle E_{H}} is the energy of the top of the barrier, E A {\displaystyle E_{A}} is the energy of bottom of the well for state A, and k B T {\displaystyle k_{\text{B}}T} is the temperature of the system times the Boltzmann constant. For general damping (overdamped or underdamped), there is a similar formula. == Justification for the Eyring equation == One of the most important features introduced by Eyring, Polanyi and Evans was the notion that activated complexes are in quasi-equilibrium with the reactants. The rate is then directly proportional to the concentration of these complexes multiplied by the frequency (kBT/h) with which they are converted into products. Below, a non-rigorous plausibility argument is given for the functional form of the Eyring equation. However, the key statistical mechanical factor kBT/h will not be justified, and the argument presented below does not constitute a true "derivation" of the Eyring equation. === Quasi-equilibrium assumption === Quasi-equilibrium is different from classical chemical equilibrium, but can be described using a similar thermodynamic treatment. Consider the reaction below A + B ↽ − − ⇀ [ AB ] ‡ ⟶ P {\displaystyle {\ce {{A}+{B}<=>{[AB]^{\ddagger }}->{P}}}} where complete equilibrium is achieved between all the species in the system including activated complexes, [AB]‡ . Using statistical mechanics, concentration of [AB]‡ can be calculated in terms of the concentration of A and B. TST assumes that even when the reactants and products are not in equilibrium with each other, the activated complexes are in quasi-equilibrium with the reactants. As illustrated in Figure 2, at any instant of time, there are a few activated complexes, and some were reactant molecules in the immediate past, which are designated [ABl]‡ (since they are moving from left to right). The remainder of them were product molecules in the immediate past ([ABr]‡). In TST, it is assumed that the flux of activated complexes in the two directions are independent of each other. That is, if all the product molecules were suddenly removed from the reaction system, the flow of [ABr]‡ stops, but there is still a flow from left to right. Hence, to be technically correct, the reactants are in equilibrium only with [ABl]‡, the activated complexes that were reactants in the immediate past. === Plausibility argument === The activated complexes do not follow a Boltzmann distribution of energies, but an "equilibrium constant" can still be derived from the distribution they do follow. The equilibrium constant K‡ for the quasi-equilibrium can be written as K ‡ = [ AB ] ‡ [ A ] [ B ] {\displaystyle K^{\ddagger }={\frac {\ce {[AB]^{\ddagger }}}{\ce {[A][B]}}}} . So, the chemical activity of the transition state AB‡ is [ AB ] ‡ = K ‡ [ A ] [ B ] {\displaystyle [{\ce {AB}}]^{\ddagger }=K^{\ddagger }[{\ce {A}}][{\ce {B}}]} . 
Therefore, the rate equation for the production of product is d [ P ] d t = k ‡ [ AB ] ‡ = k ‡ K ‡ [ A ] [ B ] = k [ A ] [ B ] {\displaystyle {\frac {d[{\ce {P}}]}{dt}}=k^{\ddagger }[{\ce {AB}}]^{\ddagger }=k^{\ddagger }K^{\ddagger }[{\ce {A}}][{\ce {B}}]=k[{\ce {A}}][{\ce {B}}]} , where the rate constant k is given by k = k ‡ K ‡ {\displaystyle k=k^{\ddagger }K^{\ddagger }} . Here, k‡ is directly proportional to the frequency of the vibrational mode responsible for converting the activated complex to the product; the frequency of this vibrational mode is ν {\displaystyle \nu } . Every vibration does not necessarily lead to the formation of product, so a proportionality constant κ {\displaystyle \kappa } , referred to as the transmission coefficient, is introduced to account for this effect. So k‡ can be rewritten as k ‡ = κ ν {\displaystyle k^{\ddagger }=\kappa \nu } . For the equilibrium constant K‡ , statistical mechanics leads to a temperature dependent expression given as K ‡ = k B T h ν K ‡ ′ {\displaystyle K^{\ddagger }={\frac {k_{\text{B}}T}{h\nu }}K^{\ddagger '}} ( K ‡ ′ =: e − Δ G ‡ R T {\displaystyle K^{\ddagger '}=:e^{\frac {-\Delta G^{\ddagger }}{RT}}} ). Combining the new expressions for k‡ and K‡, a new rate constant expression can be written, which is given as k = k ‡ K ‡ = κ k B T h e − Δ G ‡ R T = κ k B T h K ‡ ′ {\displaystyle k=k^{\ddagger }K^{\ddagger }=\kappa {\frac {k_{\text{B}}T}{h}}e^{\frac {-\Delta G^{\ddagger }}{RT}}=\kappa {\frac {k_{\text{B}}T}{h}}K^{\ddagger '}} . Since, by definition, ΔG‡ = ΔH‡ –TΔS‡, the rate constant expression can be expanded, to give an alternative form of the Eyring equation: k = κ k B T h e Δ S ‡ R e − Δ H ‡ R T {\displaystyle k=\kappa {\frac {k_{\text{B}}T}{h}}e^{\frac {\Delta S^{\ddagger }}{R}}e^{\frac {-\Delta H^{\ddagger }}{RT}}} . For correct dimensionality, the equation needs to have an extra factor of (c⊖)1–m for reactions that are not unimolecular: k = κ k B T h e Δ S ‡ R e − Δ H ‡ R T ( c ⊖ ) 1 − m {\displaystyle k=\kappa {\frac {k_{\text{B}}T}{h}}e^{\frac {\Delta S^{\ddagger }}{R}}e^{\frac {-\Delta H^{\ddagger }}{RT}}(c^{\ominus })^{1-m}} , where c⊖ is the standard concentration 1 mol⋅L−1 and m is the molecularity. == Inferences from TST and relationship with Arrhenius theory == The rate constant expression from transition state theory can be used to calculate the ΔG‡, ΔH‡, ΔS‡, and even ΔV‡ (the volume of activation) using experimental rate data. These so-called activation parameters give insight into the nature of a transition state, including energy content and degree of order, compared to the starting materials and has become a standard tool for elucidation of reaction mechanisms in physical organic chemistry. The free energy of activation, ΔG‡, is defined in transition state theory to be the energy such that Δ G ‡ = − R T ln ⁡ K ‡ ′ {\displaystyle \Delta G^{\ddagger }=-RT\ln K^{\ddagger '}} holds. The parameters ΔH‡ and ΔS‡ can then be inferred by determining ΔG‡ = ΔH‡ – TΔS‡ at different temperatures. Because the functional form of the Eyring and Arrhenius equations are similar, it is tempting to relate the activation parameters with the activation energy and pre-exponential factors of the Arrhenius treatment. However, the Arrhenius equation was derived from experimental data and models the macroscopic rate using only two parameters, irrespective of the number of transition states in a mechanism. In contrast, activation parameters can be found for every transition state of a multistep mechanism, at least in principle. 
Thus, although the enthalpy of activation, ΔH‡, is often equated with Arrhenius's activation energy Ea, they are not equivalent. For a condensed-phase (e.g., solution-phase) or unimolecular gas-phase reaction step, Ea = ΔH‡ + RT. For other gas-phase reactions, Ea = ΔH‡ + (1 − Δn‡)RT, where Δn‡ is the change in the number of molecules on forming the transition state. (Thus, for a bimolecular gas-phase process, Ea = ΔH‡ + 2RT.) The entropy of activation, ΔS‡, gives the extent to which transition state (including any solvent molecules involved in or perturbed by the reaction) is more disordered compared to the starting materials. It offers a concrete interpretation of the pre-exponential factor A in the Arrhenius equation; for a unimolecular, single-step process, the rough equivalence A = (kBT/h) exp(1 + ΔS‡/R) (or A = (kBT/h) exp(2 + ΔS‡/R) for bimolecular gas-phase reactions) holds. For a unimolecular process, a negative value indicates a more ordered, rigid transition state than the ground state, while a positive value reflects a transition state with looser bonds and/or greater conformational freedom. It is important to note that, for reasons of dimensionality, reactions that are bimolecular or higher have ΔS‡ values that depend on the standard state chosen (standard concentration, in particular). For most recent publications, 1 mol L−1 or 1 molar is chosen. Since this choice is a human construct, based on our definitions of units for molar quantity and volume, the magnitude and sign of ΔS‡ for a single reaction is meaningless by itself; only comparisons of the value with that of a reference reaction of "known" (or assumed) mechanism, made at the same standard state, is valid. The volume of activation is found by taking the partial derivative of ΔG‡ with respect to pressure (holding temperature constant): Δ V ‡ := ( ∂ Δ G ‡ / ∂ P ) T {\displaystyle \Delta V^{\ddagger }:=(\partial \Delta G^{\ddagger }/\partial P)_{T}} . It gives information regarding the size, and hence, degree of bonding at the transition state. An associative mechanism will likely have a negative volume of activation, while a dissociative mechanism will likely have a positive value. Given the relationship between equilibrium constant and the forward and reverse rate constants, K = k 1 / k − 1 {\displaystyle K=k_{1}/k_{-1}} , the Eyring equation implies that Δ G ∘ = Δ G 1 ‡ − Δ G − 1 ‡ {\displaystyle \Delta G^{\circ }=\Delta G_{1}^{\ddagger }-\Delta G_{-1}^{\ddagger }} . Another implication of TST is the Curtin–Hammett principle: the product ratio of a kinetically-controlled reaction from R to two products A and B will reflect the difference in the energies of the respective transition states leading to product, assuming there is a single transition state to each one: [ A ] [ B ] = e − Δ Δ G ‡ / R T {\displaystyle {\frac {[\mathrm {A} ]}{[\mathrm {B} ]}}=e^{-\Delta \Delta G^{\ddagger }/RT}} ( Δ Δ G ‡ = Δ G A ‡ − Δ G B ‡ + Δ G ∘ {\displaystyle \Delta \Delta G^{\ddagger }=\Delta G_{\mathrm {A} }^{\ddagger }-\Delta G_{\mathrm {B} }^{\ddagger }+\Delta G^{\circ }} ). (In the expression for ΔΔG‡ above, there is an extra Δ G ∘ = G S A ∘ − G S B ∘ {\displaystyle \Delta G^{\circ }=G_{\mathrm {S} _{\mathrm {A} }}^{\circ }-G_{\mathrm {S} _{\mathrm {B} }}^{\circ }} term if A and B are formed from two different species SA and SB in equilibrium.) 
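The relations quoted above between the Eyring activation parameters and the Arrhenius parameters are easy to evaluate numerically. The Python sketch below does so for a unimolecular (or condensed-phase) step, using assumed values of ΔH‡ and ΔS‡; the transmission coefficient is taken as 1 and all numbers are illustrative, not taken from the text.

import math

kB = 1.380649e-23    # J K^-1
h  = 6.62607015e-34  # J s
R  = 8.314           # J mol^-1 K^-1

def eyring_k(dH_act, dS_act, T):
    # k = (kB*T/h) * exp(dS_act/R) * exp(-dH_act/(R*T)), with kappa assumed to be 1
    return (kB * T / h) * math.exp(dS_act / R) * math.exp(-dH_act / (R * T))

def arrhenius_params_unimolecular(dH_act, dS_act, T):
    # For a unimolecular or condensed-phase step (see text):
    #   Ea = dH_act + R*T
    #   A  = (kB*T/h) * exp(1 + dS_act/R)
    return dH_act + R * T, (kB * T / h) * math.exp(1.0 + dS_act / R)

dH_act, dS_act, T = 75_000.0, -40.0, 298.15   # hypothetical: J/mol, J/(mol K), K
Ea, A = arrhenius_params_unimolecular(dH_act, dS_act, T)
print(f"k  = {eyring_k(dH_act, dS_act, T):.2e} s^-1")
print(f"Ea = {Ea / 1000:.1f} kJ/mol, A = {A:.2e} s^-1")

The same kind of free-energy differences also control the selectivity ratios given by the Curtin–Hammett expression above.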
For a thermodynamically-controlled reaction, every difference of RT ln 10 ≈ (1.987 × 10−3 kcal/mol K)(298 K)(2.303) ≈ 1.36 kcal/mol in the free energies of products A and B results in a factor of 10 in selectivity at room temperature (298 K), a principle known as the "1.36 rule": [ A ] [ B ] = 10 − Δ G ∘ / ( 1.36 k c a l / m o l ) {\displaystyle {\frac {[\mathrm {A} ]}{[\mathrm {B} ]}}=10^{-\Delta G^{\circ }/(1.36\ \mathrm {kcal/mol} )}} ( Δ G ∘ = G A ∘ − G B ∘ {\displaystyle \Delta G^{\circ }=G_{\mathrm {A} }^{\circ }-G_{\mathrm {B} }^{\circ }} ). Analogously, every 1.36 kcal/mol difference in the free energy of activation results in a factor of 10 in selectivity for a kinetically-controlled process at room temperature: [ A ] [ B ] = 10 − Δ Δ G ‡ / ( 1.36 k c a l / m o l ) {\displaystyle {\frac {[\mathrm {A} ]}{[\mathrm {B} ]}}=10^{-\Delta \Delta G^{\ddagger }/(1.36\ \mathrm {kcal/mol} )}} ( Δ Δ G ‡ = Δ G A ‡ − Δ G B ‡ {\displaystyle \Delta \Delta G^{\ddagger }=\Delta G_{\mathrm {A} }^{\ddagger }-\Delta G_{\mathrm {B} }^{\ddagger }} ). Using the Eyring equation, there is a straightforward relationship between ΔG‡, first-order rate constants, and reaction half-life at a given temperature. At 298 K, a reaction with ΔG‡ = 23 kcal/mol has a rate constant of k ≈ 8.4 × 10−5 s−1 and a half life of t1/2 ≈ 2.3 hours, figures that are often rounded to k ~ 10−4 s−1 and t1/2 ~ 2 h. Thus, a free energy of activation of this magnitude corresponds to a typical reaction that proceeds to completion overnight at room temperature. For comparison, the cyclohexane chair flip has a ΔG‡ of about 11 kcal/mol with k ~ 105 s−1, making it a dynamic process that takes place rapidly (faster than the NMR timescale) at room temperature. At the other end of the scale, the cis/trans isomerization of 2-butene has a ΔG‡ of about 60 kcal/mol, corresponding to k ~ 10−31 s−1 at 298 K. This is a negligible rate: the half-life is 12 orders of magnitude longer than the age of the universe. == Limitations == In general, TST has provided researchers with a conceptual foundation for understanding how chemical reactions take place. Even though the theory is widely applicable, it does have limitations. For example, when applied to each elementary step of a multi-step reaction, the theory assumes that each intermediate is long-lived enough to reach a Boltzmann distribution of energies before continuing to the next step. When the intermediates are very short-lived, TST fails. In such cases, the momentum of the reaction trajectory from the reactants to the intermediate can carry forward to affect product selectivity. An example of such a reaction is the ring closure of cyclopentane biradicals generated from the gas-phase thermal decomposition of 2,3-diazabicyclo[2.2.1]hept-2-ene. Transition state theory is also based on the assumption that atomic nuclei behave according to classical mechanics. It is assumed that unless atoms or molecules collide with enough energy to form the transition structure, then the reaction does not occur. However, according to quantum mechanics, for any barrier with a finite amount of energy, there is a possibility that particles can still tunnel across the barrier. With respect to chemical reactions this means that there is a chance that molecules will react, even if they do not collide with enough energy to overcome the energy barrier. 
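The round numbers quoted in this paragraph are easy to reproduce. The Python sketch below recomputes RT ln 10 at 298 K and the rate constant and half-life implied by ΔG‡ = 23 kcal/mol, assuming a transmission coefficient of 1 and first-order kinetics; it is a numerical check of the figures above, not an independent result.

import math

kB = 1.380649e-23    # J K^-1
h  = 6.62607015e-34  # J s
R  = 1.987e-3        # kcal mol^-1 K^-1
T  = 298.0

# "1.36 rule": the free-energy difference giving a factor of 10 in selectivity
print(f"RT ln 10  = {R * T * math.log(10):.2f} kcal/mol")

# Rate constant and half-life for dG_act = 23 kcal/mol (kappa assumed = 1)
dG_act = 23.0
k = (kB * T / h) * math.exp(-dG_act / (R * T))
print(f"k         = {k:.2e} s^-1")                    # about 8.4e-5 s^-1
print(f"half-life = {math.log(2) / k / 3600:.1f} h")  # about 2.3 h

Quantum tunnelling, raised at the end of the preceding paragraph, is one of the effects these purely classical estimates leave out.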
While this effect is negligible for reactions with large activation energies, it becomes an important phenomenon for reactions with relatively low energy barriers, since the tunneling probability increases with decreasing barrier height. Transition state theory fails for some reactions at high temperature. The theory assumes the reaction system will pass over the lowest energy saddle point on the potential energy surface. While this description is consistent for reactions occurring at relatively low temperatures, at high temperatures, molecules populate higher energy vibrational modes; their motion becomes more complex and collisions may lead to transition states far away from the lowest energy saddle point. This deviation from transition state theory is observed even in the simple exchange reaction between diatomic hydrogen and a hydrogen radical. Given these limitations, several alternatives to transition state theory have been proposed. A brief discussion of these theories follows. == Generalized transition state theory == Any form of TST, such as microcanonical variational TST, canonical variational TST, and improved canonical variational TST, in which the transition state is not necessarily located at the saddle point, is referred to as generalized transition state theory. === Microcanonical variational TST === A fundamental flaw of transition state theory is that it counts any crossing of the transition state as a reaction from reactants to products or vice versa. In reality, a molecule may cross this "dividing surface" and turn around, or cross multiple times and only truly react once. As such, unadjusted TST is said to provide an upper bound for the rate coefficients. To correct for this, variational transition state theory varies the location of the dividing surface that defines a successful reaction in order to minimize the rate for each fixed energy. The rate expressions obtained in this microcanonical treatment can be integrated over the energy, taking into account the statistical distribution over energy states, so as to give the canonical, or thermal rates. === Canonical variational TST === A development of transition state theory in which the position of the dividing surface is varied so as to minimize the rate constant at a given temperature. === Improved canonical variational TST === A modification of canonical variational transition state theory in which, for energies below the threshold energy, the position of the dividing surface is taken to be that of the microcanonical threshold energy. This forces the contributions to rate constants to be zero if they are below the threshold energy. A compromise dividing surface is then chosen so as to minimize the contributions to the rate constant made by reactants having higher energies. === Nonadiabatic TST === An expansion of TST to the reactions when two spin-states are involved simultaneously is called nonadiabatic transition state theory (NA-TST). === Semiclassical TST === Using vibrational perturbation theory, effects such as tunnelling and variational effects can be accounted for within the SCTST formalism. == Applications == === Enzymatic reactions === Enzymes catalyze chemical reactions at rates that are astounding relative to uncatalyzed chemistry at the same reaction conditions. Each catalytic event requires a minimum of three or often more steps, all of which occur within the few milliseconds that characterize typical enzymatic reactions. 
According to transition state theory, the smallest fraction of the catalytic cycle is spent in the most important step, that of the transition state. The original proposals of absolute reaction rate theory for chemical reactions defined the transition state as a distinct species in the reaction coordinate that determined the absolute reaction rate. Soon thereafter, Linus Pauling proposed that the powerful catalytic action of enzymes could be explained by specific tight binding to the transition state species. Because reaction rate is proportional to the fraction of the reactant in the transition state complex, the enzyme was proposed to increase the concentration of the reactive species. This proposal was formalized by Wolfenden and coworkers at the University of North Carolina at Chapel Hill, who hypothesized that the rate increase imposed by enzymes is proportional to the affinity of the enzyme for the transition state structure relative to the Michaelis complex. Because enzymes typically increase the non-catalyzed reaction rate by factors of 106-1026, and Michaelis complexes often have dissociation constants in the range of 10−3-10−6 M, it is proposed that transition state complexes are bound with dissociation constants in the range of 10−14-10−23 M. As substrate progresses from the Michaelis complex to product, chemistry occurs by enzyme-induced changes in electron distribution in the substrate. Enzymes alter the electronic structure by protonation, proton abstraction, electron transfer, geometric distortion, hydrophobic partitioning, and interaction with Lewis acids and bases. Analogs that resemble the transition state structures should therefore provide the most powerful noncovalent inhibitors known. All chemical transformations pass through an unstable structure called the transition state, which is poised between the chemical structures of the substrates and products. The transition states for chemical reactions are proposed to have lifetimes near 10−13 seconds, on the order of the time of a single bond vibration. No physical or spectroscopic method is available to directly observe the structure of the transition state for enzymatic reactions, yet transition state structure is central to understanding enzyme catalysis since enzymes work by lowering the activation energy of a chemical transformation. It is now accepted that enzymes function to stabilize transition states lying between reactants and products, and that they would therefore be expected to bind strongly any inhibitor that closely resembles such a transition state. Substrates and products often participate in several enzyme-catalyzed reactions, whereas the transition state tends to be characteristic of one particular enzyme, so that such an inhibitor tends to be specific for that particular enzyme. The identification of numerous transition state inhibitors supports the transition state stabilization hypothesis for enzymatic catalysis. Currently, a large number of enzymes are known to interact with transition state analogs, most of which have been designed with the intention of inhibiting the target enzyme. Examples include HIV-1 protease, racemases, β-lactamases, metalloproteinases, cyclooxygenases and many others. === Adsorption on surfaces and reactions on surfaces === Desorption as well as reactions on surfaces are straightforward to describe with transition state theory. 
Analysis of adsorption to a surface from a liquid phase can present a challenge due to the inability to assess the concentration of the solute near the surface. When full details are not available, it has been proposed that reacting species' concentrations should be normalized to the concentration of active surface sites, an approximation called the surface reactant equi-density approximation (SREA). == See also == Curtin–Hammett principle Electron transfer Marcus theory == Notes == == References == Anslyn, Eric V.; Dougherty, Dennis A., Transition State Theory and Related Topics. In Modern Physical Organic Chemistry University Science Books: 2006; pp 365–373 Cleland, W.W., Isotope Effects: Determination of Enzyme Transition State Structure. Methods in Enzymology 1995, 249, 341–373 Laidler, K.; King, C., Development of transition-state theory. The Journal of Physical Chemistry 1983, 87, (15), 2657 Laidler, K., A lifetime of transition-state theory. The Chemical Intelligencer 1998, 4, (3), 39 Radzicka, A.; Wolfenden, R., Transition State and Multisubstrate Analog Inhibitors. Methods in Enzymology 1995, 249, 284–312 Schramm, VL., Enzymatic Transition States and Transition State Analog Design. Annual Review of Biochemistry 1998, 67, 693–720 Schramm, V.L., Enzymatic Transition State Theory and Transition State Analogue Design. Journal of Biological Chemistry 2007, 282, (39), 28297–28300 == External links == Simple application of TST
Wikipedia/Transition_state_theory
Color theory, or more specifically traditional color theory, is a historical body of knowledge describing the behavior of colors, namely in color mixing, color contrast effects, color harmony, color schemes and color symbolism. Modern color theory is generally referred to as color science. While there is no clear distinction in scope, traditional color theory tends to be more subjective and to have artistic applications, while color science tends to be more objective and to have functional applications, such as in chemistry, astronomy or color reproduction. Color theory dates back at least as far as Aristotle's treatise On Colors and Bharata's Nāṭya Shāstra. A formalization of "color theory" began in the 18th century, initially within a partisan controversy over Isaac Newton's theory of color (Opticks, 1704) and the nature of primary colors. By the end of the 19th century, a schism had formed between traditional color theory and color science. == History == Color theory is rooted in antiquity, with early musings on color in Aristotle's (d. 322 BCE) On Colors and Claudius Ptolemy's (d. 168 CE) Optics. The Nāṭya Shāstra (c. 200 BCE), composed in Ancient India, had an early, functional theory of color, considering four colours as primary: black, blue, yellow and red. It also describes the production of derived colors from primary colors. The influence of light on color was investigated and revealed further by al-Kindi (d. 873) and Ibn al-Haytham (d. 1039). Ibn Sina (d. 1037), Nasir al-Din al-Tusi (d. 1274), and Robert Grosseteste (d. 1253) discovered that, contrary to the teachings of Aristotle, there are multiple color paths to get from black to white. More modern approaches to color theory principles can be found in the writings of Leone Battista Alberti (c. 1435) and the notebooks of Leonardo da Vinci (c. 1490). Isaac Newton (d. 1727) worked extensively on color theory, developing his own theory from the observation that white light is composed of a spectrum of colors, and that color is not intrinsic to objects but rather arises from the way an object reflects or absorbs different wavelengths. His 1672 paper on the nature of white light and colours forms the basis for all work that followed on colour and colour vision. The RYB primary colors became the foundation of 18th-century theories of color vision, as the fundamental sensory qualities that are blended in the perception of all physical colors, and conversely, in the physical mixture of pigments or dyes. These theories were enhanced by 18th-century investigations of a variety of purely psychological color effects, in particular the contrast between "complementary" or opposing hues that are produced by color afterimages and in the contrasting shadows in colored light. These ideas and many personal color observations were summarized in two founding documents in color theory: the Theory of Colours (1810) by the German poet Johann Wolfgang von Goethe, and The Law of Simultaneous Color Contrast (1839) by the French industrial chemist Michel Eugène Chevreul. Charles Hayter published A New Practical Treatise on the Three Primitive Colours Assumed as a Perfect System of Rudimentary Information (London 1826), in which he described how all colors could be obtained from just three. 
Subsequently, German and English scientists established in the late 19th century that color perception is best described in terms of a different set of primary colors—red, green and blue-violet (RGB)—modeled through the additive mixture of three monochromatic lights. Subsequent research anchored these primary colors in the differing responses to light by three types of color receptors or cones in the retina (trichromacy). On this basis the quantitative description of the color mixture or colorimetry developed in the early 20th century, along with a series of increasingly sophisticated models of color space and color perception, such as the opponent process theory. Across the same period, industrial chemistry radically expanded the color range of lightfast synthetic pigments, allowing for substantially improved saturation in color mixtures of dyes, paints, and inks. It also created the dyes and chemical processes necessary for color photography. As a result, three-color printing became aesthetically and economically feasible in mass printed media, and the artists' color theory was adapted to primary colors most effective in inks or photographic dyes: cyan, magenta, and yellow (CMY). (In printing, dark colors are supplemented by black ink, called "key," to make the CMYK system; in both printing and photography, white is provided by the color of the paper.) These CMY primary colors were reconciled with the RGB primaries, and subtractive color mixing with additive color mixing, by defining the CMY primaries as substances that absorbed only one of the retinal primary colors: cyan absorbs only red (−R+G+B), magenta only green (+R−G+B), and yellow only blue-violet (+R+G−B). It is important to add that the CMYK, or process, color printing is meant as an economical way of producing a wide range of colors for printing, but is deficient in reproducing certain colors, notably orange and slightly deficient in reproducing purples. A wider range of colors can be obtained with the addition of other colors to the printing process, such as in Pantone's Hexachrome printing ink system (six colors), among others. For much of the 19th century artistic color theory either lagged behind scientific understanding or was augmented by science books written for the lay public, in particular Modern Chromatics (1879) by the American physicist Ogden Rood, and early color atlases developed by Albert Munsell (Munsell Book of Color, 1915, see Munsell color system) and Wilhelm Ostwald (Color Atlas, 1919). Major advances were made in the early 20th century by artists teaching or associated with the German Bauhaus, in particular Wassily Kandinsky, Johannes Itten, Faber Birren and Josef Albers, whose writings mix speculation with an empirical or demonstration-based study of color design principles. == Color mixing == One of the earliest purposes of color theory was to establish rules governing the mixing of pigments. Traditional color theory was built around "pure" or ideal colors, characterized by different sensory experiences rather than attributes of the physical world. This has led to several inaccuracies in traditional color theory principles that are not always remedied in modern formulations. Another issue has been the tendency to describe color effects holistically or categorically, for example as a contrast between "yellow" and "blue" conceived as generic colors instead of the three color attributes generally considered by color science: hue, colorfulness and lightness. 
These confusions are partly historical and arose in scientific uncertainty about color perception that was not resolved until the late 19th century, when artistic notions were already entrenched. They also arise from the attempt to describe the highly contextual and flexible behavior of color perception in terms of abstract color sensations that can be generated equivalently by any visual media. === Primary colors === Color theory asserts that there are three pure primary colors that can be used to mix all possible colors. These are sometimes considered as red, yellow and blue (RYB) or as red, green and blue (RGB). Ostensibly, any failure of specific paints or inks to match this ideal performance is due to the impurity or imperfection of the colorants. In contrast, modern color science does not recognize universal primary colors (no finite combination of colors can produce all other colors) and only uses primary colors to define a given color space. Any three primary colors can mix only a limited range of colors, called a gamut, which is always smaller (contains fewer colors) than the full range of colors humans can perceive. Primary colors also cannot be made by mixing other colors, as they are inherently pure and distinct. === Complementary colors === For the mixing of colored light, Isaac Newton's color wheel is often used to describe complementary colors, which are colors that cancel each other's hue to produce an achromatic (white, gray or black) light mixture. Newton offered as a conjecture that colors exactly opposite one another on the hue circle cancel out each other's hue; this concept was demonstrated more thoroughly in the 19th century. An example of complementary colors would be magenta and green. A key assumption in Newton's hue circle was that the "fiery" or maximum saturated hues are located on the outer circumference of the circle, while achromatic white is at the center. Then the saturation of the mixture of two spectral hues was predicted by the straight line between them; the mixture of three colors was predicted by the "center of gravity" or centroid of three triangle points, and so on. According to traditional color theory based on subtractive primary colors and the RYB color model, yellow mixed with purple, orange mixed with blue, or red mixed with green produces an equivalent gray and are the painter's complementary colors. One reason the artist's primary colors work at all is that the imperfect pigments being used have sloped absorption curves and change color with concentration. A pigment that is pure red at high concentrations can behave more like magenta at low concentrations. This allows it to make purples that would otherwise be impossible. Likewise, a blue that is ultramarine at high concentrations appears cyan at low concentrations, allowing it to be used to mix green. Chromium red pigments can appear orange, and then yellow, as the concentration is reduced. It is even possible to mix very low concentrations of the blue mentioned and the chromium red to get a greenish color. This works much better with oil colors than it does with watercolors and dyes. The old primaries depend on sloped absorption curves and pigment leakages to work, while newer scientifically derived ones depend solely on controlling the amount of absorption in certain parts of the spectrum. === Tints and shades === When mixing pigments, a color is produced which is always darker and lower in chroma, or saturation, than the parent colors. This moves the mixed color toward a neutral color—a gray or near-black. 
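To make the last point concrete, the following is a minimal sketch, not drawn from the article, of an idealized subtractive mix: each paint is represented only by three reflectances between 0 and 1, and the mixture is modeled by multiplying them channel by channel. Real pigments also scatter light, so this is only a rough approximation, and the example colors are illustrative values.

```python
# A simplified multiplicative (subtractive) mixing model.
# Reflectances near 1.0 mean the paint reflects that band strongly;
# multiplying reflectances can only keep or reduce them, never raise them,
# which is why pigment mixtures come out darker and duller than the parents.

def subtractive_mix(color_a, color_b):
    """Mix two colors given as (R, G, B) reflectances in the range 0.0-1.0."""
    return tuple(a * b for a, b in zip(color_a, color_b))

yellow = (0.9, 0.9, 0.1)   # reflects red and green, absorbs blue
blue   = (0.1, 0.4, 0.9)   # a hypothetical blue leaning toward cyan

mixed = subtractive_mix(yellow, blue)
print(mixed)  # (0.09, 0.36, 0.09) -- a dark, desaturated green
```

Note how the strongest channel of the mixture (0.36) is far below the strongest channel of either parent (0.9), which is the behavior the paragraph above describes.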
Lights are made brighter or dimmer by adjusting their brightness, or energy level; in painting, lightness is adjusted through mixture with white, black, or a color's complement. It is common among some painters to darken a paint color by adding black paint—producing colors called shades—or lighten a color by adding white—producing colors called tints. However, this is not always the best approach for representational painting, since an unfortunate side effect is that the colors also shift in hue. For instance, darkening a color by adding black can cause colors such as yellows, reds, and oranges, to shift toward the greenish or bluish part of the spectrum. Lightening a color by adding white can cause a shift towards blue when mixed with reds and oranges. Another practice when darkening a color is to use its opposite, or complementary, color (e.g. purplish-red added to yellowish-green) in order to neutralize it without a shift in hue, and to darken it if the added color is darker than the parent color. When lightening a color, this hue shift can be corrected with the addition of a small amount of an adjacent color to bring the hue of the mixture back in line with the parent color (e.g. adding a small amount of orange to a mixture of red and white will correct the tendency of this mixture to shift slightly towards the blue end of the spectrum). === Split primary palette === The split-primary palette is a color-wheel model that relies on misconceptions to attempt to explain the unsatisfactory results produced when mixing the traditional primary colors, red, yellow, and blue. Painters have long considered red, yellow, and blue to be primary colors. In practice, however, some of the mixtures produced from these colors lack chromatic intensity. Rather than adopt a more effective set of primary colors, proponents of split-primary theory explain this lack of chroma by the purported presence of impurities, small amounts of other colors in the paints, or biases away from the ideal primary toward one or the other of the adjacent colors. Every red paint, for example, is said to be tainted with, or biased toward, either blue or yellow, every blue paint toward either red or green, and every yellow toward either green or orange. These biases are said to result in mixtures that contain sets of complementary colors, darkening the resulting color. To obtain vivid mixed colors, according to split-primary theory, it is necessary to employ two primary colors whose biases both fall in the direction, on the color wheel, of the color to be mixed, combining, for example, green-biased blue and green-biased yellow to make bright green. Based on this reasoning, proponents of split-primary theory conclude that two versions of each primary color, often called "cool" and "warm," are needed in order to mix a wide gamut of high-chroma colors. In fact, the perceived bias of colors is not due to impurity. Rather, the appearance of any given colorant is inherent to its chemical and physical properties, and its purity is unrelated to whether it conforms to our arbitrary conception of an ideal hue. Moreover, the identity of gamut-optimizing primary colors is determined by the physiology of human color vision. Although no set of three primary paints can be mixed to obtain the complete color gamut perceived by humans, red, yellow, and blue are a poor choice if high-chroma mixtures are desired. This is because painting is a subtractive color process, for which red and blue are secondary, not primary, colors. 
Although flawed in principle, the split-primary system can be successful in practice, because the recommended blue-biased red and green-biased blue positions are often filled by near approximations of magenta and cyan, respectively, while orange-biased red and violet-biased blue serve as secondary colors, tending to further widen the mixable gamut. This system is in effect a simplified version of Newton's geometrical rule that colors closer together on the hue circle will produce more vibrant mixtures. A mixture produced from two primary colors, however, will be much more highly saturated than one produced from two secondary colors, even though the pairs are the same distance apart on the hue circle, revealing the limitations of the circular model in the prediction of color-mixing results. For example, a mixture of magenta and cyan inks or paints will produce vivid blues and violets, whereas a mixture of red and blue inks or paints will produce darkened violets and purples, even though the angular distance separating magenta and cyan is the same as that separating red and blue. == Color contrast == In his 1839 book The principles of harmony and contrast of colours, Chevreul introduced the law of color contrast, stating that colors that appear together (spatially or temporally) will be altered as if mixed with the complementary color of the other color, functionally boosting the color contrast between them. For example, a piece of yellow fabric placed on a blue background will appear tinted orange because orange is the complementary color to blue. Chevreul formalized three types of contrast: simultaneous contrast, which appears between two colors viewed side by side; successive contrast, for the afterimage left on an achromatic background after viewing a color; and mixed contrast, for the afterimage left on another color. === Warm vs. cool colors === The distinction between "warm" and "cool" colors has been important since at least the late 18th century. The difference (as traced by etymologies in the Oxford English Dictionary) seems related to the observed contrast in landscape light, between the "warm" colors associated with daylight or sunset, and the "cool" colors associated with a gray or overcast day. Warm colors are often said to be hues from red through yellow, browns, and tans included; cool colors are often said to be the hues from blue-green through blue-violet, most grays included. There is a historical disagreement about the colors that anchor the polarity, but 19th-century sources put the peak contrast between red-orange and greenish-blue. Color theory has ascribed perceptual and psychological effects to this contrast. Warm colors are said to advance or appear more active in a painting, while cool colors tend to recede; used in interior design or fashion, warm colors are said to arouse or stimulate the viewer, while cool colors calm and relax. Most of these effects, to the extent they are real, can be attributed to the higher saturation and lighter value of warm pigments in contrast to cool pigments; brown is a dark, unsaturated warm color that few people think of as visually active or psychologically arousing. == Color harmony and color schemes == It has been suggested that "Colors seen together to produce a pleasing affective response are said to be in harmony". However, color harmony is a complex notion because human responses to color are both affective and cognitive, involving emotional response and judgment. 
Hence, our responses to color and the notion of color harmony are open to the influence of a range of different factors. These factors include individual differences (such as age, gender, personal preference, affective state, etc.) as well as cultural, sub-cultural, and socially-based differences which give rise to conditioning and learned responses about color. In addition, context always has an influence on responses about color and the notion of color harmony, and this concept is also influenced by temporal factors (such as changing trends) and perceptual factors (such as simultaneous contrast) which may impinge on human response to color. The following conceptual model illustrates this 21st-century approach to color harmony: Color harmony = f(Col 1, 2, 3, …, n) · (ID + CE + CX + P + T), wherein color harmony is a function (f) of the interaction between colors (Col 1, 2, 3, …, n) and the factors that influence positive aesthetic response to color: individual differences (ID) such as age, gender, personality and affective state; cultural experiences (CE); the prevailing context (CX), which includes setting and ambient lighting; intervening perceptual effects (P); and the effects of time (T) in terms of prevailing social trends. In addition, given that humans can perceive around 2.3 million different colors, it has been suggested that the number of possible color combinations is virtually infinite, thereby implying that predictive color harmony formulae are fundamentally unsound. Despite this, many color theorists have devised formulae, principles or guidelines for color combination with the aim being to predict or specify positive aesthetic response or "color harmony". Color wheel models have often been used as a basis for color combination guidelines and for defining relationships between colors. Some theorists and artists believe juxtapositions of complementary colors will produce strong contrast, a sense of visual tension as well as "color harmony", while others believe juxtapositions of analogous colors will elicit a positive aesthetic response. Color combination guidelines (or formulas) suggest that colors next to each other on the color wheel model (analogous colors) tend to produce a single-hued or monochromatic color experience, and some theorists also refer to these as "simple harmonies". In addition, split complementary color schemes usually depict a modified complementary pair: instead of the "true" complementary of the first color, the hues adjacent to it are chosen; for example, the split complements of red are blue-green and yellow-green. A triadic color scheme adopts any three colors approximately equidistant around a color wheel model. Feisner and Mahnke are among a number of authors who provide color combination guidelines in greater detail. Color combination formulae and principles may provide some guidance but have limited practical application. This is because contextual, perceptual, and temporal factors influence how colors are perceived in any given situation, setting, or context. Such formulae and principles may be useful in fashion, interior and graphic design, but much depends on the tastes, lifestyle, and cultural norms of the viewer or consumer. 
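As a rough illustration of how the wheel-based guidelines above are applied in practice, here is a minimal sketch, not taken from the article, that derives the common schemes as fixed angular offsets on a 360-degree hue circle. The particular offsets (30 degrees for the split and analogous steps) and the placement of red at 0 degrees are illustrative assumptions, not standardized values.

```python
# Deriving common color-scheme hues from a starting hue angle on a color wheel.
# All angles are in degrees on a 0-360 hue circle; results wrap around with %.

def complementary(hue):
    """Hue directly opposite on the wheel."""
    return (hue + 180) % 360

def analogous(hue, offset=30):
    """The hue plus its two neighbors, one offset to each side."""
    return [(hue - offset) % 360, hue, (hue + offset) % 360]

def split_complementary(hue, offset=30):
    """The hue plus the two hues flanking its complement."""
    comp = complementary(hue)
    return [hue, (comp - offset) % 360, (comp + offset) % 360]

def triadic(hue):
    """Three hues spaced evenly (120 degrees apart) around the wheel."""
    return [hue, (hue + 120) % 360, (hue + 240) % 360]

# Example: take red to sit at 0 degrees on the wheel.
print(complementary(0))        # 180
print(analogous(0))            # [330, 0, 30]
print(split_complementary(0))  # [0, 150, 210]
print(triadic(0))              # [0, 120, 240]
```

Whether the hues at 150 and 210 degrees are called "yellow-green" and "blue-green" depends on how the wheel is labeled (RYB or RGB), which is exactly the kind of modeling choice the surrounding text describes.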
Black and white have long been known to combine "well" with almost any other colors; black decreases the apparent saturation or brightness of colors paired with it, and white shows off all hues to equal effect. == Color symbolism == A major underpinning of traditional color theory is that colors carry significant cultural symbolism, or even have immutable, universal meaning. As early as the ancient Greek philosophers, many theorists have devised color associations and linked particular connotative meanings to specific colors. However, connotative color associations and color symbolism tend to be culture-bound and may also vary across different contexts and circumstances. For example, red has many different connotative and symbolic meanings, ranging from exciting, arousing, sensual, romantic, and feminine, to a symbol of good luck, to a signal of danger. Such color associations tend to be learned and do not necessarily hold irrespective of individual and cultural differences or contextual, temporal or perceptual factors. While color symbolism and color associations exist, their existence does not provide evidential support for color psychology or claims that color has therapeutic properties. == See also == == Notes == == References == == External links == Understanding Color Theory by University of Colorado Boulder – Coursera Handprint.com: Color – A comprehensive site about color perception, color psychology, color theory, and color mixing The Dimensions of Colour – Color theory for artists using digital/traditional media
Wikipedia/Color_theory
Computer science is the study of computation, information, and automation. Computer science ranges from theoretical disciplines (such as algorithms, theory of computation, and information theory) to applied disciplines (including the design and implementation of hardware and software). Algorithms and data structures are central to computer science. The theory of computation concerns abstract models of computation and general classes of problems that can be solved using them. The fields of cryptography and computer security involve studying the means for secure communication and preventing security vulnerabilities. Computer graphics and computational geometry address the generation of images. Programming language theory considers different ways to describe computational processes, and database theory concerns the management of repositories of data. Human–computer interaction investigates the interfaces through which humans and computers interact, and software engineering focuses on the design and principles behind developing software. Areas such as operating systems, networks and embedded systems investigate the principles and design behind complex systems. Computer architecture describes the construction of computer components and computer-operated equipment. Artificial intelligence and machine learning aim to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, planning and learning found in humans and animals. Within artificial intelligence, computer vision aims to understand and process image and video data, while natural language processing aims to understand and process textual and linguistic data. The fundamental concern of computer science is determining what can and cannot be automated. The Turing Award is generally recognized as the highest distinction in computer science. == History == The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division. Algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner. Leibniz may be considered the first computer scientist and information theorist for various reasons, including the fact that he documented the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry when he invented his simplified arithmometer, the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine. He started developing this machine in 1834, and "in less than two years, he had sketched out many of the salient features of the modern computer". "A crucial step was the adoption of a punched card system derived from the Jacquard loom" making it infinitely programmable. 
In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, which is considered to be the first published algorithm ever specifically tailored for implementation on a computer. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; eventually his company became part of IBM. Following Babbage, although unaware of his earlier work, Percy Ludgate in 1909 published the second of the only two designs for mechanical analytical engines in history. In 1914, the Spanish engineer Leonardo Torres Quevedo published his Essays on Automatics, and designed, inspired by Babbage, a theoretical electromechanical calculating machine which was to be controlled by a read-only program. The paper also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, a prototype that demonstrated the feasibility of an electromechanical analytical engine, on which commands could be typed and the results printed automatically. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, which was making all kinds of punched card equipment and was also in the calculator business, to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit. When the machine was finished, some hailed it as "Babbage's dream come true". During the 1940s, with the development of new and more powerful computing machines such as the Atanasoff–Berry computer and ENIAC, the term computer came to refer to the machines rather than their human predecessors. As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia University in New York City. The renovated fraternity house on Manhattan's West Side was IBM's first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division, which today operates research facilities around the world. Ultimately, the close relationship between IBM and Columbia University was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science department in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own rights. == Etymology and scope == Although first proposed in 1956, the term "computer science" appears in a 1959 article in Communications of the ACM, in which Louis Fein argues for the creation of a Graduate School in Computer Sciences analogous to the creation of Harvard Business School in 1921. 
Louis justifies the name by arguing that, like management science, the subject is applied and interdisciplinary in nature, while having the characteristics typical of an academic discipline. His efforts, and those of others such as numerical analyst George Forsythe, were rewarded: universities went on to create such departments, starting with Purdue in 1962. Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed. Certain departments of major universities prefer the term computing science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy, to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. An alternative term, also proposed by Naur, is data science; this is now used for a multi-disciplinary field of data analysis, including statistics and databases. In the early days of computing, a number of terms for the practitioners of the field of computing were suggested (albeit facetiously) in the Communications of the ACM—turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist. Three months later in the same journal, comptologist was suggested, followed next year by hypologist. The term computics has also been suggested. In Europe, terms derived from contracted translations of the expression "automatic information" (e.g. "informazione automatica" in Italian) or "information and mathematics" are often used, e.g. informatique (French), Informatik (German), informatica (Italian, Dutch), informática (Spanish, Portuguese), informatika (Slavic languages and Hungarian) or pliroforiki (πληροφορική, which means informatics) in Greek. Similar words have also been adopted in the UK (as in the School of Informatics, University of Edinburgh). "In the U.S., however, informatics is linked with applied computing, or computing in the context of another domain." A folkloric quotation, often attributed to—but almost certainly not first formulated by—Edsger Dijkstra, states that "computer science is no more about computers than astronomy is about telescopes." The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been exchange of ideas between the various computer-related disciplines. Computer science research also often intersects other disciplines, such as cognitive science, linguistics, mathematics, physics, biology, Earth science, statistics, philosophy, and logic. Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science. 
Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel, Alan Turing, John von Neumann, Rózsa Péter and Alonzo Church, and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra. The relationship between computer science and software engineering is a contentious issue, which is further muddied by disputes over what the term "software engineering" means, and how computer science is defined. David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines. The academic, political, and funding aspects of computer science tend to depend on whether a department is formed with a mathematical emphasis or with an engineering emphasis. Computer science departments with a mathematics emphasis and with a numerical orientation consider alignment with computational science. Both types of departments tend to make efforts to bridge the field educationally if not across all research. == Philosophy == === Epistemology of computer science === Despite the word science in its name, there is debate over whether or not computer science is a discipline of science, mathematics, or engineering. Allen Newell and Herbert A. Simon argued in 1975: "Computer science is an empirical discipline. We would have called it an experimental science, but like astronomy, economics, and geology, some of its unique forms of observation and experience do not fit a narrow stereotype of the experimental method. Nonetheless, they are experiments. Each new machine that is built is an experiment. Actually constructing the machine poses a question to nature; and we listen for the answer by observing the machine in operation and analyzing it by all analytical and measurement means available." It has since been argued that computer science can be classified as an empirical science since it makes use of empirical testing to evaluate the correctness of programs, but a problem remains in defining the laws and theorems of computer science (if any exist) and defining the nature of experiments in computer science. Proponents of classifying computer science as an engineering discipline argue that the reliability of computational systems is investigated in the same way as bridges in civil engineering and airplanes in aerospace engineering. They also argue that while empirical sciences observe what presently exists, computer science observes what is possible to exist, and while scientists discover laws from observation, no proper laws have been found in computer science and it is instead concerned with creating phenomena. Proponents of classifying computer science as a mathematical discipline argue that computer programs are physical realizations of mathematical entities, and that programs can be reasoned about deductively through mathematical formal methods. Computer scientists Edsger W. Dijkstra and Tony Hoare regard instructions for computer programs as mathematical sentences and interpret formal semantics for programming languages as mathematical axiomatic systems. 
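As a hedged illustration of this axiomatic reading (an example supplied here, not taken from the article), a Hoare triple states what must hold before and after a statement, so that even a single assignment becomes a provable claim about program states:

```latex
% Hoare triple for an assignment: if x = n holds beforehand,
% then x = n + 1 holds afterwards.
\{\, x = n \,\} \quad x := x + 1 \quad \{\, x = n + 1 \,\}
```

On this view, verifying a program amounts to deriving such triples from axioms and inference rules, rather than establishing its behavior only by testing.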
=== Paradigms of computer science === A number of computer scientists have argued for the distinction of three separate paradigms in computer science. Peter Wegner argued that those paradigms are science, technology, and mathematics. Peter Denning's working group argued that they are theory, abstraction (modeling), and design. Amnon H. Eden described them as the "rationalist paradigm" (which treats computer science as a branch of mathematics, which is prevalent in theoretical computer science, and mainly employs deductive reasoning), the "technocratic paradigm" (which might be found in engineering approaches, most prominently in software engineering), and the "scientific paradigm" (which approaches computer-related artifacts from the empirical perspective of natural sciences, identifiable in some branches of artificial intelligence). Computer science focuses on methods involved in design, specification, programming, verification, implementation and testing of human-made computing systems. == Fields == As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software. CSAB, formerly called Computing Sciences Accreditation Board—which is made up of representatives of the Association for Computing Machinery (ACM), and the IEEE Computer Society (IEEE CS)—identifies four areas that it considers crucial to the discipline of computer science: theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. In addition to these four areas, CSAB also identifies fields such as software engineering, artificial intelligence, computer networking and communication, database systems, parallel computation, distributed computation, human–computer interaction, computer graphics, operating systems, and numerical and symbolic computation as being important areas of computer science. === Theoretical computer science === Theoretical computer science is mathematical and abstract in spirit, but it derives its motivation from practical and everyday computation. It aims to understand the nature of computation and, as a consequence of this understanding, provide more efficient methodologies. ==== Theory of computation ==== According to Peter Denning, the fundamental question underlying computer science is, "What can be automated?" Theory of computation is focused on answering fundamental questions about what can be computed and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a multitude of computational problems. The famous P = NP? problem, one of the Millennium Prize Problems, is an open problem in the theory of computation. ==== Information and coding theory ==== Information theory, closely related to probability and statistics, is related to the quantification of information. This was developed by Claude Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data. 
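As a small illustration of what "quantification of information" means in practice (an example added here, not drawn from the article), Shannon's entropy measures the average information, in bits, carried by a discrete random source:

```python
# Shannon entropy of a discrete probability distribution, in bits.
# H = -sum(p * log2(p)) over all outcomes with nonzero probability.
import math

def shannon_entropy(probabilities):
    """Return the entropy in bits of a distribution given as probabilities."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin carries 1 bit per toss; a heavily biased coin carries less,
# which is why its outcomes can be compressed more effectively.
print(shannon_entropy([0.5, 0.5]))   # 1.0
print(shannon_entropy([0.9, 0.1]))   # about 0.47
```

Results like this set the fundamental limits on data compression and reliable communication that the paragraph above refers to.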
Coding theory is the study of the properties of codes (systems for converting information from one form to another) and their fitness for a specific application. Codes are used for data compression, cryptography, error detection and correction, and more recently also for network coding. Codes are studied for the purpose of designing efficient and reliable data transmission methods. ==== Data structures and algorithms ==== Data structures and algorithms are the studies of commonly used computational methods and their computational efficiency. ==== Programming language theory and formal methods ==== Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering, and linguistics. It is an active research area, with numerous dedicated academic journals. Formal methods are a particular kind of mathematically based technique for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. They form an important theoretical underpinning for software engineering, especially where safety or security is involved. Formal methods are a useful adjunct to software testing since they help avoid errors and can also give a framework for testing. For industrial use, tool support is required. However, the high cost of using formal methods means that they are usually only used in the development of high-integrity and life-critical systems, where safety or security is of utmost importance. Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types, to problems in software and hardware specification and verification. === Applied computer science === ==== Computer graphics and visualization ==== Computer graphics is the study of digital visual contents and involves the synthesis and manipulation of image data. The study is connected to many other fields in computer science, including computer vision, image processing, and computational geometry, and is heavily applied in the fields of special effects and video games. ==== Image and sound processing ==== Information can take the form of images, sound, video or other multimedia. Bits of information can be streamed via signals. Its processing is the central notion of informatics, the European view on computing, which studies information processing algorithms independently of the type of information carrier – whether it is electrical, mechanical or biological. This field plays an important role in information theory, telecommunications, and information engineering, and has applications in medical image computing and speech synthesis, among others. The question "What is the lower bound on the complexity of fast Fourier transform algorithms?" is one of the unsolved problems in theoretical computer science. 
==== Computational science, finance and engineering ==== Scientific computing (or computational science) is the field of study concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems. A major usage of scientific computing is simulation of various processes, including computational fluid dynamics, physical, electrical, and electronic systems and circuits, societies and social situations (notably war games) along with their habitats, and interactions among biological cells. Modern computers enable optimization of such designs as complete aircraft. Notable in electrical and electronic circuit design is SPICE, as well as software for the physical realization of new (or modified) designs. The latter includes essential design software for integrated circuits. ==== Human–computer interaction ==== Human–computer interaction (HCI) is the field of study and research concerned with the design and use of computer systems, mainly based on the analysis of the interaction between humans and computer interfaces. HCI has several subfields that focus on the relationship of emotions, social behavior, and brain activity to computers. ==== Software engineering ==== Software engineering is the study of designing, implementing, and modifying software in order to ensure it is of high quality, affordable, maintainable, and fast to build. It is a systematic approach to software design, involving the application of engineering practices to software. Software engineering deals with the organizing and analyzing of software—it does not just deal with the creation or manufacture of new software, but its internal arrangement and maintenance. Examples include software testing, systems engineering, technical debt, and software development processes. ==== Artificial intelligence ==== Artificial intelligence (AI) aims to, or is required to, synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, learning, and communication found in humans and animals. From its origins in cybernetics and in the Dartmouth Conference (1956), artificial intelligence research has been necessarily cross-disciplinary, drawing on areas of expertise such as applied mathematics, symbolic logic, semiotics, electrical engineering, philosophy of mind, neurophysiology, and social intelligence. AI is associated in the popular mind with robotic development, but the main field of practical application has been as an embedded component in areas of software development, which require computational understanding. The starting point in the late 1940s was Alan Turing's question "Can computers think?", and the question remains effectively unanswered, although the Turing test is still used to assess computer output on the scale of human intelligence. But the automation of evaluative and predictive tasks has been increasingly successful as a substitute for human monitoring and intervention in domains of computer application involving complex real-world data. === Computer systems === ==== Computer architecture and microarchitecture ==== Computer architecture, or digital computer organization, is the conceptual design and fundamental operational structure of a computer system. It focuses largely on the way by which the central processing unit performs internally and accesses addresses in memory. 
Computer engineers study computational logic and the design of computer hardware, from individual processor components, microcontrollers, and personal computers to supercomputers and embedded systems. The term "architecture" in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks Jr., members of the Machine Organization department in IBM's main research center in 1959. ==== Concurrent, parallel and distributed computing ==== Concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other. A number of mathematical models have been developed for general concurrent computation including Petri nets, process calculi and the parallel random access machine model. When multiple computers are connected in a network while using concurrency, this is known as a distributed system. Computers within that distributed system have their own private memory, and information can be exchanged to achieve common goals. ==== Computer networks ==== This branch of computer science studies the construction and behavior of computer networks. It addresses their performance, resilience, security, scalability, and cost-effectiveness, along with the variety of services they can provide. ==== Computer security and cryptography ==== Computer security is a branch of computer technology with the objective of protecting information from unauthorized access, disruption, or modification while maintaining the accessibility and usability of the system for its intended users. Historical cryptography is the art of writing and deciphering secret messages. Modern cryptography is the scientific study of problems relating to distributed computations that can be attacked. Technologies studied in modern cryptography include symmetric and asymmetric encryption, digital signatures, cryptographic hash functions, key-agreement protocols, blockchain, zero-knowledge proofs, and garbled circuits. ==== Databases and data mining ==== A database is intended to organize, store, and retrieve large amounts of data easily. Digital databases are managed using database management systems to store, create, maintain, and search data, through database models and query languages. Data mining is a process of discovering patterns in large data sets. == Discoveries == The philosopher of computing Bill Rapaport noted three Great Insights of Computer Science: Gottfried Wilhelm Leibniz's, George Boole's, Alan Turing's, Claude Shannon's, and Samuel Morse's insight: there are only two objects that a computer has to deal with in order to represent "anything". All the information about any computable problem can be represented using only 0 and 1 (or any other bistable pair that can flip-flop between two easily distinguishable states, such as "on/off", "magnetized/de-magnetized", "high-voltage/low-voltage", etc.). Alan Turing's insight: there are only five actions that a computer has to perform in order to do "anything". Every algorithm can be expressed in a language for a computer consisting of only five basic instructions: move left one location; move right one location; read symbol at current location; print 0 at current location; print 1 at current location. Corrado Böhm and Giuseppe Jacopini's insight: there are only three ways of combining these actions (into more complex ones) that are needed in order for a computer to do "anything". 
Only three rules are needed to combine any set of basic instructions into more complex ones: sequence: first do this, then do that; selection: IF such-and-such is the case, THEN do this, ELSE do that; repetition: WHILE such-and-such is the case, DO this. The three rules of Böhm and Jacopini's insight can be further simplified with the use of goto (which means it is more elementary than structured programming). == Programming paradigms == Programming languages can be used to accomplish different tasks in different ways. Common programming paradigms include: Functional programming, a style of building the structure and elements of computer programs that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It is a declarative programming paradigm, which means programming is done with expressions or declarations instead of statements. Imperative programming, a programming paradigm that uses statements that change a program's state. In much the same way that the imperative mood in natural languages expresses commands, an imperative program consists of commands for the computer to perform. Imperative programming focuses on describing how a program operates. Object-oriented programming, a programming paradigm based on the concept of "objects", which may contain data, in the form of fields, often known as attributes; and code, in the form of procedures, often known as methods. A feature of objects is that an object's procedures can access and often modify the data fields of the object with which they are associated. Thus object-oriented computer programs are made out of objects that interact with one another. Service-oriented programming, a programming paradigm that uses "services" as the unit of computer work, to design and implement integrated business applications and mission critical software programs. Many languages offer support for multiple paradigms, making the distinction more a matter of style than of technical capabilities. == Research == Conferences are important events for computer science research. During these conferences, researchers from the public and private sectors present their recent work and meet. Unlike in most other academic fields, in computer science, the prestige of conference papers is greater than that of journal publications. One proposed explanation for this is that the quick development of this relatively new field requires rapid review and distribution of results, a task better handled by conferences than by journals. == See also == == Notes == == References == == Further reading == == External links == DBLP Computer Science Bibliography Association for Computing Machinery Institute of Electrical and Electronics Engineers
Wikipedia/Computer_Science
Virtue ethics (also aretaic ethics, from Greek ἀρετή [aretḗ]) is a philosophical approach that treats virtue and character as the primary subjects of ethics, in contrast to other ethical systems that put consequences of voluntary acts, principles or rules of conduct, or obedience to divine authority in the primary role. Virtue ethics is usually contrasted with two other major approaches in ethics, consequentialism and deontology, which make the goodness of outcomes of an action (consequentialism) and the concept of moral duty (deontology) central. While virtue ethics does not necessarily deny the importance to ethics of goodness of states of affairs or of moral duties, it emphasizes virtue, and sometimes other concepts, like eudaimonia, to an extent that other ethics theories do not. == Key concepts == === Virtue and vice === In virtue ethics, a virtue is a characteristic disposition to think, feel, and act well in some domain of life. In contrast, a vice is a characteristic disposition to think, feel, and act poorly in some domain of life. Virtues are not everyday habits; they are character traits, in the sense that they are central to someone’s personality and what they are like as a person. In early versions and some modern versions of virtue ethics, a virtue is defined as a character trait that promotes or exhibits human "flourishing and well being" in the person who exhibits it. Some modern versions of virtue ethics do not define virtues in terms of well being or flourishing, and some go so far as to define virtues as traits that tend to promote some other good that is defined independently of the virtues, thereby subsuming virtue ethics under (or somehow merging it with) consequentialist ethics. To Aristotle, a virtue was not a skill that made you better able to achieve eudaimonia but was itself an expression of eudaimonia — eudaimonia in activity. In contrast with consequentialist and deontological ethical systems, in which one may be called upon to do the right thing even though it is not in one's own interests (one is to do it instead for the greater good, or out of duty), in virtue ethics, one does the right thing because it is in one's own interests. Part of training in practical virtue ethics is to come to see the coincidence of one's enlightened self-interest and the practice of the virtues, so that one is virtuous willingly, gladly, and enthusiastically because one knows that being virtuous is the best thing one can do with oneself.: I  === Virtue and emotion === In ancient Greek and modern eudaimonic virtue ethics, virtues and vices are complex dispositions that involve both affective and intellectual components. That is, they are dispositions that involve both being able to reason well about the right thing to do (see below on phronesis), and also to engage emotions and feelings correctly. For example, a generous person can reason well about when and how to help people, and such a person also helps people with pleasure and without conflict. In this, virtuous people are contrasted not only with vicious people (who reason poorly about what to do and are emotionally attached to the wrong things) and with the incontinent (who are tempted by their feelings into doing the wrong thing even though they know what is right), but also with the merely continent (whose emotions tempt them toward doing the wrong thing but whose strength of will lets them do what they know is right). 
According to Rosalind Hursthouse, in Aristotelian virtue ethics, the emotions have moral significance because "virtues (and vices) are all dispositions not only to act, but to feel emotions, as reactions as well as impulses to action... [and] In the person with the virtues, these emotions will be felt on the right occasions, toward the right people or objects, for the right reasons, where 'right' means 'correct'..." === Phronesis and eudaimonia === Phronesis (φρόνησις; prudence, practical virtue, or practical wisdom) is an acquired trait that enables its possessor to identify the best thing to do in any given situation. Unlike theoretical wisdom, practical reason results in action or decision. As John McDowell puts it, practical wisdom involves a "perceptual sensitivity" to what a situation requires. Eudaimonia (εὐδαιμονία) is a state variously translated from Greek as 'well-being', 'happiness', 'blessedness', and in the context of virtue ethics, 'human flourishing'. Eudaimonia in this sense is not a subjective, but an objective, state. It characterizes the well-lived life. According to Aristotle, the most prominent exponent of eudaimonia in the Western philosophical tradition, eudaimonia defines the goal of human life. It consists of exercising the characteristic human quality—reason—as the soul's most proper and nourishing activity. In his Nicomachean Ethics, Aristotle, like Plato before him, argued that the pursuit of eudaimonia is an "activity of the soul in accordance with perfect virtue",: I  which further could only properly be exercised in the characteristic human community—the polis or city-state. Although eudaimonia was first popularized by Aristotle, it now belongs to the tradition of virtue theories generally. For the virtue theorist, eudaimonia describes that state achieved by the person who lives the proper human life, an outcome that can be reached by practicing the virtues. A virtue is a habit or quality that allows the bearer to succeed at his, her, or its purpose. The virtue of a knife, for example, is sharpness; among the virtues of a racehorse is speed. Thus, to identify the virtues for human beings, one must have an account of what is the human purpose. Not all modern virtue ethics theories are eudaimonic; some place another end in place of eudaimonia, while others are non-teleological: that is, they do not account for virtues in terms of the results that the practice of the virtues produce or tend to produce. == History of virtue == Like much of the Western tradition, virtue theory originated in ancient Greek philosophy. Virtue ethics began with Socrates, and was subsequently developed further by Plato, Aristotle, and the Stoics. Virtue ethics concentrates on the character of the individual, rather than the acts (or consequences thereof) of the individual. There is debate among adherents of virtue ethics concerning what specific virtues are praiseworthy. However, most theorists agree that ethics is demonstrated by the practice of virtues. Plato and Aristotle's treatments of virtues are not the same. Plato believes virtue is effectively an end to be sought, for which a friend might be a useful means. Aristotle states that the virtues function more as means to safeguard human relations, particularly authentic friendship, without which one's quest for happiness is frustrated. Discussion of what were known as the four cardinal virtues—wisdom, justice, fortitude, and temperance—can be found in Plato's Republic. 
The virtues also figure prominently in Aristotle's ethical theory found in Nicomachean Ethics. Virtue theory was inserted into the study of history by moralistic historians such as Livy, Plutarch, and Tacitus. The Greek idea of the virtues was passed on in Roman philosophy through Cicero and later incorporated into Christian moral theology by Ambrose of Milan. During the scholastic period, the most comprehensive consideration of the virtues from a theological perspective was provided by Thomas Aquinas in his Summa Theologiae and his Commentaries on the Nicomachean Ethics. After the Reformation, Aristotle's Nicomachean Ethics continued to be the main authority for the discipline of ethics at Protestant universities until the late seventeenth century, with over fifty Protestant commentaries published on the Nicomachean Ethics before 1682. Though the tradition receded into the background of European philosophical thought in the past few centuries, the term "virtue" remained current during this period, and in fact appears prominently in the tradition of classical republicanism or classical liberalism. This tradition was prominent in the intellectual life of 16th-century Italy, as well as 17th- and 18th-century Britain and America; indeed the term "virtue" appears frequently in the work of Tomás Fernández de Medrano, Niccolò Machiavelli, David Hume, the republicans of the English Civil War period, the 18th-century English Whigs, and the prominent figures among the Scottish Enlightenment and the American Founding Fathers. === Contemporary "aretaic turn" === Although some Enlightenment philosophers (e.g. Hume) continued to emphasise the virtues, with the ascendancy of utilitarianism and deontological ethics, virtue theory moved to the margins of Western philosophy. The contemporary revival of virtue theory is frequently traced to the philosopher Elizabeth Anscombe's 1958 essay "Modern Moral Philosophy". Following this: In the 1976 paper "The Schizophrenia of Modern Ethical Theories", Michael Stocker summarises the main aretaic criticisms of deontological and consequentialist ethics. Philosopher, psychologist, and encyclopedist Mortimer Adler appealed to Aristotelian ethics, and the virtue theory of happiness or eudaimonia throughout his published work. Philippa Foot published a collection of essays in 1978 entitled Virtues and Vices. Alasdair MacIntyre made an effort to reconstruct a virtue-based theory in dialogue with the problems of modern and postmodern thought; his works include After Virtue and Three Rival Versions of Moral Enquiry. Paul Ricoeur accorded an important place to Aristotelian teleological ethics in his hermeneutical phenomenology of the subject, most notably in his book Oneself as Another. Theologian Stanley Hauerwas found the language of virtue helpful in his own project. Richard Clyde Taylor argues for the restoration of classical virtues as the basis for morality in Virtue Ethics: An Introduction (1991). Roger Crisp and Michael Slote edited a collection of important essays titled Virtue Ethics. Martha Nussbaum and Amartya Sen employed virtue theory in theorising the capability approach to international development. Julia Annas wrote The Morality of Happiness (1993). Lawrence C. Becker identified current virtue theory with Greek Stoicism in A New Stoicism (1998). Rosalind Hursthouse published On Virtue Ethics (1999). Psychologist Martin Seligman drew on classical virtue ethics in conceptualizing positive psychology. 
Psychologist Daniel Goleman opens his book on Emotional Intelligence with a challenge from Aristotle's Nicomachean Ethics. Michael Sandel discusses Aristotelian ethics to support his ethical theory of justice in his book Justice: What's the Right Thing to Do? The aretaic turn in moral philosophy is paralleled by analogous developments in other philosophical disciplines. One of these is epistemology, where a distinctive virtue epistemology was developed by Linda Zagzebski and others. In political theory, there has been discussion of "virtue politics", and in legal theory, there is a small but growing body of literature on virtue jurisprudence. The aretaic turn also exists in American constitutional theory, where proponents argue for an emphasis on virtue and vice of constitutional adjudicators. Aretaic approaches to morality, epistemology, and jurisprudence have been the subject of intense debates. One criticism focuses on the problem of guidance; one opponent, Robert Louden in his article "Some Vices of Virtue Ethics", questions whether the idea of a virtuous moral actor, believer, or judge can provide the guidance necessary for action, belief formation, or the resolution of legal disputes. == Lists of virtues == There are several lists of virtues. Socrates argued that virtue is knowledge, which suggests that there is really only one virtue. The Stoics identified four cardinal virtues: wisdom, justice, courage, and temperance. Wisdom is subdivided into good sense, good calculation, quick-wittedness, discretion, and resourcefulness. Justice is subdivided into piety, honesty, equity, and fair dealing. Courage is subdivided into endurance, confidence, high-mindedness, cheerfulness, and industriousness. Temperance or moderation is subdivided into good discipline, seemliness, modesty, and self-control. John McDowell argues that virtue is a "perceptual capacity" to identify how one ought to act, and that all particular virtues are merely "specialized sensitivities" to a range of reasons for acting. === Aristotle's list === Aristotle identifies 12 virtues that demonstrate a person is performing their human function well. He distinguished virtues pertaining to emotion and desire from those relating to the mind.: II  The first he calls moral virtues, and the second intellectual virtues (though both are "moral" in the modern sense of the word). ==== Moral virtues ==== Aristotle suggested that each moral virtue was a golden mean between two corresponding vices, one of excess and one of deficiency. Each intellectual virtue is a mental skill or habit by which the mind arrives at truth, affirming what is or denying what is not.: VI  In the Nicomachean Ethics, he discusses 11 moral virtues. ==== Intellectual virtues ==== Nous (intelligence), which apprehends fundamental truths (such as definitions, self-evident principles): VI.11 ; Episteme (science), which is skill with inferential reasoning (such as proofs, syllogisms, demonstrations): VI.6 ; Sophia (theoretical wisdom), which combines fundamental truths with valid, necessary inferences to reason well about unchanging truths.: VI.5  Aristotle also mentions several other traits: Gnome (good sense) – passing judgment, "sympathetic understanding": VI.11 ; Synesis (understanding) – comprehending what others say, does not issue commands; Phronesis (practical wisdom) – knowledge of what to do, knowledge of changing truths, issues commands: VI.8 ; Techne (art, craftsmanship): VI.4  Aristotle's list is not the only list, however. 
As Alasdair MacIntyre observed in After Virtue, thinkers as diverse as Homer, the authors of the New Testament, Thomas Aquinas, and Benjamin Franklin have all proposed lists. Walter Kaufmann proposed as the four cardinal virtues: ambition/humility ("humbition"), love, courage, and honesty. == Criticisms == Proponents of virtue theory sometimes argue that a central feature of a virtue is its universal applicability. In other words, any character trait defined as a virtue must reasonably be universally regarded as a virtue for all people. According to this view, it is inconsistent to claim, for example, servility as a female virtue, while at the same time not proposing it as a male one. Other proponents of virtue theory, notably Alasdair MacIntyre, respond to this objection by arguing that any account of the virtues must indeed be generated out of the community in which those virtues are to be practiced: the very word ethics implies ethos. That is to say that the virtues are, and necessarily must be, grounded in a particular time and place. What counts as a virtue in 4th-century BCE Athens would be a ludicrous guide to proper behaviour in 21st-century CE Toronto and vice versa. To take this view does not necessarily commit one to the argument that accounts of the virtues must therefore be static: moral activity—that is, attempts to contemplate and practice the virtues—can provide the cultural resources that allow people to change, albeit slowly, the ethos of their own societies. MacIntyre appears to take this position in his seminal work on virtue ethics, After Virtue. Another objection to virtue theory is that virtue ethics does not focus on what sorts of actions are morally permitted and which ones are not, but rather on what sort of qualities someone ought to foster in order to become a good person. In other words, while some virtue theorists may not condemn, for example, murder as an inherently immoral or impermissible sort of action, they may argue that someone who commits a murder is severely lacking in several important virtues, such as compassion and fairness. Still, antagonists of the theory often object that this particular feature of the theory makes virtue ethics useless as a universal norm of acceptable conduct suitable as a base for legislation. Some virtue theorists concede this point, but respond by opposing the very notion of legitimate legislative authority instead, effectively advocating some form of anarchism as the political ideal. Other virtue theorists argue that laws should be made by virtuous legislators, and still another group argue that it is possible to base a judicial system on the moral notion of virtues rather than rules. Aristotle himself saw his Nicomachean Ethics as a prequel for his Politics and felt that the point of politics was to create the fertile soil for a virtuous citizenry to develop in, and that one purpose of virtue was that it helps you to contribute to a healthy polis.: X.9  Some virtue theorists might respond to this overall objection with the notion of a "bad act" also being an act characteristic of vice. That is to say that those acts that do not aim at virtue, or that stray from virtue, would constitute our conception of "bad behavior". Although not all virtue ethicists agree to this notion, this is one way the virtue ethicist can re-introduce the concept of the "morally impermissible". One could raise an objection that he is committing an argument from ignorance by postulating that what is not virtuous is unvirtuous. 
In other words, just because an action or person lacks evidence of virtue does not, all else being equal, imply that said action or person is unvirtuous. === Subsumed in deontology and utilitarianism === Martha Nussbaum suggested that while virtue ethics is often considered to be anti-Enlightenment, "suspicious of theory and respectful of the wisdom embodied in local practices", it is actually neither fundamentally distinct from, nor does it qualify as a rival approach to, deontology and utilitarianism. She argues that philosophers from these two Enlightenment traditions often include theories of virtue. She pointed out that Kant's "Doctrine of Virtue" (in The Metaphysics of Morals) "covers most of the same topics as do classical Greek theories", "that he offers a general account of virtue, in terms of the strength of the will in overcoming wayward and selfish inclinations; that he offers detailed analyses of standard virtues such as courage and self-control, and of vices, such as avarice, mendacity, servility, and pride; that, although in general, he portrays inclination as inimical to virtue, he also recognizes that sympathetic inclinations offer crucial support to virtue, and urges their deliberate cultivation." Nussbaum also points to considerations of virtue by utilitarians such as Henry Sidgwick (The Methods of Ethics), Jeremy Bentham (The Principles of Morals and Legislation), and John Stuart Mill, who writes of moral development as part of an argument for the moral equality of women (The Subjection of Women). She argues that contemporary virtue ethicists such as Alasdair MacIntyre, Bernard Williams, Philippa Foot, and John McDowell have few points of agreement and that the common core of their work does not represent a break from Kant. === Kantian critique === Immanuel Kant's position on virtue ethics is contested. Those who argue that Kantian deontology conflicts with virtue ethics include Alasdair MacIntyre, Philippa Foot, and Bernard Williams. In the Groundwork of the Metaphysics of Morals and the Critique of Practical Reason, Immanuel Kant offers many criticisms of the ethical frameworks and moral theories that preceded him. Kant rarely mentioned Aristotle by name but did not exclude his moral philosophy of virtue ethics from his critique. Many Kantian arguments against virtue ethics claim that virtue ethics is inconsistent, or sometimes that it is not a real moral theory at all. In "What Is Virtue Ethics All About?", Gregory Velazco y Trianosky identified the key points of divergence between virtue ethicists and what he called "neo-Kantianism", in the form of these nine neo-Kantian moral assertions: The crucial moral question is "what is it right/obligatory to do?" Moral judgments are those that concern the rightness of actions. Such judgments take the form of rules or principles. Such rules or principles are universal, not respecting persons. They are not based on some concept of human good that is independent of moral goodness. They take the form of categorical imperatives that can be justified independently of the desires of the person they apply to. They are motivating; they can compel action in an agent, also independently of that agent's desires. An action, in order to be morally virtuous, must be motivated by this sort of moral judgment (not, for example, merely coincidentally aligned with it). The virtuousness of a character trait, or virtue, derives from the relationship that trait has to moral judgments, rules, and principles. 
Trianosky says that modern sympathizers with virtue ethics almost all reject neo-Kantian claim #1, and many of them also reject certain of the other claims. === Utopianism and pluralism === Robert B. Louden criticizes virtue ethics on the basis that it promotes a form of unsustainable utopianism. Trying to arrive at a single set of virtues is immensely difficult in contemporary societies as, according to Louden, they contain "more ethnic, religious, and class groups than did the moral community which Aristotle theorized about" with each of these groups having "not only its own interests but its own set of virtues as well". Louden notes in passing that MacIntyre, a supporter of virtue-based ethics, has grappled with this in After Virtue but that ethics cannot dispense with building rules around acts and rely only on discussing the moral character of persons. == Topics in virtue ethics == === Virtue ethics as a category === Virtue contrasts with deontological and consequentialist ethics; the three are the most predominant contemporary normative-ethical theories. Deontological ethics, sometimes referred to as duty ethics, emphasizes adherence to ethical principles or duties. How these duties are defined, however, is often a subject of debate. One rule scheme used by deontologists is divine command theory. Deontology also depends upon meta-ethical realism in postulating the existence of moral absolutes, regardless of circumstances. Immanuel Kant is considered a foremost theorist of deontological ethics. The next predominant school of thought in normative ethics is consequentialism. While deontology emphasizes doing one's duty, consequentialism bases the morality of an action on its outcome. Instead of saying that one has a moral duty to abstain from murder, a consequentialist would say that we should abstain from murder because it has undesirable effects. The main contention is what outcomes should (or can) be identified as objectively desirable. John Stuart Mill's greatest happiness principle is a commonly-adopted criterion of what is objectively desirable. Mill asserts that the desirability of an action is the net amount of happiness it brings, the number of people it brings happiness to, and the duration of that happiness. He tries to delineate classes of happiness, some preferable to others, but classifying such a concept is difficult. A virtue ethicist identifies virtues (also known as desirable characteristics) that a good person embodies. Exhibiting these virtues is the aim of ethics, and one's actions are a reflection of one's virtues. To the virtue philosopher, action cannot be used as a demarcation of morality because a virtue encompasses more than a selection of an action; it is a way of being that leads the person exhibiting the virtue to consistently make certain types of choices. There is disagreement in virtue ethics about what are, and what are not, virtues. There are also difficulties in identifying the "virtuous" action to take in all circumstances, and how to define a virtue. Consequentialist and deontological theories often still employ the term virtue in a restricted sense: as a tendency (or disposition) to adhere to the system's principles or rules. In those theories, virtue is secondary and the principles (or rules) are primary. These differing senses of what constitutes virtue are a potential source of confusion. Dogmatic claims about the purpose of human life, or about what a good life is for human beings, are typically controversial. 
=== Virtue and politics === Virtue theory emphasizes Aristotle's belief in the polis as the acme of political organization, and the role of the virtues in enabling human beings to flourish in that environment. In contrast, classical republicanism emphasizes Tacitus's concern that power and luxury can corrupt individuals and destroy liberty, as Tacitus perceived in the transformation of the Roman Republic into the Roman Empire. Virtue for classical republicans is a shield against this sort of corruption and a means to preserve the good life one has, rather than a means by which to achieve the good life one does not yet have. Another way to put the distinction between the two traditions is that virtue ethics relies on Aristotle's fundamental distinction between the human-being-as-he-is from the human-being-as-he-should-be, while classical republicanism relies on the Tacitean distinction of the risk-of-becoming. Virtue ethics has a number of contemporary applications: Social and political philosophy Within the field of social ethics, Deirdre McCloskey argues that virtue ethics can provide a basis for a balanced approach to understanding capitalism and capitalist societies. Education Within the field of philosophy of education, James Page argues that virtue ethics can provide a rationale and foundation for peace education. Health care and medical ethics Thomas Alured Faunce argued that whistleblowing in healthcare settings would be more respected within clinical governance pathways if it had a firmer academic foundation in virtue ethics. He called for whistleblowing to be expressly supported in the UNESCO Universal Declaration on Bioethics and Human Rights. Barry Schwartz argues that "practical wisdom" is an antidote to much of the inefficient and inhumane bureaucracy of modern health care systems. Technology and the virtues In her book Technology and the Virtues, Shannon Vallor proposed a series of "technomoral" virtues that people need to cultivate in order to flourish in our socio-technological world: Honesty (Respecting Truth), Self-control (Becoming the Author of Our Desires), Humility (Knowing What We Do Not Know), Justice (Upholding Rightness), Courage (Intelligent Fear and Hope), Empathy (Compassionate Concern for Others), Care (Loving Service to Others), Civility (Making Common Cause), Flexibility (Skillful Adaptation to Change), Perspective (Holding on to the Moral Whole), and Magnanimity (Moral Leadership and Nobility of Spirit). == See also == == Notes == == References == == Further reading == Yu, Jiyuan (1998). "Virtue: Confucius and Aristotle". Philosophy East and West. 48 (2): 323–47. doi:10.2307/1399830. JSTOR 1399830. Devettere, Raymond J. (2002). Introduction to Virtue Ethics. Washington, D.C.: Georgetown University Press. Taylor, Richard (2002). An Introduction to Virtue Ethics. Amherst: Prometheus Books. Darwall, Stephen, ed. (2003). Virtue Ethics. Oxford: B. Blackwell. Swanton, Christine (2003). Virtue Ethics: a Pluralistic View. Oxford: Oxford University Press. Gardiner, Stephen M., ed. (2005). Virtue Ethics, Old and New. Ithaca: Cornell University Press. Russell, Daniel C., ed. (2013). The Cambridge Companion to Virtue Ethics. New York: Cambridge University Press. == External links == "Virtue Ethics". Internet Encyclopedia of Philosophy. Hursthouse, Rosalind. "Virtue Ethics". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Homiak, Marcia. "Moral Character". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. 
Virtue Ethics – summary, criticisms and how to apply the theory Legal theory lexicon: Virtue ethics by Larry Solum. The Virtue Ethics Research Hub The Four Stoic Virtues
Wikipedia/Virtue_theory
Møller–Plesset perturbation theory (MP) is one of several quantum chemistry post-Hartree–Fock ab initio methods in the field of computational chemistry. It improves on the Hartree–Fock method by adding electron correlation effects by means of Rayleigh–Schrödinger perturbation theory (RS-PT), usually to second (MP2), third (MP3) or fourth (MP4) order. Its main idea was published as early as 1934 by Christian Møller and Milton S. Plesset. == Rayleigh–Schrödinger perturbation theory == The MP perturbation theory is a special case of RS perturbation theory. In RS theory one considers an unperturbed Hamiltonian operator H ^ 0 {\displaystyle {\hat {H}}_{0}} , to which a small (often external) perturbation V ^ {\displaystyle {\hat {V}}} is added: H ^ = H ^ 0 + λ V ^ . {\displaystyle {\hat {H}}={\hat {H}}_{0}+\lambda {\hat {V}}.} Here, λ is an arbitrary real parameter that controls the size of the perturbation. In MP theory the zeroth-order wave function is an exact eigenfunction of the Fock operator, which thus serves as the unperturbed operator. The perturbation is the correlation potential. In RS-PT the perturbed wave function and perturbed energy are expressed as a power series in λ: Ψ = lim m → ∞ ∑ i = 0 m λ i Ψ ( i ) , {\displaystyle \Psi =\lim _{m\to \infty }\sum _{i=0}^{m}\lambda ^{i}\Psi ^{(i)},} E = lim m → ∞ ∑ i = 0 m λ i E ( i ) . {\displaystyle E=\lim _{m\to \infty }\sum _{i=0}^{m}\lambda ^{i}E^{(i)}.} Substitution of these series into the time-independent Schrödinger equation gives a new equation as m → ∞ {\displaystyle m\to \infty } : ( H ^ 0 + λ V ) ( ∑ i = 0 m λ i Ψ ( i ) ) = ( ∑ i = 0 m λ i E ( i ) ) ( ∑ i = 0 m λ i Ψ ( i ) ) . {\displaystyle \left({\hat {H}}_{0}+\lambda V\right)\left(\sum _{i=0}^{m}\lambda ^{i}\Psi ^{(i)}\right)=\left(\sum _{i=0}^{m}\lambda ^{i}E^{(i)}\right)\left(\sum _{i=0}^{m}\lambda ^{i}\Psi ^{(i)}\right).} Equating the factors of λ k {\displaystyle \lambda ^{k}} in this equation gives a kth-order perturbation equation, where k = 0, 1, 2, ..., m. See perturbation theory for more details. == Møller–Plesset perturbation == === Original formulation === The MP-energy corrections are obtained from Rayleigh–Schrödinger (RS) perturbation theory with the unperturbed Hamiltonian defined as the shifted Fock operator, H ^ 0 ≡ F ^ + ⟨ Φ 0 | ( H ^ − F ^ ) | Φ 0 ⟩ {\displaystyle {\hat {H}}_{0}\equiv {\hat {F}}+\langle \Phi _{0}|({\hat {H}}-{\hat {F}})|\Phi _{0}\rangle } and the perturbation defined as the correlation potential, V ^ ≡ H ^ − H ^ 0 = H ^ − ( F ^ + ⟨ Φ 0 | ( H ^ − F ^ ) | Φ 0 ⟩ ) , {\displaystyle {\hat {V}}\equiv {\hat {H}}-{\hat {H}}_{0}={\hat {H}}-\left({\hat {F}}+\langle \Phi _{0}|({\hat {H}}-{\hat {F}})|\Phi _{0}\rangle \right),} where the normalized Slater determinant Φ0 is the lowest eigenstate of the Fock operator: F ^ Φ 0 ≡ ∑ k = 1 N f ^ ( k ) Φ 0 = 2 ∑ i = 1 N / 2 ε i Φ 0 . {\displaystyle {\hat {F}}\Phi _{0}\equiv \sum _{k=1}^{N}{\hat {f}}(k)\Phi _{0}=2\sum _{i=1}^{N/2}\varepsilon _{i}\Phi _{0}.} Here N is the number of electrons in the molecule under consideration (a factor of 2 in the energy arises from the fact that each orbital is occupied by a pair of electrons with opposite spin), H ^ {\displaystyle {\hat {H}}} is the usual electronic Hamiltonian, f ^ ( k ) {\displaystyle {\hat {f}}(k)} is the one-electron Fock operator, and εi is the orbital energy belonging to the doubly occupied spatial orbital φi. 
Since the Slater determinant Φ0 is an eigenstate of F ^ {\displaystyle {\hat {F}}} , it follows readily that F ^ Φ 0 − ⟨ Φ 0 | F ^ | Φ 0 ⟩ Φ 0 = 0 ⟹ H ^ 0 Φ 0 = ⟨ Φ 0 | H ^ | Φ 0 ⟩ Φ 0 , {\displaystyle {\hat {F}}\Phi _{0}-\langle \Phi _{0}|{\hat {F}}|\Phi _{0}\rangle \Phi _{0}=0\implies {\hat {H}}_{0}\Phi _{0}=\langle \Phi _{0}|{\hat {H}}|\Phi _{0}\rangle \Phi _{0},} i.e. the zeroth-order energy is the expectation value of H ^ {\displaystyle {\hat {H}}} with respect to Φ0, the Hartree-Fock energy. Similarly, it can be seen that in this formulation the MP1 energy E MP1 ≡ ⟨ Φ 0 | V ^ | Φ 0 ⟩ = 0 {\displaystyle E_{\text{MP1}}\equiv \langle \Phi _{0}|{\hat {V}}|\Phi _{0}\rangle =0} . Hence, the first meaningful correction appears at MP2 energy. In order to obtain the MP2 formula for a closed-shell molecule, the second order RS-PT formula is written in a basis of doubly excited Slater determinants. (Singly excited Slater determinants do not contribute because of the Brillouin theorem). After application of the Slater–Condon rules for the simplification of N-electron matrix elements with Slater determinants in bra and ket and integrating out spin, it becomes E MP2 = 2 ∑ i , j , a , b ⟨ φ i φ j | v ~ ^ | φ a φ b ⟩ ⟨ φ a φ b | v ~ ^ | φ i φ j ⟩ ε i + ε j − ε a − ε b − ∑ i , j , a , b ⟨ φ i φ j | v ~ ^ | φ a φ b ⟩ ⟨ φ a φ b | v ~ ^ | φ j φ i ⟩ ε i + ε j − ε a − ε b {\displaystyle {\begin{aligned}E_{\text{MP2}}&=2\sum _{i,j,a,b}{\frac {\langle \varphi _{i}\varphi _{j}|{\hat {\tilde {v}}}|\varphi _{a}\varphi _{b}\rangle \langle \varphi _{a}\varphi _{b}|{\hat {\tilde {v}}}|\varphi _{i}\varphi _{j}\rangle }{\varepsilon _{i}+\varepsilon _{j}-\varepsilon _{a}-\varepsilon _{b}}}-\sum _{i,j,a,b}{\frac {\langle \varphi _{i}\varphi _{j}|{\hat {\tilde {v}}}|\varphi _{a}\varphi _{b}\rangle \langle \varphi _{a}\varphi _{b}|{\hat {\tilde {v}}}|\varphi _{j}\varphi _{i}\rangle }{\varepsilon _{i}+\varepsilon _{j}-\varepsilon _{a}-\varepsilon _{b}}}\\\end{aligned}}} where 𝜑i and 𝜑j are canonical occupied orbitals and 𝜑a and 𝜑b are virtual (or unoccupied) orbitals. The quantities εi, εj, εa, and εb are the corresponding orbital energies. Clearly, through second-order in the correlation potential, the total electronic energy is given by the Hartree–Fock energy plus second-order MP correction: E ≈ EHF + EMP2. The solution of the zeroth-order MP equation (which by definition is the Hartree–Fock equation) gives the Hartree–Fock energy. The first non-vanishing perturbation correction beyond the Hartree–Fock treatment is the second-order energy. === Alternative formulation === Equivalent expressions are obtained by a slightly different partitioning of the Hamiltonian, which results in a different division of energy terms over zeroth- and first-order contributions, while for second- and higher-order energy corrections the two partitionings give identical results. The formulation is commonly used by chemists, who are now large users of these methods. This difference is due to the fact, well known in Hartree–Fock theory, that ⟨ Φ 0 | ( H ^ − F ^ ) | Φ 0 ⟩ ≠ 0 ⟺ E HF ≠ 2 ∑ i = 1 N / 2 ε i . {\displaystyle \langle \Phi _{0}|({\hat {H}}-{\hat {F}})|\Phi _{0}\rangle \neq 0\qquad \Longleftrightarrow \qquad E_{\text{HF}}\neq 2\sum _{i=1}^{N/2}\varepsilon _{i}.} (The Hartree–Fock energy is not equal to the sum of occupied-orbital energies). In the alternative partitioning, one defines H ^ 0 ≡ F ^ , V ^ ≡ H ^ − F ^ . 
{\displaystyle {\hat {H}}_{0}\equiv {\hat {F}},\qquad {\hat {V}}\equiv {\hat {H}}-{\hat {F}}.} Clearly, in this partitioning, E MP0 = 2 ∑ i = 1 N / 2 ε i , E MP1 = E HF − 2 ∑ i = 1 N / 2 ε i . {\displaystyle E_{\text{MP0}}=2\sum _{i=1}^{N/2}\varepsilon _{i},\qquad E_{\text{MP1}}=E_{\text{HF}}-2\sum _{i=1}^{N/2}\varepsilon _{i}.} Obviously, with this alternative formulation, the Møller–Plesset theorem does not hold in the literal sense that EMP1 ≠ 0. The solution of the zeroth-order MP equation is the sum of orbital energies. The zeroth plus first-order correction yields the Hartree–Fock energy. As with the original formulation, the first non-vanishing perturbation correction beyond the Hartree–Fock treatment is the second-order energy. To reiterate, the second- and higher-order corrections are the same in both formulations. == Methods == Second (MP2), third (MP3), and fourth (MP4) order Møller–Plesset calculations are standard levels used in calculating small systems and are implemented in many computational chemistry codes. Higher level MP calculations, generally only MP5, are possible in some codes. However, they are rarely used because of their cost. Systematic studies of MP perturbation theory have shown that it is not necessarily a convergent theory at high orders. Convergence can be slow, rapid, oscillatory, regular, highly erratic or simply non-existent, depending on the precise chemical system or basis set. The density matrix for the first-order and higher MP2 wavefunction is of the type known as response density, which differs from the more usual expectation value density. The eigenvalues of the response density matrix (which are the occupation numbers of the MP2 natural orbitals) can therefore be greater than 2 or negative. Unphysical numbers are a sign of a divergent perturbation expansion. Additionally, various important molecular properties calculated at MP3 and MP4 level are no better than their MP2 counterparts, even for small molecules. For open shell molecules, MPn-theory can directly be applied only to unrestricted Hartree–Fock reference functions (since ROHF states are not in general eigenvectors of the Fock operator). However, the resulting energies often suffer from severe spin contamination, leading to large errors. A possibly better alternative is to use one of the MP2-like methods based on restricted open-shell Hartree–Fock (ROHF). There are many ROHF based MP2-like methods because of arbitrariness in the ROHF wavefunction (for example HCPT, ROMP, RMP (also called ROHF-MBPT2), OPT1 and OPT2, ZAPT, IOPT, etc.). Some of the ROHF based MP2-like theories suffer from spin-contamination in their perturbed density and energies beyond second-order. These methods (Hartree–Fock, unrestricted Hartree–Fock and restricted Hartree–Fock) use a single-determinant wave function. Multi-configurational self-consistent field (MCSCF) methods use several determinants and can be used for the unperturbed operator, although not uniquely, so many methods, such as complete active space perturbation theory (CASPT2) and Multi-Configuration Quasi-Degenerate Perturbation Theory (MCQDPT), have been developed. MCSCF-based methods are not without perturbation series divergences. The analogue of MP perturbation theory in Hartree–Fock theory is Görling–Levy (GL) perturbation theory in Kohn–Sham (KS) density functional theory (DFT). 
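As a concrete illustration of the least expensive of these corrections, the closed-shell MP2 working equation quoted earlier maps almost directly onto array operations. The following Python/NumPy sketch is illustrative only and is not taken from the sources above: the orbital-energy vectors and the two-electron integral array g_oovv are hypothetical inputs that a real calculation would obtain from a converged Hartree–Fock step, and the orbitals are assumed to be real so that ⟨ab|ij⟩ = ⟨ij|ab⟩ and ⟨ab|ji⟩ = ⟨ij|ba⟩.

```python
import numpy as np

def mp2_energy_closed_shell(eps_occ, eps_vir, g_oovv):
    """Closed-shell MP2 correlation energy from canonical spatial orbitals.

    eps_occ : (n_occ,) occupied orbital energies eps_i
    eps_vir : (n_vir,) virtual orbital energies  eps_a
    g_oovv  : (n_occ, n_occ, n_vir, n_vir) hypothetical integral array with
              g_oovv[i, j, a, b] = <phi_i phi_j | v | phi_a phi_b>
              (physicists' notation; real orbitals assumed)
    """
    # Energy denominators eps_i + eps_j - eps_a - eps_b, with shape (i, j, a, b)
    denom = (eps_occ[:, None, None, None] + eps_occ[None, :, None, None]
             - eps_vir[None, None, :, None] - eps_vir[None, None, None, :])
    # Numerator 2<ij|ab><ij|ab> - <ij|ab><ij|ba>; for real orbitals, swapping
    # the two virtual indices of g_oovv gives <ij|ba>.
    numer = g_oovv * (2.0 * g_oovv - g_oovv.transpose(0, 1, 3, 2))
    return float(np.sum(numer / denom))
```

Adding the result to the Hartree–Fock energy gives the total energy through second order, E ≈ EHF + EMP2; the higher orders discussed above require further terms of the perturbation series rather than this single contraction.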
== See also == Electron correlation Perturbation theory (quantum mechanics) Post-Hartree–Fock List of quantum chemistry and solid state physics software Görling–Levy perturbation theory == References == == Further reading == Cramer, Christopher J. (2002). Essentials of Computational Chemistry. Chichester: John Wiley & Sons, Ltd. pp. 207–211. ISBN 978-0-471-48552-0. Foresman, James B.; Æleen Frisch (1996). Exploring Chemistry with Electronic Structure Methods. Pittsburgh, PA: Gaussian Inc. pp. 267–271. ISBN 978-0-9636769-4-8. Leach, Andrew R. (1996). Molecular Modelling. Harlow: Longman. pp. 83–85. ISBN 978-0-582-23933-3. Levine, Ira N. (1991). Quantum Chemistry. Englewood Cliffs, New Jersey: Prentice Hall. pp. 511–515. ISBN 978-0-205-12770-2. Szabo, Attila; Neil S. Ostlund (1996). Modern Quantum Chemistry. Mineola, New York: Dover Publications, Inc. pp. 350–353. ISBN 978-0-486-69186-2.
Wikipedia/Møller–Plesset_perturbation_theory
In mathematical analysis, asymptotic analysis, also known as asymptotics, is a method of describing limiting behavior. As an illustration, suppose that we are interested in the properties of a function f (n) as n becomes very large. If f(n) = n2 + 3n, then as n becomes very large, the term 3n becomes insignificant compared to n2. The function f(n) is said to be "asymptotically equivalent to n2, as n → ∞". This is often written symbolically as f (n) ~ n2, which is read as "f(n) is asymptotic to n2". An example of an important asymptotic result is the prime number theorem. Let π(x) denote the prime-counting function (which is not directly related to the constant pi), i.e. π(x) is the number of prime numbers that are less than or equal to x. Then the theorem states that π ( x ) ∼ x ln ⁡ x . {\displaystyle \pi (x)\sim {\frac {x}{\ln x}}.} Asymptotic analysis is commonly used in computer science as part of the analysis of algorithms and is often expressed there in terms of big O notation. == Definition == Formally, given functions f (x) and g(x), we define a binary relation f ( x ) ∼ g ( x ) ( as x → ∞ ) {\displaystyle f(x)\sim g(x)\quad ({\text{as }}x\to \infty )} if and only if (de Bruijn 1981, §1.4) lim x → ∞ f ( x ) g ( x ) = 1. {\displaystyle \lim _{x\to \infty }{\frac {f(x)}{g(x)}}=1.} The symbol ~ is the tilde. The relation is an equivalence relation on the set of functions of x; the functions f and g are said to be asymptotically equivalent. The domain of f and g can be any set for which the limit is defined: e.g. real numbers, complex numbers, positive integers. The same notation is also used for other ways of passing to a limit: e.g. x → 0, x ↓ 0, |x| → 0. The way of passing to the limit is often not stated explicitly, if it is clear from the context. Although the above definition is common in the literature, it is problematic if g(x) is zero infinitely often as x goes to the limiting value. For that reason, some authors use an alternative definition. The alternative definition, in little-o notation, is that f ~ g if and only if f ( x ) = g ( x ) ( 1 + o ( 1 ) ) . {\displaystyle f(x)=g(x)(1+o(1)).} This definition is equivalent to the prior definition if g(x) is not zero in some neighbourhood of the limiting value. == Properties == If f ∼ g {\displaystyle f\sim g} and a ∼ b {\displaystyle a\sim b} , then, under some mild conditions, the following hold: f r ∼ g r {\displaystyle f^{r}\sim g^{r}} , for every real r log ⁡ ( f ) ∼ log ⁡ ( g ) {\displaystyle \log(f)\sim \log(g)} if lim g ≠ 1 {\displaystyle \lim g\neq 1} f × a ∼ g × b {\displaystyle f\times a\sim g\times b} f / a ∼ g / b {\displaystyle f/a\sim g/b} Such properties allow asymptotically equivalent functions to be freely exchanged in many algebraic expressions. Also, if we further have g ∼ h {\displaystyle g\sim h} , then, because the asymptote is a transitive relation, then we also have f ∼ h {\displaystyle f\sim h} . == Examples of asymptotic formulas == Factorial n ! ∼ 2 π n ( n e ) n {\displaystyle n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}} —this is Stirling's approximation Partition function For a positive integer n, the partition function, p(n), gives the number of ways of writing the integer n as a sum of positive integers, where the order of addends is not considered. p ( n ) ∼ 1 4 n 3 e π 2 n 3 {\displaystyle p(n)\sim {\frac {1}{4n{\sqrt {3}}}}e^{\pi {\sqrt {\frac {2n}{3}}}}} Airy function The Airy function, Ai(x), is a solution of the differential equation y″ − xy = 0; it has many applications in physics. 
Ai ⁡ ( x ) ∼ e − 2 3 x 3 2 2 π x 1 / 4 {\displaystyle \operatorname {Ai} (x)\sim {\frac {e^{-{\frac {2}{3}}x^{\frac {3}{2}}}}{2{\sqrt {\pi }}x^{1/4}}}} Hankel functions H α ( 1 ) ( z ) ∼ 2 π z e i ( z − 2 π α − π 4 ) H α ( 2 ) ( z ) ∼ 2 π z e − i ( z − 2 π α − π 4 ) {\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(z)&\sim {\sqrt {\frac {2}{\pi z}}}e^{i\left(z-{\frac {2\pi \alpha -\pi }{4}}\right)}\\H_{\alpha }^{(2)}(z)&\sim {\sqrt {\frac {2}{\pi z}}}e^{-i\left(z-{\frac {2\pi \alpha -\pi }{4}}\right)}\end{aligned}}} == Asymptotic expansion == An asymptotic expansion of a function f(x) is in practice an expression of that function in terms of a series, the partial sums of which do not necessarily converge, but such that taking any initial partial sum provides an asymptotic formula for f. The idea is that successive terms provide an increasingly accurate description of the order of growth of f. In symbols, it means we have f ∼ g 1 , {\displaystyle f\sim g_{1},} but also f − g 1 ∼ g 2 {\displaystyle f-g_{1}\sim g_{2}} and f − g 1 − ⋯ − g k − 1 ∼ g k {\displaystyle f-g_{1}-\cdots -g_{k-1}\sim g_{k}} for each fixed k. In view of the definition of the ∼ {\displaystyle \sim } symbol, the last equation means f − ( g 1 + ⋯ + g k ) = o ( g k ) {\displaystyle f-(g_{1}+\cdots +g_{k})=o(g_{k})} in the little o notation, i.e., f − ( g 1 + ⋯ + g k ) {\displaystyle f-(g_{1}+\cdots +g_{k})} is much smaller than g k . {\displaystyle g_{k}.} The relation f − g 1 − ⋯ − g k − 1 ∼ g k {\displaystyle f-g_{1}-\cdots -g_{k-1}\sim g_{k}} takes its full meaning if g k + 1 = o ( g k ) {\displaystyle g_{k+1}=o(g_{k})} for all k, which means the g k {\displaystyle g_{k}} form an asymptotic scale. In that case, some authors may abusively write f ∼ g 1 + ⋯ + g k {\displaystyle f\sim g_{1}+\cdots +g_{k}} to denote the statement f − ( g 1 + ⋯ + g k ) = o ( g k ) . {\displaystyle f-(g_{1}+\cdots +g_{k})=o(g_{k}).} One should however be careful that this is not a standard use of the ∼ {\displaystyle \sim } symbol, and that it does not correspond to the definition given in § Definition. In the present situation, this relation g k = o ( g k − 1 ) {\displaystyle g_{k}=o(g_{k-1})} actually follows from combining steps k and k−1; by subtracting f − g 1 − ⋯ − g k − 2 = g k − 1 + o ( g k − 1 ) {\displaystyle f-g_{1}-\cdots -g_{k-2}=g_{k-1}+o(g_{k-1})} from f − g 1 − ⋯ − g k − 2 − g k − 1 = g k + o ( g k ) , {\displaystyle f-g_{1}-\cdots -g_{k-2}-g_{k-1}=g_{k}+o(g_{k}),} one gets g k + o ( g k ) = o ( g k − 1 ) , {\displaystyle g_{k}+o(g_{k})=o(g_{k-1}),} i.e. g k = o ( g k − 1 ) . {\displaystyle g_{k}=o(g_{k-1}).} In case the asymptotic expansion does not converge, for any particular value of the argument there will be a particular partial sum which provides the best approximation and adding additional terms will decrease the accuracy. This optimal partial sum will usually have more terms as the argument approaches the limit value. === Examples of asymptotic expansions === Gamma function e x x x 2 π x Γ ( x + 1 ) ∼ 1 + 1 12 x + 1 288 x 2 − 139 51840 x 3 − ⋯ ( x → ∞ ) {\displaystyle {\frac {e^{x}}{x^{x}{\sqrt {2\pi x}}}}\Gamma (x+1)\sim 1+{\frac {1}{12x}}+{\frac {1}{288x^{2}}}-{\frac {139}{51840x^{3}}}-\cdots \ (x\to \infty )} Exponential integral x e x E 1 ( x ) ∼ ∑ n = 0 ∞ ( − 1 ) n n ! x n ( x → ∞ ) {\displaystyle xe^{x}E_{1}(x)\sim \sum _{n=0}^{\infty }{\frac {(-1)^{n}n!}{x^{n}}}\ (x\to \infty )} Error function π x e x 2 erfc ⁡ ( x ) ∼ 1 + ∑ n = 1 ∞ ( − 1 ) n ( 2 n − 1 ) ! ! n ! 
( 2 x 2 ) n ( x → ∞ ) {\displaystyle {\sqrt {\pi }}xe^{x^{2}}\operatorname {erfc} (x)\sim 1+\sum _{n=1}^{\infty }(-1)^{n}{\frac {(2n-1)!!}{n!(2x^{2})^{n}}}\ (x\to \infty )} where m!! is the double factorial. === Worked example === Asymptotic expansions often occur when an ordinary series is used in a formal expression that forces the taking of values outside of its domain of convergence. For example, we might start with the ordinary series 1 1 − w = ∑ n = 0 ∞ w n {\displaystyle {\frac {1}{1-w}}=\sum _{n=0}^{\infty }w^{n}} The expression on the left is valid on the entire complex plane w ≠ 1 {\displaystyle w\neq 1} , while the right hand side converges only for | w | < 1 {\displaystyle |w|<1} . Multiplying by e − w / t {\displaystyle e^{-w/t}} and integrating both sides yields ∫ 0 ∞ e − w t 1 − w d w = ∑ n = 0 ∞ t n + 1 ∫ 0 ∞ e − u u n d u {\displaystyle \int _{0}^{\infty }{\frac {e^{-{\frac {w}{t}}}}{1-w}}\,dw=\sum _{n=0}^{\infty }t^{n+1}\int _{0}^{\infty }e^{-u}u^{n}\,du} The integral on the left hand side can be expressed in terms of the exponential integral. The integral on the right hand side, after the substitution u = w / t {\displaystyle u=w/t} , may be recognized as the gamma function. Evaluating both, one obtains the asymptotic expansion e − 1 t Ei ⁡ ( 1 t ) = ∑ n = 0 ∞ n ! t n + 1 {\displaystyle e^{-{\frac {1}{t}}}\operatorname {Ei} \left({\frac {1}{t}}\right)=\sum _{n=0}^{\infty }n!\;t^{n+1}} Here, the right hand side is clearly not convergent for any non-zero value of t. However, by keeping t small, and truncating the series on the right to a finite number of terms, one may obtain a fairly good approximation to the value of Ei ⁡ ( 1 / t ) {\displaystyle \operatorname {Ei} (1/t)} . Substituting x = − 1 / t {\displaystyle x=-1/t} and noting that Ei ⁡ ( x ) = − E 1 ( − x ) {\displaystyle \operatorname {Ei} (x)=-E_{1}(-x)} results in the asymptotic expansion given earlier in this article. == Asymptotic distribution == In mathematical statistics, an asymptotic distribution is a hypothetical distribution that is in a sense the "limiting" distribution of a sequence of distributions. A distribution is an ordered set of random variables Zi for i = 1, …, n, for some positive integer n. An asymptotic distribution allows i to range without bound, that is, n is infinite. A special case of an asymptotic distribution is when the late entries go to zero—that is, the Zi go to 0 as i goes to infinity. Some instances of "asymptotic distribution" refer only to this special case. This is based on the notion of an asymptotic function which cleanly approaches a constant value (the asymptote) as the independent variable goes to infinity; "clean" in this sense meaning that for any desired closeness epsilon there is some value of the independent variable after which the function never differs from the constant by more than epsilon. An asymptote is a straight line that a curve approaches but never meets or crosses. Informally, one may speak of the curve meeting the asymptote "at infinity" although this is not a precise definition. In the equation y = 1 x , {\displaystyle y={\frac {1}{x}},} y becomes arbitrarily small in magnitude as x increases. == Applications == Asymptotic analysis is used in several mathematical sciences. In statistics, asymptotic theory provides limiting approximations of the probability distribution of sample statistics, such as the likelihood ratio statistic and the expected value of the deviance. 
Asymptotic theory does not provide a method of evaluating the finite-sample distributions of sample statistics, however. Non-asymptotic bounds are provided by methods of approximation theory. Examples of applications are the following. In applied mathematics, asymptotic analysis is used to build numerical methods to approximate equation solutions. In mathematical statistics and probability theory, asymptotics are used in analysis of long-run or large-sample behaviour of random variables and estimators. In computer science, in the analysis of algorithms, when considering the performance of algorithms. The behavior of physical systems, an example being statistical mechanics. In accident analysis, when identifying the causes of crashes through count modeling with a large number of crash counts in a given time and space. Asymptotic analysis is a key tool for exploring the ordinary and partial differential equations which arise in the mathematical modelling of real-world phenomena. An illustrative example is the derivation of the boundary layer equations from the full Navier-Stokes equations governing fluid flow. In many cases, the asymptotic expansion is in powers of a small parameter, ε: in the boundary layer case, this is the nondimensional ratio of the boundary layer thickness to a typical length scale of the problem. Indeed, applications of asymptotic analysis in mathematical modelling often center around a nondimensional parameter which has been shown, or assumed, to be small through a consideration of the scales of the problem at hand. Asymptotic expansions typically arise in the approximation of certain integrals (Laplace's method, saddle-point method, method of steepest descent) or in the approximation of probability distributions (Edgeworth series). The Feynman graphs in quantum field theory are another example of asymptotic expansions which often do not converge. === Asymptotic versus Numerical Analysis === De Bruijn illustrates the use of asymptotics in the following dialog between Dr. N.A., a Numerical Analyst, and Dr. A.A., an Asymptotic Analyst: N.A.: I want to evaluate my function f ( x ) {\displaystyle f(x)} for large values of x {\displaystyle x} , with a relative error of at most 1%. A.A.: f ( x ) = x − 1 + O ( x − 2 ) ( x → ∞ ) {\displaystyle f(x)=x^{-1}+\mathrm {O} (x^{-2})\qquad (x\to \infty )} . N.A.: I am sorry, I don't understand. A.A.: | f ( x ) − x − 1 | < 8 x − 2 ( x > 10 4 ) . {\displaystyle |f(x)-x^{-1}|<8x^{-2}\qquad (x>10^{4}).} N.A.: But my value of x {\displaystyle x} is only 100. A.A.: Why did you not say so? My evaluations give | f ( x ) − x − 1 | < 57000 x − 2 ( x > 100 ) . {\displaystyle |f(x)-x^{-1}|<57000x^{-2}\qquad (x>100).} N.A.: This is no news to me. I know already that 0 < f ( 100 ) < 1 {\displaystyle 0<f(100)<1} . A.A.: I can gain a little on some of my estimates. Now I find that | f ( x ) − x − 1 | < 20 x − 2 ( x > 100 ) . {\displaystyle |f(x)-x^{-1}|<20x^{-2}\qquad (x>100).} N.A.: I asked for 1%, not for 20%. A.A.: It is almost the best thing I possibly can get. Why don't you take larger values of x {\displaystyle x} ? N.A.: !!! I think it's better to ask my electronic computing machine. Machine: f(100) = 0.01137 42259 34008 67153 A.A.: Haven't I told you so? My estimate of 20% was not far off from the 14% of the real error. N.A.: !!! . . . ! Some days later, Miss N.A. wants to know the value of f(1000), but her machine would take a month of computation to give the answer. She returns to her Asymptotic Colleague, and gets a fully satisfactory reply. 
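The practical point made in both the worked example and the dialog, namely that a divergent asymptotic expansion still gives excellent approximations if it is truncated near its smallest term, is easy to check numerically. The following Python sketch is illustrative only and is not drawn from the cited sources; it uses the expansion x e^x E1(x) ~ Σ (−1)^n n!/x^n listed among the examples above, with scipy.special.exp1 supplying the reference value of E1(x).

```python
import math
from scipy.special import exp1  # E1(x), the exponential integral

def partial_sum(x, n_terms):
    """Partial sum of the divergent asymptotic series sum_n (-1)^n n! / x^n."""
    return sum((-1) ** n * math.factorial(n) / x ** n for n in range(n_terms))

x = 10.0
exact = x * math.exp(x) * exp1(x)   # the quantity the series approximates
for n_terms in (2, 5, 10, 15, 20):
    approx = partial_sum(x, n_terms)
    print(f"{n_terms:2d} terms: |error| = {abs(approx - exact):.2e}")
```

For x = 10 the error shrinks until roughly ten terms, where the individual terms n!/x^n are smallest, and then grows again, matching the statement that for each argument value there is an optimal partial sum beyond which adding terms decreases the accuracy.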
== See also == == Notes == == References == Balser, W. (1994), From Divergent Power Series To Analytic Functions, Springer-Verlag, ISBN 9783540485940 de Bruijn, N. G. (1981), Asymptotic Methods in Analysis, Dover Publications, ISBN 9780486642215 Estrada, R.; Kanwal, R. P. (2002), A Distributional Approach to Asymptotics, Birkhäuser, ISBN 9780817681302 Miller, P. D. (2006), Applied Asymptotic Analysis, American Mathematical Society, ISBN 9780821840788 Murray, J. D. (1984), Asymptotic Analysis, Springer, ISBN 9781461211228 Paris, R. B.; Kaminsky, D. (2001), Asymptotics and Mellin-Barnes Integrals, Cambridge University Press == External links == Asymptotic Analysis —home page of the journal, which is published by IOS Press A paper on time series analysis using asymptotic distribution
Wikipedia/Asymptotic_theory
Attribution is a term used in psychology which deals with how individuals perceive the causes of everyday experience, as being either external or internal. Models that explain this process are collectively called Attribution theory. Psychological research into attribution began with the work of Fritz Heider in the early 20th century, and the theory was further advanced by Harold Kelley and Bernard Weiner. Heider first introduced the concept of perceived 'locus of causality' to define the perception of one's environment. For instance, an experience may be perceived as being caused by factors outside the person's control (external) or it may be perceived as the person's own doing (internal). These initial perceptions are called attributions. Psychologists use these attributions to better understand an individual's motivation and competence. The theory is of particular interest to employers, who use it to increase worker motivation, goal orientation, and productivity. Psychologists have identified various biases in the way people attribute causation, especially when dealing with others. The fundamental attribution error describes the tendency to attribute dispositional or personality-based explanations for behavior, rather than considering external factors. In other words, a person tends to assume that other people are each responsible for their own misfortunes, while blaming external factors for the person's own misfortunes. Culture bias is when someone makes an assumption about the behavior of a person based on their own cultural practices and beliefs. Attribution theory has been criticised as being mechanistic and reductionist for assuming that people are rational, logical, and systematic thinkers. It also fails to address the social, cultural, and historical factors that shape attributions of cause. == Background == Fritz Heider developed Attribution theory during a time when psychologists were furthering research on personality, social psychology, and human motivation. Heider worked alone in his research, but stated that he wished for Attribution theory not to be attributed to him because many different ideas and people were involved in the process. Weiner argued that Heider was too modest, and that the openness of the theory keeps it relevant today. Attribution theory is the original parent theory, with Harold Kelley's covariation model and Bernard Weiner's three-dimensional model branching from it. Attribution theory also influenced several other theories, such as Heider's Perceived Locus of Causality, which eventually led to Deci and Ryan's Theory of Self-determination. == Key theorists == === Fritz Heider === Gestalt psychologist Fritz Heider is often described as the early-20th-century "father of Attribution theory". In his 1920 dissertation, Heider addressed the problem of phenomenology: why do perceivers attribute properties such as color to perceived objects, when those properties are mental constructs? Heider's answer was that perceivers attribute that which they "directly" sense – vibrations in the air, for instance – to an object they construe as causing those sense data. "Perceivers faced with sensory data thus see the perceptual object as 'out there', because they attribute the sensory data to their underlying causes in the world." Heider extended this idea to attributions about people: "motives, intentions, sentiments ... the core processes which manifest themselves in overt behavior". 
Fritz Heider's most famous contribution to psychology started in the 1940s when he began studying and accumulating knowledge on interpersonal behavior and social perception. He compiled these findings into his 1958 book “The Psychology of Interpersonal Relations,” and Heider's work became widely recognized as the best source of knowledge on Attribution theory. In this book, Heider outlines two key goals that he planned to achieve in his studies. His first goal was to develop a scientific theory that was based on a “conceptual network suitable to some of the problems in this field.” Theorists that attempt to follow in Heider's footsteps widely misinterpret this goal, as many falsely assume that the core of human behavior is person-dichotomy rather than what Heider actually suggested in his book. Heider's second goal was to redefine the understanding of “common-sense psychology” in order to develop his own scientific theory that explains social perception in humans. This second goal more clearly defined Heider's theory on attribution. In his research on Attribution theory, Heider concerned himself with the reasons a person succeeds or fails. To organize the research, Heider broke these reasons down into three factors: ability, effort, and task difficulty. Heider saw both ability and effort as internal factors and task difficulty as an external factor. === Bernard Weiner === Bernard Weiner was not the theory's originator; however, he expanded on Attribution theory in several ways to help keep it relevant to today's society. The most influential part of Weiner's work is the motivational aspect of Attribution theory, which he introduced around the year 1968. This means that how one perceives past events and actions determines what actions a person will take in the future, because those past experiences motivate them to do so. Weiner built his contribution to Attribution theory on other well-known theories such as Atkinson's Theory of Motivation, Drive theory, and Thorndike's Law of Effect, which describes how rewarded behaviors are likely to be repeated. Weiner argued that Attribution theory is subjective, meaning that a person's thoughts and feelings drive the attributions they make. This means that researchers do not have to remain objective in their research and can explore the emotions, biases, motivations, and behaviors of their participants. === Harold Kelley === Harold Kelley, a social psychologist, expanded upon Heider's Attribution theory. Kelley's main research goal was to emphasize the central ideas Heider developed in Attribution theory. The first focus of Kelley's research was a look at external and internal attributions. His second focus was determining whether the procedure to arrive at external and internal attributions was related to experimental methodology. Kelley later turned this idea into his covariation model/principle. Kelley describes this principle as “the effect that is attributed to that condition which is present when the effect is present and which is absent when the effect is absent”. Kelley looked at causal inferences and attempted to elaborate on Heider's model by explaining the effects of certain factors. == Types == === External === External attribution, also called situational attribution, refers to interpreting someone's behavior as being caused by the individual's environment. 
For example, if one's car tire is punctured, it may be attributed to a hole in the road; by making attributions to the poor condition of the highway, one can make sense of the event without any discomfort over the possibility that it may in reality have been the result of one's own bad driving. Individuals are more likely to associate unfortunate events with external factors than with internal factors. For example, consider someone who uses external attributions as a way not to use hearing aids. Examples of this are: A patient does not have the money to afford hearing aids, so they do not purchase them. A person believes using hearing aids would make them a burden to people they are around, so they do not wear them. A person does not trust the doctor that is prescribing them hearing aids. Lastly, a person believes that other health conditions, either about themselves or someone else in their life, take priority over their need for hearing aids. Fangfang Wen examined how people react when viewing situations from an external perspective, focusing on how an attribution of blame affects emotions and behavior. The study tested whether individuals who assign blame to people rather than external factors are more likely to experience anger, which can then lead to aggressive behavior or social avoidance. To investigate this, the researchers analyzed reactions to two real-life events: a private data leak involving Wuhan returnees and the refusal of workers returning from Hubei. In the third part of the study, they examined whether changing how people assigned blame (internal versus external) would influence their emotions and subsequent actions. The findings show that when individuals attribute blame to others (an internal attribution), they are more likely to feel anger and disrespect, resulting in either aggression or avoidance. The study shows how internal attributions can intensify negative emotional responses and shape social behaviors. === Internal === Internal attribution, or dispositional attribution, refers to the process of assigning the cause of behavior to some internal characteristic, like ability and motivation, rather than to outside forces. This concept overlaps with the locus of control, in which individuals feel they are personally responsible for everything that happens to them. Consider the example of a person who uses internal attributions to justify not wearing their prescribed hearing aids. Examples of this are: A patient believes the hearing aids are not necessary, so they choose not to wear them. A patient fears being stigmatized for having a disability and requiring hearing aids to hear correctly, so they decide not to wear them. A patient is struggling with adding hearing aids into their everyday life and believes it to be easier not to wear them. Lastly, a patient does not fully understand the benefits that hearing aids will give them, so they choose not to wear them despite the benefits hearing aids would grant them. Fangfang Wen also examined how third-party observers reacted to discrimination against returning workers from Hubei, focusing on how their assignment of blame (internal attribution) influenced their emotions and behaviors. The study found that when observers assigned blame to the people involved (an internal attribution), they felt angrier, leading to either avoidance or aggressive behavior. Other emotions, such as sadness and tension, remained the same. 
This finding supports the cognition-emotion-action model, which shows how individuals' interpretations of a situation influence their emotional responses and subsequent actions. In contrast to external attribution, where environmental factors are to blame, internal attribution leads to stronger negative emotions and more intense reactions. == Other dimensions of attribution == The distinction between internal and external attributions was supplemented by other dimensions of attribution once attribution theory was applied to understanding clinical depression. These are whether a cause is perceived as being stable or unstable, i.e. whether it lasts over time or is short-term, and whether a cause is perceived as being global (i.e. affecting all situations in a person's life) or situation-specific. == Theories and models == === Common sense psychology === In the book The Psychology of Interpersonal Relations (1958), Fritz Heider tried to explore the nature of interpersonal relationships, and espoused the concept of what he called "common sense" or "naïve psychology". In his theory, he held that people observe, analyze, and explain behaviors. Although people offer different kinds of explanations for human behavior, Heider found it very useful to group explanations into two categories: internal (personal) and external (situational) attributions. When an internal attribution is made, the cause of the given behavior is assigned to the individual's characteristics such as ability, personality, mood, efforts, attitudes, or disposition. When an external attribution is made, the cause of the given behavior is assigned to the situation in which the behavior was seen, such as the task, other people, or luck (that the individual producing the behavior did so because of the surrounding environment or the social situation). These two types lead to very different perceptions of the individual engaging in a behavior. === Perceived locus of causality === Heider first introduced the concept of perceived locus of causality, using it to define interpersonal perception of one's environment. This theory explains how individuals perceive the causality of different events as either externally or internally based. These initial perceptions are called attributions. These attributions are viewed on a continuum of external to internal motivation. Understanding an individual's perception of causality also opens doors to a better understanding of how to motivate an individual in specific tasks by increasing levels of autonomy, relatedness, and competence. The theory of perceived locus of causality led to Deci and Ryan's theory of self-determination. Self-determination theory uses perceived locus of causality to measure feelings of autonomy from behaviors performed by the individual. For this reason, perceived locus of causality has caught the eye of employers and psychologists seeking to determine how to increase an individual's motivation and goal orientation to increase effectiveness within their respective fields. Research has shown that spectators at an athletic event often attribute their team's victory to internal causes and their team's losses to external causes. This is an example of the self-serving attribution bias and is more common than one might think. 
=== Correspondent inference === Correspondent inference theory states that people make inferences about a person when their actions are freely chosen, are unexpected, and result in a small number of desirable effects. According to Edward E. Jones and Keith Davis' correspondent inference theory, people make correspondent inferences by reviewing the context of behavior. It describes how people try to find out an individual's personal characteristics from the behavioral evidence. People make inferences on the basis of three factors: degree of choice, expectedness of behavior, and the effects of someone's behavior. For example, we believe we can make stronger assumptions about a man who gives half of his money to charity than we can about one who gives $5 to charity. An average person would not want to donate as much as the first man because they would lose a lot of money. By donating half of his money, it is easier for someone to figure out what the first man's personality is like. The second factor, which affects the correspondence of action and inferred characteristic, is the number of differences between the choices made and the previous alternatives. If there are not many differences, the assumption made will match the action because it is easy to identify the important aspect distinguishing the choices. === Covariation model === The covariation model states that people attribute behavior to the factors that are present when a behavior occurs and absent when it does not. Thus, the theory assumes that people make causal attributions in a rational, logical fashion, and that they assign the cause of an action to the factor that co-varies most closely with that action. Harold Kelley's covariation model of attribution looks to three main types of information from which to make an attribution decision about an individual's behavior. The first is consensus information, or information on how other people in the same situation and with the same stimulus behave. The second is distinctiveness information, or how the individual responds to different stimuli. The third is consistency information, or how frequently the individual's behavior can be observed with a similar stimulus but in varied situations. From these three sources of information, observers make attribution decisions on the individual's behavior as either internal or external. There have been claims that people under-utilise consensus information, although there has been some dispute over this. There are several levels in the covariation model: high and low. Each of these levels influences the three covariation model criteria. High consensus is when many people can agree on an event or area of interest. Low consensus is when very few people can agree. High distinctiveness is when the event or area of interest is very unusual, whereas low distinctiveness is when the event or area of interest is fairly common. High consistency is when the event or area of interest continues for a length of time and low consistency is when the event or area of interest goes away quickly. === Three-dimensional model === Bernard Weiner proposed that individuals have initial affective responses to the potential consequences of the intrinsic or extrinsic motives of the actor, which in turn influence future behavior. That is, a person's own perceptions or attributions as to why they succeeded or failed at an activity determine the amount of effort the person will put into similar activities in the future. 
Weiner suggests that individuals engage in an attribution search and cognitively evaluate the causal properties of the behaviors they experience. When attributions lead to positive affect and a high expectancy of future success, such attributions should result in greater willingness to approach similar achievement tasks in the future than attributions that produce negative affect and a low expectancy of future success. Eventually, such affective and cognitive assessment influences future behavior when individuals encounter similar situations. Weiner's achievement attribution has three categories: stability (stable or unstable), locus of causality (internal or external), and controllability (controllable or uncontrollable). Stability influences individuals' expectancies about their future; controllability is related to individuals' persistence on a task; locus of causality influences emotional responses to the outcome of the task. == Bias and errors == While people strive to find reasons for behaviors, they fall into many traps of biases and errors. As Fritz Heider says, "our perceptions of causality are often distorted by our needs and certain cognitive biases". The following are examples of attributional biases. === Fundamental attribution error === The fundamental attribution error describes the tendency to favor dispositional or personality-based explanations for behavior rather than considering external factors. The fundamental attribution error is most visible when people explain the behavior of others. For example, if a person is overweight, an observer's first assumption might be that they have a problem with overeating or are lazy, and not that they might have a medical reason for being heavier set. When evaluating others' behaviors, the situational context is often ignored in favor of assuming the disposition of the actor to be the cause of an observed behavior. This is because, when a behavior occurs, attention is most often focused on the person performing the behavior. Thus the individual is more salient than the environment, and dispositional attributions are made more often than situational attributions to explain the behavior of others. However, when evaluating one's own behavior, the situational factors are often exaggerated when there is a negative outcome, while dispositional factors are exaggerated when there is a positive outcome. The core process assumptions of attitude construction models are mainstays of social cognition research and are not controversial, as long as we talk about "judgment". Once the particular judgment made can be thought of as a person's "attitude", however, construal assumptions elicit discomfort, presumably because they dispense with the intuitively appealing attitude concept. Sociocultural disparities are a main source of the propensity for the fundamental attribution error, driven by an increased tendency to infer dispositional attributions while ignoring situational attributions. === Culture bias === Culture bias is when someone makes an assumption about the behavior of a person based on their own cultural practices and beliefs. An example of culture bias is the dichotomy of "individualistic" and "collectivistic cultures". People in individualist cultures, generally Anglo-American and Anglo-Saxon European societies, are characterized as valuing individualism, personal goals, and independence. 
People in collectivist cultures are thought to regard individuals as members of groups such as families, tribes, work units, and nations, and tend to value conformity and interdependence. In other words, working together and being involved as a group is more common in cultures that view each person as a part of the community. This cultural trait is common in Asia, traditional Native American societies, and Africa. Research shows that culture, either individualist or collectivist, affects how people make attributions. People from individualist cultures are more inclined to make the fundamental attribution error than people from collectivist cultures. Individualist cultures tend to attribute a person's behavior to internal factors, whereas collectivist cultures tend to attribute a person's behavior to external factors. Research suggests that individualist cultures engage in the self-serving bias more than collectivist cultures do, i.e. individualist cultures tend to attribute success to internal factors and failure to external factors. In contrast, collectivist cultures engage in the opposite of the self-serving bias, i.e. the self-effacing bias: attributing success to external factors and blaming failure on internal factors (the individual). Further research suggests that in the United States in particular, culture bias leads to an exaggerated role being assigned to culture in social environments dominated by minorities, and to the perception that psychological development plays less of a role for minorities than for their Caucasian counterparts. === Actor/observer difference === People tend to attribute other people's behaviors to dispositional factors while attributing their own actions to situational factors. In the same situation, people's attributions can differ depending on their role as actor or observer. Actors explain their behavior differently from an observer. For example, when a person scores a low grade on a test, they find situational factors to justify the negative event, such as saying that the teacher asked a question that was never covered in class. However, if another person scores poorly on a test, the observer will attribute the result to internal factors such as laziness and inattentiveness in class. The theory of the actor-observer bias was first developed by E. Jones and R. Nisbett in 1971, whose explanation for the effect was that when we observe other people, we tend to focus on the person, whereas when we are actors, our attention is focused towards situational factors. The actor/observer bias arises less frequently with people one knows well, such as friends and family, since one knows how one's close friends and family will behave in a given situation, leading one to think more about the external factors rather than internal factors. === Dispositional attributions === Dispositional attribution is the tendency to attribute people's behaviors to their dispositions; that is, to their personality, character, and ability. For example, when a normally pleasant waiter is being rude to his/her customer, the customer may assume he/she has a bad character. The customer, looking at the attitude that the waiter is giving him/her, instantly decides that the waiter is a bad person. 
The customer oversimplifies the situation by not taking into account all the unfortunate events that might have happened to the waiter and made him/her rude at that moment. Therefore, the customer makes a dispositional attribution by attributing the waiter's behavior directly to his/her personality rather than considering situational factors that might have caused the whole "rudeness". The degree of dispositional attribution varies greatly between people. As seen with culture bias, dispositional attribution is affected by personal beliefs and individual perspectives. Research has shown that dispositional attribution can be influenced by explicit inferences (i.e. instructions or information provided to an individual) that can essentially "guide" a person's judgement. === Self-serving bias === Self-serving bias is the tendency to attribute success to dispositional and internal factors, while external and uncontrollable factors are used to explain failure. For example, if a person gets promoted, it is because of his/her ability and competence, whereas if he/she does not get promoted, it is because his/her manager does not like him/her (an external, uncontrollable factor). Originally, researchers assumed that self-serving bias is strongly related to the fact that people want to protect their self-esteem. However, an alternative information processing explanation is that when outcomes match people's expectations, they make attributions to internal factors; for example, someone who passes a test might believe it was because of their intelligence. When the outcome does not match their expectations, they make external attributions or excuses; the same person might excuse failing a test by saying that they did not have enough time to study. People also use defensive attribution to avoid feelings of vulnerability and to differentiate themselves from a victim of a tragic accident. An alternative version of the theory of self-serving bias states that the bias does not arise because people wish to protect their private self-esteem, but to protect their self-image (a self-presentational bias). This version of the theory, which is in line with social desirability bias, would predict that people attribute their successes to situational factors, for fear that others will disapprove of them looking overly vain if they attribute successes to themselves. For example, there is a hypothesis that coming to believe that "good things happen to good people and bad things happen to bad people" will reduce feelings of vulnerability. However, this just-world bias has a critical drawback, which is a tendency to blame victims, even in tragic situations. When a mudslide destroys several houses in a rural neighborhood, a person living in a more urban setting might blame the victims for choosing to live in a certain area or for not building a safer, stronger house. Another example of attributional bias is the optimism bias, in which most people believe positive events happen to them more often than to others and that negative events happen to them less often than to others. For example, smokers on average believe they are less likely to get lung cancer than other smokers. === Defensive attribution hypothesis === The defensive attribution hypothesis is a social psychological term referring to a set of beliefs held by an individual with the function of defending themselves from concern that they will be the cause or victim of a mishap. 
Commonly, defensive attributions are made when individuals witness or learn of a mishap happening to another person. In these situations, attributions of responsibility to the victim or harm-doer for the mishap will depend upon the severity of the outcomes of the mishap and the level of personal and situational similarity between the individual and victim. More responsibility will be attributed to the harm-doer as the outcome becomes more severe, and as personal or situational similarity decreases. An example of defensive attribution is the just-world fallacy, the belief that "good things happen to good people and bad things happen to bad people". People believe this in order to avoid feeling vulnerable to situations that they have no control over. However, this also leads to blaming the victim even in a tragic situation. When people hear that someone died in a car accident, they may decide that the driver was drunk at the time of the accident, and so they reassure themselves that such an accident will never happen to them. Despite no other information being provided, people will automatically attribute the accident to the driver's fault due to an internal factor (in this case, deciding to drive while drunk), and thus believe they would not allow it to happen to themselves. Another example of defensive attribution is the optimism bias, in which people believe positive events happen to them more often than to others and that negative events happen to them less often than to others. Too much optimism leads people to ignore some warnings and precautions given to them. For example, smokers believe that they are less likely to get lung cancer than other smokers. === Cognitive dissonance theory === Cognitive dissonance theory refers to a situation involving conflicting attitudes, beliefs or behaviors that cause arousal within the individual. The arousal often produces a feeling of mental or even physical discomfort, leading the individual to alter either their own attitudes, beliefs, or behaviors or their attributions of the situation. It is much harder for a person to change their behaviors or beliefs than it is to change how they perceive a situation. For example, if someone perceives themselves as being very capable in a sport but performs poorly during a game, they are more likely to attribute or blame the poor performance on an external factor than on internal factors such as their skill and ability. This is done in an effort to preserve their currently held beliefs and perceptions about themselves; otherwise, they are left to face the thought that they are not as good at the sport as they originally thought, causing a feeling of dissonance and arousal. == Application == === In court and law === Attribution theory can be applied to juror decision making. Jurors use attributions to explain the cause of the defendant's intent and actions related to the criminal behavior. The attribution made (situational or dispositional) might affect a juror's punitiveness towards the defendant. When jurors make dispositional attributions for a defendant's behavior, they tend to be more punitive and are more likely to find the defendant guilty and to recommend a death sentence rather than a life sentence. Black youth are 1.4 times more likely to be given secure confinement, the most severe sanction for a juvenile, when compared to white youth. 
A study done by Patrick Lowery and John Burrow found that many judicial actors subconsciously attempt to justify simplifications of complex cases by using societal "norms and values" that "include evaluations of stability, consistency, or volatility." Other factors for juveniles include the state of their homes and the state of their communities. Juveniles who come from single-parent homes are more likely to be prosecuted and charged with crimes; this information is known to jurors or judges and could add bias to their decisions. The same study brought socio-economic status into question as a potential source of bias. Arrest rates have been shown to be higher in poorer areas when compared to areas of greater wealth. === In marketing communication === Attribution theory has been used as a tool to analyze causal attributions made by consumers and to assess its effectiveness in marketing communication. Attribution theory has also been used to examine external and internal factors of corporate social responsibility (CSR), and the effects that the different social movements corporations endorse have on consumers and their emotions. Companies have moved to illustrate their different CSR efforts in their marketing and advertisements. However, people are beginning to question companies' real motivations and involvement in the different social movements that certain companies market. This concern arises due to the practice of CSR-washing, which is when a company presents itself as more involved in a specific movement than it actually is. Attributions for companies that perform CSR activities may be external, such as environmental or situational factors, or internal, such as a CEO's personal values. Studies find that companies that market CSR communications, whether they practice CSR-washing or not, are seen as more motivated to make a difference outside of their organization than companies that remain discreet about their CSR involvement. When customers became suspicious of a company, that company tended to become more involved in its CSR communications and attributed its behavior to the company's commitment to the movement. === In clinical psychology === Attribution theory has had a major application in clinical psychology. Abramson, Seligman, and Teasdale developed a theory of the depressive attributional style, claiming that individuals who tend to attribute their failures to internal, stable and global factors are more vulnerable to clinical depression. This style is correlated with self-reported rates of depression, as well as posttraumatic stress disorder, anxiety, and higher risks of developing depression. The depressive attributional style is defined by high levels of pessimism, rumination, hopelessness, self-criticism, poorer academic performance, and a tendency to believe negative outcomes and events are one's own fault. People with this attributional style may place high levels of importance on their own reputation and social status. They may be sensitive to rejection by peers and may often interpret actions as more hostile than they really are. This explanatory style may be caused by depressive symptoms in the patient's parents. Some research has suggested that this attributional style might not result in increased levels of depression among certain cultures. 
A study conducted by researchers at Tsinghua University found that this style was common amongst Buddhists due to cultural beliefs in ideas such as karma, yet these individuals did not demonstrate increased levels of depression. The Attributional Style Questionnaire (ASQ) was developed in 1996 to assess whether individuals have the depressogenic attributional style. However, the ASQ has been criticized, with some researchers preferring to use a technique called Content Analysis of Verbatim Explanation (CAVE), in which an individual's ordinary writings are analyzed to assess whether he or she is vulnerable to the depressive attributional style. The key advantage of using content analysis is its non-invasive nature, in contrast to collecting survey answers or simulating social experiences. === In sports and health === Attribution theory has been applied to a variety of sports and exercise contexts, such as children's motivation for physical activity and African soccer, where attributions are placed on magic and rituals, such as which magicians are consulted before the game begins, rather than on the technical and mechanical aspects of playing football. Heider's classifications for causal attribution, namely locus of causality, stability, and controllability, offer another way to explain attribution theory's role in health. Older women make up the largest percentage of people who are inactive for health reasons. A study was conducted to explain the factors behind low motivation in older women; it involved 37 elderly women with a mean age of 80. Low motivation to exercise and be healthy has been noted to be caused by internal factors such as old age. The combination of an internal attribution, a stable response, and the fact that old age is uncontrollable causes low motivation, especially in elderly women, which leads to health problems. Attributional retraining allowed these women to reconsider external factors as controllable, which decreased their feelings of helplessness by 50% and increased their perceived control over their health. In sports psychology, attribution theory serves as a framework for understanding why athletes think and act the way they do. Research interest in attribution theory peaked in the 1970s and 1980s, and although fewer studies have been conducted since, the theory remains important for explaining how athletes account for their performance. Heider laid the groundwork by showing how people seek explanations for outcomes, such as why someone performs well or poorly in a game, which matters in sport because athletes constantly try to understand their successes and failures. Researchers such as Jones, Davis, and Kelley built on Heider's work with models of how people infer what others are like from their behavior, which is relevant to how coaches and teammates interpret one another. Rotter's work on how expectancies shape behavior is also relevant, helping to explain why some athletes feel they can improve while others do not. One central idea in attribution theory concerns how people think about problems: Weiner distinguishes between problems seen as changeable and those seen as unchangeable, a distinction that affects both emotion and expectations for future performance. 
For example, seeing a problem as something that cannot be changed can make a person feel that nothing can be done to improve, whereas seeing it as something that can be worked on fosters hope of improvement. Attribution theory, which explores how individuals interpret events like winning and losing, is vital for understanding sports performance. Weiner's model of attributions offers a framework, highlighting dimensions such as locus, stability, and controllability. Combat sport athletes tend to attribute successes more internally and stably, while their attributions for failures are less internalized. The research also uncovers attribution biases like the self-serving bias, where successes are attributed internally and failures externally. Through qualitative and quantitative analyses, the study emphasizes situational factors and individual differences in attributions. By examining Croatian combat sport competitors, the research enriches the attribution literature and provides insights for optimizing athletic performance. A study by Vanek and Hosek in 1970 compared abstract figures and realistic pictures in assessing athletes' judgments; their findings favored realistic pictures for accuracy. Similarly, the current study employs visual stimuli to evaluate perceptual skills, with a focus on dynamic elements in videos. Factors like viewing time and distance are considered, with accuracy rather than speed serving as the primary measure of success. This approach sheds light on how the visual system processes information in different contexts. === In education === Attribution theory has been used to research motivation in educational contexts such as mathematics. The way in which teachers attribute behavior can impact their response to problematic children. Laurent Brun, Benoit Dompnier, and Pascal Pansu conducted a study examining interpersonal relationships in attribution theory. Using Weiner's three dimensions of stability, locus of causality, and controllability, they were able to reasonably infer which behaviors teachers attribute to their students' success. They assigned five profiles to teachers after the study, and they determined that these profiles were "greatly determined by the student's outcome valence." Teachers more often blame students' failures on internal reasons, such as a lack of ability or disregard for lessons, rather than accepting external factors, such as poor teaching strategies, that may be leading a student toward "failing" in school. Similar profiles can explain why students succeed in class. However, teachers are more likely to accept external sources when students are doing well than when they are struggling. The findings suggest that humans are more likely to praise themselves for others' success than to be critical of themselves when they are teaching others. As stated before, the characteristics of attribution theory directly influence the motivation of student learning, more specifically of students learning English as a foreign language (EFL). Studies conducted in Algeria and Eastern Japan report different results when analyzing speaking tasks and oral expression in EFL. Questionnaires were used to analyze student attributions reflected in the outcomes of speaking tasks. In the study from Algeria, the majority of students believed success was achieved through their effort and ability, but that failure was a result of external factors. 
Moreover, Japanese EFL students attributed oral task struggles to historically negative attitudes toward Japanese speakers of English, the low pay of translators, and the emphasis on grammar translation in schools, which are all external factors. However, both studies found that teachers affect the causal attributions of students, and that teacher feedback can positively or negatively influence learning motivation. Attribution theory looks at how people explain the reasons behind successes and failures, and it is widely used to study motivation in education. Most of the research has been more theoretical than evidence-based. More recent plans focus on college students, using large surveys to see how different ways of thinking about success and failure affect their confidence in school. Confidence is important because when students blame themselves for failing, it can hurt their motivation, while giving credit for success to things like good teaching might make them feel less confident. Teachers also play a role, since they often blame students' failures on personal issues (like not trying hard enough) but give credit for success to outside factors (like suitable teaching methods). Understanding how teachers and students think about success and failure can help create better ways to keep students motivated and confident. === In the deaf community === Many people with hearing loss reject hearing aids as a result of internal and external motivations. The framework of attribution theory provides insight into the experiences and reasoning of individuals who do not use hearing aids and contributes to better effectiveness in the field of hearing healthcare. A study conducted by Caitlyn Ritter, published in the megajournal PLOS One, addresses the question: what reasons do adults with hearing loss, who are prescribed hearing aids, provide for not using them? Data were collected from 20 participants, highlighting nine themes that influenced hearing aid non-use. The internal motivations for non-use include non-necessity, stigmatization, lack of integration, and lack of knowledge about hearing aids. The external motivations for non-use include discomfort, cost burden, professional distrust, and priority-setting. The external factors of hearing aid discomfort and the difficulty of putting them in lead older adults to refrain from using them. Support and counseling are just as significant as expensive modern technology when it comes to increasing hearing aid usage. Lack of integration into daily living was a common internal factor that discouraged people from wearing hearing aids. A study conducted on older adults with hearing loss identified perceived stigma as important in influencing decision-making processes and the selection of the type of hearing aids and where they should be worn; three interrelated experiences were related to this stigma: self-perception, ageism, and vanity. Ritter's study features Doug, an interviewee, who explains that he uses his hearing aids except when he goes on vacation, because he tends to forget them; his example is a small but clear barrier to successful integration. Another participant shared that they wanted to disguise their physical deficiency if they could and would only wear hearing aids if their hearing loss was severe, but not if the devices were unattractive. In both studies, participants attribute the non-use of hearing aids to going against social norms, leading many adults with hearing loss to abandon them altogether. 
== Learned helplessness == The concept of learned helplessness emerged from animal research in which psychologists Martin Seligman and Steven F. Maier discovered that dogs classically conditioned to an electrical shock which they could not escape subsequently failed to attempt to escape an avoidable shock in a similar situation. They argued that learned helplessness applied to human psychopathology. In particular, individuals who attribute negative outcomes to internal, stable and global factors reflect a view in which they have no control over their situation. It is suggested that this failure to attempt to better a situation exacerbates negative mood, and may lead to clinical depression and related mental illnesses. == Perceptual salience == When people try to make attributions about another's behavior, their information focuses on the individual. Their perception of that individual lacks most of the external factors which might affect the individual. These gaps tend to be skipped over, and the attribution is made based on the most salient perceptual information, which dominates a person's perception of the situation. For individuals making behavioral attributions about themselves, the situation and external environment are entirely salient, but their own body and behavior are less so. This leads to the tendency to make an external attribution in regard to their own behavior. == COVID-19 pandemic == The onset of the COVID-19 pandemic furthered studies relating to attribution theory. A study conducted by Elvin Yao and Jason Siegel looked further into Weiner's definition of attribution theory and how people express emotions when COVID-19 is spread intentionally. The researchers also included a controllability factor that played a part in the perceived intentionality. The results of the study demonstrated high levels of anger and frustration among people who sensed that someone was intentionally spreading COVID-19. These high levels of frustration also led to a desire to punish the person intentionally spreading the virus, especially when the spreader was in complete control of their circumstances and had knowledge of their actions. Furthermore, regardless of whether a "spreader" had control of the factors surrounding the spreading of the virus, as long as the person's perceived intentionality was high, other people responded with anger. This study shows that when the intentions of a person are perceived to be harmful to society, people respond negatively. == Criticism == Attribution theory has been criticized as being mechanistic and reductionist for assuming that people are rational, logical, and systematic thinkers. The fundamental attribution error, however, demonstrates that they are cognitive misers and motivated tacticians. The theory also fails to address the social, cultural, and historical factors that shape attributions of cause. This has been addressed extensively by discourse analysis, a branch of psychology that prefers to use qualitative methods, including the analysis of language, to understand psychological phenomena. The linguistic categorization theory, for example, demonstrates how language influences our attribution style. 
== See also == Abductive reasoning – Inference seeking the simplest and most likely explanation Attribution bias – Systematic errors made when people evaluate their own and others' behaviors Explanatory style Locus of control – Concept in psychology Naïve realism – Human tendency to believe that we see the world around us objectively Psychological projection – Attributing parts of the self to others Religious attribution Self-disorder – Mental state of a reduced perception of self-awareness Trait ascription bias == References == == Further reading ==
Wikipedia/Attribution_theory
Ligand field theory (LFT) describes the bonding, orbital arrangement, and other characteristics of coordination complexes. It represents an application of molecular orbital theory to transition metal complexes. A transition metal ion has nine valence atomic orbitals - consisting of five nd, one (n+1)s, and three (n+1)p orbitals. These orbitals have the appropriate energy to form bonding interactions with ligands. The LFT analysis is highly dependent on the geometry of the complex, but most explanations begin by describing octahedral complexes, where six ligands coordinate with the metal. Other complexes can be described with reference to crystal field theory. Inverted ligand field theory (ILFT) elaborates on LFT by breaking assumptions made about relative metal and ligand orbital energies. == History == Ligand field theory resulted from combining the principles laid out in molecular orbital theory and crystal field theory, which describe the loss of degeneracy of metal d orbitals in transition metal complexes. John Stanley Griffith and Leslie Orgel championed ligand field theory as a more accurate description of such complexes, although the theory originated in the 1930s with the work on magnetism by John Hasbrouck Van Vleck. Griffith and Orgel used the electrostatic principles established in crystal field theory to describe transition metal ions in solution and used molecular orbital theory to explain the differences in metal-ligand interactions, thereby explaining such observations as crystal field stabilization and visible spectra of transition metal complexes. In their paper, they proposed that the chief cause of color differences in transition metal complexes in solution lies in their incomplete d orbital subshells. That is, the unoccupied d orbitals of transition metals participate in bonding, which influences the colors they absorb in solution. In ligand field theory, the various d orbitals are affected differently when surrounded by a field of neighboring ligands and are raised or lowered in energy based on the strength of their interaction with the ligands. == Bonding == === σ-bonding (sigma bonding) === In an octahedral complex, the molecular orbitals created by coordination can be seen as resulting from the donation of two electrons by each of six σ-donor ligands to the d-orbitals on the metal. In octahedral complexes, ligands approach along the x-, y- and z-axes, so their σ-symmetry orbitals form bonding and anti-bonding combinations with the dz2 and dx2−y2 orbitals. The dxy, dxz and dyz orbitals remain non-bonding orbitals. Some weak bonding (and anti-bonding) interactions with the s and p orbitals of the metal also occur, to make a total of 6 bonding (and 6 anti-bonding) molecular orbitals. In molecular symmetry terms, the six lone-pair orbitals from the ligands (one from each ligand) form six symmetry-adapted linear combinations (SALCs) of orbitals, also sometimes called ligand group orbitals (LGOs). The irreducible representations that these span are a1g, t1u and eg. The metal also has six valence orbitals that span these irreducible representations - the s orbital is labeled a1g, a set of three p-orbitals is labeled t1u, and the dz2 and dx2−y2 orbitals are labeled eg. The six σ-bonding molecular orbitals result from the combinations of ligand SALCs with metal orbitals of the same symmetry. 
=== π-bonding (pi bonding) === π bonding in octahedral complexes occurs in two ways: via any ligand p-orbitals that are not being used in σ bonding, and via any π or π* molecular orbitals present on the ligand. In the usual analysis, the p-orbitals of the metal are used for σ bonding (and have the wrong symmetry to overlap with the ligand p or π or π* orbitals anyway), so the π interactions take place with the appropriate metal d-orbitals, i.e. dxy, dxz and dyz. These are the orbitals that are non-bonding when only σ bonding takes place. One important π bonding in coordination complexes is metal-to-ligand π bonding, also called π backbonding. It occurs when the LUMOs (lowest unoccupied molecular orbitals) of the ligand are anti-bonding π* orbitals. These orbitals are close in energy to the dxy, dxz and dyz orbitals, with which they combine to form bonding orbitals (i.e. orbitals of lower energy than the aforementioned set of d-orbitals). The corresponding anti-bonding orbitals are higher in energy than the anti-bonding orbitals from σ bonding so, after the new π bonding orbitals are filled with electrons from the metal d-orbitals, ΔO has increased and the bond between the ligand and the metal strengthens. The ligands end up with electrons in their π* molecular orbital, so the corresponding π bond within the ligand weakens. The other form of coordination π bonding is ligand-to-metal bonding. This situation arises when the π-symmetry p or π orbitals on the ligands are filled. They combine with the dxy, dxz and dyz orbitals on the metal and donate electrons to the resulting π-symmetry bonding orbital between them and the metal. The metal-ligand bond is somewhat strengthened by this interaction, but the complementary anti-bonding molecular orbital from ligand-to-metal bonding is not higher in energy than the anti-bonding molecular orbital from the σ bonding. It is filled with electrons from the metal d-orbitals, however, becoming the HOMO (highest occupied molecular orbital) of the complex. For that reason, ΔO decreases when ligand-to-metal bonding occurs. The greater stabilization that results from metal-to-ligand bonding is caused by the donation of negative charge away from the metal ion, towards the ligands. This allows the metal to accept the σ bonds more easily. The combination of ligand-to-metal σ-bonding and metal-to-ligand π-bonding is a synergic effect, as each enhances the other. As each of the six ligands has two orbitals of π-symmetry, there are twelve in total. The symmetry adapted linear combinations of these fall into four triply degenerate irreducible representations, one of which is of t2g symmetry. The dxy, dxz and dyz orbitals on the metal also have this symmetry, and so the π-bonds formed between a central metal and six ligands also have it (as these π-bonds are just formed by the overlap of two sets of orbitals with t2g symmetry.) == High and low spin and the spectrochemical series == The six bonding molecular orbitals that are formed are "filled" with the electrons from the ligands, and electrons from the d-orbitals of the metal ion occupy the non-bonding and, in some cases, anti-bonding MOs. The energy difference between the latter two types of MOs is called ΔO (O stands for octahedral) and is determined by the nature of the π-interaction between the ligand orbitals with the d-orbitals on the central atom. 
As described above, π-donor ligands lead to a small ΔO and are called weak- or low-field ligands, whereas π-acceptor ligands lead to a large value of ΔO and are called strong- or high-field ligands. Ligands that are neither π-donor nor π-acceptor give a value of ΔO somewhere in-between. The size of ΔO determines the electronic structure of the d4 - d7 ions. In complexes of metals with these d-electron configurations, the non-bonding and anti-bonding molecular orbitals can be filled in two ways: one in which as many electrons as possible are put in the non-bonding orbitals before filling the anti-bonding orbitals, and one in which as many unpaired electrons as possible are put in. The former case is called low-spin, while the latter is called high-spin. A small ΔO can be overcome by the energetic gain from not pairing the electrons, leading to high-spin. When ΔO is large, however, the spin-pairing energy becomes negligible by comparison and a low-spin state arises. The spectrochemical series is an empirically-derived list of ligands ordered by the size of the splitting Δ that they produce. It can be seen that the low-field ligands are all π-donors (such as I−), the high field ligands are π-acceptors (such as CN− and CO), and ligands such as H2O and NH3, which are neither, are in the middle. I− < Br− < S2− < SCN− < Cl− < NO3− < N3− < F− < OH− < C2O42− < H2O < NCS− < CH3CN < py (pyridine) < NH3 < en (ethylenediamine) < bipy (2,2'-bipyridine) < phen (1,10-phenanthroline) < NO2− < PPh3 < CN− < CO == See also == Crystal field theory Ligand dependent pathway Molecular orbital theory Nephelauxetic effect == References == == External links == Crystal-field Theory, Tight-binding Method, and Jahn-Teller Effect in E. Pavarini, E. Koch, F. Anders, and M. Jarrell (eds.): Correlated Electrons: From Models to Materials, Jülich 2012, ISBN 978-3-89336-796-2
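As a rough illustration of the high-spin/low-spin rule discussed in the section above, the sketch below fills the t2g and eg levels of an octahedral d-electron count by comparing the splitting ΔO with the spin-pairing energy. The function name, the simple energy bookkeeping, and the example numbers are assumptions made for illustration only; they are not a substitute for a full ligand-field calculation.

```python
# Sketch of the high-spin / low-spin filling rule for octahedral complexes.
# Each electron is placed where the incremental energy cost is lower:
# the eg level costs an extra delta_o, and pairing costs an extra pairing_energy.

def octahedral_configuration(n_d_electrons, delta_o, pairing_energy):
    """Return (t2g, eg) electron counts for a d^n ion in an octahedral field."""
    t2g, eg = 0, 0
    for _ in range(n_d_electrons):
        cost_t2g = pairing_energy if t2g >= 3 else 0.0            # pairing begins after t2g^3
        cost_eg = delta_o + (pairing_energy if eg >= 2 else 0.0)  # eg lies delta_o higher
        if t2g < 6 and cost_t2g <= cost_eg:
            t2g += 1
        else:
            eg += 1
    return t2g, eg

# d6 ion with a strong-field (pi-acceptor) ligand: large delta_o -> low spin, (6, 0)
print(octahedral_configuration(6, delta_o=3.0, pairing_energy=2.0))
# d6 ion with a weak-field (pi-donor) ligand: small delta_o -> high spin, (4, 2)
print(octahedral_configuration(6, delta_o=1.0, pairing_energy=2.0))
```

With these illustrative numbers the sketch reproduces the familiar result for a d6 ion: strong-field ligands give the low-spin t2g6 eg0 configuration, while weak-field ligands give the high-spin t2g4 eg2 configuration.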
Wikipedia/Ligand_field_theory
The germ theory of disease is the currently accepted scientific theory for many diseases. It states that microorganisms known as pathogens or "germs" can cause disease. These small organisms, which are too small to be seen without magnification, invade animals, plants, and even bacteria. Their growth and reproduction within their hosts can cause disease. "Germ" refers not just to bacteria but to any type of microorganism, such as protists or fungi, or other pathogens, including parasites, viruses, prions, or viroids. Diseases caused by pathogens are called infectious diseases. Even when a pathogen is the principal cause of a disease, environmental and hereditary factors often influence the severity of the disease, and whether a potential host individual becomes infected when exposed to the pathogen. Pathogens are disease-causing agents that can pass from one individual to another, across multiple domains of life. Basic forms of germ theory were proposed by Girolamo Fracastoro in 1546, and expanded upon by Marcus von Plenciz in 1762. However, such views were held in disdain in Europe, where Galen's miasma theory remained dominant among scientists and doctors. By the early 19th century, the first vaccine, smallpox vaccination, was commonplace in Europe, though doctors were unaware of how it worked or how to extend the principle to other diseases. A transitional period began in the late 1850s with the work of Louis Pasteur. This work was later extended by Robert Koch in the 1880s. By the end of that decade, the miasma theory was struggling to compete with the germ theory of disease. Viruses were initially discovered in the 1890s. Eventually, a "golden era" of bacteriology ensued, during which the germ theory quickly led to the identification of the actual organisms that cause many diseases. == Miasma theory == The miasma theory was the predominant theory of disease transmission before the germ theory took hold towards the end of the 19th century; it is no longer accepted as a correct explanation for disease by the scientific community. It held that diseases such as cholera, chlamydia infection, or the Black Death were caused by a miasma (μίασμα, Ancient Greek: "pollution"), a noxious form of "bad air" emanating from rotting organic matter. Miasma was considered to be a poisonous vapor or mist filled with particles from decomposed matter (miasmata) that was identifiable by its foul smell. The theory posited that diseases were the product of environmental factors such as contaminated water, foul air, and poor hygienic conditions. Such infections, according to the theory, were not passed between individuals but would affect those within a locale that gave rise to such vapors. == Development of germ theory == === Greece and Rome === In Antiquity, the Greek historian Thucydides (c. 460 – c. 400 BC) was the first person to write, in his account of the plague of Athens, that diseases could spread from an infected person to others. One theory of the spread of contagious diseases that were not spread by direct contact was that they were spread by spore-like "seeds" (Latin: semina) that were present in and dispersible through the air. In his poem, De rerum natura (On the Nature of Things, c. 56 BC), the Roman poet Lucretius (c. 99 BC – c. 55 BC) stated that the world contained various "seeds", some of which could sicken a person if they were inhaled or ingested. 
The Roman statesman Marcus Terentius Varro (116–27 BC) wrote, in his Rerum rusticarum libri III (Three Books on Agriculture, 36 BC): "Precautions must also be taken in the neighborhood of swamps... because there are bred certain minute creatures which cannot be seen by the eyes, which float in the air and enter the body through the mouth and nose and there cause serious diseases." The Greek physician Galen (AD 129 – c. 200/216) speculated in his On Initial Causes (c. 175 AD) that some patients might have "seeds of fever".: 4  In his On the Different Types of Fever (c. 175 AD), Galen speculated that plagues were spread by "certain seeds of plague", which were present in the air.: 6  And in his Epidemics (c. 176–178 AD), Galen explained that patients might relapse during recovery from fever because some "seed of the disease" lurked in their bodies, which would cause a recurrence of the disease if the patients did not follow a physician's therapeutic regimen.: 7  === The Middle Ages === A hybrid form of miasma and contagion theory was proposed by Persian physician Ibn Sina (known as Avicenna in Europe) in The Canon of Medicine (1025). He mentioned that people can transmit disease to others by breath, noted contagion with tuberculosis, and discussed the transmission of disease through water and dirt. During the early Middle Ages, Isidore of Seville (c. 560–636) mentioned "plague-bearing seeds" (pestifera semina) in his On the Nature of Things (c. AD 613).: 20  Later in 1345, Tommaso del Garbo (c. 1305–1370) of Bologna, Italy mentioned Galen's "seeds of plague" in his work Commentaria non-parum utilia in libros Galeni (Helpful commentaries on the books of Galen).: 214  The 16th century Reformer Martin Luther appears to have had some idea of the contagion theory, commenting, "I have survived three plagues and visited several people who had two plague spots which I touched. But it did not hurt me, thank God. Afterwards when I returned home, I took up Margaret," (born 1534), "who was then a baby, and put my unwashed hands on her face, because I had forgotten; otherwise I should not have done it, which would have been tempting God." In 1546, Italian physician Girolamo Fracastoro published De Contagione et Contagiosis Morbis (On Contagion and Contagious Diseases), a set of three books covering the nature of contagious diseases, categorization of major pathogens, and theories on preventing and treating these conditions. Fracastoro blamed "seeds of disease" that propagate through direct contact with an infected host, indirect contact with fomites, or through particles in the air. === The Early Modern Period === In 1668, Italian physician Francesco Redi published experimental evidence rejecting spontaneous generation, the theory that living creatures arise from nonliving matter. He observed that maggots only arose from rotting meat that was uncovered. When meat was left in jars covered by gauze, the maggots would instead appear on the gauze's surface, later understood as rotting meat's smell passing through the mesh to attract flies that laid eggs. Microorganisms are said to have been first directly observed in the 1670s by Anton van Leeuwenhoek, an early pioneer in microbiology, considered "the Father of Microbiology". Leeuwenhoek is said to be the first to see and describe bacteria in 1674, yeast cells, the teeming life in a drop of water (such as algae), and the circulation of blood corpuscles in capillaries. 
The word "bacteria" didn't exist yet, so he called these microscopic living organisms "animalcules", meaning "little animals". Those "very little animalcules" he was able to isolate from different sources, such as rainwater, pond and well water, and the human mouth and intestine. Yet German Jesuit priest and scholar Athanasius Kircher (or "Kirchner", as it is often spelled) may have observed such microorganisms prior to this. One of his books written in 1646 contains a chapter in Latin, which reads in translation: "Concerning the wonderful structure of things in nature, investigated by microscope...who would believe that vinegar and milk abound with an innumerable multitude of worms." Kircher defined the invisible organisms found in decaying bodies, meat, milk, and secretions as "worms." His studies with the microscope led him to the belief, which he was possibly the first to hold, that disease and putrefaction, or decay were caused by the presence of invisible living bodies, writing that "a number of things might be discovered in the blood of fever patients." When Rome was struck by the bubonic plague in 1656, Kircher investigated the blood of plague victims under the microscope. He noted the presence of "little worms" or "animalcules" in the blood and concluded that the disease was caused by microorganisms. Kircher was the first to attribute infectious disease to a microscopic pathogen, inventing the germ theory of disease, which he outlined in his Scrutinium Physico-Medicum, published in Rome in 1658. Kircher's conclusion that disease was caused by microorganisms was correct, although it is likely that what he saw under the microscope were in fact red or white blood cells and not the plague agent itself. Kircher also proposed hygienic measures to prevent the spread of disease, such as isolation, quarantine, burning clothes worn by the infected, and wearing facemasks to prevent the inhalation of germs. It was Kircher who first proposed that living beings enter and exist in the blood. In the 18th century, more proposals were made, but struggled to catch on. In 1700, physician Nicolas Andry argued that microorganisms he called "worms" were responsible for smallpox and other diseases. In 1720, Richard Bradley theorised that the plague and "all pestilential distempers" were caused by "poisonous insects", living creatures viewable only with the help of microscopes. In 1762, the Austrian physician Marcus Antonius von Plenciz (1705–1786) published a book titled Opera medico-physica. It outlined a theory of contagion stating that specific animalcules in the soil and the air were responsible for causing specific diseases. Von Plenciz noted the distinction between diseases which are both epidemic and contagious (like measles and dysentery), and diseases which are contagious but not epidemic (like rabies and leprosy). The book cites Anton van Leeuwenhoek to show how ubiquitous such animalcules are and was unique for describing the presence of germs in ulcerating wounds. Ultimately, the theory espoused by von Plenciz was not accepted by the scientific community. === 19th and 20th centuries === ==== Agostino Bassi, Italy ==== During the early 19th century, driven by economic concerns over collapsing silk production, Italian entomologist Agostino Bassi researched a silkworm disease known as "muscardine" in French and "calcinaccio" or "mal del segno" in Italian, causing white fungal spots along the caterpillar. 
From 1835 to 1836, Bassi published his findings that fungal spores transmitted the disease between individuals. In recommending the rapid removal of diseased caterpillars and disinfection of their surfaces, Bassi outlined methods used in modern preventative healthcare. Italian naturalist Giuseppe Gabriel Balsamo-Crivelli named the causative fungal species after Bassi, currently classified as Beauveria bassiana. ==== Louis-Daniel Beauperthuy, France ==== In 1838 French specialist in tropical medicine Louis-Daniel Beauperthuy pioneered using microscopy in relation to diseases and independently developed a theory that all infectious diseases were due to parasitic infection with "animalcules" (microorganisms). With the help of his friend M. Adele de Rosseville, he presented his theory in a formal presentation before the French Academy of Sciences in Paris. By 1853, he was convinced that malaria and yellow fever were spread by mosquitos. He even identified the particular group of mosquitos that transmit yellow fever as the "domestic species" of "striped-legged mosquito", which can be recognised as Aedes aegypti, the actual vector. He published his theory in 1854 in the Gaceta Oficial de Cumana ("Official Gazette of Cumana"). His reports were assessed by an official commission, which discarded his mosquito theory. ==== Ignaz Semmelweis, Austria ==== Ignaz Semmelweis, a Hungarian obstetrician working at the Vienna General Hospital (Allgemeines Krankenhaus) in 1847, noticed the dramatically high maternal mortality from puerperal fever following births assisted by doctors and medical students. However, those attended by midwives were relatively safe. Investigating further, Semmelweis made the connection between puerperal fever and examinations of delivering women by doctors, and further realized that these physicians had usually come directly from autopsies. Asserting that puerperal fever was a contagious disease and that matter from autopsies was implicated in its spread, Semmelweis made doctors wash their hands with chlorinated lime water before examining pregnant women. He then documented a sudden reduction in the mortality rate from 18% to 2.2% over a period of a year. Despite this evidence, he and his theories were rejected by most of the contemporary medical establishment. ==== Gideon Mantell, UK ==== Gideon Mantell, the Sussex doctor more famous for discovering dinosaur fossils, spent time with his microscope, and speculated in his Thoughts on Animalcules (1850) that perhaps "many of the most serious maladies which afflict humanity, are produced by peculiar states of invisible animalcular life". ==== John Snow, UK ==== British physician John Snow is credited as a founder of modern epidemiology for studying the 1854 Broad Street cholera outbreak. Snow criticized the Italian anatomist Giovanni Maria Lancisi for his early 18th century writings that claimed swamp miasma spread malaria, rebutting that bad air from decomposing organisms was not present in all cases. In his 1849 pamphlet On the Mode of Communication of Cholera, Snow proposed that cholera spread through the fecal–oral route, replicating in human lower intestines. In the book's second edition, published in 1855, Snow theorized that cholera was caused by cells smaller than human epithelial cells, leading to Robert Koch's 1884 confirmation of the bacterial species Vibrio cholerae as the causative agent. 
In recognizing a biological origin, Snow recommended boiling and filtering water, setting the precedent for modern boil-water advisory directives. Through a statistical analysis tying cholera cases to specific water pumps associated with the Southwark and Vauxhall Waterworks Company, which supplied sewage-polluted water from the River Thames, Snow showed that residents supplied by this company experienced fourteen times as many deaths as residents using Lambeth Waterworks Company pumps that obtained water from the upriver, cleaner Seething Wells. While Snow received praise for convincing the Board of Guardians of St James's Parish to remove the handles of contaminated pumps, he noted that the outbreak's cases were already declining as scared residents fled the region. ==== Louis Pasteur, France ==== During the mid-19th century, French microbiologist Louis Pasteur showed that treating the female genital tract with boric acid killed the microorganisms causing postpartum infections while avoiding damage to mucous membranes. Building on Redi's work, Pasteur disproved spontaneous generation by constructing swan neck flasks containing nutrient broth. Since the flask contents fermented only when the curved neck was removed and the broth came into direct contact with air from the external environment, Pasteur demonstrated that microorganisms must enter from the surrounding environment to colonize the broth, rather than arising spontaneously. Similar to Bassi, Pasteur extended his research on germ theory by studying pébrine, a disease that causes brown spots on silkworms. While Swiss botanist Carl Nägeli discovered the fungal species Nosema bombycis in 1857, Pasteur applied the findings to recommend improved ventilation and screening of silkworm eggs, an early form of disease surveillance. ==== Robert Koch, Germany ==== In 1884, German bacteriologist Robert Koch published four criteria for establishing causality between specific microorganisms and diseases, now known as Koch's postulates: The microorganism must be found in abundance in all organisms with the disease, but should not be found in healthy organisms. The microorganism must be isolated from a diseased organism and grown in pure culture. The cultured microorganism should cause disease when introduced into a healthy organism. The microorganism must be re-isolated from the inoculated, diseased experimental host and identified as being identical to the original specific causative agent. During his lifetime, Koch recognized that the postulates were not universally applicable, such as asymptomatic carriers of cholera violating the first postulate. For this same reason, the third postulate specifies "should", rather than "must", because not all host organisms exposed to an infectious agent will acquire the infection, potentially due to differences in prior exposure to the pathogen. It was later discovered that viruses cannot be grown in pure cultures because they are obligate intracellular parasites, making the second postulate impossible to fulfill for them. Similarly, pathogenic misfolded proteins, known as prions, only spread by transmitting their structure to other proteins, rather than self-replicating. While Koch's postulates retain historical importance for emphasizing that correlation does not imply causation, many pathogens are accepted as causative agents of specific diseases without fulfilling all of the criteria. 
In 1988, American microbiologist Stanley Falkow published a molecular version of Koch's postulates for establishing a causal relationship between microbial genes and virulence factors. ==== Joseph Lister, UK ==== After reading Pasteur's papers on bacterial fermentation, British surgeon Joseph Lister recognized that compound fractures, involving bones breaking through the skin, were more likely to become infected due to exposure to environmental microorganisms. He found that carbolic acid could be applied to the site of injury as an effective antiseptic. == See also == Alexander Fleming Cell theory Cooties Epidemiology Germ theory denialism History of emerging infectious diseases History of public health in the United Kingdom Robert Hooke Rudolf Virchow Zymotic disease == References == == Further reading == Baldwin, Peter. Contagion and the State in Europe, 1830–1930 (Cambridge UP, 1999), focus on cholera, smallpox and syphilis in Britain, France, Germany and Sweden. Brock, Thomas D. Robert Koch: A Life in Medicine and Bacteriology (1988). Dubos, René. Louis Pasteur: Free Lance of Science (1986). Gaynes, Robert P. Germ Theory (ASM Press, 2023), pp. 143–205 online. Geison, Gerald L. The Private Science of Louis Pasteur (Princeton University Press, 1995) online. Hudson, Robert P. Disease and Its Control: The Shaping of Modern Thought (1983). Lawrence, Christopher, and Richard Dixey. "Practising on Principle: Joseph Lister and the Germ Theories of Disease," in Medical Theory, Surgical Practice: Studies in the History of Surgery, ed. by Christopher Lawrence (Routledge, 1992), pp. 153–215. Magner, Lois N. A History of Infectious Diseases and the Microbial World (2008) online. Magner, Lois N. A History of Medicine (1992), pp. 305–334 online. Nutton, Vivian. "The seeds of disease: an explanation of contagion and infection from the Greeks to the Renaissance." Medical History 27.1 (1983): 1–34. online Porter, Roy. Blood and Guts: A Short History of Medicine (2004) online. Tomes, Nancy. The Gospel of Germs: Men, Women, and the Microbe in American Life (Harvard University Press, 1999) online. Tomes, Nancy. "Moralizing the microbe: the germ theory and the moral construction of behavior in the late-nineteenth-century antituberculosis movement." in Morality and Health (Routledge, 2013), pp. 271–294. Tomes, Nancy J. "American attitudes toward the germ theory of disease: Phyllis Allen Richmond revisited." Journal of the History of Medicine and Allied Sciences 52.1 (1997): 17–50. online Winslow, Charles-Edward Amory. The Conquest of Epidemic Disease: A Chapter in the History of Ideas (1943) online. == External links == John Horgan, "Germ Theory" (2023) Stephen T. Abedon, Germ Theory of Disease Supplemental Lecture (98/03/28 update), www.mansfield.ohio-state.edu William C. Campbell, The Germ Theory Timeline, germtheorytimeline.info Science's war on infectious diseases, www.creatingtechnology.org
Wikipedia/Germ_theory
Landau theory (also known as Ginzburg–Landau theory, despite the confusing name) in physics is a theory that Lev Landau introduced in an attempt to formulate a general theory of continuous (i.e., second-order) phase transitions. It can also be adapted to systems under externally-applied fields, and used as a quantitative model for discontinuous (i.e., first-order) transitions. Although the theory has now been superseded by the renormalization group and scaling theory formulations, it remains an exceptionally broad and powerful framework for phase transitions, and the associated concept of the order parameter as a descriptor of the essential character of the transition has proven transformative. == Mean-field formulation (no long-range correlation) == Landau was motivated to suggest that the free energy of any system should obey two conditions: Be analytic in the order parameter and its gradients. Obey the symmetry of the Hamiltonian. Given these two conditions, one can write down (in the vicinity of the critical temperature, T_c) a phenomenological expression for the free energy as a Taylor expansion in the order parameter. === Second-order transitions === Consider a system that breaks some symmetry below a phase transition, which is characterized by an order parameter η. This order parameter is a measure of the order before and after a phase transition; the order parameter is often zero above some critical temperature and non-zero below the critical temperature. In a simple ferromagnetic system like the Ising model, the order parameter is characterized by the net magnetization m, which becomes spontaneously non-zero below a critical temperature T_c. In Landau theory, one considers a free energy functional that is an analytic function of the order parameter. In many systems with certain symmetries, the free energy will only be a function of even powers of the order parameter, for which it can be expressed as the series expansion {\displaystyle F(T,\eta )-F_{0}=a(T)\eta ^{2}+{\frac {b(T)}{2}}\eta ^{4}+\cdots } In general, there are higher order terms present in the free energy, but it is a reasonable approximation to consider the series to fourth order in the order parameter, as long as the order parameter is small. For the system to be thermodynamically stable (that is, the system does not seek an infinite order parameter to minimize the energy), the coefficient of the highest even power of the order parameter must be positive, so b(T) > 0. For simplicity, one can assume that b(T) = b_0, a constant, near the critical temperature. Furthermore, since a(T) changes sign above and below the critical temperature, one can likewise expand a(T) ≈ a_0(T − T_c), where it is assumed that a > 0 for the high-temperature phase while a < 0 for the low-temperature phase, for a transition to occur.
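The behaviour of this quartic expansion can be checked numerically. Below is a minimal Python sketch (using NumPy) with illustrative parameter values a_0 = b_0 = 1 and T_c = 1 that are not taken from the text: it evaluates F − F_0 on a grid of trial order-parameter values, locates the global minimum at several temperatures, and compares it with the analytic result η_0 = sqrt(a_0(T_c − T)/b_0) derived in the next paragraph.

import numpy as np

a0, b0, Tc = 1.0, 1.0, 1.0   # illustrative values, not from the text

def free_energy(eta, T):
    # F - F0 = a0*(T - Tc)*eta^2 + (b0/2)*eta^4  (quartic Landau expansion)
    return a0 * (T - Tc) * eta**2 + 0.5 * b0 * eta**4

eta = np.linspace(-2.0, 2.0, 4001)   # grid of trial order-parameter values
for T in (1.2, 1.0, 0.9, 0.5):
    F = free_energy(eta, T)
    eta_min = abs(eta[np.argmin(F)])                   # |eta| at the global minimum of F
    eta_exact = np.sqrt(max(0.0, a0 * (Tc - T) / b0))  # analytic eta_0 = sqrt(-a/b)
    print(f"T = {T:.2f}  numerical |eta_0| = {eta_min:.3f}  analytic = {eta_exact:.3f}")

Above T_c the minimum sits at η = 0; below T_c it grows as (T_c − T)^{1/2}, consistent with the mean-field exponent β = 1/2 obtained below.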
With these assumptions, minimizing the free energy with respect to the order parameter requires {\displaystyle {\frac {\partial F}{\partial \eta }}=2a(T)\eta +2b(T)\eta ^{3}=0} The solution to the order parameter that satisfies this condition is either η = 0, or {\displaystyle \eta _{0}^{2}=-{\frac {a}{b}}=-{\frac {a_{0}}{b_{0}}}(T-T_{c})} It is clear that this solution only exists for T < T_c; otherwise η = 0 is the only solution. Indeed, η = 0 is the minimum solution for T > T_c, but the solution η_0 minimizes the free energy for T < T_c, and thus is a stable phase. Furthermore, the order parameter follows the relation {\displaystyle \eta (T)\propto \left|T-T_{c}\right|^{1/2}} below the critical temperature, indicating a critical exponent β = 1/2 for this mean-field Landau model. The free energy varies as a function of temperature as {\displaystyle F-F_{0}={\begin{cases}-{\dfrac {a_{0}^{2}}{2b_{0}}}(T-T_{c})^{2},&T<T_{c}\\0,&T>T_{c}\end{cases}}} From the free energy, one can compute the specific heat, {\displaystyle c_{p}=-T{\frac {\partial ^{2}F}{\partial T^{2}}}={\begin{cases}{\dfrac {a_{0}^{2}}{b_{0}}}T,&T<T_{c}\\0,&T>T_{c}\end{cases}}} which has a finite jump at the critical temperature of size Δc = a_0^2 T_c / b_0. This finite jump is therefore not associated with a discontinuity that would occur if the system absorbed latent heat, since T_c ΔS = 0. It is also noteworthy that the discontinuity in the specific heat is related to the discontinuity in the second derivative of the free energy, which is characteristic of a second-order phase transition. Furthermore, the fact that the specific heat has no divergence or cusp at the critical point indicates that its critical exponent for c ∼ |T − T_c|^{−α} is α = 0. === Irreducible representations === Landau expanded his theory to consider the constraints that it imposes on the symmetries before and after a second-order transition. They need to comply with a number of requirements: The distorted (or ordered) symmetry needs to be a subgroup of the higher one. The order parameter that embodies the distortion needs to transform as a single irreducible representation (irrep) of the parent symmetry. The irrep should not contain a third-order invariant. If the irrep allows for more than one fourth-order invariant, the resulting symmetry minimizes a linear combination of these invariants. In the latter case, more than one daughter structure should be reachable through a continuous transition. Good examples of this are the structure of MnP (space group Cmca) and the low-temperature structure of NbS (space group P63mc). They are both daughters of the NiAs structure, and their distortions transform according to the same irrep of that space group. === Applied fields === In many systems, one can consider a perturbing field h that couples linearly to the order parameter.
For example, in the case of a classical dipole moment μ, the energy of the dipole-field system is −μB. In the general case, one can assume an energy shift of −ηh due to the coupling of the order parameter to the applied field h, and the Landau free energy will change as a result: {\displaystyle F(T,\eta )-F_{0}=a_{0}(T-T_{c})\eta ^{2}+{\frac {b_{0}}{2}}\eta ^{4}-\eta h} In this case, the minimization condition is {\displaystyle {\frac {\partial F}{\partial \eta }}=2a(T)\eta +2b_{0}\eta ^{3}-h=0} One immediate consequence of this equation and its solution is that, if the applied field is non-zero, then the magnetization is non-zero at any temperature. This implies there is no longer a spontaneous symmetry breaking that occurs at any temperature. Furthermore, some interesting thermodynamic and universal quantities can be obtained from this condition. For example, at the critical temperature, where a(T_c) = 0, one can find the dependence of the order parameter on the external field: {\displaystyle \eta (T_{c})=\left({\frac {h}{2b_{0}}}\right)^{1/3}\propto h^{1/\delta }} indicating a critical exponent δ = 3. Furthermore, from the above condition, it is possible to find the zero-field susceptibility χ ≡ ∂η/∂h|_{h=0}, which must satisfy {\displaystyle 0=2a{\frac {\partial \eta }{\partial h}}+6b\eta ^{2}{\frac {\partial \eta }{\partial h}}-1} {\displaystyle [2a+6b\eta ^{2}]{\frac {\partial \eta }{\partial h}}=1} In this case, recalling from the zero-field case that η^2 = −a/b at low temperatures, while η^2 = 0 for temperatures above the critical temperature, the zero-field susceptibility has the following temperature dependence: {\displaystyle \chi (T,h\to 0)={\begin{cases}{\frac {1}{2a_{0}(T-T_{c})}},&T>T_{c}\\{\frac {1}{-4a_{0}(T-T_{c})}},&T<T_{c}\end{cases}}\propto |T-T_{c}|^{-\gamma }} which is reminiscent of the Curie–Weiss law for the temperature dependence of magnetic susceptibility in magnetic materials, and yields the mean-field critical exponent γ = 1. It is noteworthy that although the critical exponents so obtained are incorrect for many models and systems, they correctly satisfy various exponent equalities such as the Rushbrooke equality: α + 2β + γ = 2. === First-order transitions === Landau theory can also be used to study first-order transitions. There are two different formulations, depending on whether or not the system is symmetric under a change in sign of the order parameter. ==== I. Symmetric Case ==== Here we consider the case where the system has a symmetry and the energy is invariant when the order parameter changes sign. A first-order transition will arise if the quartic term in F is negative.
To ensure that the free energy remains positive at large η, one must carry the free-energy expansion to sixth order, {\displaystyle F(T,\eta )=A(T)\eta ^{2}-B_{0}\eta ^{4}+C_{0}\eta ^{6},} where A(T) = A_0(T − T_0), and T_0 is some temperature at which A(T) changes sign. We denote this temperature by T_0 and not T_c, since it will emerge below that it is not the temperature of the first-order transition, and since there is no critical point, the notion of a "critical temperature" is misleading to begin with. A_0, B_0, and C_0 are positive coefficients. We analyze this free energy functional as follows: (i) For T > T_0, the η^2 and η^6 terms are concave upward for all η, while the η^4 term is concave downward. Thus for sufficiently high temperatures F is concave upward for all η, and the equilibrium solution is η = 0. (ii) For T < T_0, both the η^2 and η^4 terms are negative, so η = 0 is a local maximum, and the minimum of F is at some non-zero value ±η_0(T), with F(T_0, η_0(T_0)) < 0. (iii) For T just above T_0, η = 0 turns into a local minimum, but the minimum at η_0(T) continues to be the global minimum since it has a lower free energy. It follows that as the temperature is raised above T_0, the global minimum cannot continuously evolve from η_0(T) to 0. Rather, at some intermediate temperature T_*, the minima at η_0(T_*) and η = 0 must become degenerate. For T > T_*, the global minimum will jump discontinuously from η_0(T_*) to 0. To find T_*, we demand that the free energy be zero at η = η_0(T_*) (just like the η = 0 solution), and furthermore that this point should be a local minimum. These two conditions yield two equations, {\displaystyle 0=A(T)\eta ^{2}-B_{0}\eta ^{4}+C_{0}\eta ^{6},} {\displaystyle 0=2A(T)\eta -4B_{0}\eta ^{3}+6C_{0}\eta ^{5},} which are satisfied when η^2(T_*) = B_0/(2C_0). The same equations also imply that A(T_*) = A_0(T_* − T_0) = B_0^2/(4C_0). That is, {\displaystyle T_{*}=T_{0}+{\frac {B_{0}^{2}}{4A_{0}C_{0}}}.} From this analysis both points made above can be seen explicitly. First, the order parameter suffers a discontinuous jump from (B_0/2C_0)^{1/2} to 0.
Second, the transition temperature T_* is not the same as the temperature T_0 where A(T) vanishes. At temperatures below the transition temperature, T < T_*, the order parameter is given by {\displaystyle \eta _{0}^{2}={\frac {B_{0}}{3C_{0}}}\left[1+{\sqrt {1-{\frac {3A(T)C_{0}}{B_{0}^{2}}}}}\right]} This expression exhibits the clear discontinuity in the order parameter as a function of the temperature. To further demonstrate that the transition is first-order, one can show that the free energy for this order parameter is continuous at the transition temperature T_*, but its first derivative (the entropy) suffers from a discontinuity, reflecting the existence of a non-zero latent heat. ==== II. Nonsymmetric Case ==== Next we consider the case where the system does not have a symmetry. In this case there is no reason to keep only even powers of η in the expansion of F, and a cubic term must be allowed (the linear term can always be eliminated by a shift η → η + constant). We thus consider a free energy functional {\displaystyle F(T,\eta )=A(T)\eta ^{2}-C_{0}\eta ^{3}+B_{0}\eta ^{4}+\cdots .} Once again A(T) = A_0(T − T_0), and A_0, B_0, C_0 are all positive. The sign of the cubic term can always be chosen to be negative, as we have done here, by reversing the sign of η if necessary. We analyze this free energy functional as follows: (i) For T < T_0, we have a local maximum at η = 0, and since the free energy is bounded below, there must be two local minima at nonzero values η_−(T) < 0 and η_+(T) > 0. The cubic term ensures that η_+ is the global minimum since it is deeper. (ii) For T just above T_0, the minimum at η_− disappears, the maximum at η = 0 turns into a local minimum, but the minimum at η_+ persists and continues to be the global minimum. As the temperature is further raised, F(T, η_+(T)) rises until it equals zero at some temperature T_*. At T_* we get a discontinuous jump in the global minimum from η_+(T_*) to 0. (The minima cannot coalesce, for that would require the first three derivatives of F to vanish at η = 0.) To find T_*, we demand that the free energy be zero at η = η_+(T_*) (just like the η = 0 solution), and furthermore that this point should be a local minimum. These two conditions yield two equations, {\displaystyle 0=A(T)\eta ^{2}-C_{0}\eta ^{3}+B_{0}\eta ^{4},} {\displaystyle 0=2A(T)\eta -3C_{0}\eta ^{2}+4B_{0}\eta ^{3},} which are satisfied when η(T_*) = C_0/(2B_0).
The same equations also imply that A(T_*) = A_0(T_* − T_0) = C_0^2/(4B_0). That is, {\displaystyle T_{*}=T_{0}+{\frac {C_{0}^{2}}{4A_{0}B_{0}}}.} As in the symmetric case, the order parameter suffers a discontinuous jump, from C_0/(2B_0) to 0, and the transition temperature T_* is not the same as the temperature T_0 where A(T) vanishes. === Applications === It was known experimentally that the liquid–gas coexistence curve and the ferromagnet magnetization curve both exhibited a scaling relation of the form |T − T_c|^β, where β was mysteriously the same for both systems. This is the phenomenon of universality. It was also known that simple liquid–gas models are exactly mappable to simple magnetic models, which implied that the two systems possess the same symmetries. It then followed from Landau theory why these two apparently disparate systems should have the same critical exponents, despite having different microscopic parameters. It is now known that the phenomenon of universality arises for other reasons (see Renormalization group). In fact, Landau theory predicts the incorrect critical exponents for the Ising and liquid–gas systems. The great virtue of Landau theory is that it makes specific predictions for what kind of non-analytic behavior one should see when the underlying free energy is analytic. All of the non-analyticity at the critical point, including the critical exponents, arises because the equilibrium value of the order parameter changes non-analytically, as a square root, whenever the free energy loses its unique minimum. The extension of Landau theory to include fluctuations in the order parameter shows that Landau theory is only strictly valid near the critical points of ordinary systems with spatial dimensions higher than 4. This is the upper critical dimension, and it can be much higher than four in more finely tuned phase transitions. In Mukamel's analysis of the isotropic Lifshitz point, the critical dimension is 8. This is because Landau theory is a mean field theory, and does not include long-range correlations. This theory does not explain non-analyticity at the critical point, but when applied to superfluid and superconductor phase transitions, Landau's theory provided inspiration for another theory, the Ginzburg–Landau theory of superconductivity. == Including long-range correlations == Consider the Ising model free energy above. Assume that the order parameter ψ and the external magnetic field h may have spatial variations. Now, the free energy of the system can be assumed to take the following modified form: {\displaystyle F:=\int d^{D}x\ \left(a(T)+r(T)\psi ^{2}(x)+s(T)\psi ^{4}(x)\ +f(T)(\nabla \psi (x))^{2}\ +h(x)\psi (x)\ \ +{\mathcal {O}}(\psi ^{6};(\nabla \psi )^{4})\right)} where D is the total spatial dimensionality.
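Before turning to the correlation function, the role of the gradient term can be illustrated numerically. The following Python/NumPy snippet is only a sketch under assumed values: it discretizes the functional above in one dimension (D = 1) with arbitrary coefficients r, s, f, drops the constant a(T) term and the higher-order terms, sets h = 0, and compares a spatially uniform order-parameter profile with one containing a domain wall.

import numpy as np

# Minimal 1D discretization of the free-energy functional above; r, s, f are
# arbitrary illustrative coefficients, and the constant a(T) term is dropped.
r, s, f = -1.0, 1.0, 0.5
L, N = 20.0, 2001
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
h = np.zeros_like(x)                      # no external field

def functional(psi):
    # F = integral dx [ r*psi^2 + s*psi^4 + f*(dpsi/dx)^2 + h*psi ]
    grad = np.gradient(psi, dx)           # finite-difference approximation to the gradient
    return np.trapz(r * psi**2 + s * psi**4 + f * grad**2 + h * psi, x)

psi_bulk = np.sqrt(-r / (2.0 * s))              # uniform minimiser of r*psi^2 + s*psi^4
psi_uniform = np.full_like(x, psi_bulk)         # spatially constant profile
psi_wall = psi_bulk * np.tanh(x - L / 2.0)      # profile containing a domain wall
print("uniform profile:     F =", functional(psi_uniform))
print("domain-wall profile: F =", functional(psi_wall))

The gradient term f(∇ψ)^2 raises the free energy of the non-uniform profile; this energetic cost of spatial variation is what controls how far fluctuations of ψ extend, and it is the ingredient the purely mean-field treatment above leaves out.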
So, {\displaystyle \langle \psi (x)\rangle :={\frac {{\text{Tr}}\ \psi (x){\rm {e}}^{-\beta H}}{Z}}} Assume that, for a localized external magnetic perturbation h(x) → 0 + h_0 δ(x), the order parameter takes the form ψ(x) → ψ_0 + φ(x). Then, {\displaystyle {\frac {\delta \langle \psi (x)\rangle }{\delta h(0)}}={\frac {\phi (x)}{h_{0}}}=\beta \left(\langle \psi (x)\psi (0)\rangle -\langle \psi (x)\rangle \langle \psi (0)\rangle \right)} That is, the fluctuation φ(x) in the order parameter corresponds to the order–order correlation. Hence, neglecting this fluctuation (as in the earlier mean-field approach) corresponds to neglecting the order–order correlation, which diverges near the critical point. One can also solve for φ(x), from which the scaling exponent ν for the correlation length ξ ∼ (T − T_c)^{−ν} can be deduced. From these, the Ginzburg criterion for the upper critical dimension for the validity of the Ising mean-field Landau theory (the one without long-range correlation) can be calculated as: {\displaystyle D\geq 2+2{\frac {\beta }{\nu }}} In our current Ising model, mean-field Landau theory gives β = 1/2 = ν, and so it (the Ising mean-field Landau theory) is valid only for spatial dimensionality greater than or equal to 4 (at the marginal value D = 4, there are small corrections to the exponents). This modified version of mean-field Landau theory is sometimes also referred to as the Landau–Ginzburg theory of Ising phase transitions. As a clarification, there is also a Ginzburg–Landau theory specific to the superconductivity phase transition, which also includes fluctuations. == See also == Ginzburg–Landau theory Landau–de Gennes theory Ginzburg criterion Stuart–Landau equation == Footnotes == == Further reading == Landau L.D. Collected Papers (Nauka, Moscow, 1969) Michael C. Cross, Landau theory of second order phase transitions (Caltech statistical mechanics lecture notes). Yukhnovskii, I R, Phase Transitions of the Second Order – Collective Variables Method, World Scientific, 1987, ISBN 9971-5-0087-6
Wikipedia/Landau_theory
Attachment theory is a psychological and evolutionary framework, concerning the relationships between humans, particularly the importance of early bonds between infants and their primary caregivers. Developed by psychiatrist and psychoanalyst John Bowlby (1907–90), the theory posits that infants need to form a close relationship with at least one primary caregiver to ensure their survival, and to develop healthy social and emotional functioning. Pivotal aspects of attachment theory include the observation that infants seek proximity to attachment figures, especially during stressful situations. Secure attachments are formed when caregivers are sensitive and responsive in social interactions, and consistently present, particularly between the ages of six months and two years. As children grow, they use these attachment figures as a secure base from which to explore the world and return to for comfort. The interactions with caregivers form patterns of attachment, which in turn create internal working models that influence future relationships. Separation anxiety or grief following the loss of an attachment figure is considered to be a normal and adaptive response for an attached infant. Research by developmental psychologist Mary Ainsworth in the 1960s and '70s expanded on Bowlby's work, introducing the concept of the "secure base", impact of maternal responsiveness and sensitivity to infant distress, and identified attachment patterns in infants: secure, avoidant, anxious, and disorganized attachment. In the 1980s, attachment theory was extended to adult relationships and attachment in adults, making it applicable beyond early childhood. Bowlby's theory integrated concepts from evolutionary biology, object relations theory, control systems theory, ethology, and cognitive psychology, and was fully articulated in his trilogy, Attachment and Loss (1969–82). While initially criticized by academic psychologists and psychoanalysts, attachment theory has become a dominant approach to understanding early social development and has generated extensive research. Despite some criticisms related to temperament, social complexity, and the limitations of discrete attachment patterns, the theory's core concepts have been widely accepted and have influenced therapeutic practices and social and childcare policies. == Attachment == Within attachment theory, attachment means an affectional bond or tie between an individual and an attachment figure (usually a caregiver/guardian). Such bonds may be reciprocal between two adults, but between a child and a caregiver, these bonds are based on the child's need for safety, security, and protection—which is most important in infancy and childhood. Attachment theory is not an exhaustive description of human relationships, nor is it synonymous with love and affection, although these may indicate that bonds exist. In child-to-adult relationships, the child's tie is called the "attachment" and the caregiver's reciprocal equivalent is referred to as the "care-giving bond". The theory proposes that children attach to carers instinctively, for the purpose of survival and, ultimately, genetic replication. The biological aim is survival and the psychological aim is security. The relationship that a child has with their attachment figure is especially important in threatening situations. Having access to a secure figure decreases fear in children when they are presented with threatening situations. 
Not only is having a decreased level of fear important for general mental stability, but it also implicates how children might react to threatening situations. The presence of a supportive attachment figure is especially important in a child's developmental years. In addition to support, attunement (accurate understanding and emotional connection) is crucial in a caregiver-child relationship. If the caregiver is poorly attuned to the child, the child may grow to feel misunderstood and anxious. Infants form attachments to any consistent caregiver who is sensitive and responsive in social interactions with them. The quality of social engagement is more influential than the amount of time spent. The biological mother is the usual principal attachment figure, but the role can be assumed by anyone who consistently behaves in a "mothering" way over a period of time. Within attachment theory, this means a set of behaviours that involves engaging in lively social interaction with the infant and responding readily to signals and approaches. Nothing in the theory suggests that fathers are not equally likely to become principal attachment figures if they provide most of the child care and related social interaction. A secure attachment to a father who is a "secondary attachment figure" may also counter the possible negative effects of an unsatisfactory attachment to a mother who is the primary attachment figure. Some infants direct attachment behaviour (proximity seeking) towards more than one attachment figure almost as soon as they start to show discrimination between caregivers; most come to do so during their second year. These figures are arranged hierarchically, with the principal attachment figure at the top. The set-goal of the attachment behavioural system is to maintain a bond with an accessible and available attachment figure. "Alarm" is the term used for activation of the attachment behavioural system caused by fear of danger. "Anxiety" is the anticipation or fear of being cut off from the attachment figure. If the figure is unavailable or unresponsive, separation distress occurs. In infants, physical separation can cause anxiety and anger, followed by sadness and despair. By age three or four, physical separation is no longer such a threat to the child's bond with the attachment figure. Threats to security in older children and adults arise from prolonged absence, breakdowns in communication, emotional unavailability or signs of rejection or abandonment. === Behaviours === The attachment behavioural system serves to achieve or maintain proximity to the attachment figure. Pre-attachment behaviours occur in the first six months of life. During the first phase (the first two months), infants smile, babble, and cry to attract the attention of potential caregivers. Although infants of this age learn to discriminate between caregivers, these behaviours are directed at anyone in the vicinity. During the second phase (two to six months), the infant discriminates between familiar and unfamiliar adults, becoming more responsive toward the caregiver; following and clinging are added to the range of behaviours. The infant's behaviour toward the caregiver becomes organized on a goal-directed basis to achieve the conditions that make it feel secure. By the end of the first year, the infant is able to display a range of attachment behaviours designed to maintain proximity. 
These manifest as protesting the caregiver's departure, greeting the caregiver's return, clinging when frightened, and following when able. With the development of locomotion, the infant begins to use the caregiver or caregivers as a "safe base" from which to explore.: 71  Infant exploration is greater when the caregiver is present because the infant's attachment system is relaxed and it is free to explore. If the caregiver is inaccessible or unresponsive, attachment behaviour is more strongly exhibited. Anxiety, fear, illness, and fatigue will cause a child to increase attachment behaviours. After the second year, as the child begins to see the caregiver as an independent person, a more complex and goal-corrected partnership is formed. Children begin to notice others' goals and feelings and plan their actions accordingly. === Tenets === Modern attachment theory is based on three principles: Bonding is an intrinsic human need. Regulation of emotion and fear to enhance vitality. Promoting adaptiveness and growth. Common attachment behaviours and emotions, displayed in most social primates including humans, are adaptive. The long-term evolution of these species has involved selection for social behaviours that make individual or group survival more likely. The commonly observed attachment behaviour of toddlers staying near familiar people would have had safety advantages in the environment of early adaptation and has similar advantages today. Bowlby saw the environment of early adaptation as similar to current hunter-gatherer societies. There is a survival advantage in the capacity to sense possibly dangerous conditions such as unfamiliarity, being alone, or rapid approach. According to Bowlby, proximity-seeking to the attachment figure in the face of threat is the "set-goal" of the attachment behavioural system. Bowlby's original account of a sensitivity period during which attachments can form of between six months and two to three years has been modified by later researchers. These researchers have shown there is indeed a sensitive period during which attachments will form if possible, but the time frame is broader and the effect less fixed and irreversible than first proposed. With further research, authors discussing attachment theory have come to appreciate social development is affected by later as well as earlier relationships. Early steps in attachment take place most easily if the infant has one caregiver, or the occasional care of a small number of other people. According to Bowlby, almost from the beginning, many children have more than one figure toward whom they direct attachment behaviour. These figures are not treated alike; there is a strong bias for a child to direct attachment behaviour mainly toward one particular person. Bowlby used the term "monotropy" to describe this bias. Researchers and theorists have abandoned this concept insofar as it may be taken to mean the relationship with the special figure differs qualitatively from that of other figures. Rather, current thinking postulates definite hierarchies of relationships. Early experiences with caregivers gradually give rise to a system of thoughts, memories, beliefs, expectations, emotions, and behaviours about the self and others. This system, called the "internal working model of social relationships", continues to develop with time and experience. Internal models regulate, interpret, and predict attachment-related behaviour in the self and the attachment figure. 
As they develop in line with environmental and developmental changes, they incorporate the capacity to reflect and communicate about past and future attachment relationships. They enable the child to handle new types of social interactions; knowing, for example, that an infant should be treated differently from an older child, or that interactions with teachers and parents share characteristics. Even interactions with coaches share similar characteristics: secure attachment relationships with coaches as well as with parents can play a role in an athlete's growth in their respective sport. This internal working model continues to develop through adulthood, helping the individual cope with friendships, marriage, and parenthood, all of which involve different behaviours and feelings. The development of attachment is a transactional process. Specific attachment behaviours begin with predictable, apparently innate, behaviours in infancy. They change with age in ways determined partly by experiences and partly by situational factors. As attachment behaviours change with age, they do so in ways shaped by relationships. A child's behaviour when reunited with a caregiver is determined not only by how the caregiver has treated the child before, but also by the history of effects the child has had on the caregiver. === Cultural differences === In Western child-rearing, the focus is on a single attachment, primarily to the mother. This dyadic model is not the only strategy of attachment producing a secure and emotionally adept child. Having a single, dependably responsive and sensitive caregiver (namely the mother) does not guarantee the ultimate success of the child. Results from Israeli, Dutch and East African studies show that children with multiple caregivers grow up not only feeling secure, but also develop "more enhanced capacities to view the world from multiple perspectives." This evidence can be more readily found in hunter-gatherer communities, like those that exist in rural Tanzania. In hunter-gatherer communities, in the past and present, mothers are the primary caregivers but share the maternal responsibility of ensuring the child's survival with a variety of different allomothers. So while the mother is important, she is not the only opportunity for relational attachment a child can make. Several group members (with or without blood relation) contribute to the task of bringing up a child, sharing the parenting role, and can therefore be sources of multiple attachment. There is evidence of this communal parenting throughout history that "would have significant implications for the evolution of multiple attachment." In "non-metropolis" India (in contrast to metropolitan India, where "dual income nuclear families" and the dyadic mother relationship are more the norm), a family normally consists of 3 generations (and sometimes 4: great-grandparents, grandparents, parents, and child or children), so the child or children would have four to six caregivers from whom to select their "attachment figure". A child's "uncles and aunts" (parents' siblings and their spouses) also contribute to the child's psycho-social enrichment. Although it has been debated for years, and there are differences across cultures, research has shown that the three basic aspects of attachment theory are, to some degree, universal. Studies in Israel and Japan resulted in findings which diverge from a number of studies completed in Western Europe and the United States.
The prevailing hypotheses are: 1) that secure attachment is the most desirable state, and the most prevalent; 2) that maternal sensitivity influences infant attachment patterns; and 3) that specific infant attachments predict later social and cognitive competence. == Empirical research and theoretical developments == John Bowlby initially conceptualized attachment as an evolutionary system that would ensure infant survival. Mary Ainsworth provided empirical testing through observational studies such as the Strange Situation Experiment. During the Strange Situation experiment, four participants partake in a series of eight "episodes" of experiences. These participants are a mother, a baby, a stranger and an observer. The mother, accompanied by the observer, carries the baby into the room and the observer leaves. The mother puts the baby down in a specified location, then sits quietly in her chair until the baby solicits her attention. The stranger enters, sits quietly for one minute, then converses with the mother for one minute, and then gradually approaches the baby. The mother then leaves the room. If the baby is playing with its toys, the stranger merely observes. If the baby is not playing with its toys, the stranger tries to interest the baby in the toys. If the baby is distressed, the stranger tries to comfort the baby. The mother enters, and pauses in the doorway so that the baby can respond to her presence. The stranger then leaves. After the baby begins to play with its toys again, the mother leaves again and says, "bye-bye". The baby is left alone for three minutes, unless the baby is so distressed it has to be comforted. The stranger enters and repeats the same behavior from the fourth episode. The mother returns, the stranger leaves, and the mother reunites with her baby. These eight episodes were observed through an adjoining room, and the baby's responses were categorized into different types of attachment behaviours. Bretherton (1992) later traced this theoretical development, highlighting Bowlby's interdisciplinary framework, which drew on concepts from ethology, psychoanalysis, and cognitive science. Ainsworth introduced the systematic classification of attachment styles, contingent upon infants' interactive experiences with their caregivers. Building on this foundation, Main and Solomon (1990) extended the original attachment classification to identify a disorganized/disoriented attachment style. They observed infants displaying contradictory or confused behaviors when reunited with a caregiver. This further complicated the understanding of attachment patterning and has informed clinical practice and developmental research. Empirical studies have further clarified how early attachment is formed and passed on across generations. Beebe et al. (2010) studied mother–infant interactions at four months of age using microanalytic methods. They determined that coordinated gaze and vocal affect predicted attachment security at twelve months, as measured with the Strange Situation Procedure. Similarly, Steele et al. (1996) found intergenerational continuity between parents' attachment classifications and those of their infants. Another of the most influential studies supporting the principles of attachment theory was conducted by Harry Harlow. Harlow's research with rhesus monkeys demonstrated the critical importance of caregiving and emotional comfort in creating attachment.
In these experiments, infant monkeys were separated from their biological mothers and given the choice between two inanimate surrogate mothers: one made of wire and wood and one made of foam and cloth. In addition, the monkeys were assigned to one of two conditions: one in which the wire mother provided milk while the cloth mother had no food to offer, and the other in which the cloth mother provided food while the wire mother did not. In both conditions, the infant monkeys overwhelmingly preferred the cloth mother, clinging to it for comfort and security even when it did not provide nourishment. The infant monkeys' preference highlighted the importance of comfort, warmth, and emotional security over mere sustenance. Harlow's work provided strong support for Bowlby's claim that the need for affection and emotional security is a fundamental aspect of early development, also influencing later social and emotional outcomes. Clearly, children need more than just food and shelter; they require emotional attunement and a reliable source of comfort to develop a sense of security. This introduces the importance of having a "secure base." A secure base allows children to confidently explore their environment, knowing that they have a supportive caregiver to return to when distressed. Such warmth and responsiveness are critical for successful relationships and attachment. Attachment theory has also been extended into the domain of adult relationships. Adult romantic attachment has been reviewed by Fraley and Shaver (2000), who argued that the attachment behaviours observed in infancy, such as proximity seeking and secure-base responses, are also present in adult romantic relationships. This review illustrates how the attachment framework can be applied across the lifespan and highlights ongoing debates about continuity, measurement, and individual differences. Together, these theoretical advances and empirical studies underscore the importance of early relational experiences and support the broader application of attachment theory in various contexts across developmental stages and relationships. == Attachment patterns == The strength of a child's attachment behaviour in a given circumstance does not indicate the "strength" of the attachment bond. Some insecure children will routinely display very pronounced attachment behaviours, while many secure children find that there is no great need to engage in either intense or frequent shows of attachment behaviour. Individuals with different attachment styles have different beliefs about romantic love, the availability and trustworthiness of love partners, and their own readiness for love. === Secure attachment === A toddler who is securely attached to his or her parent (or other familiar caregiver) will explore freely while the caregiver is present, typically engages with strangers, is often visibly upset when the caregiver departs, and is generally happy to see the caregiver return. The extent of exploration and of distress are affected, however, by the child's temperamental make-up and by situational factors as well as by attachment status. A child's attachment is largely influenced by their primary caregiver's sensitivity to their needs. Parents who consistently (or almost always) respond to their child's needs will create securely attached children. Such children are certain that their parents will be responsive to their needs and communications. In the traditional Ainsworth et al.
(1978) coding of the Strange Situation, secure infants are denoted as "Group B" infants and they are further subclassified as B1, B2, B3, and B4. Although these subgroupings refer to different stylistic responses to the comings and goings of the caregiver, they were not given specific labels by Ainsworth and colleagues, although their descriptive behaviours led others (including students of Ainsworth) to devise a relatively "loose" terminology for these subgroups. B1s have been referred to as "secure-reserved", B2s as "secure-inhibited", B3s as "secure-balanced", and B4s as "secure-reactive". However, in academic publications the classification of infants (if subgroups are denoted) is typically simply "B1" or "B2", although more theoretical and review-oriented papers surrounding attachment theory may use the above terminology. Secure attachment is the most common type of attachment relationship seen throughout societies. Securely attached children are best able to explore when they have the knowledge of a secure base (their caregiver) to return to in times of need. When assistance is given, this bolsters the sense of security and also, assuming the parent's assistance is helpful, educates the child on how to cope with the same problem in the future. Therefore, secure attachment can be seen as the most adaptive attachment style. According to some psychological researchers, a child becomes securely attached when the parent is available and able to meet the needs of the child in a responsive and appropriate manner. At infancy and early childhood, if parents are caring and attentive towards their children, those children will be more prone to secure attachment. === Anxious-ambivalent attachment === Anxious-ambivalent attachment is a form of insecure attachment and is also misnamed as "resistant attachment". In general, a child with an anxious-ambivalent pattern of attachment will typically explore little (in the Strange Situation) and is often wary of strangers, even when the parent is present. When the caregiver departs, the child is often highly distressed showing behaviours such as crying or screaming. The child is generally ambivalent when the caregiver returns. The anxious-ambivalent strategy is a response to unpredictably responsive caregiving, and the displays of anger (ambivalent resistant, C1) or helplessness (ambivalent passive, C2) towards the caregiver on reunion can be regarded as a conditional strategy for maintaining the availability of the caregiver by preemptively taking control of the interaction. The C1 (ambivalent resistant) subtype is coded when "resistant behavior is particularly conspicuous. The mixture of seeking and yet resisting contact and interaction has an unmistakably angry quality and indeed an angry tone may characterize behavior in the preseparation episodes". Regarding the C2 (ambivalent passive) subtype, Ainsworth et al. wrote: Perhaps the most conspicuous characteristic of C2 infants is their passivity. Their exploratory behavior is limited throughout the SS and their interactive behaviors are relatively lacking in active initiation. Nevertheless, in the reunion episodes they obviously want proximity to and contact with their mothers, even though they tend to use signalling rather than active approach, and protest against being put down rather than actively resisting release ... In general the C2 baby is not as conspicuously angry as the C1 baby. 
Research done by McCarthy and Taylor (1999) found that children with abusive childhood experiences were more likely to develop ambivalent attachments. The study also found that children with ambivalent attachments were more likely to experience difficulties in maintaining intimate relationships as adults. === Dismissive-avoidant attachment === An infant with a dismissive-avoidant pattern of attachment will avoid or ignore the caregiver—showing little emotion when the caregiver departs or returns. The infant will not explore very much regardless of who is there. Infants classified as dismissive-avoidant (A) represented a puzzle in the early 1970s. They did not exhibit distress on separation, and either ignored the caregiver on their return (A1 subtype) or showed some tendency to approach together with some tendency to ignore or turn away from the caregiver (A2 subtype). Ainsworth and Bell theorized that the apparently unruffled behaviour of the avoidant infants was in fact a mask for distress, a hypothesis later evidenced through studies of the heart-rate of avoidant infants. Infants are depicted as dismissive-avoidant when there is: ... conspicuous avoidance of the mother in the reunion episodes which is likely to consist of ignoring her altogether, although there may be some pointed looking away, turning away, or moving away ... If there is a greeting when the mother enters, it tends to be a mere look or a smile ... Either the baby does not approach his mother upon reunion, or they approach in "abortive" fashions with the baby going past the mother, or it tends to only occur after much coaxing ... If picked up, the baby shows little or no contact-maintaining behavior; he tends not to cuddle in; he looks away and he may squirm to get down. Ainsworth's narrative records showed that infants avoided the caregiver in the stressful Strange Situation Procedure when they had a history of experiencing rebuff of attachment behaviour. The infant's needs were frequently not met and the infant had come to believe that communication of emotional needs had no influence on the caregiver. Ainsworth's student Mary Main theorized that avoidant behaviour in the Strange Situation Procedure should be regarded as "a conditional strategy, which paradoxically permits whatever proximity is possible under conditions of maternal rejection" by de-emphasising attachment needs. Main proposed that avoidance has two functions for an infant whose caregiver is consistently unresponsive to their needs. Firstly, avoidant behaviour allows the infant to maintain a conditional proximity with the caregiver: close enough to maintain protection, but distant enough to avoid rebuff. Secondly, the cognitive processes organizing avoidant behaviour could help direct attention away from the unfulfilled desire for closeness with the caregiver—avoiding a situation in which the child is overwhelmed with emotion ("disorganized distress"), and therefore unable to maintain control of themselves and achieve even conditional proximity. === Disorganized/disoriented attachment === Beginning in 1983, Crittenden offered A/C and other new organized classifications (see below). Drawing on records of behaviours discrepant with the A, B and C classifications, a fourth classification was added by Ainsworth's colleague Mary Main. In the Strange Situation, the attachment system is expected to be activated by the departure and return of the caregiver. 
If the behaviour of the infant does not appear to the observer to be coordinated in a smooth way across episodes to achieve either proximity or some relative proximity with the caregiver, then it is considered 'disorganized' as it indicates a disruption or flooding of the attachment system (e.g. by fear). Infant behaviours in the Strange Situation Protocol coded as disorganized/disoriented include overt displays of fear; contradictory behaviours or affects occurring simultaneously or sequentially; stereotypic, asymmetric, misdirected or jerky movements; or freezing and apparent dissociation. Lyons-Ruth has urged, however, that it should be more widely "recognized that 52% of disorganized infants continue to approach the caregiver, seek comfort, and cease their distress without clear ambivalent or avoidant behavior". The benefit of this category was hinted at earlier in Ainsworth's own experience finding difficulties in fitting all infant behaviour into the three classifications used in her Baltimore study. Ainsworth and colleagues sometimes observed tense movements such as hunching the shoulders, putting the hands behind the neck and tensely cocking the head, and so on. It was our clear impression that such tension movements signified stress, both because they tended to occur chiefly in the separation episodes and because they tended to be prodromal to crying. Indeed, our hypothesis is that they occur when a child is attempting to control crying, for they tend to vanish if and when crying breaks through. Such observations also appeared in the doctoral theses of Ainsworth's students. Crittenden, for example, noted that one abused infant in her doctoral sample was classed as secure (B) by her undergraduate coders because her strange situation behaviour was "without either avoidance or ambivalence, she did show stress-related stereotypic headcocking throughout the strange situation. This pervasive behavior, however, was the only clue to the extent of her stress". There is rapidly growing interest in disorganized attachment from clinicians and policy-makers as well as researchers. However, the disorganized/disoriented attachment (D) classification has been criticized by some for being too encompassing, including Ainsworth herself. In 1990, Ainsworth put in print her blessing for the new 'D' classification, though she urged that the addition be regarded as "open-ended, in the sense that subcategories may be distinguished", as she worried that too many different forms of behaviour might be treated as if they were the same thing. Indeed, the D classification puts together infants who use a somewhat disrupted secure (B) strategy with those who seem hopeless and show little attachment behaviour; it also puts together infants who run to hide when they see their caregiver in the same classification as those who show an avoidant (A) strategy on the first reunion and then an ambivalent-resistant (C) strategy on the second reunion. Perhaps responding to such concerns, George and Solomon have divided among indices of disorganized/disoriented attachment (D) in the Strange Situation, treating some of the behaviours as a 'strategy of desperation' and others as evidence that the attachment system has been flooded (e.g. by fear, or anger). Crittenden also argues that some behaviour classified as Disorganized/disoriented can be regarded as more 'emergency' versions of the avoidant and/or ambivalent/resistant strategies, and function to maintain the protective availability of the caregiver to some degree. 
Sroufe et al. have agreed that "even disorganized attachment behaviour (simultaneous approach-avoidance; freezing, etc.) enables a degree of proximity in the face of a frightening or unfathomable parent". However, "the presumption that many indices of 'disorganization' are aspects of organized patterns does not preclude acceptance of the notion of disorganization, especially in cases where the complexity and dangerousness of the threat are beyond children's capacity for response." For example, "Children placed in care, especially more than once, often have intrusions. In videos of the Strange Situation Procedure, they tend to occur when a rejected/neglected child approaches the stranger in an intrusion of desire for comfort, then loses muscular control and falls to the floor, overwhelmed by the intruding fear of the unknown, potentially dangerous, strange person." Main and Hesse found most of the mothers of these children had suffered major losses or other trauma shortly before or after the birth of the infant and had reacted by becoming severely depressed. In fact, fifty-six per cent of mothers who had lost a parent by death before they completed high school had children with disorganized attachments. Subsequent studies, while emphasising the potential importance of unresolved loss, have qualified these findings. For example, Solomon and George found unresolved loss in the mother tended to be associated with disorganized attachment in their infant primarily when they had also experienced an unresolved trauma in their life prior to the loss. === Categorization differences across cultures === Across different cultures deviations from the Strange Situation Protocol have been observed. A Japanese study in 1986 (Takahashi) studied 60 Japanese mother-infant pairs and compared them with Ainsworth's distributional pattern. Although the ranges for securely attached and insecurely attached had no significant differences in proportions, the Japanese insecure group consisted of only resistant children, with no children categorized as avoidant. This may be because the Japanese child rearing philosophy stressed close mother infant bonds more so than in Western cultures. In Northern Germany, Grossmann et al. (Grossmann, Huber, & Wartner, 1981; Grossmann, Spangler, Suess, & Unzner, 1985) replicated the Ainsworth Strange Situation with 46 mother infant pairs and found a different distribution of attachment classifications with a high number of avoidant infants: 52% avoidant, 34% secure, and 13% resistant (Grossmann et al., 1985). Another study in Israel found there was a high frequency of an ambivalent pattern, which according to Grossman et al. (1985) could be attributed to a greater parental push toward children's independence. === Later patterns and the dynamic-maturational model === Techniques have been developed to guide a child to verbalize their state of mind with respect to attachment. One such is the "stem story", in which a child receives the beginning of a story that raises attachment issues and is asked to complete it. This is modified for older children, adolescents and adults, where semi-structured interviews are used instead, and the way content is delivered may be as significant as the content itself. However, there are no substantially validated measures of attachment for middle childhood or early adolescence (from 7 to 13 years of age). Some studies of older children have identified further attachment classifications. 
Main and Cassidy observed that disorganized behaviour in infancy can develop into a child using caregiver-controlling or punitive behaviour to manage a helpless or dangerously unpredictable caregiver. In these cases, the child's behaviour is organized, but the behaviour is treated by researchers as a form of disorganization, since the family hierarchy no longer follows parental authority in that scenario. American psychologist Patricia McKinsey Crittenden has elaborated classifications of further forms of avoidant and ambivalent attachment behaviour, as seen in her dynamic-maturational model of attachment and adaptation (DMM). These include the caregiving and punitive behaviours also identified by Main and Cassidy (termed A3 and C3, respectively), but also other patterns such as compulsive compliance with the wishes of a threatening parent (A4). Crittenden's ideas developed from Bowlby's proposal: "Given certain adverse circumstances during childhood, the selective exclusion of information of certain sorts may be adaptive. Yet, when during adolescence and adulthood the situation changes, the persistent exclusion of the same forms of information may become maladaptive". Crittenden theorizes that the human experience of danger comprises two basic components. The first is the emotion provoked by the potential for danger, which Crittenden refers to as "affective information". In childhood, the unexplained absence of an attachment figure would provoke these emotions. One strategy an infant faced with insensitive or rejecting parenting may use to maintain the availability of the attachment figure is to repress the emotional information that could result in rejection by that figure. The second is causal or other sequentially ordered knowledge about the potential for safety or danger, which includes awareness of the behaviours that indicate whether an attachment figure is available as a secure haven. If the infant represses the knowledge that the caregiver is not a reliable source of protection and safety, they may use clingy and/or aggressive behaviour to demand attention and potentially increase the availability of an attachment figure who otherwise responds inconsistently or misleadingly to the infant's attachment behaviours. Crittenden proposes that both kinds of information can be split off from consciousness or behavioural expression as a 'strategy' to maintain the availability of an attachment figure (see disorganized/disoriented attachment for type distinctions). Type A strategies split off emotional information about feeling threatened, and Type C strategies split off temporally-sequenced knowledge about how and why the attachment figure is available. In contrast, Type B strategies use both kinds of information without much distortion. For example, a toddler may have come to depend upon a Type C strategy of tantrums to maintain the availability of an unreliable attachment figure. If the attachment figure then begins to respond appropriately to the child's attachment behaviours, the toddler learns that the figure is becoming more reliable, reliance on coercive behaviours is reduced, and a more secure attachment may develop.
=== Significance of patterns ===

Research based on data from longitudinal studies, such as the National Institute of Child Health and Human Development Study of Early Child Care and the Minnesota Study of Risk and Adaptation from Birth to Adulthood, and from cross-sectional studies, consistently shows associations between early attachment classifications and peer relationships as to both quantity and quality. Lyons-Ruth, for example, found that "for each additional withdrawing behavior displayed by mothers in relation to their infant's attachment cues in the Strange Situation Procedure, the likelihood of clinical referral by service providers was increased by 50%." There is an extensive body of research demonstrating a significant association between attachment organizations and children's functioning across multiple domains. Early insecure attachment does not necessarily predict difficulties, but it is a liability for the child, particularly if similar parental behaviours continue throughout childhood. Compared to that of securely attached children, the adjustment of insecure children in many spheres of life is not as soundly based, putting their future relationships in jeopardy. Although the link is not fully established by research and there are other influences besides attachment, secure infants are more likely to become socially competent than their insecure peers. Relationships formed with peers influence the acquisition of social skills, intellectual development and the formation of social identity. Classification of children's peer status (popular, neglected or rejected) has been found to predict subsequent adjustment. Insecure children, particularly avoidant children, are especially vulnerable to family risk. Their social and behavioural problems increase or decline with deterioration or improvement in parenting. However, an early secure attachment appears to have a lasting protective function. As with attachment to parental figures, subsequent experiences may alter the course of development. Studies have suggested that infants with a high risk for autism spectrum disorders (ASD) may express attachment security differently from infants with a low risk for ASD. Behavioural problems and social competence in insecure children increase or decline with deterioration or improvement in quality of parenting and the degree of risk in the family environment. Some authors have questioned the idea that a taxonomy of categories representing a qualitative difference in attachment relationships can be developed. Examination of data from 1,139 15-month-olds showed that variation in attachment patterns was continuous rather than grouped. This criticism introduces important questions for attachment typologies and the mechanisms behind apparent types. However, it has relatively little relevance for attachment theory itself, which "neither requires nor predicts discrete patterns of attachment." There is some evidence that gender differences in attachment patterns of adaptive significance begin to emerge in middle childhood. Researchers have observed that males demonstrate a greater tendency to engage in criminal behaviour, which is suspected to be related to their being more likely to experience inadequate early attachments to primary caregivers. Insecure attachment and early psychosocial stress indicate the presence of environmental risk (for example poverty, mental illness, instability, minority status, violence).
Environmental risk can cause insecure attachment, while also favouring the development of strategies for earlier reproduction. Different reproductive strategies have different adaptive values for males and females: insecure males tend to adopt avoidant strategies, whereas insecure females tend to adopt anxious/ambivalent strategies, unless they are in a very high risk environment. Adrenarche is proposed as the endocrine mechanism underlying the reorganization of insecure attachment in middle childhood.

== Changes in attachment during childhood and adolescence ==

Childhood and adolescence allow the development of an internal working model useful for forming attachments. This internal working model is related to the individual's state of mind which develops with respect to attachment generally, and it informs how attachment functions in relationship dynamics based on childhood and adolescent experience. The organization of an internal working model is generally seen as leading to more stable attachments in those who develop such a model, rather than those who rely more on the individual's state of mind alone in forming new attachments. Age, cognitive growth, and continued social experience advance the development and complexity of the internal working model. Attachment-related behaviours lose some characteristics typical of the infant-toddler period and take on age-related tendencies. The preschool period involves the use of negotiation and bargaining. For example, four-year-olds are not distressed by separation if they and their caregiver have already negotiated a shared plan for the separation and reunion. Ideally, these social skills become incorporated into the internal working model to be used with other children and later with adult peers. As children move into the school years at about six years old, most develop a goal-corrected partnership with parents, in which each partner is willing to compromise in order to maintain a gratifying relationship. By middle childhood, the goal of the attachment behavioural system has changed from proximity to the attachment figure to availability. Generally, a child is content with longer separations, provided that contact, or the possibility of physically reuniting if needed, is available. Attachment behaviours such as clinging and following decline and self-reliance increases. By middle childhood (ages 7–11), there may be a shift toward mutual coregulation of secure-base contact, in which caregiver and child negotiate methods of maintaining communication and supervision as the child moves toward a greater degree of independence. The attachment system used by adolescents is seen as a "safety regulating system" whose main function is to promote physical and psychological safety. Two kinds of event can trigger the attachment system: the presence of potential danger or stress, internal or external, and a threat to the accessibility and/or availability of an attachment figure. The ultimate goal of the attachment system is security, so during a time of danger or inaccessibility the behavioural system accepts felt security in the context of the availability of protection. By adolescence, individuals are able to find security through a variety of means, such as food, exercise, and social media. Felt security can be achieved in a number of ways, often without the physical presence of the attachment figure.
Higher levels of maturity allow adolescents to interact with their environment more capably on their own, because the environment is perceived as less threatening. Adolescents also see an increase in cognitive, emotional and behavioural maturity that influences how likely they are to experience conditions that activate their need for an attachment figure. For example, teenagers who are sick and stay home from school may want a parent at home to take care of them, but they are also able to stay home by themselves without experiencing serious distress. Additionally, the social environment that a school fosters impacts adolescents' attachment behaviour, even if these same adolescents have not had issues with attachment behaviour previously. High schools that have a permissive environment, compared to an authoritative environment, promote positive attachment behaviour. For example, when students feel connected to their teachers and peers because of their permissive schooling environment, they are less likely to skip school. Positive attachment behaviour in high schools has important implications for how a school's environment should be structured. Attachment styles during adolescence differ as follows. Secure adolescents are expected to rate their mothers more highly as a source of support than all other support figures, including fathers, significant others, and best friends. Insecure adolescents identify more strongly with their peers than their parents as their primary attachment figures. Their friends are seen as a significantly strong source of attachment support. Dismissing adolescents rate their parents as a less significant source of attachment support and would consider themselves as their primary attachment figure. Preoccupied adolescents would rate their parents as their primary source of attachment support and would consider themselves as a much less significant source of attachment support.

== Attachment styles in adults ==

Attachment theory was extended to adult romantic relationships in the late 1980s by Cindy Hazan and Phillip Shaver. Four styles of attachment have been identified in adults: secure, anxious-preoccupied, dismissive-avoidant and fearful-avoidant. These roughly correspond to infant classifications: secure, insecure-ambivalent, insecure-avoidant and disorganized/disoriented.

=== Securely attached ===

Securely attached adults have been "linked to a high need for achievement and a low fear of failure (Elliot & Reis, 2003)". They will positively approach a task with the goal of mastering it and have an appetite for exploration in achievement settings (Elliot & Reis, 2003). Research shows that securely attached adults have a "low level of personal distress and high levels of concern for others". Due to their high rates of self-efficacy, securely attached adults typically do not hesitate to remove a person who is having a negative impact on them from problematic situations they are facing. This calm response is representative of the securely attached adult's emotionally regulated response to threats that many studies have supported in the face of diverse situations. Adult secure attachment comes from an individual's early connection with their caregiver(s), genes and their romantic experiences.
Within romantic relationships, securely attached adults tend to show the following characteristics: excellent conflict resolution, mental flexibility, effective communication, avoidance of manipulation, comfort with closeness without fear of being enmeshed, quickness to forgive, a view of sex and emotional intimacy as one, a belief that they can positively impact their relationship, and caring for their partner in the way they themselves want to be cared for. In sum, they tend to treat their partners well, as they are not afraid to give positively and to ask for their needs to be met. Securely attached adults believe that there are "many potential partners that would be responsive to their needs", and if they come across an individual who is not meeting their needs, they will typically lose interest quickly.

=== Anxious-preoccupied ===

Anxious-preoccupied adults seek high levels of intimacy, approval and responsiveness from partners, becoming overly dependent. They tend to be less trusting, have less positive views about themselves than their partners, and may exhibit high levels of emotional expressiveness, worry and impulsiveness in their relationships. The anxiety that these adults feel prevents the establishment of satisfactory defence exclusion. Thus, it is possible that individuals who have been anxiously attached to their attachment figure or figures have not been able to develop sufficient defences against separation anxiety. Because of their lack of preparation these individuals will then overreact to the anticipation of separation or the actual separation from their attachment figure. The anxiety comes from an individual's intense and/or unstable relationship that leaves the anxious or preoccupied individual relatively defenceless. In terms of adult relationships, if an adult experiences this inconsistent behaviour from their romantic partner or acquaintance, they might develop some of the aspects of this attachment type. In addition, insecurity and distress about relationships can be driven by partners who exhibit inconsistent connection or emotionally abusive behaviours. However, a secure relationship can also reduce anxious behaviour and be a resource for safety and support.

=== Dismissive-avoidant ===

Dismissive-avoidant adults desire a high level of independence, often appearing to avoid attachment altogether. They view themselves as self-sufficient, invulnerable to attachment feelings and not needing close relationships. They tend to suppress their feelings, dealing with conflict by distancing themselves from partners of whom they often have a poor opinion. These adults lack interest in forming close relationships and in maintaining emotional closeness with the people around them. They have a great amount of distrust in others, but at the same time possess a positive model of self; they would prefer to invest in their own ego skills. They try to create high levels of self-esteem by investing disproportionately in their abilities or accomplishments. These adults maintain their positive views of self, based on their personal achievements and competence rather than searching for and feeling acceptance from others. These adults will explicitly reject or minimize the importance of emotional attachment and passively avoid relationships when they feel as though they are becoming too close. They strive for self-reliance and independence.
They are largely indifferent to others' opinions of them and are relatively hesitant to internalize positive feedback from their peers. Dismissive avoidance is considered to be the result of defensive deactivation and disconnection to avoid potential rejection, and is in some cases amplified by a genuine disinterest in social connection. Adults with dismissive-avoidant patterns are less likely to seek social support than other attachment styles. They are likely to fear intimacy and lack confidence in others. Because of their distrust they cannot be convinced that other people have the ability to deliver emotional support. Under a high cognitive load, however, dismissive-avoidant adults appear to have a lowered ability to suppress difficult attachment-related emotions, as well as difficulty maintaining positive self-representations. This suggests that hidden vulnerabilities may underlie an active denial process.

=== Fearful-avoidant ===

Fearful-avoidant adults have mixed feelings about close relationships, both desiring and feeling uncomfortable with emotional closeness. The tension between wanting to form social relationships and simultaneously fearing them can create mental instability. This instability then translates into mistrust of the relationships they do form and a view of themselves as unworthy. Furthermore, fearful-avoidant adults also have a less pleasant outlook on life compared to anxious-preoccupied and dismissive-avoidant groups. Like dismissive-avoidant adults, fearful-avoidant adults tend to seek less intimacy, suppressing their feelings. According to research studies, an individual with a fearful-avoidant attachment might have had childhood trauma or persistently negative perceptions and actions from their family members. In addition, genetic factors and personality may also affect how an individual behaves with parents as well as how they understand their relationships in adulthood.

=== Assessing and measuring attachment ===

Two main aspects of adult attachment have been studied. The organization and stability of the mental working models that underlie the attachment styles is explored by social psychologists interested in romantic attachment. Developmental psychologists interested in the individual's state of mind with respect to attachment generally explore how attachment functions in relationship dynamics and impacts relationship outcomes. The organization of mental working models is more stable, while the individual's state of mind with respect to attachment fluctuates more. Some authors have suggested that adults do not hold a single set of working models. Instead, on one level they have a set of rules and assumptions about attachment relationships in general. On another level they hold information about specific relationships or relationship events. Information at different levels need not be consistent. Individuals can therefore hold different internal working models for different relationships. There are a number of different measures of adult attachment, the most common being self-report questionnaires and coded interviews based on the Adult Attachment Interview. The various measures were developed primarily as research tools, for different purposes and addressing different domains, for example romantic relationships, platonic relationships, parental relationships or peer relationships.
Some classify an adult's state of mind with respect to attachment and attachment patterns by reference to childhood experiences, while others assess relationship behaviours and security regarding parents and peers.

=== Associations of adult attachment with other traits ===

Adult attachment styles are related to individual differences in the ways in which adults experience and manage their emotions. Recent meta-analyses link insecure attachment styles to lower emotional intelligence and lower trait mindfulness.

== History ==

=== Maternal deprivation ===

The early thinking of the object relations school of psychoanalysis, particularly Melanie Klein, influenced Bowlby. However, he profoundly disagreed with the prevalent psychoanalytic belief that infants' responses relate to their internal fantasy life rather than real-life events. As Bowlby formulated his concepts, he was influenced by case studies on disturbed and delinquent children, such as those of William Goldfarb published in 1943 and 1945. Bowlby's contemporary René Spitz observed separated children's grief, proposing that "psychotoxic" results were brought about by inappropriate experiences of early care. A strong influence was the work of social worker and psychoanalyst James Robertson, who filmed the effects of separation on children in hospital. He and Bowlby collaborated in making the 1952 documentary film A Two-Year Old Goes to the Hospital, which was instrumental in a campaign to alter hospital restrictions on visits by parents. In his 1951 monograph for the World Health Organization, Maternal Care and Mental Health, Bowlby put forward the hypothesis that "the infant and young child should experience a warm, intimate, and continuous relationship with his mother in which both find satisfaction and enjoyment", the lack of which may have significant and irreversible mental health consequences. This was also published as Child Care and the Growth of Love for public consumption. The central proposition was influential but highly controversial. At the time there was limited empirical data and no comprehensive theory to account for such a conclusion. Nevertheless, Bowlby's theory sparked considerable interest in the nature of early relationships, giving a strong impetus to, in the words of Mary Ainsworth, a "great body of research" in an extremely difficult, complex area. Bowlby's work (and Robertson's films) caused a virtual revolution in hospital visiting by parents, hospital provision for children's play, educational and social needs, and the use of residential nurseries. Over time, orphanages were abandoned in favour of foster care or family-style homes in most developed countries. Bowlby's work on parental provision after childbirth implies that maternal deprivation negatively influences the trajectory of attachment behaviour across a child's life. If a mother experiences post-partum anxiety, stress, or depression, the attachment she has with her child can be disrupted. Mental-health support for mothers before and after birth is therefore important, because maternal mental illness often results in low feelings of attachment to the infant.

=== Formulation of the theory ===

Following the publication of Maternal Care and Mental Health, Bowlby sought new understanding from the fields of evolutionary biology, ethology, developmental psychology, cognitive science and control systems theory.
He formulated the innovative proposition that mechanisms underlying an infant's emotional tie to the caregiver(s) emerged as a result of evolutionary pressure. He set out to develop a theory of motivation and behaviour control built on science rather than Freud's psychic energy model. Bowlby argued that with attachment theory he had made good the "deficiencies of the data and the lack of theory to link alleged cause and effect" of Maternal Care and Mental Health.

==== Ethology ====

Bowlby's attention was drawn to ethology in the early 1950s when he read Konrad Lorenz's work. Other important influences were ethologists Nikolaas Tinbergen and Robert Hinde. Bowlby subsequently collaborated with Hinde. In 1953 Bowlby stated "the time is ripe for a unification of psychoanalytic concepts with those of ethology, and to pursue the rich vein of research which this union suggests." Konrad Lorenz had examined the phenomenon of "imprinting", a behaviour characteristic of some birds and mammals which involves rapid learning of recognition by the young, of a conspecific or comparable object. After recognition comes a tendency to follow. Certain types of learning are possible only within a limited age range, known as a critical period. Bowlby's concepts included the idea that attachment involved learning from experience during a limited age period, influenced by adult behaviour. He did not apply the imprinting concept in its entirety to human attachment. However, he considered that attachment behaviour was best explained as instinctive, combined with the effect of experience, stressing the readiness the child brings to social interactions. Over time it became apparent there were more differences than similarities between attachment theory and imprinting, so the analogy was dropped. Ethologists expressed concern about the adequacy of some research on which attachment theory was based, particularly the generalization to humans from animal studies. Schur, discussing Bowlby's use of ethological concepts (pre-1960), commented that concepts used in attachment theory had not kept up with changes in ethology itself. Ethologists and others writing in the 1960s and 1970s questioned and expanded the types of behaviour used as indications of attachment. Observational studies of young children in natural settings provided other behaviours that might indicate attachment; for example, staying within a predictable distance of the mother without effort on her part and picking up small objects, bringing them to the mother but not to others. Although ethologists tended to be in agreement with Bowlby, they pressed for more data, objecting to psychologists writing as if there were an "entity which is 'attachment', existing over and above the observable measures." Robert Hinde considered "attachment behaviour system" to be an appropriate term which did not offer the same problems, "because it refers to postulated control systems that determine the relations between different kinds of behaviour."

==== Psychoanalysis ====

Psychoanalytic concepts influenced Bowlby's view of attachment, in particular the observations by Anna Freud and Dorothy Burlingham of young children separated from familiar caregivers during World War II. However, Bowlby rejected psychoanalytical explanations for early infant bonds, including "drive theory", in which the motivation for attachment derives from gratification of hunger and libidinal drives. He called this the "cupboard-love" theory of relationships.
In his view it failed to see attachment as a psychological bond in its own right rather than an instinct derived from feeding or sexuality. Based on ideas of primary attachment and Neo-Darwinism, Bowlby identified what he saw as fundamental flaws in psychoanalysis: the overemphasis of internal dangers rather than external threat, and the view of the development of personality via linear phases with regression to fixed points accounting for psychological distress. Bowlby instead posited that several lines of development were possible, the outcome of which depended on the interaction between the organism and the environment. In attachment this would mean that although a developing child has a propensity to form attachments, the nature of those attachments depends on the environment to which the child is exposed. From early in the development of attachment theory there was criticism of the theory's lack of congruence with various branches of psychoanalysis. Bowlby's decisions left him open to criticism from well-established thinkers working on similar problems.

==== Internal working model ====

The philosopher Kenneth Craik had noted the ability of thought to predict events. He stressed the survival value of natural selection for this ability. A key component of attachment theory is the attachment behaviour system, in which certain behaviours have a predictable outcome (i.e. proximity) and serve as a method of self-preservation (i.e. protection), all taking place outside of an individual's awareness. The internal working model allows a person to try out alternatives mentally, using knowledge of the past while responding to the present and future. Bowlby applied Craik's ideas to attachment at a time when other psychologists were applying these concepts to adult perception and cognition. Infants absorb all sorts of complex social-emotional information from the social interactions that they observe. They notice the helpful and hindering behaviours of one person to another. From these observations they develop expectations of how two characters should behave, known as a "secure base script." These scripts provide a template for how attachment-related events should unfold, and they are the building blocks of one's internal working models. An infant's internal working model is developed in response to the infant's experience-based working models of self and environment, with emphasis on the caregiving environment and the outcomes of his or her proximity-seeking behaviours. Theoretically, a secure child or adult script would allow for an attachment situation in which one person successfully uses another as a secure base from which to explore and as a safe haven in times of distress. In contrast, insecure individuals would create attachment situations with more complications. For example, if the caregiver is accepting of these proximity-seeking behaviours and grants access, the infant develops a secure organization; if the caregiver consistently denies the infant access, an avoidant organization develops; and if the caregiver inconsistently grants access, an ambivalent organization develops. Overall, internal working models are consistent with and reflect the primary relationship with caregivers, and childhood attachment directly influences adult relationships. A parent's internal working model that is operative in the attachment relationship with her infant can be accessed by examining the parent's mental representations.
Recent research has demonstrated that the quality of maternal attributions as markers of maternal mental representations can be associated with particular forms of maternal psychopathology and can be altered in a relatively short time period by targeted psychotherapeutic intervention.

==== Cybernetics ====

The theory of control systems (cybernetics), developing during the 1930s and 1940s, influenced Bowlby's thinking. The young child's need for proximity to the attachment figure was seen as balancing homeostatically with the need for exploration. (Bowlby compared this process to physiological homeostasis whereby, for example, blood pressure is kept within limits). The actual distance maintained by the child would vary as the balance of needs changed. For example, the approach of a stranger, or an injury, would cause the child exploring at a distance to seek proximity. The child's goal is not an object (the caregiver) but a state: maintenance of the desired distance from the caregiver depending on circumstances.

==== Cognitive development ====

Bowlby's reliance on Piaget's theory of cognitive development gave rise to questions about object permanence (the ability to remember an object that is temporarily absent) in early attachment behaviours. An infant's ability to discriminate strangers and react to the mother's absence seemed to occur months earlier than Piaget suggested would be cognitively possible. More recently, it has been noted that the understanding of mental representation has advanced so much since Bowlby's day that present views can be more specific than those of Bowlby's time.

==== Behaviourism ====

In 1969, Gewirtz discussed how mother and child could provide each other with positive reinforcement experiences through their mutual attention, thereby learning to stay close together. This explanation would make it unnecessary to posit innate human characteristics fostering attachment. Learning theory (behaviourism) saw attachment as a remnant of dependency, with the quality of attachment being merely a response to the caregiver's cues. The main predictors of attachment quality are parents being sensitive and responsive to their children. When parents interact with their infants in a warm and nurturing manner, their attachment quality increases. The way that parents interact with their children at four months is related to attachment behaviour at 12 months, thus it is important for parents' sensitivity and responsiveness to remain stable. A lack of sensitivity and responsiveness increases the likelihood of attachment disorders developing in children. Behaviourists saw behaviours like crying as a random activity meaning nothing until reinforced by a caregiver's response. To behaviourists, frequent responses would result in more crying. To attachment theorists, crying is an inborn attachment behaviour to which the caregiver must respond if the infant is to develop emotional security. Conscientious responses produce security which enhances autonomy and results in less crying. Ainsworth's research in Baltimore supported the attachment theorists' view. In the last decade, behaviour analysts have constructed models of attachment based on the importance of contingent relationships. These behaviour analytic models have received some support from research and meta-analytic reviews.
==== Developments since 1970s ====

In the 1970s, problems with viewing attachment as a trait (stable characteristic of an individual) rather than as a type of behaviour with organizing functions and outcomes led some authors to the conclusion that attachment behaviours were best understood in terms of their functions in the child's life. This way of thinking saw the secure base concept as central to attachment theory's logic, coherence, and status as an organizational construct. Following this argument, the assumption that attachment is expressed identically in all humans cross-culturally was examined. The research showed that though there were cultural differences, the three basic patterns, secure, avoidant and ambivalent, can be found in every culture in which studies have been undertaken, even where communal sleeping arrangements are the norm. The secure pattern is found in the majority of children across the cultures studied. This follows logically from the fact that attachment theory provides for infants to adapt to changes in the environment, selecting optimal behavioural strategies. How attachment is expressed shows cultural variations which need to be ascertained before studies can be undertaken; for example Gusii infants are greeted with a handshake rather than a hug. Securely attached Gusii infants anticipate and seek this contact. There are also differences in the distribution of insecure patterns based on cultural differences in child-rearing practices. In 1974 Michael Rutter studied the importance of distinguishing between the consequences of attachment deprivation for intellectual retardation and for the lack of emotional development in children. Rutter's conclusion was that a careful delineation of maternal attributes needed to be identified and differentiated for progress in the field to continue. The biggest challenge to the notion of the universality of attachment theory came from studies conducted in Japan, where the concept of amae plays a prominent role in describing family relationships. Arguments revolved around the appropriateness of the use of the Strange Situation procedure where amae is practised. Ultimately research tended to confirm the universality hypothesis of attachment theory. Most recently, a 2007 study conducted in Sapporo, Japan, found attachment distributions consistent with global norms using the six-year Main and Cassidy scoring system for attachment classification. Critics in the 1990s such as J. R. Harris, Steven Pinker and Jerome Kagan were generally concerned with the concept of infant determinism (nature versus nurture), stressing the effects of later experience on personality. Building on the work on temperament of Stella Chess, Kagan rejected almost every assumption on which attachment theory's cause was based. Kagan argued that heredity was far more important than the transient developmental effects of early environment. For example, a child with an inherently difficult temperament would not elicit sensitive behavioural responses from a caregiver. The debate spawned considerable research and analysis of data from the growing number of longitudinal studies. Subsequent research has not borne out Kagan's argument, possibly suggesting that it is the caregiver's behaviours that form the child's attachment style, although how this style is expressed may differ with the child's temperament.
Harris and Pinker put forward the notion that the influence of parents had been much exaggerated, arguing that socialization took place primarily in peer groups. H. Rudolph Schaffer concluded that parents and peers had different functions, fulfilling distinctive roles in children's development. The psychoanalysts and psychologists Peter Fonagy and Mary Target have attempted to bring attachment theory and psychoanalysis into a closer relationship through cognitive science, in the form of mentalization. Mentalization, or theory of mind, is the capacity of human beings to guess with some accuracy what thoughts, emotions and intentions lie behind behaviours as subtle as facial expression. It has been speculated that this connection between theory of mind and the internal working model may open new areas of study, leading to alterations in attachment theory. Since the late 1980s, there has been a developing rapprochement between attachment theory and psychoanalysis, based on common ground as elaborated by attachment theorists and researchers, and a change in what psychoanalysts consider to be central to psychoanalysis. Object relations models which emphasise the autonomous need for a relationship have become dominant and are linked to a growing recognition in psychoanalysis of the importance of infant development in the context of relationships and internalized representations. Psychoanalysis has recognized the formative nature of a child's early environment including the issue of childhood trauma. A psychoanalytically based exploration of the attachment system and an accompanying clinical approach has emerged together with a recognition of the need for measurement of outcomes of interventions. One focus of attachment research has been the difficulties of children whose attachment history was poor, including those with extensive non-parental child care experiences. Concern with the effects of child care was intense during the so-called "day care wars" of the late 20th century, when some authors stressed the deleterious effects of day care. As a result of this controversy, training of child care professionals has come to stress attachment issues, including the need for relationship-building by the assignment of a child to a specific care-giver. Although only high-quality child care settings are likely to provide this, more infants in child care receive attachment-friendly care than in the past. A natural experiment permitted extensive study of attachment issues as researchers followed thousands of Romanian orphans adopted into Western families after the end of the Nicolae Ceaușescu regime. The English and Romanian Adoptees Study Team, led by Michael Rutter, followed some of the children into their teens, attempting to unravel the effects of poor attachment, adoption, new relationships, physical problems and medical issues associated with their early lives. Studies of these adoptees, whose initial conditions were shocking, yielded reason for optimism as many of the children developed quite well. Researchers noted that separation from familiar people is only one of many factors that help to determine the quality of development. Although higher rates of atypical insecure attachment patterns were found compared to native-born or early-adopted samples, 70% of later-adopted children exhibited no marked or severe attachment disorder behaviours.
Authors considering attachment in non-Western cultures have noted the connection of attachment theory with Western family and child care patterns characteristic of Bowlby's time. As children's experience of care changes, so may attachment-related experiences. For example, changes in attitudes toward female sexuality have greatly increased the numbers of children living with their never-married mothers or being cared for outside the home while the mothers work. This social change has made it more difficult for childless people to adopt infants in their own countries. There has been an increase in the number of older-child adoptions and adoptions from third-world sources in first-world countries. Adoptions and births to same-sex couples have increased in number and gained legal protection, compared to their status in Bowlby's time. Regardless of whether they are genetically related to their children, adoptive parents' attachment roles will still influence and affect their children's attachment behaviours throughout their lifetime. Issues have been raised to the effect that the dyadic model characteristic of attachment theory cannot address the complexity of real-life social experiences, as infants often have multiple relationships within the family and in child care settings. It is suggested these multiple relationships influence one another reciprocally, at least within a family. Principles of attachment theory have been used to explain adult social behaviours, including mating, social dominance and hierarchical power structures, in-group identification, group coalitions, membership in cults and totalitarian systems and negotiation of reciprocity and justice. Those explanations have been used to design parental care training, and have been particularly successful in the design of child abuse prevention programmes. While a wide variety of studies have upheld the basic tenets of attachment theory, research has been inconclusive as to whether self-reported early attachment and later depression are demonstrably related.

== Neurobiology of attachment ==

In addition to longitudinal studies, there has been psychophysiological research on the neurobiology of attachment. Research has begun to include neural development, behaviour genetics and temperament concepts. Generally, temperament and attachment constitute separate developmental domains, but aspects of both contribute to a range of interpersonal and intrapersonal developmental outcomes. Some types of temperament may make some individuals susceptible to the stress of unpredictable or hostile relationships with caregivers in the early years. In the absence of available and responsive caregivers it appears that some children are particularly vulnerable to developing attachment disorders. The quality of caregiving received in infancy and childhood directly affects the neurological systems that control stress regulation. In psychophysiological research on attachment, the two main areas studied have been autonomic responses, such as heart rate or respiration, and the activity of the hypothalamic–pituitary–adrenal axis, a system that is responsible for the body's reaction to stress. Infants' physiological responses have been measured during the Strange Situation procedure looking at individual differences in infant temperament and the extent to which attachment acts as a moderator. Recent studies suggest that early attachment relationships become molecularly instilled in the organism, thus affecting later immune system functioning.
Empirical evidence indicates that early negative experiences produce cells with a pro-inflammatory phenotype in the immune system, which is directly related to cardiovascular disease, autoimmune diseases, and certain types of cancer. Recent improvements involving methods of research have enabled researchers to further investigate the neural correlates of attachment in humans. These advances include identifying key brain structures, neural circuits, neurotransmitter systems, and neuropeptides, and how they are involved in attachment system functioning and can indicate more about a certain individual, even predict their behaviour. There is initial evidence that caregiving and attachment involve both unique and overlapping brain regions. Another issue is the role of inherited genetic factors in shaping attachments: for example one type of polymorphism of the gene coding for the D2 dopamine receptor has been linked to anxious attachment and another in the gene for the 5-HT2A serotonin receptor with avoidant attachment. Studies show that attachment in adulthood is simultaneously related to biomarkers of immunity. For example, individuals with an avoidant attachment style produce higher levels of the pro-inflammatory cytokine interleukin-6 (IL-6) when reacting to an interpersonal stressor, while individuals with an anxious attachment style tend to have elevated cortisol production and lower numbers of T cells. Although children vary genetically and each individual requires different attachment relationships, there is consistent evidence that maternal warmth during infancy and childhood creates a safe haven for individuals, resulting in superior immune system functioning. One theoretical basis for this is that it makes biological sense for children to vary in their susceptibility to rearing influence.

== Crime ==

Attachment theory has often been applied in the discipline of criminology. It has been used in an attempt to identify causal mechanisms in criminal behaviour, with uses ranging from offender profiling and better understanding of types of offence to the pursuit of preventative policy. It has been found that disturbances early on in child-caregiver relationships are a risk factor in criminality. Attachment theory in this context has been described as "perhaps the most influential of contemporary psychoanalytically oriented theories of crime".

=== History ===

The origins of attachment theory within criminology can be found in the work of August Aichhorn. In applying psychoanalysis to pedagogy, he argued that abnormal child relationships are the underlying problem causing delinquency. The intersection of crime and attachment theory was further researched by John Bowlby. In his first published work, Forty-four Juvenile Thieves, he studied a sample of 88 children (44 juvenile thieves and 44 non-delinquent controls) and determined that child-mother separation caused delinquent character formation, particularly in the development of an "affectionless character" often seen in the persistent offender. Seventeen of the juvenile thieves had been separated from their mothers for longer than six months during their first five years, and only 2 children from the control group had such a separation. He also found that 14 of the thieves were "affectionless characters", distinguished from the others by their lack of affection, absence of emotional ties, lack of real friendships, and having "no roots in their relationships".
=== Age distribution of crime ===

Two theories about why crime peaks around the late teenage years and early twenties, the developmental theory and the life-course theory, both involve attachment theory. Developmental perspectives argue that individuals who have disrupted childhood attachments will have criminal careers that continue long into adulthood. Life-course perspectives argue that relationships at every stage of the life course can influence an individual's likelihood of committing crimes.

=== Types of offences ===

Disrupted attachment patterns from childhood have been identified as a risk factor for domestic violence. These disruptions in childhood can prevent the formation of a secure attachment relationship, and in turn adversely affect the development of healthy ways of dealing with stress. In adulthood, lack of coping mechanisms can result in violent behaviour. Bowlby's theory of functional anger states that children signal to their caregiver that their attachment needs are not being met by use of angry behaviour. A perception of low support from a partner has been identified as a strong predictor of male violence. Other named predictors include a perceived deficiency in maternal love in childhood and low self-esteem. It has also been found that individuals with a dismissive attachment style, often seen in an antisocial/narcissistic subtype of offender, tend to be emotionally abusive as well as violent. Individuals in the borderline/emotionally dependent subtype have traits which originate from insecure attachment in childhood, and tend to have high levels of anger. It has been found that sexual offenders have significantly less secure maternal and paternal attachments compared with non-offenders, which suggests that insecure attachments in infancy persist into adulthood. In a recent study, 57% of sexual offenders were found to be of a preoccupied attachment style. There is also evidence suggesting that different subtypes of sexual crime are associated with different attachment styles. Dismissive individuals tend to be hostile towards others, and are more likely to offend violently against adult women. By contrast, child abusers are more likely to have preoccupied attachment styles, as the tendency to seek approval from others becomes distorted and attachment relationships become sexualized.

=== Uses within probation practice ===

Attachment theory has been of special interest within probation settings. When put into practice, probation officers aim to learn their probationer's attachment history because it can give them insight into how the probationer will respond to different scenarios and when they are the most vulnerable to reoffend. One of the primary strategies of implementation is to set up the probation officer as a secure base. This secure base relationship is formed by the probation officer being reliable, safe, and in tune with the probationer, and is intended to give them at least a partial experience of a secure relationship that they have not previously been able to form.

== Practical applications ==

As a theory of socioemotional development, attachment theory has implications and practical applications in social policy, decisions about the care and welfare of children and mental health.

=== Child care policies ===

Social policies concerning the care of children were the driving force in Bowlby's development of attachment theory. The difficulty lies in applying attachment concepts to policy and practice. In 2008 C.H.
Zeanah and colleagues stated, "Supporting early child-parent relationships is an increasingly prominent goal of mental health practitioners, community-based service providers and policy makers ... Attachment theory and research have generated important findings concerning early child development and spurred the creation of programs to support early child-parent relationships." Additionally, practitioners can draw on attachment theory, which suggests that deep relationships build attachment security, in designing mental health interventions. Attachment security has been found to strengthen one's ability to cope with stress and anxiety, which in turn can contribute to the person's well-being and mental health. For example, previous studies have demonstrated that individuals with avoidant attachment styles experience less stress and distress when presented with ostracism. However, finding quality childcare while parents are at work or school is an issue for many families. A recent NICHD study indicates that high-quality day care contributes to secure attachment relationships in children. It has also been argued that corporations should implement more flexible work arrangements that recognize child care as essential for all employees, including a re-examination of parental leave policies, since many parents are forced to return to work too soon after childbirth because of company policy or financial necessity, and this inhibits early parent-child bonding. In addition, increased attention to the training and screening of childcare workers has been called for. In his article reviewing attachment theory, Sweeney suggested, among several policy implications, "legislative initiatives reflecting higher standards for credentialing and licensing childcare workers, requiring education in child development and attachment theory, and at least a two-year associate degree course as well as salary increases and increased stature for childcare positions". Historically, attachment theory had significant policy implications for hospitalized or institutionalized children, and those in poor quality daycare. Controversy remains over whether non-maternal care, particularly in group settings, has deleterious effects on social development. It is plain from research that poor quality care carries risks but that those who experience good quality alternative care cope well, although it is difficult to provide good quality, individualized care in group settings. Attachment theory has implications in residence and contact disputes, and applications by foster parents to adopt foster children. In the past, particularly in North America, the main theoretical framework was psychoanalysis. Increasingly attachment theory has replaced it, thus focusing on the quality and continuity of caregiver relationships rather than economic well-being or automatic precedence of any one party, such as the biological mother. Rutter noted that in the UK, since 1980, family courts have shifted considerably to recognize the complications of attachment relationships. Children tend to have attachment relationships with both parents and often grandparents or other relatives.
Judgements need to take this into account along with the impact of step-families. Attachment theory has been crucial in highlighting the importance of social relationships in dynamic rather than fixed terms. Attachment theory can also inform decisions made in social work, especially in humanistic social work (Petru Stefaroi), and court processes about foster care or other placements. Considering the child's attachment needs can help determine the level of risk posed by placement options. Within adoption, the shift from "closed" to "open" adoptions and the importance of the search for biological parents would be expected on the basis of attachment theory. Many researchers in the field were strongly influenced by it.

=== Clinical practice in children ===

Although attachment theory has become a major scientific theory of socioemotional development with one of the widest research lines in modern psychology, it has, until recently, been less used in clinical practice. Attachment-based assessment has focused on the child's attention to the mother when she is present and on the responses the child shows when the mother leaves, which are taken to indicate the attachment and bonding between mother and child. By contrast, so-called attention or holding therapy is carried out while the child is restrained by therapists and the responses displayed are noted. The limited clinical use of attachment theory may be partly due to the lack of attention paid to clinical application by Bowlby himself and partly due to broader meanings of the word 'attachment' used among practitioners. It may also be partly due to the mistaken association of attachment theory with the pseudoscientific interventions misleadingly known as attachment therapy or holding therapy.

==== Prevention and treatment ====

In 1988, Bowlby published a series of lectures indicating how attachment theory and research could be used in understanding and treating child and family disorders. His focus for bringing about change was the parents' internal working models, parenting behaviours and the parents' relationship with the therapeutic intervenor. Ongoing research has led to a number of individual treatments and prevention and intervention programs. Children of all age groups have been studied to test the effectiveness of interventions based on Bowlby's theory. These programs range from individual therapy to public health programs to interventions designed for foster caregivers. For infants and younger children, the focus is on increasing the responsiveness and sensitivity of the caregiver, or if that is not possible, placing the child with a different caregiver. An assessment of the attachment status or caregiving responses of the caregiver is invariably included, as attachment is a two-way process involving attachment behaviour and caregiver response. Some programs are aimed at foster carers because the attachment behaviours of infants or children with attachment difficulties often do not elicit appropriate caregiver responses. Modern prevention and intervention programs have proven successful.

==== Reactive attachment disorder and attachment disorder ====

One atypical attachment pattern is considered to be an actual disorder, known as reactive attachment disorder or RAD, which is a recognized psychiatric diagnosis (ICD-10 F94.1/2 and DSM-IV-TR 313.89). Contrary to common misconception, this is not the same as 'disorganized attachment'.
The essential feature of reactive attachment disorder is markedly disturbed and developmentally inappropriate social relatedness in most contexts that begins before age five years, associated with gross pathological care. There are two subtypes, one reflecting a disinhibited attachment pattern, the other an inhibited pattern. RAD is not a description of insecure attachment styles, however problematic those styles may be; instead, it denotes a lack of age-appropriate attachment behaviours that may appear to resemble a clinical disorder. Although the term "reactive attachment disorder" is now popularly applied to perceived behavioural difficulties that fall outside the DSM or ICD criteria, particularly on the Web and in connection with the pseudo-scientific attachment therapy, "true" RAD is thought to be rare. "Attachment disorder" is an ambiguous term, which may refer to reactive attachment disorder or to the more problematic insecure attachment styles (although none of these are clinical disorders). It may also be used to refer to proposed new classification systems put forward by theorists in the field, and is used within attachment therapy as a form of unvalidated diagnosis. One of the proposed new classifications, "secure base distortion", has been found to be associated with caregiver traumatization. === Clinical practice in adults and families === As attachment theory offers a broad, far-reaching view of human functioning, it can enrich a therapist's understanding of patients and the therapeutic relationship rather than dictate a particular form of treatment. Some forms of psychoanalysis-based therapy for adults—within relational psychoanalysis and other approaches—also incorporate attachment theory and patterns. == Criticism == A 2010 study in the Journal of Personality looked at twins in Italy using the ACE Model and found that their shared environment (including shared aspects of their upbringing) was "completely irrelevant" in explaining their adult attachment styles. Instead, levels of attachment-related anxiety and avoidance in the adult twins were completely explained by their genes and their unshared environment (aspects of the environment that were different for the twins). A 2013 study from Utah State suggests an individual can have different attachment styles in relation to different people and that "parents' time away from their child was not a significant predictor of attachment." Attachment theory models are heavily focused on attachment to the mother rather than to other family members and peers, as also noted by Rosjke Hasseldine. Salvador Minuchin suggested that attachment theory's focus on the mother-child relation ignores the value in other familial influences: "The entire family—not just the mother or primary caretaker—including father, siblings, grandparents, often cousins, aunts and uncles, are extremely significant in the experience of the child...And yet, when I hear attachment theorists talk, I don't hear anything about these other important figures in a child's life." A 2016 article from the Psychological Bulletin suggests that one's attachment could largely be due to heredity; hence, the authors point to the need to focus research on nonshared environmental effects, requiring "behavioral genetic designs that afford differentiating heritability from shared and nonshared environmental influences". 
The late Jerome Kagan was a highly respected psychologist who believed a child's behaviour is largely due to temperament, as well as social class and culture, rather than attachment style. A 2018 paper proposes that attachment theory represents a Western middle-class perspective, ignoring the diverse caregiving values and practices in most of the world. == See also == == Citations == == General and cited references == == Further reading ==
Wikipedia/Attachment_theory
In chemistry, molecular orbital theory (MO theory or MOT) is a method for describing the electronic structure of molecules using quantum mechanics. It was proposed early in the 20th century. The MOT explains the paramagnetic nature of O2, which valence bond theory cannot explain. In molecular orbital theory, electrons in a molecule are not assigned to individual chemical bonds between atoms, but are treated as moving under the influence of the atomic nuclei in the whole molecule. Quantum mechanics describes the spatial and energetic properties of electrons as molecular orbitals that surround two or more atoms in a molecule and contain valence electrons between atoms. Molecular orbital theory revolutionized the study of chemical bonding by approximating the states of bonded electrons – the molecular orbitals – as linear combinations of atomic orbitals (LCAO). These approximations are made by applying the density functional theory (DFT) or Hartree–Fock (HF) models to the Schrödinger equation. Molecular orbital theory and valence bond theory are the foundational theories of quantum chemistry. == Linear combination of atomic orbitals (LCAO) method == In the LCAO method, each molecule has a set of molecular orbitals. It is assumed that the molecular orbital wave function ψj can be written as a simple weighted sum of the n constituent atomic orbitals χi, according to the following equation: ψ j = ∑ i = 1 n c i j χ i . {\displaystyle \psi _{j}=\sum _{i=1}^{n}c_{ij}\chi _{i}.} One may determine cij coefficients numerically by substituting this equation into the Schrödinger equation and applying the variational principle. The variational principle is a mathematical technique used in quantum mechanics to build up the coefficients of each atomic orbital basis. A larger coefficient means that the orbital basis is composed more of that particular contributing atomic orbital – hence, the molecular orbital is best characterized by that type. This method of quantifying orbital contribution as a linear combination of atomic orbitals is used in computational chemistry. An additional unitary transformation can be applied on the system to accelerate the convergence in some computational schemes. Molecular orbital theory was seen as a competitor to valence bond theory in the 1930s, before it was realized that the two methods are closely related and that when extended they become equivalent. Molecular orbital theory is used to interpret ultraviolet–visible spectroscopy (UV–VIS). Changes to the electronic structure of molecules can be seen by the absorbance of light at specific wavelengths. Assignments can be made to these signals indicated by the transition of electrons moving from one orbital at a lower energy to a higher energy orbital. The molecular orbital diagram for the final state describes the electronic nature of the molecule in an excited state. There are three main requirements for atomic orbital combinations to be suitable as approximate molecular orbitals. The atomic orbital combination must have the correct symmetry, which means that it must belong to the correct irreducible representation of the molecular symmetry group. Using symmetry adapted linear combinations, or SALCs, molecular orbitals of the correct symmetry can be formed. Atomic orbitals must also overlap within space. They cannot combine to form molecular orbitals if they are too far away from one another. Atomic orbitals must be at similar energy levels to combine as molecular orbitals. 
If the energy difference is great, the change in energy when the molecular orbitals form is small; consequently, there is not enough reduction in the energy of the electrons to produce significant bonding. == History == Molecular orbital theory was developed in the years after valence bond theory had been established (1927), primarily through the efforts of Friedrich Hund, Robert Mulliken, John C. Slater, and John Lennard-Jones. MO theory was originally called the Hund-Mulliken theory. According to physicist and physical chemist Erich Hückel, the first quantitative use of molecular orbital theory was the 1929 paper of Lennard-Jones. This paper predicted a triplet ground state for the dioxygen molecule which explained its paramagnetism (see Molecular orbital diagram § Dioxygen) before valence bond theory, which came up with its own explanation in 1931. The word orbital was introduced by Mulliken in 1932. By 1933, the molecular orbital theory had been accepted as a valid and useful theory. Erich Hückel applied molecular orbital theory to unsaturated hydrocarbon molecules starting in 1931 with his Hückel molecular orbital (HMO) method for the determination of MO energies for pi electrons, which he applied to conjugated and aromatic hydrocarbons. This method provided an explanation of the stability of molecules with six pi-electrons such as benzene. The first accurate calculation of a molecular orbital wavefunction was that made by Charles Coulson in 1938 on the hydrogen molecule. By 1950, molecular orbitals were completely defined as eigenfunctions (wave functions) of the self-consistent field Hamiltonian and it was at this point that molecular orbital theory became fully rigorous and consistent. This rigorous approach is known as the Hartree–Fock method for molecules although it had its origins in calculations on atoms. In calculations on molecules, the molecular orbitals are expanded in terms of an atomic orbital basis set, leading to the Roothaan equations. This led to the development of many ab initio quantum chemistry methods. In parallel, molecular orbital theory was applied in a more approximate manner using some empirically derived parameters in methods now known as semi-empirical quantum chemistry methods. The success of molecular orbital theory also spawned ligand field theory, which was developed during the 1930s and 1940s as an alternative to crystal field theory. == Types of orbitals == Molecular orbital (MO) theory uses a linear combination of atomic orbitals (LCAO) to represent molecular orbitals resulting from bonds between atoms. These are often divided into three types: bonding, antibonding, and non-bonding. A bonding orbital concentrates electron density in the region between a given pair of atoms, so that its electron density will tend to attract each of the two nuclei toward the other and hold the two atoms together. An anti-bonding orbital concentrates electron density "behind" each nucleus (i.e. on the side of each atom which is farthest from the other atom), and so tends to pull each of the two nuclei away from the other and actually weaken the bond between the two nuclei. Electrons in non-bonding orbitals tend to be associated with atomic orbitals that do not interact positively or negatively with one another, and electrons in these orbitals neither contribute to nor detract from bond strength. Molecular orbitals are further divided according to the types of atomic orbitals they are formed from. 
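To make the bonding/antibonding picture concrete, the smallest possible LCAO calculation can be carried out numerically. The following Python sketch is an illustration added here rather than part of the original article: it assumes NumPy and SciPy are available, and the values chosen for the atomic-orbital energy α, the interaction integral β and the overlap s are merely plausible order-of-magnitude numbers for an H2-like two-orbital problem, not fitted quantities. Solving the 2×2 generalized eigenvalue problem Hc = ESc yields one combination that is lowered in energy (bonding) and one that is raised (antibonding); because of the overlap term, the antibonding level is destabilized more than the bonding level is stabilized.

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import eigh

# Illustrative two-orbital LCAO model (all numbers are assumptions):
# alpha ~ energy of an isolated 1s orbital, beta ~ interaction (resonance)
# integral between the two orbitals, s ~ their overlap integral.
alpha = -13.6   # eV
beta = -10.0    # eV
s = 0.6

H = np.array([[alpha, beta],
              [beta, alpha]])   # Hamiltonian in the atomic-orbital basis
S = np.array([[1.0, s],
              [s, 1.0]])        # overlap matrix

# Generalized eigenvalue problem H c = E S c gives the MO energies and the
# LCAO coefficients c_ij of the bonding and antibonding combinations.
energies, coeffs = eigh(H, S)

for E, c in zip(energies, coeffs.T):
    kind = "bonding" if E < alpha else "antibonding"
    print(f"{kind:12s} E = {E:7.2f} eV   coefficients = {np.round(c, 3)}")

# Analytic check for the symmetric 2x2 problem: E = (alpha +/- beta) / (1 +/- s)
print("analytic:", (alpha + beta) / (1 + s), (alpha - beta) / (1 - s))
</syntaxhighlight>

Setting s to zero recovers the familiar symmetric splitting E = α ± β of the simple Hückel picture, in which the bonding stabilization and antibonding destabilization are equal in magnitude.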
Chemical substances will form bonding interactions if their orbitals become lower in energy when they interact with each other. Different bonding orbitals are distinguished that differ by electron configuration (electron cloud shape) and by energy levels. The molecular orbitals of a molecule can be illustrated in molecular orbital diagrams. Common bonding orbitals are sigma (σ) orbitals which are symmetric about the bond axis and pi (π) orbitals with a nodal plane along the bond axis. Less common are delta (δ) orbitals and phi (φ) orbitals with two and three nodal planes respectively along the bond axis. Antibonding orbitals are signified by the addition of an asterisk. For example, an antibonding pi orbital may be shown as π*. == Bond order == Bond order is the number of chemical bonds between a pair of atoms. The bond order of a molecule can be calculated by subtracting the number of electrons in anti-bonding orbitals from the number of electrons in bonding orbitals and dividing the result by two. A molecule is expected to be stable if it has a bond order larger than zero. It is usually sufficient to consider only the valence electrons when determining the bond order, because for core levels (for example, MOs derived from 1s AOs when the principal quantum number n > 1) the numbers of electrons in the bonding and anti-bonding molecular orbitals are equal, so core electrons make no net contribution to the bond order. Bond order = 1 2 ( Number of electrons in bonding MO − Number of electrons in anti-bonding MO ) {\displaystyle {\text{Bond order}}={\frac {1}{2}}({\text{Number of electrons in bonding MO}}-{\text{Number of electrons in anti-bonding MO}})} From bond order, one can predict whether a bond between two atoms will form or not. Consider, for example, the hypothetical He2 molecule. From the molecular orbital diagram, the bond order is 1 2 ( 2 − 2 ) = 0 {\textstyle {\frac {1}{2}}(2-2)=0} . That means no bond formation is expected between two He atoms, which is what is seen experimentally: the dimer can be detected only in molecular beams at very low temperature and pressure, and it has a binding energy of approximately 0.001 J/mol. (The helium dimer is a van der Waals molecule.) In addition, the strength of a bond is reflected in its bond order (BO). For example: For H2: Bond order is 1 2 ( 2 − 0 ) = 1 {\textstyle {\frac {1}{2}}(2-0)=1} ; bond energy is 436 kJ/mol. For H2+: Bond order is 1 2 ( 1 − 0 ) = 1 2 {\textstyle {\frac {1}{2}}(1-0)={\frac {1}{2}}} ; bond energy is 171 kJ/mol. As the bond order of H2+ is smaller than that of H2, H2+ should be less stable, which is observed experimentally and can be seen from the bond energies. 
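The electron bookkeeping just described is simple enough to verify with a few lines of code. The following Python sketch is an illustration added here rather than part of the original text; the electron counts are those implied by the simple MO diagrams discussed in this section and in the oxygen example of the next section (for O2 all occupied MOs are counted, including the core σ1s/σ*1s pair, whose contributions cancel), and it reproduces the bond orders of 0, 1, 1/2 and 2 quoted in the article.

<syntaxhighlight lang="python">
def bond_order(n_bonding, n_antibonding):
    """Bond order = (bonding electrons - antibonding electrons) / 2."""
    return 0.5 * (n_bonding - n_antibonding)

# (bonding electrons, antibonding electrons) for a few diatomics
examples = {
    "H2+": (1, 0),   # one electron in the sigma(1s) bonding MO
    "H2":  (2, 0),   # sigma(1s)^2
    "He2": (2, 2),   # sigma(1s)^2 sigma*(1s)^2 -> no net bond
    "O2":  (10, 6),  # all occupied MOs, as in the worked example below
}

for molecule, (nb, na) in examples.items():
    print(f"{molecule:4s} bond order = {bond_order(nb, na):.1f}")
</syntaxhighlight>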
== Magnetism explained by molecular orbital theory == For almost every covalent molecule that exists, we can now draw the Lewis structure, predict the electron-pair geometry, predict the molecular geometry, and come close to predicting bond angles. However, one of the most important molecules we know, the oxygen molecule O2, presents a problem with respect to its Lewis structure. The electronic structure of O2 adheres to all the rules governing Lewis theory. There is an O=O double bond, and each oxygen atom has eight electrons around it. However, this picture is at odds with the magnetic behavior of oxygen. By itself, O2 is not a permanent magnet, but it is attracted to magnetic fields. Thus, when we pour liquid oxygen past a strong magnet, it collects between the poles of the magnet and defies gravity. Such attraction to a magnetic field is called paramagnetism, and it arises in molecules that have unpaired electrons. And yet, the Lewis structure of O2 indicates that all electrons are paired. How do we account for this discrepancy? Molecular orbital diagram of the oxygen molecule: Atomic number of oxygen – 8 Electronic configuration of the oxygen atom – 1s² 2s² 2p⁴ Electronic configuration of the oxygen molecule: σ1s² < σ*1s² < σ2s² < σ*2s² < σ2pz² < [π2px² = π2py²] < [π*2px¹ = π*2py¹] < σ*2pz (unoccupied) Bond order of O2 = (bonding electrons − antibonding electrons) / 2 = (10 − 6) / 2 = 2. Since O2 has unpaired electrons, it is paramagnetic. Magnetic susceptibility measures the force experienced by a substance in a magnetic field. When we compare the weight of a sample to the weight measured in a magnetic field, paramagnetic samples that are attracted to the magnet will appear heavier because of the force exerted by the magnetic field. We can calculate the number of unpaired electrons based on the increase in weight. Experiments show that each O2 molecule has two unpaired electrons. The Lewis-structure model does not predict the presence of these two unpaired electrons. Unlike oxygen, the apparent weight of most molecules decreases slightly in the presence of an inhomogeneous magnetic field. Materials in which all of the electrons are paired are diamagnetic and weakly repel a magnetic field. Paramagnetic and diamagnetic materials do not act as permanent magnets. Only in the presence of an applied magnetic field do they demonstrate attraction or repulsion. Water, like most molecules, contains all paired electrons. Living things contain a large percentage of water, so they demonstrate diamagnetic behavior. If you place a frog near a sufficiently large magnet, it will levitate. Molecular orbital theory (MO theory) provides an explanation of chemical bonding that accounts for the paramagnetism of the oxygen molecule. It also explains the bonding in a number of other molecules, such as violations of the octet rule and molecules with more complicated bonding that are difficult to describe with Lewis structures. Additionally, it provides a model for describing the energies of electrons in a molecule and the probable location of these electrons. Unlike valence bond theory, which uses hybrid orbitals that are assigned to one specific atom, MO theory uses the combination of atomic orbitals to yield molecular orbitals that are delocalized over the entire molecule rather than being localized on its constituent atoms. MO theory also helps us understand why some substances are electrical conductors, others are semiconductors, and still others are insulators. Molecular orbital theory describes the distribution of electrons in molecules in much the same way that the distribution of electrons in atoms is described using atomic orbitals. Using quantum mechanics, the behavior of an electron in a molecule is still described by a wave function, Ψ, analogous to the behavior in an atom. Just like electrons around isolated atoms, electrons around atoms in molecules are limited to discrete (quantized) energies. The region of space in which a valence electron in a molecule is likely to be found is called a molecular orbital (Ψ2). Like an atomic orbital, a molecular orbital is full when it contains two electrons with opposite spin. == Overview == MOT provides a global, delocalized perspective on chemical bonding. 
In MO theory, any electron in a molecule may be found anywhere in the molecule, since quantum conditions allow electrons to travel under the influence of an arbitrarily large number of nuclei, as long as they are in eigenstates permitted by certain quantum rules. Thus, when excited with the requisite amount of energy through high-frequency light or other means, electrons can transition to higher-energy molecular orbitals. For instance, in the simple case of a hydrogen diatomic molecule, promotion of a single electron from a bonding orbital to an antibonding orbital can occur under UV radiation. This promotion weakens the bond between the two hydrogen atoms and can lead to photodissociation, the breaking of a chemical bond due to the absorption of light. Although in MO theory some molecular orbitals may hold electrons that are more localized between specific pairs of atoms in the molecule, other orbitals may hold electrons that are spread more uniformly over the molecule. Thus, overall, bonding is far more delocalized in MO theory, which makes it more applicable than valence bond theory to resonant molecules that have equivalent non-integer bond orders. This makes MO theory more useful for the description of extended systems. Robert S. Mulliken, who actively participated in the advent of molecular orbital theory, considers each molecule to be a self-sufficient unit. He asserts in his article: ...Attempts to regard a molecule as consisting of specific atomic or ionic units held together by discrete numbers of bonding electrons or electron-pairs are considered as more or less meaningless, except as an approximation in special cases, or as a method of calculation […]. A molecule is here regarded as a set of nuclei, around each of which is grouped an electron configuration closely similar to that of a free atom in an external field, except that the outer parts of the electron configurations surrounding each nucleus usually belong, in part, jointly to two or more nuclei.... An example is the MO description of benzene, C6H6, which is an aromatic hexagonal ring of six carbon atoms and three double bonds. In this molecule, 24 of the 30 total valence electrons (24 contributed by the carbon atoms and 6 by the hydrogen atoms) are located in 12 σ (sigma) bonding orbitals, which are located mostly between pairs of atoms (C–C or C–H), similarly to the electrons in the valence bond description. However, in benzene the remaining six bonding electrons are located in three π (pi) molecular bonding orbitals that are delocalized around the ring. Two of these electrons are in an MO that has equal orbital contributions from all six atoms. The other four electrons are in orbitals with vertical nodes at right angles to each other. As in the VB theory, all of these six delocalized π electrons reside in a larger space that exists above and below the ring plane. All carbon–carbon bonds in benzene are chemically equivalent. 
In MO theory this is a direct consequence of the fact that the three molecular π orbitals combine and evenly spread the extra six electrons over six carbon atoms. In molecules such as methane, CH4, the eight valence electrons are found in four MOs that are spread out over all five atoms. It is possible to transform the MOs into four localized sp3 orbitals. Linus Pauling, in 1931, hybridized the carbon 2s and 2p orbitals so that they pointed directly at the hydrogen 1s basis functions and featured maximal overlap. However, the delocalized MO description is more appropriate for predicting ionization energies and the positions of spectral absorption bands. When methane is ionized, a single electron is taken from the valence MOs, which can come from the s bonding or the triply degenerate p bonding levels, yielding two ionization energies. In comparison, the explanation in valence bond theory is more complicated. When one electron is removed from an sp3 orbital, resonance is invoked between four valence bond structures, each of which has a single one-electron bond and three two-electron bonds. Triply degenerate T2 and A1 ionized states (CH4+) are produced from different linear combinations of these four structures. The difference in energy between the ionized and ground state gives the two ionization energies. As in benzene, in substances such as beta carotene, chlorophyll, or heme, some electrons in the π orbitals are spread out in molecular orbitals over long distances in a molecule, resulting in light absorption at lower energies (the visible spectrum), which accounts for the characteristic colours of these substances. This and other spectroscopic data for molecules are well explained in MO theory, with an emphasis on electronic states associated with multicenter orbitals, including mixing of orbitals premised on principles of orbital symmetry matching. The same MO principles also naturally explain some electrical phenomena, such as high electrical conductivity in the planar direction of the hexagonal atomic sheets that exist in graphite. This conductivity results from the continuous band overlap of half-filled p orbitals. MO theory recognizes that some electrons in the graphite atomic sheets are completely delocalized over arbitrary distances and reside in very large molecular orbitals that cover an entire graphite sheet; these electrons are therefore as free to move, and to conduct electricity in the sheet plane, as if they resided in a metal. == See also == == References == == External links == Molecular Orbital Theory - Purdue University Molecular Orbital Theory - Sparknotes Molecular Orbital Theory - Mark Bishop's Chemistry Site Introduction to MO Theory - Queen Mary, London University Molecular Orbital Theory - a related terms table An introduction to Molecular Group Theory - Oxford University
Wikipedia/Molecular_orbital_theory
Flory–Huggins solution theory is a lattice model of the thermodynamics of polymer solutions which takes account of the great dissimilarity in molecular sizes in adapting the usual expression for the entropy of mixing. The result is an equation for the Gibbs free energy change Δ G m i x {\displaystyle \Delta G_{\rm {mix}}} for mixing a polymer with a solvent. Although it makes simplifying assumptions, it generates useful results for interpreting experiments. == Theory == The thermodynamic equation for the Gibbs energy change accompanying mixing at constant temperature and (external) pressure is Δ G m i x = Δ H m i x − T Δ S m i x {\displaystyle \Delta G_{\rm {mix}}=\Delta H_{\rm {mix}}-T\Delta S_{\rm {mix}}} A change, denoted by Δ {\displaystyle \Delta } , is the value of a variable for a solution or mixture minus the values for the pure components considered separately. The objective is to find explicit formulas for Δ H m i x {\displaystyle \Delta H_{\rm {mix}}} and Δ S m i x {\displaystyle \Delta S_{\rm {mix}}} , the enthalpy and entropy increments associated with the mixing process. The result obtained by Flory[1] and Huggins[2] is Δ G m i x = R T [ n 1 ln ⁡ ϕ 1 + n 2 ln ⁡ ϕ 2 + n 1 ϕ 2 χ 12 ] {\displaystyle \Delta G_{\rm {mix}}=RT[\,n_{1}\ln \phi _{1}+n_{2}\ln \phi _{2}+n_{1}\phi _{2}\chi _{12}\,]} The right-hand side is a function of the number of moles n 1 {\displaystyle n_{1}} and volume fraction ϕ 1 {\displaystyle \phi _{1}} of solvent (component 1 {\displaystyle 1} ), the number of moles n 2 {\displaystyle n_{2}} and volume fraction ϕ 2 {\displaystyle \phi _{2}} of polymer (component 2 {\displaystyle 2} ), with the introduction of a parameter χ {\displaystyle \chi } to take account of the energy of interdispersing polymer and solvent molecules. R {\displaystyle R} is the gas constant and T {\displaystyle T} is the absolute temperature. The volume fraction is analogous to the mole fraction, but is weighted to take account of the relative sizes of the molecules. For a small solute, the mole fractions would appear instead, and this modification is the innovation due to Flory and Huggins. In the most general case the mixing parameter, χ {\displaystyle \chi } , is a free energy parameter, thus including an entropic component. == Derivation == We first calculate the entropy of mixing, the increase in the uncertainty about the locations of the molecules when they are interspersed. In the pure condensed phases – solvent and polymer – everywhere we look we find a molecule.[3] Of course, any notion of "finding" a molecule in a given location is a thought experiment since we can't actually examine spatial locations the size of molecules. The expression for the entropy of mixing of small molecules in terms of mole fractions is no longer reasonable when the solute is a macromolecular chain. We take account of this dissymmetry in molecular sizes by assuming that individual polymer segments and individual solvent molecules occupy sites on a lattice. Each site is occupied by exactly one molecule of the solvent or by one monomer of the polymer chain, so the total number of sites is N = N 1 + x N 2 {\displaystyle N=N_{1}+xN_{2}} where N 1 {\displaystyle N_{1}} is the number of solvent molecules and N 2 {\displaystyle N_{2}} is the number of polymer molecules, each of which has x {\displaystyle x} segments.[4] For a random walk on a lattice we can calculate the entropy change (the increase in spatial uncertainty) as a result of mixing solute and solvent. 
Δ S m i x = − k B [ N 1 ln ⁡ N 1 N + N 2 ln ⁡ x N 2 N ] {\displaystyle \Delta S_{\rm {mix}}=-k_{\rm {B}}\left[N_{1}\ln {\tfrac {N_{1}}{N}}+N_{2}\ln {\tfrac {xN_{2}}{N}}\right]} where k B {\displaystyle k_{\rm {B}}} is the Boltzmann constant. Define the lattice volume fractions ϕ 1 {\displaystyle \phi _{1}} and ϕ 2 {\displaystyle \phi _{2}} ϕ 1 = N 1 N , ϕ 2 = x N 2 N {\displaystyle \phi _{1}={\frac {N_{1}}{N}},\quad \phi _{2}={\frac {xN_{2}}{N}}} These are also the probabilities that a given lattice site, chosen at random, is occupied by a solvent molecule or a polymer segment, respectively. Thus Δ S m i x = − k B [ N 1 ln ⁡ ϕ 1 + N 2 ln ⁡ ϕ 2 ] {\displaystyle \Delta S_{\rm {mix}}=-k_{\rm {B}}[\,N_{1}\ln \phi _{1}+N_{2}\ln \phi _{2}\,]} For a small solute whose molecules occupy just one lattice site, x {\displaystyle x} equals one, the volume fractions reduce to molecular or mole fractions, and we recover the usual entropy of mixing. In addition to the entropic effect, we can expect an enthalpy change.[5] There are three molecular interactions to consider: solvent-solvent w 11 {\displaystyle w_{11}} , monomer-monomer w 22 {\displaystyle w_{22}} (not the covalent bonding, but between different chain sections), and monomer-solvent w 12 {\displaystyle w_{12}} . Each of the last occurs at the expense of the average of the other two, so the energy increment per monomer-solvent contact is Δ w = w 12 − 1 2 ( w 22 + w 11 ) {\displaystyle \Delta w=w_{12}-{\tfrac {1}{2}}(w_{22}+w_{11})} The total number of such contacts is x N 2 z ϕ 1 = N 1 ϕ 2 z {\displaystyle xN_{2}z\phi _{1}=N_{1}\phi _{2}z} where z {\displaystyle z} is the coordination number, the number of nearest neighbors for a lattice site, each one occupied either by one chain segment or a solvent molecule. That is, x N 2 {\displaystyle xN_{2}} is the total number of polymer segments (monomers) in the solution, so x N 2 z {\displaystyle xN_{2}z} is the number of nearest-neighbor sites to all the polymer segments. Multiplying by the probability ϕ 1 {\displaystyle \phi _{1}} that any such site is occupied by a solvent molecule,[6] we obtain the total number of polymer-solvent molecular interactions. An approximation following mean field theory is made by following this procedure, thereby reducing the complex problem of many interactions to a simpler problem of one interaction. The enthalpy change is equal to the energy change per polymer monomer-solvent interaction multiplied by the number of such interactions Δ H m i x = N 1 ϕ 2 z Δ w {\displaystyle \Delta H_{\rm {mix}}=N_{1}\phi _{2}z\Delta w} The polymer-solvent interaction parameter chi is defined as χ 12 = z Δ w k B T {\displaystyle \chi _{12}={\frac {z\Delta w}{k_{\rm {B}}T}}} It depends on the nature of both the solvent and the solute, and is the only material-specific parameter in the model. The enthalpy change becomes Δ H m i x = k B T N 1 ϕ 2 χ 12 {\displaystyle \Delta H_{\rm {mix}}=k_{\rm {B}}TN_{1}\phi _{2}\chi _{12}} Assembling terms, the total free energy change is Δ G m i x = R T [ n 1 ln ⁡ ϕ 1 + n 2 ln ⁡ ϕ 2 + n 1 ϕ 2 χ 12 ] {\displaystyle \Delta G_{\rm {mix}}=RT[\,n_{1}\ln \phi _{1}+n_{2}\ln \phi _{2}+n_{1}\phi _{2}\chi _{12}\,]} where we have converted the expression from molecules N 1 {\displaystyle N_{1}} and N 2 {\displaystyle N_{2}} to moles n 1 {\displaystyle n_{1}} and n 2 {\displaystyle n_{2}} by transferring the Avogadro constant N A {\displaystyle N_{\text{A}}} to the gas constant R = k B N A {\displaystyle R=k_{\rm {B}}N_{\text{A}}} . 
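As a quick numerical illustration of the result just derived (a sketch added here, not part of the original article; the composition, chain length and χ value are arbitrary), the free energy of mixing can be evaluated directly. Comparing a 1000-segment polymer with a small-molecule solute (x = 1) at the same volume fractions shows how the polymer's connectivity reduces the combinatorial entropy of mixing.

<syntaxhighlight lang="python">
import math

def flory_huggins_dG(n1, n2, x, chi, T=298.15):
    """Gibbs free energy of mixing (J) from
    dG_mix = RT [ n1 ln(phi1) + n2 ln(phi2) + n1 phi2 chi ],
    with n1, n2 in moles and x segments per polymer chain."""
    R = 8.314  # J mol^-1 K^-1
    n_sites = n1 + x * n2          # total lattice sites (in moles)
    phi1 = n1 / n_sites            # solvent volume fraction
    phi2 = x * n2 / n_sites        # polymer volume fraction
    return R * T * (n1 * math.log(phi1) + n2 * math.log(phi2) + n1 * phi2 * chi)

# 1 mol of solvent mixed with 0.001 mol of a 1000-segment polymer (phi1 = phi2 = 0.5)
print(flory_huggins_dG(n1=1.0, n2=0.001, x=1000, chi=0.3))   # about -1.3 kJ
# Same volume fractions, but a small-molecule solute (x = 1): larger entropy gain
print(flory_huggins_dG(n1=1.0, n2=1.0, x=1, chi=0.3))        # about -3.1 kJ
</syntaxhighlight>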
The value of the interaction parameter can be estimated from the Hildebrand solubility parameters δ a {\displaystyle \delta _{a}} and δ b {\displaystyle \delta _{b}} χ 12 = V s e g ( δ a − δ b ) 2 R T {\displaystyle \chi _{12}={\frac {V_{\rm {seg}}(\delta _{a}-\delta _{b})^{2}}{RT}}} where V s e g {\displaystyle V_{\rm {seg}}} is the actual volume of a polymer segment. In the most general case the interaction Δ w {\displaystyle \Delta w} and the ensuing mixing parameter, χ {\displaystyle \chi } , are free energy parameters, thus including an entropic component. This means that, aside from the regular mixing entropy, there is another entropic contribution from the interaction between solvent and monomer. This contribution is sometimes very important in order to make quantitative predictions of thermodynamic properties. More advanced solution theories exist, such as the Flory–Krigbaum theory. == Liquid-liquid phase separation == Polymers can separate out from the solvent, and do so in a characteristic way. The Flory–Huggins free energy per unit volume, for a polymer with N {\displaystyle N} monomers, can be written in a simple dimensionless form f = ϕ N ln ϕ + ( 1 − ϕ ) ln ( 1 − ϕ ) + χ ϕ ( 1 − ϕ ) {\displaystyle f={\frac {\phi }{N}}\ln \phi +(1-\phi )\ln(1-\phi )+\chi \phi (1-\phi )} for ϕ {\displaystyle \phi } the volume fraction of monomers, and N ≫ 1 {\displaystyle N\gg 1} . The osmotic pressure (in reduced units) is Π = ϕ N − ln ( 1 − ϕ ) − ϕ − χ ϕ 2 {\displaystyle \Pi ={\frac {\phi }{N}}-\ln(1-\phi )-\phi -\chi \phi ^{2}} . The polymer solution is stable with respect to small fluctuations when the second derivative of this free energy is positive. This second derivative is f ″ = 1 N ϕ + 1 1 − ϕ − 2 χ {\displaystyle f''={\frac {1}{N\phi }}+{\frac {1}{1-\phi }}-2\chi } and the solution first becomes unstable when this and the third derivative f ‴ = − 1 N ϕ 2 + 1 ( 1 − ϕ ) 2 {\displaystyle f'''=-{\frac {1}{N\phi ^{2}}}+{\frac {1}{(1-\phi )^{2}}}} are both equal to zero. A little algebra then shows that the polymer solution first becomes unstable at a critical point at χ cp ≃ 1 / 2 + N − 1 / 2 + ⋯ ϕ cp ≃ N − 1 / 2 − N − 1 + ⋯ {\displaystyle \chi _{\text{cp}}\simeq 1/2+N^{-1/2}+\cdots \qquad \phi _{\text{cp}}\simeq N^{-1/2}-N^{-1}+\cdots } This means that for all values of 0 < χ ≲ 1 / 2 {\displaystyle 0<\chi \lesssim 1/2} the monomer-solvent effective interaction is weakly repulsive, but this is too weak to cause liquid/liquid separation. However, when χ > 1 / 2 {\displaystyle \chi >1/2} , there is separation into two coexisting phases, one richer in polymer but poorer in solvent than the other. The unusual feature of the liquid/liquid phase separation is that it is highly asymmetric: the volume fraction of monomers at the critical point is approximately N − 1 / 2 {\displaystyle N^{-1/2}} , which is very small for large polymers. The amount of polymer in the solvent-rich/polymer-poor coexisting phase is extremely small for long polymers. The solvent-rich phase is close to pure solvent. This is peculiar to polymers: a mixture of small molecules can be approximated using the Flory–Huggins expression with N = 1 {\displaystyle N=1} , and then ϕ cp = 1 / 2 {\displaystyle \phi _{\text{cp}}=1/2} and both coexisting phases are far from pure. == Polymer blends == Synthetic polymers rarely consist of chains of uniform length in solvent. 
The Flory–Huggins free energy density can be generalized to an N-component mixture of polymers with lengths r i {\displaystyle r_{i}} by f ( { ϕ i , r i } ) = ∑ i = 1 N ϕ i r i ln ϕ i + 1 2 ∑ i , j = 1 N ϕ i ϕ j χ i j {\displaystyle f{\Bigl (}\{\phi _{i},r_{i}\}{\Bigr )}=\sum _{i=1}^{N}{\frac {\phi _{i}}{r_{i}}}\ln \phi _{i}+{\frac {1}{2}}\sum _{i,j=1}^{N}\phi _{i}\phi _{j}\chi _{ij}} For a binary polymer blend, where one species consists of N A {\displaystyle N_{A}} monomers and the other of N B {\displaystyle N_{B}} monomers, this simplifies to f ( ϕ ) = ϕ N A ln ϕ + 1 − ϕ N B ln ( 1 − ϕ ) + χ ϕ ( 1 − ϕ ) {\displaystyle f(\phi )={\frac {\phi }{N_{A}}}\ln \phi +{\frac {1-\phi }{N_{B}}}\ln(1-\phi )+\chi \phi (1-\phi )} As in the case for dilute polymer solutions, the first two terms on the right-hand side represent the entropy of mixing. For large polymers of N A ≫ 1 {\displaystyle N_{A}\gg 1} and N B ≫ 1 {\displaystyle N_{B}\gg 1} these terms are negligibly small. This implies that for a stable mixture to exist, χ < 0 {\displaystyle \chi <0} is required, so for polymers A and B to blend, their segments must attract one another. == Limitations == Flory–Huggins theory tends to agree well with experiments in the semi-dilute concentration regime and can be used to fit data for even more complicated blends with higher concentrations. The theory qualitatively predicts phase separation, the tendency for high molecular weight species to be immiscible, the χ ∝ T − 1 {\displaystyle \chi \propto T^{-1}} interaction-temperature dependence and other features commonly observed in polymer mixtures. However, unmodified Flory–Huggins theory fails to predict the lower critical solution temperature observed in some polymer blends and the lack of dependence of the critical temperature T c {\displaystyle T_{\text{c}}} on chain length r i {\displaystyle r_{i}} . Additionally, it can be shown that for a binary blend of polymer species with equal chain lengths ( N A = N B ) {\displaystyle (N_{A}=N_{B})} the critical concentration should be ψ c = 1 / 2 {\displaystyle \psi _{\text{c}}=1/2} ; however, polymer blends have been observed in which this parameter is highly asymmetric. In certain blends, mixing entropy can dominate over monomer interaction. By adopting the mean-field approximation, the complex dependence of the χ {\displaystyle \chi } parameter on temperature, blend composition, and chain length was discarded. Specifically, interactions beyond the nearest neighbor may be highly relevant to the behavior of the blend and the distribution of polymer segments is not necessarily uniform, so certain lattice sites may experience interaction energies disparate from that approximated by the mean-field theory. One well-studied effect on interaction energies neglected by unmodified Flory–Huggins theory is chain correlation. In dilute polymer mixtures, where chains are well separated, intramolecular forces between monomers of the polymer chain dominate and drive demixing, leading to regions where polymer concentration is high. As the polymer concentration increases, chains tend to overlap and the effect becomes less important. In fact, the demarcation between dilute and semi-dilute solutions is commonly defined by the concentration at which polymers begin to overlap, c ∗ {\displaystyle c^{*}} , which can be estimated as c ∗ = m 4 3 π R g 3 {\displaystyle c^{*}={\frac {m}{{\frac {4}{3}}\pi R_{\text{g}}^{3}}}} Here, m is the mass of a single polymer chain, and R g {\displaystyle R_{\text{g}}} is the chain's radius of gyration. 
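The closed-form critical point quoted in the phase-separation section above is easy to tabulate numerically. The short Python sketch below is an illustration added here, not part of the original article: solving f''(φ) = f'''(φ) = 0 exactly gives φ_c = 1/(1 + √N) and χ_c = ½(1 + 1/√N)², which reduce to the asymptotic forms quoted in the text for large N and, for N = 1, reproduce the symmetric small-molecule result φ_cp = 1/2 (with χ_c = 2).

<syntaxhighlight lang="python">
import math

def critical_point(N):
    """Exact critical composition and chi for a Flory-Huggins polymer solution,
    from f''(phi) = f'''(phi) = 0 with
    f = (phi/N) ln(phi) + (1 - phi) ln(1 - phi) + chi phi (1 - phi)."""
    phi_c = 1.0 / (1.0 + math.sqrt(N))
    chi_c = 0.5 * (1.0 + 1.0 / math.sqrt(N)) ** 2
    return phi_c, chi_c

for N in (1, 100, 10_000):
    phi_c, chi_c = critical_point(N)
    # leading-order asymptotics (valid for large N): phi_cp ~ N^(-1/2), chi_cp ~ 1/2 + N^(-1/2)
    print(f"N = {N:6d}: phi_c = {phi_c:.4f} (asymptotic {N ** -0.5:.4f}), "
          f"chi_c = {chi_c:.4f} (asymptotic {0.5 + N ** -0.5:.4f})")
</syntaxhighlight>

The tabulation makes the asymmetry discussed above explicit: already for N = 100 the critical volume fraction has dropped to about 0.09, and for N = 10000 to about 0.01, while χ_c approaches 1/2 from above.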
== Footnotes == == References == == External links == "Conformations, Solutions and Molecular Weight" (book chapter), Chapter 3 of Book Title: Polymer Science and Technology; by Joel R. Fried; 2nd Edition, 2003
Wikipedia/Flory–Huggins_solution_theory
Value theory, also called axiology, studies the nature, sources, and types of values. It is a branch of philosophy and an interdisciplinary field closely associated with social sciences such as economics, sociology, anthropology, and psychology. Value is the worth of something, usually understood as covering both positive and negative degrees corresponding to the terms good and bad. Values influence many human endeavors related to emotion, decision-making, and action. Value theorists distinguish various types of values, like the contrast between intrinsic and instrumental value. An entity has intrinsic value if it is good in itself, independent of external factors. An entity has instrumental value if it is useful as a means leading to other good things. Other classifications focus on the type of benefit, including economic, moral, political, aesthetic, and religious values. Further categorizations distinguish absolute values from values that are relative to something else. Diverse schools of thought debate the nature and origins of values. Value realists state that values exist as objective features of reality. Anti-realists reject this, with some seeing values as subjective human creations and others viewing value statements as meaningless. Regarding the sources of value, hedonists argue that only pleasure has intrinsic value, whereas desire theorists discuss desires as the ultimate source of value. Perfectionism, another approach, emphasizes the cultivation of characteristic human abilities. Value pluralism identifies diverse sources of intrinsic value, raising the issue of whether values belonging to different types are comparable. Value theorists employ various methods of inquiry, ranging from reliance on intuitions and thought experiments to the analysis of language, description of first-person experience, observation of behavior, and surveys. Value theory is related to various fields, such as ethics, which focuses primarily on normative concepts of right behavior, whereas value theory explores evaluative concepts about what is good. In economics, theories of value are frameworks to assess and explain the economic value of commodities. Sociology and anthropology examine values as aspects of societies and cultures, reflecting dominant preferences and beliefs. Psychologists tend to understand values as abstract motivational goals that shape an individual's personality. The roots of value theory lie in antiquity as reflections on the highest good that humans should pursue. Diverse traditions contributed to this area of thought during the medieval and early modern periods, but it was only established as a distinct discipline in the late 19th and early 20th centuries. == Definition == Value theory, also known as axiology and theory of values, is the systematic study of values. As a branch of philosophy, it examines which things are good and what it means for something to be good. It distinguishes different types of values and explores how they can be measured and compared. This field also studies whether values are a fundamental aspect of reality and how they influence phenomena such as emotion, desire, decision, and action. Value theory is relevant to many human endeavors because values are guiding principles that underlie the political, economic, scientific, and personal spheres. It analyzes and evaluates phenomena such as well-being, utility, beauty, human life, knowledge, wisdom, freedom, love, and justice. 
The precise definition of value theory is debated and some theorists rely on alternative characterizations. In a broad sense, value theory is a catch-all label that encompasses all philosophical disciplines studying evaluative and normative topics. According to this view, value theory is one of the main branches of philosophy and includes ethics, aesthetics, social philosophy, political philosophy, and philosophy of religion. A similar broad characterization sees value theory as a multidisciplinary area of inquiry that integrates research from fields like sociology, anthropology, psychology, and economics alongside philosophy. In a narrow sense, value theory is a subdiscipline of ethics that is particularly relevant to the school of consequentialism since it determines how to assess the value of consequences. The word axiology has its origin in the ancient Greek terms ἄξιος (axios, meaning 'worthy' or 'of value') and λόγος (logos, meaning 'study' or 'theory of'). Even though the roots of value theory reach back to the ancient period, this area of thought was only conceived as a distinct discipline in the late 19th and early 20th centuries, when the term axiology was coined. The terms value theory and axiology are usually used as synonyms, but some philosophers distinguish between them. According to one characterization, axiology is a subfield of value theory that limits itself to theories about which things are valuable and how valuable they are. The term timology is an older and less common synonym. == Value == Value is the worth, usefulness, or merit of something. Value theorists examine the expressions used to describe and compare values, called evaluative terms. They are further interested in the types or categories of values. The proposed classifications overlap and are based on factors like the source, beneficiary, and function of the value. === Evaluative terms === Values are expressed through evaluative terms. For example, the words good, best, great, and excellent convey positive values, whereas words like bad and terrible indicate negative values. Value theorists distinguish between thin and thick evaluative terms. Thin evaluative terms, like good and bad, express pure evaluations without any additional descriptive content. They contrast with thick evaluative terms, like courageous and cruel, which provide more information by expressing other qualities, such as character traits, in addition to the evaluation. Values are often understood as degrees that cover positive and negative magnitudes corresponding to good and bad. The term value is sometimes restricted to positive degrees to contrast with the term disvalue for negative degrees. The words better and worse are used to compare degrees, but it is controversial whether a quantitative comparison is always possible. Evaluation is the assessment or measurement of value, often employed to compare the benefits of different options to find the most advantageous choice. Evaluative terms are sometimes distinguished from normative or deontic terms. Normative or deontic terms, like right, wrong, and obligation, prescribe actions or other states by expressing what ought to be done or what is required. Evaluative terms have a wider scope because they are not limited to what people can control or are responsible for. For instance, involuntary events like digestion and earthquakes can have a positive or negative value even if they are not right or wrong in a strict sense. 
Despite the distinction, evaluative and normative concepts are closely related. For example, the value of the consequences of an action may influence its normative status—whether the action is right or wrong. === Types === ==== Intrinsic and instrumental ==== A thing has intrinsic or final value if it is good in itself or good for its own sake, independent of external factors or outcomes. A thing has extrinsic or instrumental value if it is useful or leads to other good things, serving as a means to bring about a desirable end. For example, tools like microwaves or money have instrumental value due to the useful functions they perform. In some cases, the thing produced this way has itself instrumental value, like when using money to buy a microwave. This can result in a chain of instrumentally valuable things in which each link gets its value by causing the following link. Intrinsically valuable things stand at the endpoint of these chains and ground the value of all the preceding links. One suggestion to distinguish between intrinsic and instrumental value, proposed by G. E. Moore, relies on a thought experiment that imagines the valuable thing in isolation from everything else. In such a situation, purely instrumentally valuable things lose their value since they serve no purpose while purely intrinsically valuable things remain valuable. According to a common view, pleasure is one of the sources of intrinsic value. Other suggested sources include desire satisfaction, virtue, life, health, beauty, freedom, and knowledge. Intrinsic and instrumental value are not exclusive categories. As a result, a thing can have both intrinsic and instrumental value if it is both good in itself while also leading to other good things. In a similar sense, a thing can have different instrumental values at the same time, both positive and negative ones. This is the case if some of its consequences are good while others are bad. The total instrumental value of a thing is the value balance of all its consequences. Because instrumental value depends on other values, it is an open question whether it should be understood as a value in a strict sense. For example, the overall value of a chain of causes leading to an intrinsically valuable thing remains the same if instrumentally valuable links are added or removed without affecting the intrinsically valuable thing. The observation that the overall value does not change is sometimes used as an argument that the things added or removed do not have value. Traditionally, value theorists have used the terms intrinsic value and final value interchangeably, just like the terms extrinsic value and instrumental value. This practice has been questioned in the 20th century based on the idea that they are similar but not identical concepts. According to this view, a thing has intrinsic value if the source of its value is an intrinsic property, meaning that the value does not depend on how the thing is related to other objects. Extrinsic value, by contrast, depends on external relations. This view sees instrumental value as one type of extrinsic value based on external causal relations. At the same time, it allows that there are other types of non-instrumental extrinsic value that result from external non-causal relations. Final value is understood as what is valued for its own sake, independent of whether intrinsic or extrinsic properties are responsible. ==== Absolute and relative ==== Another distinction relies on the contrast between absolute and relative value. 
Absolute value, also called value simpliciter, is a form of unconditional value. A thing has relative value if its value is relative to other things or limited to certain considerations or viewpoints. One form of relative value is restricted to the type of an entity, expressed in sentences like "That is a good knife" or "Jack is a good thief". This form is known as attributive goodness since the word "good" modifies the meaning of another term. To be attributively good as a certain type means to possess qualities characteristic of that type. For instance, a good knife is sharp and a good thief has the skill of stealing without getting caught. Attributive goodness contrasts with predicative goodness. The sentence "Pleasure is good" is an example since the word good is used as a predicate to talk about the unqualified value of pleasure. Attributive and predicative goodness can accompany each other, but this is not always the case. For instance, being a good thief is not necessarily a good thing. Another type of relative value restricts goodness to a specific person. Known as personal value, it expresses what benefits a particular person, promotes their welfare, or is in their interest. For example, a poem written by a child may have personal value for the parents even if the poem lacks value for others. Impersonal value, by contrast, is good in general without restriction to any specific person or viewpoint. Some philosophers, like Moore, reject the existence of personal values, holding that all values are impersonal. Others have proposed theories about the relation between personal and impersonal value. The agglomerative theory says that impersonal value is nothing but the sum of all personal values. Another view understands impersonal value as a specific type of personal value taken from the perspective of the universe as a whole. Agent-relative value is sometimes contrasted with personal value as another person-specific limitation of the evaluative outlook. Agent-relative values affect moral considerations about what a person is responsible for or guilty of. For example, if Mei promises to pick Pedro up from the airport then an agent-relative value obligates Mei to drive to the airport. This obligation is in place even if it does not benefit Mei, in which case there is an agent-relative value without a personal value. In consequentialism, agent-relative values are often discussed in relation to ethical dilemmas. One dilemma revolves around the question of whether an individual should murder an innocent person if this prevents the murder of two innocent people by a different perpetrator. The agent-neutral perspective tends to affirm this idea since one murder is preferable to two. The agent-relative perspective tends to reject this conclusion, arguing that the initial murder should be avoided since it negatively impacts the agent-relative value of the individual committing it. Traditionally, most value theorists see absolute value as the main topic of value theory and focus their attention on this type. Nonetheless, some philosophers, like Peter Geach and Philippa Foot, have argued that the concept of absolute value by itself is meaningless and should be understood as one form of relative value. ==== Other distinctions ==== Other categorizations of values have been proposed following diverse classification principles without a single approach widely accepted by all theorists. Some focus on the types of entities that have value. 
They include distinct categories for entities like individuals, groups, society, the environment, and inert things. Another subdivision pays attention to the type of benefit involved and encompasses material, economic, moral, social, political, aesthetic, and religious values. Classifications by the beneficiary of the value distinguish between self- and other-oriented values. A historically influential approach identifies three spheres of value: truth, goodness, and beauty. For example, the neo-Kantian philosopher Wilhelm Windelband characterizes them as the highest goals of consciousness, with thought aiming at truth, will aiming at goodness, and emotion aiming at beauty. A similar view, proposed by the Chinese philosopher Zhang Dainian, says that the value of truth belongs to knowledge, the value of goodness belongs to behavior, and the value of beauty belongs to art. This three-fold distinction also plays a central role in the philosophies of Franz Brentano and Jürgen Habermas. Other suggested types of values include objective, subjective, potential, actual, contingent, necessary, inherent, and constitutive values. == Schools of thought == === Realism and anti-realism === Value realism is the view that values have mind-independent existence. This means that objective facts determine what has value, irrespective of subjective beliefs and preferences. According to this view, the evaluative statement "That act is bad" is as objectively true or false as the empirical statement "That act causes distress". Realists often analyze values as properties of valuable things. For example, stating that kindness is good asserts that kindness possesses the property of goodness. Value realists disagree about what type of property is involved. Naturalists say that value is a natural property. Natural properties, like size and shape, can be known through empirical observation and are studied by the natural sciences. Non-naturalists reject this view but agree that values are real. They say that values differ significantly from empirical properties and belong to another domain of reality. According to one view, they are known through rational or emotional intuition rather than empirical observation. Another disagreement among realists is about whether the entity carrying the value is a concrete individual or a state of affairs. For instance, the name "Bill" refers to an individual while the sentence "Bill is pleased" refers to a state of affairs, which combines the individual "Bill" with the property "pleased". Some value theorists hold that the value is a property directly of Bill while others contend that it is a property of the state of affairs that Bill is pleased. This distinction affects various disputes in value theory. In some cases, a value is intrinsic according to one view and extrinsic according to the other. Value realism contrasts with anti-realism, which comes in various forms. In its strongest version, anti-realism rejects the existence of values in any form, claiming that value statements are meaningless. There are various intermediate views between this position and realism. Some anti-realists accept that value claims have meaning but deny that they have a truth value, a position known as non-cognitivism. For example, emotivists say that value claims express emotional attitudes, similar to how exclamations like "Yay!" or "Boo!" express emotions rather than stating facts. 
Cognitivists contend that value statements have a truth value, meaning that sentences like "knowledge is intrinsically good" are either true or false. Following this view, error theorists defend anti-realism by stating that all value statements are false because there are no values. Another view accepts the existence of values but denies that they are mind-independent. According to this view, the mental states of individuals determine whether an object has value, for instance, because individuals desire it. A similar view is defended by existentialists like Jean-Paul Sartre, who argued that values are human creations that endow the world with meaning. Subjectivist theories say that values are relative to each subject, whereas more objectivist outlooks hold that values depend on mind in general rather than on the individual mind. A different position accepts that values are mind-independent but holds that they are reducible to other facts, meaning that they are not a fundamental part of reality. One form of reductionism maintains that a thing is good if it is fitting to favor this thing, regardless of whether people actually favor it, a position known as the fitting-attitude theory of value. The buck-passing account, a closely related reductive view, argues that a thing is valuable if people have reasons to treat the thing in certain ways. These reasons come from other features of the valuable thing. According to some views, reductionism is a form of realism, but the strongest form of realism says that value is a fundamental part of reality and cannot be reduced to other aspects. === Sources of value === Various theories about the sources of value have been proposed. They aim to clarify what kinds of things are intrinsically good. The historically influential theory of hedonism states that how people feel is the only source of value. More specifically, it says that pleasure is the only intrinsic good and pain is the only intrinsic evil. According to this view, everything else only has instrumental value to the extent that it leads to pleasure or pain, including knowledge, health, and justice. Hedonists usually understand the term pleasure in a broad sense that covers all kinds of enjoyable experiences, including bodily pleasures of food and sex as well as more intellectual or abstract pleasures, like the joy of reading a book or happiness about a friend's promotion. Pleasurable experiences come in degrees, and hedonists usually associate their intensity and duration with the magnitude of value they have. Many hedonists identify pleasure and pain as symmetric opposites, meaning that the value of pleasure balances out the disvalue of pain if they have the same intensity. However, some hedonists reject this symmetry and give more weight to avoiding pain than to experiencing pleasure. Although it is widely accepted that pleasure is valuable, the hedonist claim that it is the only source of value is controversial. Welfarism, a closely related theory, understands well-being as the only source of value. Well-being is what is ultimately good for a person, which can include other aspects besides pleasure, such as health, personal growth, meaningful relationships, and a sense of purpose in life. Desire theories offer a slightly different account, stating that desire satisfaction is the only source of value. This theory overlaps with hedonism because many people desire pleasure and because desire satisfaction is often accompanied by pleasure. 
Nonetheless, there are important differences: people desire a variety of other things as well, like knowledge, achievement, and respect; additionally, desire satisfaction may not always result in pleasure. Some desire theorists hold that value is a property of desire satisfaction itself, while others say that it is a property of the objects that satisfy a desire. One debate in desire theory concerns whether every desire is a source of value. For example, if a person has a false belief that money makes them happy, it is questionable whether the satisfaction of their desire for money is a source of value. To address this consideration, some desire theorists say that a desire can only provide value if a fully informed and rational person would have it, thereby excluding misguided desires from being a source of value. Perfectionism identifies the realization of human nature and the cultivation of characteristic human abilities as the source of intrinsic goodness. It covers capacities and character traits belonging to the bodily, emotional, volitional, cognitive, social, artistic, and religious fields. Perfectionists disagree about which human excellences are the most important. Many are pluralistic in recognizing a diverse array of human excellences, such as knowledge, creativity, health, beauty, free agency, and moral virtues like benevolence and courage. According to one suggestion, there are two main fields of human goods: theoretical abilities responsible for understanding the world and practical abilities responsible for interacting with it. Some perfectionists provide an ideal characterization of human nature as the goal of human flourishing, holding that human excellences are those aspects that promote the realization of this goal. This view is exemplified in Aristotle's focus on rationality as the nature and ideal state of human beings. Non-humanistic versions extend perfectionism to the natural world in general, arguing that excellence as a source of intrinsic value is not limited to the human realm. === Monism and pluralism === Monist theories of value assert that there is only a single source of intrinsic value. They agree that various things have value but maintain that all fundamentally good things belong to the same type. For example, hedonists hold that nothing but pleasure has intrinsic value, while desire theorists argue that desire satisfaction is the only source of fundamental goodness. Pluralists reject this view, contending that a simple single-value system is too crude to capture the complexity of the sphere of values. They say that diverse sources of value exist independently of one another, each contributing to the overall value of the world. One motivation for value pluralism is the observation that people value diverse types of things, including happiness, friendship, success, and knowledge. This diversity becomes particularly prominent when people face difficult decisions between competing values, such as choosing between friendship and career success. In such cases, value pluralists can argue that the different items have different types of values. Since monists accept only one source of intrinsic value, they may provide a different explanation by proposing that some of the valuable items only have instrumental value but lack intrinsic value. Pluralists have proposed various accounts of how their view affects practical decisions. Rational decisions often rely on value comparisons to determine which course of action should be pursued. 
Some pluralists discuss a hierarchy of values reflecting the relative importance and weight of different value types to help people promote higher values when faced with difficult choices. For example, philosopher Max Scheler ranks values based on how enduring and fulfilling they are into the levels of pleasure, utility, vitality, culture, and holiness. He asserts that people should not promote lower values, like pleasure, if this comes at the expense of higher values. Radical pluralists reject this approach, putting more emphasis on diversity by holding that different types of values are not comparable with each other. This means that each value type is unique, making it impossible to determine which one is superior. Some value theorists use radical pluralism to argue that value conflicts are inevitable, that the gain of one value cannot always compensate for the loss of another, and that some ethical dilemmas are irresolvable. For example, philosopher Isaiah Berlin applied this idea to the values of liberty and equality, arguing that a gain in one cannot make up for a loss in the other. Similarly, philosopher Joseph Raz said that it is often impossible to compare the values of career paths, like when choosing between becoming a lawyer or a clarinetist. The terms incomparability and incommensurability are often used as synonyms in this context. However, philosophers like Ruth Chang distinguish them. According to this view, incommensurability means that there is no common measure to quantify values of different types. Incommensurable values may or may not be comparable. If they are, it is possible to say that one value is better than another, but it is not possible to quantify how much better it is. === Others === Several controversies surround the question of how the intrinsic value of a whole is determined by the intrinsic values of its parts. According to the additivity principle, the intrinsic value of a whole is simply the sum of the intrinsic values of its parts. For example, if a virtuous person becomes happy then the intrinsic value of the happiness is simply added to the intrinsic value of the virtue, thereby increasing the overall value. Various counterexamples to the additivity principle have been proposed, suggesting that the relation between parts and wholes is more complex. For instance, Immanuel Kant argued that if a vicious person becomes happy, this happiness, though good in itself, does not increase the overall value. On the contrary, it makes things worse, according to Kant, since viciousness should not be rewarded with happiness. This situation is known as an organic unity—a whole whose intrinsic value differs from the sum of the intrinsic values of its parts. Another perspective, called holism about value, asserts that the intrinsic value of a thing depends on its context. Holists can argue that happiness has positive intrinsic value in the context of virtue and negative intrinsic value in the context of vice. Atomists reject this view, saying that intrinsic value is context-independent. Theories of value aggregation provide concrete principles for calculating the overall value of an outcome based on how positively or negatively each individual is affected by it. For example, if a government implements a new policy that affects some people positively and others negatively, theories of value aggregation can be used to determine whether the overall value of the policy is positive or negative. 
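The contrast between the additivity principle and holist or organic-unity views can be made concrete with a small numerical toy model. The Python sketch below is purely illustrative; the numbers assigned to virtue, vice, and happiness, and the sign-flipping rule used for the holist calculation, are assumptions chosen to mirror Kant's example rather than a formalization found in the literature.

```python
# Toy comparison of additive vs. holistic value aggregation.
# All numbers are stipulated for the example and carry no theoretical weight.

def additive_value(parts):
    """Additivity principle: the value of a whole is the sum of its parts."""
    return sum(parts.values())

def holistic_value(parts):
    """A holist adjustment (echoing Kant's example): happiness counts
    positively in a virtuous context but negatively in a vicious one."""
    base = parts.get("virtue", 0) + parts.get("vice", 0)
    happiness = parts.get("happiness", 0)
    if parts.get("vice", 0) < 0:       # vicious context: happiness worsens the whole
        return base - happiness
    return base + happiness            # virtuous or neutral context

virtuous_and_happy = {"virtue": 5, "happiness": 3}
vicious_and_happy = {"vice": -5, "happiness": 3}

print(additive_value(virtuous_and_happy))  # 8
print(additive_value(vicious_and_happy))   # -2: happiness still adds value
print(holistic_value(vicious_and_happy))   # -8: happiness makes things worse
```

The two calculations agree for the virtuous person but diverge for the vicious one, which is exactly the disagreement that organic-unity examples are meant to bring out; theories of value aggregation face the analogous question when summing benefits and harms across the individuals affected by a policy.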
Axiological utilitarianism accepts the additivity principle, saying that the total value is simply the sum of all individual values. Axiological egalitarians are not only interested in the sum total of value but also in how the values are distributed. They argue that an outcome with a balanced advantage distribution is better than an outcome where some benefit a lot while others benefit little, even if the two outcomes have the same sum total. Axiological prioritarians are particularly concerned with the benefits of individuals who are worse off. They say that providing advantages to people in need has more value than providing the same advantages to others. Another debate addresses the meaning of life, investigating whether life or existence as a whole has a higher meaning or purpose. Naturalist views argue that the meaning of life is found within the physical world, either as objective values that are true for everyone or as subjective values that vary according to individual preferences. Suggested fields where humans find meaning include exercising freedom, committing oneself to a cause, practicing altruism, engaging in positive social relationships, or pursuing personal happiness. Supernaturalists, by contrast, propose that meaning lies beyond the natural world. For example, various religions teach that God created the world for a higher purpose, imbuing existence with meaning. A related outlook argues that immortal souls serve as sources of meaning by being connected to a transcendent reality and evolving spiritually. Existential nihilists reject both naturalist and supernaturalist explanations by asserting that there is no higher purpose. They suggest that life is meaningless, with the consequence that there is no higher reason to continue living and that all efforts, achievements, happiness, and suffering are ultimately pointless. Formal axiology is a theory of value initially developed by philosopher Robert S. Hartman. This approach treats axiology as a formal science, akin to logic and mathematics. It uses axioms to give an abstract definition of value, understanding it not as a property of things but as a property of concepts. Value measures the extent to which an entity fulfills its concept. For example, a good car has all the desirable qualities of cars, like a reliable engine and effective brakes, whereas a bad car lacks many. Formal axiology distinguishes between three fundamental value types: intrinsic values apply to people; extrinsic values apply to things, actions, and social roles; systemic values apply to conceptual constructs. Formal axiology examines how these value types form a hierarchy and how they can be measured. == Methods == Value theorists employ various methods to conduct their inquiries, justify theories, and measure values. Intuitionists rely on intuitions to assess evaluative claims. In this context, an intuition is an immediate apprehension or understanding of a self-evident claim, meaning that its truth can be assessed without inferring it from another observation. Value theorists often rely on thought experiments to gain this type of understanding. Thought experiments are imagined scenarios that exemplify philosophical problems. Philosophers use counterfactual reasoning to evaluate possible consequences and gain insight into underlying problems. For example, philosopher Robert Nozick imagines an experience machine that can virtually simulate an ideal life. 
Based on his contention that people would not want to spend the rest of their lives in this pleasurable simulation, Nozick argues against the hedonist claim that pleasure is the only source of intrinsic value. According to him, the thought experiment shows that the value of an authentic connection to reality is not reducible to pleasure. Phenomenologists provide a detailed first-person description of the experience of values. They closely examine emotional experiences, ranging from desire, interest, and preference to feelings in the form of love and hate. However, they do not limit their inquiry to these phenomena, asserting that values permeate experience at large. A key aspect of the phenomenological method is to suspend preconceived ideas and judgments to understand the essence of experiences as they present themselves to consciousness. The analysis of concepts and ordinary language is another method of inquiry. By examining terms and sentences used to talk about values, value theorists aim to clarify their meanings, uncover crucial distinctions, and formulate arguments for and against axiological theories. For instance, a prominent dispute between naturalists and non-naturalists hinges on the conceptual analysis of the term good, in particular, whether its meaning can be analyzed through natural terms, like pleasure. In the social sciences, value theorists face the challenge of measuring the evaluative outlook of individuals and groups. Specifically, they aim to determine personal value hierarchies, for example, whether a person gives more weight to truth than to moral goodness or beauty. They distinguish between direct and indirect measurement methods. Direct methods involve asking people straightforward questions about what things they value and which value priorities they have. This approach assumes that people are aware of their evaluative outlook and able to articulate it accurately. Indirect methods do not share this assumption, asserting instead that values guide behavior and choices on an unconscious level. Consequently, they observe how people decide and act, seeking to infer the underlying value attitudes responsible for picking one course of action rather than another. Various catalogs or scales of values have been proposed in psychology and related social sciences to measure value priorities. The Rokeach Value Survey considers a total of 36 values divided into two groups: instrumental values, like honesty and capability, which serve as means to promote terminal values, such as freedom and family security. It asks participants to rank the values based on their impact on the participants' lives, aiming to understand the relative importance assigned to each of them. The Schwartz theory of basic human values is a modification of the Rokeach Value Survey that seeks to provide a more cross-cultural and universal assessment. It arranges the values in a circular manner to reflect that neighboring values are compatible with each other, such as openness to change and self-enhancement, while values on opposing sides may conflict with each other, such as openness to change and conservation. == In various fields == === Ethics === Ethics and value theory are overlapping fields of inquiry. Ethics studies moral phenomena, focusing on how people should act or which behaviors are morally right. Value theory investigates the nature, sources, and types of values in general. Some philosophers understand value theory as a subdiscipline of ethics. 
This is based on the idea that what people should do is affected by value considerations but not necessarily limited to them. Another view sees ethics as a subdiscipline of value theory. This outlook follows the idea that ethics is concerned with moral values affecting what people can control, whereas value theory examines a broader range of values, including those beyond anyone's control. Some perspectives contrast ethics and value theory, asserting that the normative concepts examined by ethics are distinct from the evaluative concepts examined by value theory. Axiological ethics is a subfield of ethics examining the nature and role of values from a moral perspective, with particular interest in determining which ends are worth pursuing. The ethical theory of consequentialism combines the perspectives of ethics and value theory, asserting that the rightness of an action depends on the value of its consequences. Consequentialists compare possible courses of action, saying that people should follow the one leading to the best overall consequences. The overall consequences of an action are the totality of its effects, or how it impacts the world by starting a causal chain of events that would not have occurred otherwise. Distinct versions of consequentialism rely on different theories of the sources of value. Classical utilitarianism, a prominent form of consequentialism, says that moral actions produce the greatest amount of pleasure for the greatest number of people. It combines a consequentialist outlook on right action with a hedonist outlook on pleasure as the only source of intrinsic value. === Economics === Economics is a social science studying how goods and services are produced, distributed, and consumed, both from the perspective of individual agents and societal systems. Economists view evaluations as a driving force underlying economic activity. They use the notion of economic value and related evaluative concepts to understand decision-making processes, resource allocation, and the impact of policies. The economic value or benefit of a commodity is the advantage it provides to an economic agent, often measured in terms of what people are willing to pay for it. Economic theories of value are frameworks to explain how economic value arises and which factors influence it. Prominent frameworks include the classical labor theory of value and the neo-classical marginal theory of value. The labor theory, initially developed by the economists Adam Smith and David Ricardo, distinguishes between use value—the utility or satisfaction a commodity provides—and exchange value—the proportion at which one commodity can be exchanged with another. It focuses on exchange value, which it says is determined by the amount of labor required to produce the commodity. In its simplest form, it directly correlates exchange value to labor time. For example, if the time needed to hunt a deer is twice the time needed to hunt a beaver then one deer is worth two beavers. The philosopher Karl Marx extended the labor theory of value in various ways. He introduced the concept of surplus value, which goes beyond the time and resources invested to explain how capitalists can profit from the labor of their employees. The marginal theory of value focuses on consumption rather than production. It says that the utility of a commodity is the source of its value. Specifically, it is interested in marginal utility, the additional satisfaction gained from consuming one more unit of the commodity. 
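Both notions of economic value just described lend themselves to a short worked example. The Python sketch below is a toy illustration only: the labor-hour figures are stipulated, and the logarithmic utility curve is an assumption chosen simply because it exhibits the diminishing returns discussed next, not a model attributed to any of the economists named above.

```python
import math

# Labor theory of value (simplest form): exchange value is proportional
# to the labor time needed to produce each commodity.
labor_hours = {"deer": 4.0, "beaver": 2.0}  # stipulated example figures

def exchange_ratio(good_a, good_b):
    """How many units of good_b exchange for one unit of good_a."""
    return labor_hours[good_a] / labor_hours[good_b]

print(exchange_ratio("deer", "beaver"))  # 2.0 -> one deer is worth two beavers

# Marginal utility: the extra satisfaction from consuming one more unit.
def utility(units):
    return math.log(1 + units)  # assumed utility curve with diminishing returns

def marginal_utility(units_already_consumed):
    return utility(units_already_consumed + 1) - utility(units_already_consumed)

print(round(marginal_utility(0), 3))  # ~0.693 for the first unit
print(round(marginal_utility(9), 3))  # ~0.095 for the tenth unit
```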
Marginal utility often diminishes if many units have already been consumed, leading to a decrease in the exchange value of commodities that are abundantly available. Both the labor theory and the marginal theory were later challenged by the Sraffian theory of value, which considers diverse forms of production costs, including but not limited to the quantity of labor. === Sociology === Sociology studies social behavior, relationships, institutions, and society at large. In their analyses and explanations of these phenomena, some sociologists use the concept of values to understand issues like social cohesion and conflict, the norms and practices people follow, and collective action. They usually understand values as subjective attitudes possessed by individuals and shared in social groups. According to this view, values are beliefs or priorities about goals worth pursuing that guide people to act in certain ways. For example, societies that value education may invest substantial resources to ensure high-quality schooling. This subjective conception of values as aspects of individuals and social groups contrasts with the objective conceptions of values more prominent in economics, which understand values as aspects of commodities. Shared values can help unite people in the pursuit of a common cause, fostering social cohesion. Value differences, by contrast, may divide people into antagonistic groups that promote conflicting projects. Some sociologists employ value research to predict how people will behave. Given the observation that someone values the environment, they may conclude that this person is more likely to recycle or support pro-environmental legislation. One approach to this type of research uses value scales, such as the Rokeach Value Survey and the Schwartz theory of basic human values, to measure the value outlook of individuals and groups. === Anthropology === Anthropology also studies human behavior and societies but does not limit itself to contemporary social structures, extending its focus to humanity both past and present. Similar to sociologists, many anthropologists understand values as social representations of goals worth pursuing. For them, values are embedded in mental structures associated with culture and ideology about what is desirable. A slightly different approach in anthropology focuses on the practical side of values, holding that values are constantly created through human activity. Anthropological value theorists use values to compare cultures. They can be employed to examine similarities as universal concerns present in every society. For example, anthropologist Clyde Kluckhohn and sociologist Fred Strodtbeck proposed a set of value orientations found in every culture. These orientations are centered on the topics of human nature, human activity, social organization, relation to nature, and a focus on past, present, or future. Values can also be used to analyze differences between cultures and value changes within a culture. Anthropologist Louis Dumont followed this idea, suggesting that the cultural meaning systems in distinct societies differ in their value priorities. He argued that values are ordered hierarchically around a set of paramount values that trump all other values. For example, Dumont analyzed the traditional Indian caste system as a cultural hierarchy based on the value of purity, extending from the pure Brahmins to the "untouchable" Dalits. 
The contrast between individualism and collectivism is an influential topic in cross-cultural value research. Individualism promotes values associated with the autonomy of individuals, such as self-directedness, independence, and the fulfillment of personal goals. Collectivism gives priority to group-related values, like cooperation, conformity, and foregoing personal advantages for the sake of collective benefits. As a rough simplification, it is often suggested that individualism is more prominent in Western cultures, whereas collectivism is more commonly observed in Eastern cultures. === Psychology === As the study of mental phenomena and behavior, psychology contrasts with sociology and anthropology by focusing more on the perspective of individuals than the broader social and cultural contexts. Psychologists tend to understand values as abstract motivational goals or general principles about what matters. From this perspective, values differ from specific plans and intentions since they are stable evaluative tendencies not bound to concrete situations. Various psychological theories of values establish a close link between an individual's evaluative outlook and their personality. An early theory, formulated by psychologists Philip E. Vernon and Gordon Allport, understands personality as a collection of aspects unified by a coherent value system. It distinguishes between six personality types corresponding to the value spheres of theory, economy, aesthetics, society, politics, and religion. For example, people with theoretical personalities place special importance on the value of knowledge and the discovery of truth. Influenced by Vernon and Allport, psychologist Milton Rokeach conceptualized values as enduring beliefs about what goals and conduct are preferable. He divided values into the categories of instrumental and terminal values. He thought that a central aspect of personality lies in how people prioritize the values within each category. Psychologist Shalom Schwartz refined this approach by linking values to emotion and motivation. He explored how value rankings affect decisions in which the values of different options conflict. == History == The origin of value theory lies in the ancient period, with early reflections on the good life and the ends worth pursuing. Socrates (c. 469–399 BCE) identified the highest good as the right combination of knowledge, pleasure, and virtue, holding that active inquiry is associated with pleasure while knowledge of the Good leads to virtuous action. Plato (c. 428–347 BCE) conceived the Good as a universal and changeless idea. It is the highest form in his theory of forms, acting as the source of all other forms and the foundation of reality and knowledge. Aristotle (384–322 BCE) saw eudaimonia as the highest good and ultimate goal of human life. He understood eudaimonia as a form of happiness or flourishing achieved through the exercise of virtues in accordance with reason, leading to the full realization of human potential. Epicurus (c. 341–271 BCE) proposed a nuanced egoistic hedonism, stating that personal pleasure is the greatest good while recommending moderation to avoid the negative effects of excessive desires and anxiety about the future. According to the Stoics, a virtuous life following nature and reason is the highest good. They thought that self-mastery and rationality lead to a pleasant equanimity independent of external circumstances. Influenced by Plato, Plotinus (c. 
204/5–270 CE) held that the Good is the ultimate principle of reality from which everything emanates. For him, evil is not a distinct opposing principle but merely a deficiency or absence of being resulting from a missing connection to the Good. In ancient Indian philosophy, the idea that people are trapped in a cycle of rebirths arose around 600 BCE. Many traditions adopted it, arguing that liberation from this cycle is the highest good. Hindu philosophy distinguishes the four fundamental values of duty, economic wealth, sensory pleasure, and liberation. Many Hindu schools of thought prioritize the value of liberation. A similar outlook is found in ancient Buddhist philosophy, starting between the sixth and the fifth centuries BCE, where the cessation of suffering through the attainment of Nirvana is considered the ultimate goal. In ancient China, Confucius (c. 551–479 BCE) explored the role of self-cultivation in leading a virtuous life, viewing general benevolence towards humanity as the supreme virtue. In comparing the highest virtue to water, Laozi (6th century BCE) emphasized the importance of living in harmony with the natural order of the universe. Religious teachings influenced value theory in the medieval period. Early Christian thinkers, such as Augustine of Hippo (354–430 CE), adapted the theories of Plato and Plotinus into a religious framework. They identified God as the ultimate source of existence and goodness, seeing evil as a mere lack or privation of good. Drawing on Aristotelianism, Christian philosopher Thomas Aquinas (1224–1274 CE) said that communion with the divine, achieved through a beatific vision of God, is the highest end of humans. In Arabic–Persian philosophy, Avicenna (980–1037 CE) regarded the intellect as the highest human faculty. He thought that a contemplative life prepares humans for the greatest good, which is only attained in the afterlife when humans are free from bodily distractions. In Chinese thought, the early neo-Confucian philosopher Han Yu (768–824 CE) identified the sage as an ideal role model who, through self-cultivation, achieves personal integrity expressed in harmony between theory and action in daily life. In the early modern period, Thomas Hobbes (1588–1679) understood values as subjective phenomena that depend on a person's interests and examined mutual interests and benefits as a key principle of political decisions. David Hume (1711–1776) agreed with Hobbes's subjectivism, exploring how values differ from objective facts. Immanuel Kant (1724–1804) asserted that the highest good is happiness in proportion to moral virtue. He emphasized the primacy of virtue by respecting the moral law and the inherent value of people, adding that moral virtue is ideally, but not always, accompanied by personal happiness. Jeremy Bentham (1748–1832) and John Stuart Mill (1806–1873) formulated classical utilitarianism, combining a hedonist theory about value with a consequentialist theory about right action. Hermann Lotze (1817–1881) developed a philosophy of values, holding that values make the world meaningful as an ordered whole centered around goodness. Influenced by Lotze, the neo-Kantian philosopher Wilhelm Windelband (1848–1915) understood philosophy as a theory of values, claiming that universal values determine the principles that all subjects should follow, including the norms of knowledge and action. Friedrich Nietzsche (1844–1900) held that values are human creations. 
He criticized traditional values in general and Christian values in particular, calling for a revaluation of all values centered on life-affirmation, power, and excellence. In the early 20th century, Pragmatist philosopher John Dewey (1859–1952) defended axiological naturalism. He distinguished values from value judgments, adding that the skill of correct value assessment must be learned through experience. G. E. Moore (1873–1958) developed and refined various axiological concepts, such as organic unity and the contrast between intrinsic and extrinsic value. He defended non-naturalism about the nature of values and intuitionism about the knowledge of values. W. D. Ross (1877–1971) accepted and further elaborated on Moore's intuitionism, using it to formulate an axiological pluralism. R. B. Perry (1876–1957) and D. W. Prall (1886–1940) articulated systematic theories of value based on the idea that values originate in affective states such as interest and liking. Robert S. Hartman (1910–1973) developed formal axiology, saying that values measure the level to which a thing embodies its ideal concept. A. J. Ayer (1910–1989) proposed anti-realism about values, arguing that value statements merely express the speaker's approval or disapproval. A different type of anti-realism, introduced by J. L. Mackie (1917–1981), suggests that all value assertions are false since no values exist. G. H. von Wright (1916–2003) provided a conceptual analysis of the term good by distinguishing different meanings or varieties of goodness, such as the technical goodness of a good driver and the hedonic goodness of a good meal. In continental philosophy, Franz Brentano (1838–1917) formulated an early version of the fitting-attitude theory of value, saying that a thing is good if it is fitting to have a positive attitude towards it, such as love. In the 1890s, his students Alexius Meinong (1853–1920) and Christian von Ehrenfels (1859–1932) conceived the idea of a general theory of values. Edmund Husserl (1859–1938), another of Brentano's students, developed phenomenology and applied this approach to the study of values. Following Husserl's approach, Max Scheler (1874–1928) and Nicolai Hartmann (1882–1950) each proposed a comprehensive system of axiological ethics. Asserting that values have objective reality, they explored how different value types form a hierarchy and examined the problems of value conflicts and right decisions from this hierarchical perspective. Martin Heidegger (1889–1976) criticized value theory, claiming that it rests on a mistaken metaphysical perspective by understanding values as aspects of things. Existentialist philosopher Jean-Paul Sartre (1905–1980) suggested that values do not exist by themselves but are actively created, emphasizing the role of human freedom, responsibility, and authenticity in the process. == References == === Notes === === Citations === === Sources ===
Wikipedia/Value_theory
In the field of psychology, cognitive dissonance is described as a mental phenomenon in which people unknowingly hold fundamentally conflicting cognitions. Being confronted by situations that expose this conflict may ultimately result in some change in their cognitions or actions to bring them into closer alignment and so reduce the dissonance. Relevant items of cognition include people's actions, feelings, ideas, beliefs, values, and things in the environment. Cognitive dissonance exists without signs but surfaces through psychological stress when persons participate in an action that conflicts with one or more of these cognitions. According to this theory, when an action or idea is psychologically inconsistent with another, people automatically try to resolve the conflict, usually by reframing one side to make the combination congruent. Discomfort is triggered by beliefs clashing with new information or by having to conceptually resolve a matter that involves conflicting sides, whereby the individual tries to find a way to reconcile the contradictions to reduce their discomfort. In When Prophecy Fails: A Social and Psychological Study of a Modern Group That Predicted the Destruction of the World (1956) and A Theory of Cognitive Dissonance (1957), Leon Festinger proposed that human beings strive for internal psychological consistency to function mentally in the real world. Persons who experience internal inconsistency tend to become psychologically uncomfortable and are motivated to reduce the cognitive dissonance. They tend to make changes to justify the stressful behavior, either by adding new parts to the cognition causing the psychological dissonance (rationalization), believing that "people get what they deserve" (just-world fallacy), taking in specific pieces of information while rejecting or ignoring others (selective perception), or by avoiding circumstances and contradictory information likely to increase the magnitude of the cognitive dissonance (confirmation bias). Festinger described the avoidance of cognitive dissonance as follows: "Tell him you disagree and he turns away. Show him facts or figures and he questions your sources. Appeal to logic and he fails to see your point." == Originator == Leon Festinger, born in 1919 in New York City, was an American social psychologist whose contributions to psychology include the cognitive dissonance theory, social comparison theory, and the proximity effect. Festinger graduated from the City College of New York in 1939; he then received his PhD in child psychology from the University of Iowa. He was initially inspired to enter the field of psychology by Kurt Lewin, known as the "father of modern social psychology", and by Lewin's work in Gestalt psychology. Studying under Lewin for most of his academic career, Festinger returned to collaborate with him at the Research Center for Group Dynamics at the Massachusetts Institute of Technology. In a 2002 American Psychological Association article, Festinger is cited as the fifth most eminent psychologist of the 20th century, behind B. F. Skinner, Jean Piaget, Sigmund Freud, and Albert Bandura. Festinger's cognitive dissonance theory remains one of the most influential social theories in modern social psychology. Throughout his research, Festinger noticed that people often like to stick to consistent habits and routines to maintain order within their lives. 
These habits may include everyday activities like preferring a specific seat during the daily commute or eating meals at consistent times. Any disturbance to this order can lead to mental unease, which may manifest in altered thought processes or beliefs. Festinger concluded that the sole means of alleviating this discomfort is for the person to adjust either their actions or their beliefs to restore consistency. Since the publication of A Theory of Cognitive Dissonance in 1957, Festinger's findings have helped to explain people's personal biases, how people reframe situations in their heads to maintain a positive self-image, and why people may pursue behaviors that conflict with their judgments as they seek out or reject certain information. Coping with the nuances of contradictory ideas or experiences is mentally stressful, as it requires energy and effort to hold seemingly opposite ideas that all appear true. Festinger argued that some people would inevitably resolve the dissonance by blindly believing whatever they wanted to believe. == Relations among cognitions == To function in the reality of society, human beings continually adjust the correspondence of their mental attitudes and personal actions; such continual adjustments, between cognition and action, result in one of three relationships with reality: a consonant relationship, in which a cognition or action is consistent with the other (e.g., not wanting to become drunk when out for dinner and ordering water rather than wine); an irrelevant relationship, in which a cognition or action is unrelated to the other (e.g., not wanting to become drunk when out and wearing a shirt); and a dissonant relationship, in which a cognition or action is inconsistent with the other (e.g., not wanting to become drunk when out, but then drinking more wine anyway). === Magnitude of dissonance === The term "magnitude of dissonance" refers to the level of discomfort caused to the person. It can arise from the relationship between two different internal beliefs, or from an action that is incompatible with the beliefs of the person. Two factors determine the degree of psychological dissonance caused by two conflicting cognitions or by two conflicting actions. The first is the importance of the cognitions: the greater the personal value of the elements, the greater the magnitude of the dissonance in the relation. When the importance of the two dissonant items is high, it is difficult to determine which action or thought is correct. Both have had a place of truth, at least subjectively, in the mind of the person. Therefore, when the ideals or actions now clash, it is difficult for the individual to decide which takes priority. The second is the ratio of cognitions: the proportion of dissonant to consonant elements. There is a level of discomfort within each person that is acceptable for living. When a person is within that comfort level, the dissonant factors do not interfere with functioning. However, when dissonant factors become abundant relative to consonant ones, one goes through a process to regulate and bring the ratio back to an acceptable level. Once a subject chooses to keep one of the dissonant factors, they quickly forget the other to restore peace of mind. There is always some degree of dissonance within a person as they go about making decisions, due to the changing quantity and quality of knowledge and wisdom that they gain. The magnitude itself is a subjective measurement, since the reports are self-reported, and there is no objective way as yet to get a clear measurement of the level of discomfort. 
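One way to make the interplay of these two factors concrete is a simple importance-weighted ratio of dissonant to total relevant cognitions. The Python sketch below is only a rough, illustrative formalization, not a measure proposed in the sources discussed here; the weighting scheme and the example numbers are assumptions.

```python
# Illustrative (assumed) formalization of the "magnitude of dissonance":
# an importance-weighted ratio of dissonant cognitions to all relevant cognitions.

def dissonance_magnitude(dissonant, consonant):
    """dissonant, consonant: lists of importance weights (0 = trivial, 1 = central)."""
    total = sum(dissonant) + sum(consonant)
    if total == 0:
        return 0.0  # no relevant cognitions, hence no dissonance
    return sum(dissonant) / total

# Hypothetical smoker: the important belief "smoking harms my health" clashes
# with the behavior, partly offset by the consonant belief "smoking calms me down".
print(round(dissonance_magnitude(dissonant=[0.9], consonant=[0.4]), 2))  # 0.69
print(round(dissonance_magnitude(dissonant=[0.2], consonant=[0.9]), 2))  # 0.18
```

On this toy measure, dissonance grows with both the number and the importance of the dissonant elements and shrinks as consonant elements accumulate, mirroring the two factors described above.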
== Reduction == Cognitive dissonance theory proposes that people seek psychological consistency between their expectations of life and the existential reality of the world. To function by that expectation of existential consistency, people continually reduce their cognitive dissonance in order to align their cognitions (perceptions of the world) with each other and with their actions. The creation and establishment of psychological consistency allows the person affected by cognitive dissonance to lessen mental stress by actions that reduce the magnitude of the dissonance, achieved either by changing the contradictory element, by justifying it, or by becoming indifferent to the existential contradiction that is inducing the mental stress. In practice, people reduce the magnitude of their cognitive dissonance in four ways: changing the behavior or the cognition ("I'll eat no more of this doughnut."); justifying the behavior or the cognition by changing the conflicting cognition ("I'm allowed to cheat my diet every once in a while."); justifying the behavior or the cognition by adding new behaviors or cognitions ("I'll spend thirty extra minutes at the gymnasium to work off the doughnut."); or ignoring or denying information that conflicts with existing beliefs ("This doughnut is not a high-sugar food."). Three cognitive biases are proposed as components of cognitive dissonance (they are not fully distinct and draw from each other): the bias blind spot, the tendency to perceive oneself as less susceptible to biases than others; the better-than-average effect, the tendency to believe that one is overall superior to others in terms of ability and character; and confirmation bias, the tendency to interpret and understand information in a way that supports preexisting beliefs, thoughts, and feelings. Having cognitions that are congruent, or at least perceived as congruent, is required in order to function in the real world, according to the results of The Psychology of Prejudice (2006), wherein people facilitate their functioning in the real world by employing human categories (i.e., sex and gender, age and race, etc.) with which they manage their social interactions with other people. Based on a brief overview of models and theories related to cognitive consistency from many different scientific fields, such as social psychology, perception, neurocognition, learning, motor control, system control, ethology, and stress, it has even been proposed that "all behaviour involving cognitive processing is caused by the activation of inconsistent cognitions and functions to increase perceived consistency"; that is, all behaviour functions to reduce cognitive inconsistency at some level of information processing. Indeed, the involvement of cognitive inconsistency has long been suggested for behaviors related to, for instance, curiosity, aggression, and fear, while it has also been suggested that the inability to satisfactorily reduce cognitive inconsistency may, depending on the type and size of the inconsistency, result in stress. === Selective exposure === Another means to reduce cognitive dissonance is selective exposure. This theory has been discussed since the early days of Festinger's proposal of cognitive dissonance. He noticed that people would selectively expose themselves to some media over others; specifically, they would avoid dissonant messages and prefer consonant messages. Through selective exposure, people actively (and selectively) choose what to watch, view, or read so that it fits their current state of mind, mood, or beliefs. 
In other words, consumers select attitude-consistent information and avoid attitude-challenging information. This can be applied to media, news, music, and any other messaging channel. The idea is that choosing something in opposition to how one feels or what one believes will increase cognitive dissonance. For example, a 1992 study in a home for the elderly examined the loneliest residents, those who did not have family or frequent visitors. The residents were shown a series of documentaries: three that featured a "very happy, successful elderly person", and three that featured an "unhappy, lonely elderly person." After watching the documentaries, the residents indicated that they preferred the media featuring the unhappy, lonely person over the happy person. This can be attributed to their feeling lonely and to the cognitive dissonance of watching somebody their own age being happy and successful. This study illustrates how people select media that aligns with their mood, selectively exposing themselves to people and experiences that resemble their own situation. It is more comfortable to watch a movie about a character who is similar to oneself than to watch one about someone of the same age who is more successful. Another example is how people mostly consume media that aligns with their political views. In a study done in 2015, participants were shown "attitudinally consistent, challenging, or politically balanced online news." Results showed that the participants trusted attitude-consistent news the most, regardless of the source. It is evident that the participants actively selected media that aligned with their beliefs rather than opposing media. In fact, recent research has suggested that while a discrepancy between cognitions drives individuals to crave attitude-consistent information, the experience of negative emotions drives individuals to avoid counter-attitudinal information. In other words, it is the psychological discomfort which activates selective exposure as a dissonance-reduction strategy. == Paradigms == There are four theoretic paradigms of cognitive dissonance, the mental stress people experience when exposed to information that is inconsistent with their beliefs, ideals, or values: belief disconfirmation, induced compliance, free choice, and effort justification. These paradigms respectively address what happens after a person acts inconsistently with their intellectual perspectives, what happens after a person makes decisions, and what the effects are on a person who has expended much effort to achieve a goal. Common to each paradigm of cognitive dissonance theory is the tenet that people invested in a given perspective will, when confronted with contrary evidence, expend great effort to justify retaining the challenged perspective. 
=== Belief disconfirmation === The contradiction of a belief, ideal, or system of values causes cognitive dissonance that can be resolved by changing the challenged belief; yet, instead of effecting change, the resultant mental stress restores psychological consonance to the person by misperceiving, rejecting, or refuting the contradiction, by seeking moral support from people who share the contradicted beliefs, or by acting to persuade other people that the contradiction is unreal. The early hypothesis of belief contradiction presented in When Prophecy Fails (1956) reported that faith deepened among the members of an apocalyptic religious cult, despite the failed prophecy of an alien spacecraft soon to land on Earth to rescue them from earthly corruption. At the determined place and time, the cult assembled; they believed that only they would survive planetary destruction; yet the spaceship did not arrive. The confounded prophecy caused them acute cognitive dissonance: Had they been victims of a hoax? Had they vainly donated away their material possessions? To resolve the dissonance between apocalyptic, end-of-the-world religious beliefs and earthly, material reality, most of the cult restored their psychological consonance by choosing to believe a less mentally stressful idea to explain the missed landing: that the aliens had given planet Earth a second chance at existence, which, in turn, empowered them to re-direct their religious cult to environmentalism and social advocacy to end human damage to planet Earth. On overcoming the confounded belief by changing to global environmentalism, the cult increased in numbers by proselytism. The study The Rebbe, the Messiah, and the Scandal of Orthodox Indifference (2008) reported the belief contradiction that occurred in the Chabad Orthodox Jewish congregation, who believed that their Rebbe, Menachem Mendel Schneerson, was the Messiah. When he died of a stroke in 1994, instead of accepting that their Rebbe was not the Messiah, some of the congregation proved indifferent to that contradictory fact and continued claiming that Schneerson was the Messiah and that he would soon return from the dead. === Induced compliance === In Cognitive Consequences of Forced Compliance (1959), the investigators Leon Festinger and Merrill Carlsmith asked students to spend an hour doing tedious tasks, e.g., turning pegs a quarter-turn at fixed intervals. The procedure included seventy-one male students attending Stanford University, who were asked to complete a series of repetitive, mundane tasks and then to convince a separate group of participants that the tasks were fun and exciting. Once the subjects had done the tasks, the experimenters asked one group of subjects to speak with another subject (an actor) and persuade that impostor-subject that the tedious tasks were interesting and engaging. Subjects of one group were paid twenty dollars ($20); those in a second group were paid one dollar ($1); and those in the control group were not asked to speak with the impostor-subject. At the conclusion of the study, when asked to rate the tedious tasks, the subjects of the second group (paid $1) rated the tasks more positively than did the subjects in the first group (paid $20), and the first group (paid $20) rated the tasks just slightly more positively than did the subjects of the control group; the responses of the paid subjects were evidence of cognitive dissonance. 
The researchers, Festinger and Carlsmith, proposed that the subjects experienced dissonance between the conflicting cognitions: "I told someone that the task was interesting" and "I actually found it boring." The subjects paid one dollar were induced to comply, compelled to internalize the "interesting task" mental attitude because they had no other justification. The subjects paid twenty dollars were induced to comply by way of an obvious, external justification for internalizing the "interesting task" mental attitude and experienced a lower degree of cognitive dissonance than did those paid only one dollar. The subjects paid one dollar did not receive sufficient compensation for the lie they were asked to tell. Because of this insufficiency, these participants convinced themselves to believe that what they were doing was exciting. This way, they felt better about telling the next group of participants that it was exciting because, technically, they weren't lying. ==== Forbidden behavior paradigm ==== The Effect of the Severity of Threat on the Devaluation of Forbidden Behavior (1963), a variant of the induced-compliance paradigm by Elliot Aronson and Carlsmith, examined self-justification in children. Children were left in a room with toys, including a greatly desirable steam shovel, the forbidden toy. Upon leaving the room, the experimenter told one-half of the group of children that there would be severe punishment if they played with the steam-shovel toy and told the second half of the group that there would be a mild punishment for playing with the forbidden toy. All of the children refrained from playing with the forbidden toy (the steam shovel). Later, when the children were told that they could freely play with any toy they wanted, the children in the mild-punishment group were less likely to play with the steam shovel (the forbidden toy), despite the removal of the threat of mild punishment. The children threatened with mild punishment had to justify, to themselves, why they did not play with the forbidden toy. The degree of punishment was not strong enough by itself to resolve their cognitive dissonance; the children had to convince themselves that playing with the forbidden toy was not worth the effort. The Efficacy of Musical Emotions Provoked by Mozart's Music for the Reconciliation of Cognitive Dissonance (2012), a variant of the forbidden-toy paradigm, indicated that listening to music reduces the development of cognitive dissonance. Without music in the background, the control group of four-year-old children were told to avoid playing with a forbidden toy. After playing alone, the control-group children later devalued the importance of the forbidden toy. In the variable group, classical music played in the background while the children played alone; the children in this group did not later devalue the forbidden toy. The researchers, Nobuo Masataka and Leonid Perlovsky, concluded that music might inhibit cognitions that induce cognitive dissonance. Music is a stimulus that can diminish post-decisional dissonance; in an earlier experiment, Washing Away Postdecisional Dissonance (2010), the researchers indicated that the actions of hand-washing might inhibit the cognitions that induce cognitive dissonance. That study later failed to replicate. === Free choice === In the study Post-decision Changes in Desirability of Alternatives (1956), 225 female students rated domestic appliances and then were asked to choose one of two appliances as a gift. 
The results of the second round of ratings indicated that the women students increased their ratings of the domestic appliance they had selected as a gift and decreased their ratings of the appliances they rejected. This type of cognitive dissonance occurs in a person who is faced with a difficult decision and when the rejected choice may still have desirable characteristics to the chooser. The action of deciding provokes the psychological dissonance consequent to choosing X instead of Y, despite little difference between X and Y; the decision "I chose X" is dissonant with the cognition that "There are some aspects of Y that I like". The study Choice-induced Preferences in the Absence of Choice: Evidence from a Blind Two-choice Paradigm with Young Children and Capuchin Monkeys (2010) reports similar results in the occurrence of cognitive dissonance in human beings and in animals. Peer Effects in Pro-Social Behavior: Social Norms or Social Preferences? (2013) indicated that with internal deliberation, the structuring of decisions among people can influence how a person acts. The study suggested that social preferences and social norms can explain peer effects in decision making. The study observed that choices made by the second participant would influence the first participant's effort to make choices and that inequity aversion, the preference for fairness, is the paramount concern of the participants. === Effort justification === Cognitive dissonance occurs in a person who voluntarily engages in (physically or ethically) unpleasant activities to achieve a goal. The mental stress caused by the dissonance can be reduced by the person exaggerating the desirability of the goal. In The Effect of Severity of Initiation on Liking for a Group (1956), to qualify for admission to a discussion group, two groups of people underwent an embarrassing initiation of varied psychological severity. The first group of subjects were to read aloud twelve sexual words considered obscene; the second group of subjects were to read aloud twelve sexual words not considered obscene. Both groups were given headphones to unknowingly listen to a recorded discussion about animal sexual behaviour, which the researchers designed to be dull and banal. As the subjects of the experiment, the groups of people were told that the animal-sexuality discussion actually was occurring in the next room. The subjects whose strong initiation required reading aloud obscene words evaluated the people of their group as more-interesting persons than the people of the group who underwent the mild initiation to the discussion group. In Washing Away Your Sins: Threatened Morality and Physical Cleansing (2006), the results indicated that a person washing their hands is an action that helps resolve post-decisional cognitive dissonance because the mental stress usually was caused by the person's ethical–moral self-disgust, which is an emotion related to the physical disgust caused by a dirty environment. The study The Neural Basis of Rationalization: Cognitive Dissonance Reduction During Decision-making (2011) indicated that participants rated 80 names and 80 paintings based on how much they liked the names and paintings. To give meaning to the decisions, the participants were asked to select names that they might give to their children. For rating the paintings, the participants were asked to base their ratings on whether or not they would display such art at home. 
The results indicated that when the decision is meaningful to the person deciding, the rating is likely based on their attitudes (positive, neutral, or negative) towards the name and towards the painting in question. The participants also were asked to rate some of the objects twice and believed that, at session's end, they would receive two of the paintings they had positively rated. The results indicated a great increase in the positive attitude of the participant towards the liked pair of things, whilst also increasing the negative attitude towards the disliked pair of things. The double ratings of pairs of things towards which the rating participant had a neutral attitude showed no changes during the rating period. The existing attitudes of the participant were reinforced during the rating period, and the participants experienced cognitive dissonance when confronted by a liked name paired with a disliked painting. In the study Does effort increase or decrease reward validation? Considerations from cognitive dissonance theory (2024), the authors discovered that effort justification and effort discounting may determine the amount of reward valuation a person feels after completing a task. Effort justification is the term used for high efforts leading to high rewards. Effort discounting is the term used for high efforts leading to low rewards. These terms relate to cognitive dissonance because humans enjoy controlling the efforts that may lead to rewards. This study determined that having high control can lead to higher efforts, leading to higher rewards. Similarly, having low control can lead to higher efforts yet lower rewards. These results indicate that humans seek highly controllable situations and actions to receive rewards for their efforts. The ability to control one's actions is crucial for eliminating the effects of cognitive dissonance. It is also essential in the process of decision-making without any influence, whether positive or negative, from others. == Examples == === Meat-eating === Meat-eating can involve discrepancies between the behavior of eating meat and various ideals that the person holds. Some researchers call this form of moral conflict the meat paradox. Hank Rothgerber posited that meat eaters may encounter a conflict between their eating behavior and their affections toward animals. This occurs when the dissonant state involves recognition of one's behavior as a meat eater and a belief, attitude, or value that this behavior contradicts. The person in this state may attempt to employ various methods, including avoidance, willful ignorance, dissociation, perceived behavioral change, and do-gooder derogation, to prevent this form of dissonance from occurring. Once it has occurred, they may reduce it through motivated cognitions, such as denigrating animals, offering pro-meat justifications, or denying responsibility for eating meat. The extent of cognitive dissonance with regard to meat eating can vary depending on the attitudes and values of the individual involved, because these can affect whether or not they see any moral conflict between their values and what they eat. For example, individuals who are more dominance-minded and who value having a masculine identity are less likely to experience cognitive dissonance because they are less likely to believe eating meat is morally wrong. Others often cope with this cognitive dissonance through ignorance (ignoring the known realities of their food source) or explanations loosely tied to taste. 
The psychological phenomenon intensifies if the minds or human-like qualities of animals are explicitly mentioned. === Smoking === The study Patterns of Cognitive Dissonance-reducing Beliefs Among Smokers: A Longitudinal Analysis from the International Tobacco Control (ITC) Four Country Survey (2012) indicated that smokers use justification beliefs to reduce their cognitive dissonance about smoking tobacco and the negative consequences of smoking it. The participants were classified into three groups:
Continuing smokers (smoking and no attempt to quit since the previous round of the study)
Successful quitters (quit during the study and did not use tobacco from the time of the previous round of the study)
Failed quitters (quit during the study, but relapsed to smoking at the time of the study)
To reduce cognitive dissonance, the participant smokers adjusted their beliefs to correspond with their actions:
Functional beliefs ("Smoking calms me down when I am stressed or upset."; "Smoking helps me concentrate better."; "Smoking is an important part of my life."; and "Smoking makes it easier for me to socialize.")
Risk-minimizing beliefs ("The medical evidence that smoking is harmful is exaggerated."; "One has to die of something, so why not enjoy yourself and smoke?"; and "Smoking is no more risky than many other things people do.")
=== Littering === Littering, even when the person knows it is against the law, wrong, and harmful to the environment, is a prominent example of cognitive dissonance, especially if the person feels bad after littering but continues to do so. Between November 2015 and March 2016, a study conducted at the Xitou Nature Education Area in Taiwan examined littering by tourists. Researchers analyzed the relationships between tourists' environmental attitudes, cognitive dissonance, and vandalism. In this study, 500 questionnaires were distributed and 499 were returned. The results indicate that older tourists had better attitudes towards the environment and cared more about it, and that older tourists who cared more for outdoor activities were less likely to litter. The younger tourists, on the other hand, littered more overall and experienced more cognitive dissonance, often regretting or reflecting on the littering afterwards. === Unpleasant medical screenings === In a study titled Cognitive Dissonance and Attitudes Toward Unpleasant Medical Screenings (2016), researchers Michael R. Ent and Mary A. Gerend informed the study participants about a discomforting test for a specific (fictitious) virus called the "human respiratory virus-27". The study used a fake virus to prevent participants from having prior thoughts, opinions, and feelings about the virus that would interfere with the experiment. The study participants were in two groups; one group was told that they were actual candidates for the virus-27 test, and the second group was told they were not candidates for the test. The researchers reported, "We predicted that [study] participants who thought that they were candidates for the unpleasant test would experience dissonance associated with knowing that the test was both unpleasant and in their best interest—this dissonance was predicted to result in unfavorable attitudes toward the test." === Related phenomena === Cognitive dissonance may also occur when people seek to explain or justify their beliefs, often without questioning the validity of their claims.
After the earthquake of 1934, Bihar, India, irrational rumors based upon fear quickly reached the adjoining communities unaffected by the disaster because those people, although not in physical danger, psychologically justified their anxieties about the earthquake. The same pattern can be observed when one's convictions are met with a contradictory order. In a study conducted among 6th grade students, after being induced to cheat in an academic examination, students judged cheating less harshly. Nonetheless, the confirmation bias identifies how people readily read information that confirms their established opinions and readily avoid reading information that contradicts their opinions. The confirmation bias is apparent when a person confronts deeply held political beliefs, i.e. when a person is greatly committed to their beliefs, values, and ideas. If a contradiction occurs between how a person feels and how a person acts, one's perceptions and emotions align to alleviate stress. The Ben Franklin effect refers to that statesman's observation that the act of performing a favor for a rival leads to increased positive feelings toward that individual. It is also possible that one's emotions be altered to minimize the regret of irrevocable choices. At a hippodrome, bettors had more confidence in their horses after the betting than before. == Applications == === Education === The management of cognitive dissonance readily influences the apparent motivation of a student to pursue education. The study Turning Play into Work: Effects of Adult Surveillance and Extrinsic Rewards on Children's Intrinsic Motivation (1975) indicated that the application of the effort justification paradigm increased student enthusiasm for education with the offer of an external reward for studying; students in pre-school who completed puzzles based upon an adult promise of reward were later less interested in the puzzles than were students who completed the puzzle-tasks without the promise of a reward. The incorporation of cognitive dissonance into models of basic learning-processes to foster the students' self-awareness of psychological conflicts among their personal beliefs, ideals, and values and the reality of contradictory facts and information, requires the students to defend their personal beliefs. Afterwards, the students are trained to objectively perceive new facts and information to resolve the psychological stress of the conflict between reality and the student's value system. Moreover, educational software that applies the derived principles facilitates the students' ability to successfully handle the questions posed in a complex subject. Meta-analysis of studies indicates that psychological interventions that provoke cognitive dissonance in order to achieve a directed conceptual change do increase students' learning in reading skills and about science. === Psychotherapy === The general effectiveness of psychotherapy and psychological intervention is partly explained by the theory of cognitive dissonance. In that vein, social psychology proposed that the mental health of the patient is positively influenced by his and her action in freely choosing a specific therapy and in exerting the required, therapeutic effort to overcome cognitive dissonance. 
That effect was indicated in the results of the study Effects of Choice on Behavioral Treatment of Overweight Children (1983), wherein the children's belief that they had freely chosen the type of therapy they received resulted in each overweight child losing a greater amount of excess body weight. In the study Reducing Fears and Increasing Attentiveness: The Role of Dissonance Reduction (1980), people with ophidiophobia (fear of snakes) who invested much effort in activities of little therapeutic value for them (experimentally represented as legitimate and relevant) showed improved alleviation of the symptoms of their phobia. Likewise, the results of Cognitive Dissonance and Psychotherapy: The Role of Effort Justification in Inducing Weight Loss (1985) indicated that patients felt better when justifying their efforts and therapeutic choices towards effectively losing weight, and that the effort expended in therapy can predict long-term change in the patients' perceptions. === Social behavior === Cognitive dissonance is used to promote social behaviours considered positive, such as increased condom use. Other studies indicate that cognitive dissonance can be used to encourage people to act pro-socially, such as through campaigns against public littering, campaigns against racial prejudice, and compliance with anti-speeding campaigns. The theory can also be used to explain reasons for donating to charity. Cognitive dissonance can be applied in social areas such as racism and racial hatred. Acharya of Stanford and Blackwell and Sen of Harvard state that cognitive dissonance increases when an individual commits an act of violence toward someone from a different ethnic or racial group and decreases when the individual does not commit any such act of violence. Research from Acharya, Blackwell and Sen shows that individuals committing violence against members of another group develop hostile attitudes towards their victims as a way of minimizing cognitive dissonance. Importantly, the hostile attitudes may persist even after the violence itself declines (Acharya, Blackwell, and Sen, 2015). The application provides a social psychological basis for the constructivist viewpoint that ethnic and racial divisions can be socially or individually constructed, possibly from acts of violence (Fearon and Laitin, 2000). Their framework speaks to this possibility by showing how violent actions by individuals can affect individual attitudes, fostering ethnic or racial animosity (Acharya, Blackwell, and Sen, 2015). === COVID-19 === The COVID-19 pandemic was an extreme public health crisis, with cases rising past one hundred million and deaths approaching four million worldwide. Researchers such as Lyu and Wehby studied the effects of wearing a face mask on the spread of COVID-19. They found evidence suggesting that the spread of COVID-19 was reduced by about 2%, averting nearly 200,000 cases by the end of the following month. Despite such findings being endorsed and encouraged by major health organizations, there was still resistance to wearing masks and keeping a safe distance from others. When the COVID-19 vaccine was eventually released to the public, this only made the resistance stronger. The Ad Council launched an extensive advertising campaign urging people to follow the health guidelines established by the CDC and WHO and attempted to persuade people to eventually become vaccinated.
Polls on public opinion about safety measures to prevent the spread of the virus showed that between 80% and 90% of adults in the United States agreed that these safety procedures and vaccines were necessary. The cognitive dissonance became apparent when people were polled about their own behavior. Despite the general opinion that wearing a mask, social distancing, and receiving the vaccine are all things the public should be doing, only 50% of respondents admitted to doing these things all or even most of the time. People believe that partaking in preventative measures is essential, but fail to follow through with actually doing them. To convince people to behave in line with their beliefs, it is essential to remind people of a fact that they believe is true, and then remind them of times in the past when they went against this. The hypocrisy paradigm resolves inconsistent cognitions through a change in behavior. Data were collected from participants who were asked to write statements supporting mask use and social distancing, positions they agreed with. Then the participants were told to think about recent situations in which they failed to do this. The prediction was that the dissonance would be a motivating factor in getting people to be compliant with COVID-19 safety measures. When contacted one week later, the participants reported their behaviors, including social distancing and mask-wearing. === Personal responsibility === A study conducted by Cooper and Worchel (1970) examined personal responsibility regarding cognitive dissonance. The goal was to investigate responsibility concerning foreseen consequences and how this might cause dissonance; 124 female participants were asked to complete problem-solving tasks while working with a partner. They either chose a partner with negative traits or were assigned one. A portion of the participants was aware of the negative traits their partner possessed; however, the remaining participants were unaware. Cooper hypothesized that if the participants knew about their negative partner beforehand, they would have cognitive dissonance; however, he also believed that the participants would be inclined to attempt to like their partners in an attempt to reduce this dissonance. The study shows that personal choice has the power to predict attitude changes. === Consumer behavior === Pleasure is one of the main factors in our modern culture of consumerism. Once a consumer has chosen to purchase a specific item, they often fear that another choice may have brought them more pleasure. Post-purchase dissonance occurs when a purchase is final, voluntary, and significant to the person. This dissonance is a mental discomfort arising from the possibility of dissatisfaction with the purchase, or the regret of not purchasing a different, potentially more useful or satisfactory good. Consequently, the buyer will "seek to reduce dissonance by increasing the perceived attractiveness of the chosen alternative and devaluing the non chosen item, seeking out information to confirm the decision, or changing attitudes to conform to the decision." In other words, the buyer justifies their purchase to themselves in whatever way they can, in an attempt to convince themselves that they made the right decision and to diminish regret. Usually these feelings of regret are more prevalent after online purchases as opposed to in-store purchases.
This happens because an online consumer does not have the opportunity to experience the product in its entirety, and must rely on what information is available through photos and descriptions. On the other hand, in-store shopping can sometimes be even more of an issue for consumers in regard to impulse buying. While the ease of online shopping proves hard to resist for impulse buyers, in-store shoppers may be influenced by who they are with. Shopping with friends increases the risk of impulse buying, especially compared to shopping with people such as one's parents. Post-purchase dissonance does not only affect the consumer; brands are dependent on customer loyalty, and cognitive dissonance can influence that loyalty. The more positive experiences and emotions that a customer associates with a specific brand, the more likely they are to buy from that brand in the future, recommend it to friends, etc. The opposite is also true, meaning any feelings of discomfort, dissatisfaction, and regret will weaken the consumer's perception of the brand and make them less likely to return as a customer. When consumers encounter unexpected prices, they adopt three methods to reduce cognitive dissonance: (i) Employ a strategy of continual information; (ii) Employ a change in attitude; and (iii) Engage in minimisation. Consumers employ the strategy of continual information by engaging in bias and searching for information that supports prior beliefs. Consumers might search for information about other retailers and substitute products consistent with their beliefs. Alternatively, consumers might change attitude, such as re-evaluating price in relation to external reference-prices or associating high prices and low prices with quality. Minimisation reduces the importance of the elements of the dissonance; consumers tend to minimise the importance of money, and thus of shopping around, saving, and finding a better deal. High impulse buying is associated with increased post-purchase cognitive dissonance, where consumers experience discomfort and regret after purchasing. === Politics === Cognitive dissonance theory might suggest that since votes are an expression of preference or beliefs, even the act of voting might cause someone to defend the actions of the candidate for whom they voted, and if the decision was close then the effects of cognitive dissonance should be greater. This effect was studied over the 6 presidential elections of the United States between 1972 and 1996, and it was found that the opinion differential between the candidates changed more before and after the election than the opinion differential of non-voters. In addition, elections where the voter had a favorable attitude toward both candidates, making the choice more difficult, had the opinion differential of the candidates change more dramatically than those who only had a favorable opinion of one candidate. What was not studied were the cognitive dissonance effects in cases where the person had unfavorable attitudes toward both candidates. The 2016 U.S. election held historically high unfavorable ratings for both candidates. After the 2020 United States presidential election, which was won by Joe Biden, supporters of former President Donald Trump, who had lost the election to Biden, questioned the outcome of the election, citing voter fraud. This continued after such claims were dismissed as false by numerous judges, election officials, U.S. state governors, and federal government agencies. 
This was described as an example of Trump supporters experiencing cognitive dissonance. Electoral politics can feature more than just policy disagreements. People seek to reduce their cognitive dissonance when making any choice. Engagement in the electoral process can change policy preferences, drawing on the framework of cognitive dissonance theory. The idea suggests that the cognitive dissonance created by being vocal about one's support and then losing leads voters to align their preferences more closely with those of the supported candidate. Voting itself is a supportive activity that may lead to preference changes. In recent years, social media has also affected politics. Recognizing this, creators can profit from the social media relationship between voters and candidates. For example, a celebrity endorsing a candidate can cause their followers to lose sight of policy and focus on the opinion of the person they follow, causing cognitive dissonance. Social media trends like "Kamala is Brat" have rallied fans. As a result, voters can become less focused on a candidate's plans for office and more on the social media attention stirred up. === Communication === The cognitive dissonance theory of communication was initially advanced by American psychologist Leon Festinger in the late 1950s. Festinger theorized that cognitive dissonance usually arises when a person holds two or more incompatible beliefs simultaneously. This is a normal occurrence since people encounter different situations that invoke conflicting thought sequences. This conflict results in psychological discomfort. According to Festinger, people experiencing a thought conflict try to reduce the psychological discomfort by attempting to achieve an emotional equilibrium. This equilibrium is achieved in three main ways. First, the person may downplay the importance of the dissonant thought. Second, the person may attempt to outweigh the dissonant thought with consonant thoughts. Lastly, the person may incorporate the dissonant thought into their current belief system. Dissonance plays an important role in persuasion. To persuade people, you must cause them to experience dissonance, and then offer your proposal as a way to resolve the discomfort. Although there is no guarantee your audience will change their minds, the theory maintains that without dissonance, there can be no persuasion. Without a feeling of discomfort, people are not motivated to change. Similarly, it is the feeling of discomfort which motivates people to perform selective exposure (i.e., avoiding disconfirming information) as a dissonance-reduction strategy. Dissonance also plays an essential role in social collaboration. In the study Temporal interplay between cognitive conflict and attentional markers in social collaboration (2024), the authors determined that the context and demands of social environments affect one's willingness to collaborate socially. Some social interactions require the ability to read social cues and body language, and others do not. The authors used robots to simulate different social interactions. They discovered that the human brain is equipped to deal with the complex aspects of social collaboration, and that the brain changes its reaction to these aspects depending on the type of interaction the person faces. To summarize, dissonance can affect how the brain reacts to specific social cues and interactions by making it difficult to differentiate between types of interactions, and it can also make it harder to collaborate socially with others.
=== Artificial intelligence === It is hypothesized that introducing cognitive dissonance into machine learning may be able to assist in the long-term aim of developing 'creative autonomy' on the part of agents, including in multi-agent systems (such as games), and ultimately in the development of 'strong' forms of artificial intelligence, including artificial general intelligence. Artificial intelligence has developed over the years and is used for writing, generating ideas, and generating art, among other things. Artificial intelligence is also increasingly used in education. AI-driven education can contribute to cognitive dissonance. For example, a negative output from an AI system may conflict with a student's self-concept, prior knowledge, or expectations. Generative AI tools are already coming to the forefront in education. With students using artificial intelligence for daily tasks, it is important that educators understand what this might mean for higher-education practice. Students can be reluctant to have open conversations about their use of AI, making it difficult for educators to understand its effects on students in that environment. When professors and other educators say one thing and the AI application generates another, students can develop a sense of cognitive dissonance. === Feminism === Cognitive dissonance theory can be applied to many aspects of feminism. For instance, the study Dissonance and defensiveness: orienting affects in online feminist cultures (2024) found that social media culture provides conflicting ideas and images of femininity. These thoughts and ideas may confuse those who identify with feminist qualities. The digital world can connect people from around the globe, but it can also spread hatred and falsehoods about feminists and their beliefs. Cognitive dissonance can put pressure on feminists through their education, interactions, and relationships with others, which makes careful sourcing and research all the more important. Feminism and diversity have been major topics in politics under the Trump administration, and cognitive dissonance may be used to persuade followers and voters to follow one ideal over another. == Alternative paradigms == === Self-perception theory === In Self-perception: An alternative interpretation of cognitive dissonance phenomena (1967), the social psychologist Daryl Bem proposed the self-perception theory whereby people do not think much about their attitudes, even when engaged in a conflict with another person. The theory of self-perception proposes that people develop attitudes by observing their own behaviour and concluding that their attitudes caused the behaviour they observe; this is especially true when internal cues are either ambiguous or weak. Therefore, the person is in the same position as an observer who must rely upon external cues to infer their inner state of mind. Self-perception theory proposes that people adopt attitudes without access to their states of mood and cognition. As such, the experimental subjects of the Festinger and Carlsmith study (Cognitive Consequences of Forced Compliance, 1959) inferred their mental attitudes from their own behaviour. When the subject-participants were asked: "Did you find the task interesting?", the participants decided that they must have found the task interesting, because that is what they told the questioner.
Their replies suggested that the participants who were paid twenty dollars had an external incentive to adopt that positive attitude, and likely perceived the twenty dollars as the reason for saying the task was interesting, rather than saying the task actually was interesting. The theory of self-perception (Bem) and the theory of cognitive dissonance (Festinger) make identical predictions, but only the theory of cognitive dissonance predicts the presence of unpleasant arousal, of psychological distress, which were verified in laboratory experiments. In The Theory of Cognitive Dissonance: A Current Perspective (Aronson, Berkowitz, 1969), Elliot Aronson linked cognitive dissonance to the self-concept: That mental stress arises when the conflicts among cognitions threatens the person's positive self-image. This reinterpretation of the original Festinger and Carlsmith study, using the induced-compliance paradigm, proposed that the dissonance was between the cognitions "I am an honest person." and "I lied about finding the task interesting." The study Cognitive Dissonance: Private Ratiocination or Public Spectacle? (Tedeschi, Schlenker, etc. 1971) reported that maintaining cognitive consistency, rather than protecting a private self-concept, is how a person protects their public self-image. Moreover, the results reported in the study I'm No Longer Torn After Choice: How Explicit Choices Implicitly Shape Preferences of Odors (2010) contradict such an explanation, by showing the occurrence of revaluation of material items, after the person chose and decided, even after having forgotten the choice. === Balance theory === Fritz Heider proposed a motivational theory of attitudinal change that derives from the idea that humans are driven to establish and maintain psychological balance. The driving force for this balance is known as the consistency motive, which is an urge to maintain one's values and beliefs consistent over time. Heider's conception of psychological balance has been used in theoretical models measuring cognitive dissonance. According to balance theory, there are three interacting elements: (1) the self (P), (2) another person (O), and (3) an element (X). These are each positioned at one vertex of a triangle and share two relations: Unit relations – things and people that belong together based on similarity, proximity, fate, etc. Sentiment relations – evaluations of people and things (liking, disliking) Under balance theory, human beings seek a balanced state of relations among the three positions. This can take the form of three positives or two negatives and one positive: P = you O = your child X = picture your child drew "I love my child" "She drew me this picture" "I love this picture" People also avoid unbalanced states of relations, such as three negatives or two positives and one negative: P = you O = John X = John's dog "I don't like John" "John has a dog" "I don't like the dog either" === Cost–benefit analysis === In the study On the Measurement of the Utility of Public Works (1969), Jules Dupuit reported that behaviors and cognitions can be understood from an economic perspective, wherein people engage in the systematic process of comparing the costs and benefits of a decision. The psychological process of cost-benefit comparisons helps the person to assess and justify the feasibility (spending money) of an economic decision, and is the basis for determining if the benefit outweighs the cost, and to what extent. 
Moreover, although the method of cost-benefit analysis functions in economic circumstances, men and women remain psychologically inefficient at comparing the costs against the benefits of their economic decision. === Self-discrepancy theory === E. Tory Higgins proposed that people have three selves, to which they compare themselves: Actual self – representation of the attributes the person believes themself to possess (basic self-concept) Ideal self – ideal attributes the person would like to possess (hopes, aspiration, motivations to change) Ought self – ideal attributes the person believes they should possess (duties, obligations, responsibilities) When these self-guides are contradictory psychological distress (cognitive dissonance) results. People are motivated to reduce self-discrepancy (the gap between two self-guides). === Averse consequences vs. inconsistency === In the 1980s, Cooper and Fazio argued that dissonance was caused by aversive consequences, rather than inconsistency. According to this interpretation, the belief that lying is wrong and hurtful, not the inconsistency between cognitions, is what makes people feel bad. Subsequent research, however, found that people experience dissonance even when they believe they have not done anything wrong. For example, Harmon-Jones and colleagues showed that people experience dissonance even when the consequences of their statements are beneficial—as when they convince sexually active students to use condoms, when they, themselves are not using condoms. === Criticism of the free-choice paradigm === In the study How Choice Affects and Reflects Preferences: Revisiting the Free-choice Paradigm (Chen, Risen, 2010) the researchers criticized the free-choice paradigm as invalid, because the rank-choice-rank method is inaccurate for the study of cognitive dissonance. That the designing of research-models relies upon the assumption that, if the experimental subject rates options differently in the second survey, then the attitudes of the subject towards the options have changed. That there are other reasons why an experimental subject might achieve different rankings in the second survey; perhaps the subjects were indifferent between choices. Although the results of some follow-up studies (e.g. Do Choices Affect Preferences? Some Doubts and New Evidence, 2013) presented evidence of the unreliability of the rank-choice-rank method, the results of studies such as Neural Correlates of Cognitive Dissonance and Choice-induced Preference Change (2010) have not found the Choice-Rank-Choice method to be invalid, and indicate that making a choice can change the preferences of a person. === Action–motivation model === Festinger's original theory did not seek to explain how dissonance works. Why is inconsistency so aversive? The action–motivation model seeks to answer this question. It proposes that inconsistencies in a person's cognition cause mental stress because psychological inconsistency interferes with the person's functioning in the real world. Among techniques for coping, the person may choose to exercise a behavior that is inconsistent with their current attitude (a belief, an ideal, a value system), but later try to alter that belief to make it consistent with a current behavior; the cognitive dissonance occurs when the person's cognition does not match the action taken. If the person changes the current attitude, after the dissonance occurs, they are then obligated to commit to that course of behavior. 
Cognitive dissonance produces a state of negative affect, which motivates the person to reconsider the causative behavior in order to resolve the psychological inconsistency that caused the mental stress. As the affected person works towards a behavioral commitment, the motivational process then is activated in the left frontal cortex of the brain. === Predictive dissonance model === The predictive dissonance model proposes that cognitive dissonance is fundamentally related to the predictive coding (or predictive processing) model of cognition. A predictive processing account of the mind proposes that perception actively involves the use of a Bayesian hierarchy of acquired prior knowledge, which primarily serves the role of predicting incoming proprioceptive, interoceptive and exteroceptive sensory inputs. Therefore, the brain is an inference machine that attempts to actively predict and explain its sensations. Crucial to this inference is the minimization of prediction error. The predictive dissonance account proposes that the motivation for cognitive dissonance reduction is related to an organism's active drive for reducing prediction error. Moreover, it proposes that human (and perhaps other animal) brains have evolved to selectively ignore contradictory information (as proposed by dissonance theory) to prevent the overfitting of their predictive cognitive models to local and thus non-generalizing conditions. The predictive dissonance account is highly compatible with the action-motivation model since, in practice, prediction error can arise from unsuccessful behavior. == Neuroscience findings == Technological advances are allowing psychologists to study the biomechanics of cognitive dissonance. === Visualization === The study Neural Activity Predicts Attitude Change in Cognitive Dissonance (Van Veen, Krug, etc., 2009) identified the neural bases of cognitive dissonance with functional magnetic resonance imaging (fMRI); the neural scans of the participants replicated the basic findings of the induced-compliance paradigm. When in the fMRI scanner, some of the study participants argued that the uncomfortable, mechanical environment of the MRI machine nevertheless was a pleasant experience for them; some participants, from an experimental group, said they enjoyed the mechanical environment of the fMRI scanner more than did the control-group participants (paid actors) who argued about the uncomfortable experimental environment. The results of the neural scan experiment support the original theory of Cognitive Dissonance proposed by Festinger in 1957; and also support the psychological conflict theory, whereby the anterior cingulate functions, in counter-attitudinal response, to activate the dorsal anterior cingulate cortex and the anterior insular cortex; the degree of activation of said regions of the brain is predicted by the degree of change in the psychological attitude of the person. As an application of the free-choice paradigm, the study How Choice Reveals and Shapes Expected Hedonic Outcome (2009) indicates that after making a choice, neural activity in the striatum changes to reflect the person's new evaluation of the choice-object; neural activity increased if the object was chosen, neural activity decreased if the object was rejected. 
Moreover, studies such as The Neural Basis of Rationalization: Cognitive Dissonance Reduction During Decision-making (2010) and How Choice Modifies Preference: Neural Correlates of Choice Justification (2011) confirm the neural bases of the psychology of cognitive dissonance. The Neural Basis of Rationalization: Cognitive Dissonance Reduction During Decision-making (Jarcho, Berkman, Lieberman, 2010) applied the free-choice paradigm to fMRI examination of the brain's decision-making process whilst the study participant actively tried to reduce cognitive dissonance. The results indicated that the active reduction of psychological dissonance increased neural activity in the right-inferior frontal gyrus, in the medial fronto-parietal region, and in the ventral striatum, and that neural activity decreased in the anterior insula. That the neural activities of rationalization occur in seconds, without conscious deliberation on the part of the person; and that the brain engages in emotional responses whilst effecting decisions. === Emotional correlations === The results reported in Contributions from Research on Anger and Cognitive Dissonance to Understanding the Motivational Functions of Asymmetrical Frontal Brain Activity (Harmon-Jones, 2004) indicate that the occurrence of cognitive dissonance is associated with neural activity in the left frontal cortex, a brain structure also associated with the emotion of anger; moreover, functionally, anger motivates neural activity in the left frontal cortex. Applying a directional model of Approach motivation, the study Anger and the Behavioural Approach System (2003) indicated that the relationship between cognitive dissonance and anger is supported by neural activity in the left frontal cortex that occurs when a person takes control of the social situation causing the cognitive dissonance. Conversely, if the person cannot control or cannot change the psychologically stressful stimulation, they are without a motivation to change the circumstance, then there arise other, negative emotions to manage the cognitive dissonance, such as socially inappropriate behavior. The anterior cingulate cortex activity increases when errors occur and are being monitored as well as having behavioral conflicts with the self-concept as a form of higher-level thinking. A study was done to test the prediction that the left frontal cortex would have increased activity. University students had to write a paper depending on if they were assigned to a high-choice or low-choice condition. The low-choice condition required students to write about supporting a 10% increase in tuition at their university. The point of this condition was to see how significant the counter-choice may affect a person's ability to cope. The high-choice condition asked students to write in favor of tuition increase as if it were their completely voluntary choice. The researchers use EEG to analyze students before they wrote the essay, as dissonance is at its highest during this time (Beauvois and Joule, 1996). High-choice condition participants showed a higher level of the left frontal cortex than the low-choice participants. Results show that the initial experience of dissonance can be apparent in the anterior cingulate cortex, then the left frontal cortex is activated, which also activates the approach motivational system to reduce anger. 
=== The psychology of mental stress === The results reported in The Origins of Cognitive Dissonance: Evidence from Children and Monkeys (Egan, Santos, Bloom, 2007) indicated that there might be evolutionary force behind the reduction of cognitive dissonance in the actions of pre-school-age children and Capuchin monkeys when offered a choice between two like options, decals and candies. The groups then were offered a new choice, between the choice-object not chosen and a novel choice-object that was as attractive as the first object. The resulting choices of the human and simian subjects concorded with the theory of cognitive dissonance when the children and the monkeys each chose the novel choice-object instead of the choice-object not chosen in the first selection, despite every object having the same value. The hypothesis of An Action-based Model of Cognitive-dissonance Processes (Harmon-Jones, Levy, 2015) proposed that psychological dissonance occurs consequent to the stimulation of thoughts that interfere with a goal-driven behavior. Researchers mapped the neural activity of the participant when performing tasks that provoked psychological stress when engaged in contradictory behaviors. A participant read aloud the printed name of a color. To test for the occurrence of cognitive dissonance, the name of the color was printed in a color different from the word read aloud by the participant. As a result, the participants experienced increased neural activity in the anterior cingulate cortex when the experimental exercises provoked psychological dissonance. The study Cognitive Neuroscience of Social Emotions and Implications for Psychopathology: Examining Embarrassment, Guilt, Envy, and Schadenfreude (Jankowski, Takahashi, 2014) identified neural correlations to specific social emotions (e.g. envy and embarrassment) as a measure of cognitive dissonance. The neural activity for the emotion of Envy (the feeling of displeasure at the good fortune of another person) was found to draw neural activity from the dorsal anterior cingulate cortex. That such increased activity in the dorsal anterior cingulate cortex occurred either when a person's self-concept was threatened or when the person experienced embarrassment (social pain) caused by salient, upward social-comparison, by social-class snobbery. That social emotions, such as embarrassment, guilt, envy, and Schadenfreude (joy at the misfortune of another person) are correlated to reduced activity in the insular lobe, and with increased activity in the striate nucleus; those neural activities are associated with a reduced sense of empathy (social responsibility) and an increased propensity towards antisocial behavior (delinquency). === Body image and health intervention === Some school programs discuss body image and eating disorders of children and adolescents. Disordered eating behaviors include binge eating episodes, excessive fasting, vomiting, and diet pills. National data from 2017 and 2018 highlights that since starting college, approximately 50 percent of college students reported becoming increasingly concerned with their weight and body shape. Studies examining eating disorders (ED) symptoms in college students reported that only 20 percent of those with positive ED got help. Less than 10 percent were diagnosed with an ED. This Body Project (BP) is rooted in the theory of cognitive dissonance. Cognitive dissonance occurs when a discrepancy emerges between beliefs and actions. 
The idea is centered around the notion that if beliefs and actions are inconsistent, then the individual will create a change to align the beliefs and actions. The BP uses cognitive dissonance to target ED, for example, social pressure from peers or not being satisfied with your appearance, to bring awareness and for a healthy and positive change, thoughts toward body image. === Modeling in neural networks === Artificial neural network models of cognition provide methods for integrating the results of empirical research about cognitive dissonance and attitudes into a single model that explains the formation of psychological attitudes and the mechanisms to change such attitudes. Among the artificial neural-network models that predict how cognitive dissonance might influence a person's attitudes and behavior, are: Parallel constraint satisfaction processes The meta-cognitive model (MCM) of attitudes Adaptive connectionist model of cognitive dissonance Attitudes as constraint satisfaction model == See also == == References == == Further reading == == External links == Cognitive dissonance entry in The Skeptic's Dictionary Festinger and Carlsmith's original paper Leon Festinger, An Introduction to the Theory of Cognitive Dissonance (1956)
Wikipedia/Cognitive_dissonance_theory
In chemistry, valence bond (VB) theory is one of the two basic theories, along with molecular orbital (MO) theory, that were developed to use the methods of quantum mechanics to explain chemical bonding. It focuses on how the atomic orbitals of the dissociated atoms combine to give individual chemical bonds when a molecule is formed. In contrast, molecular orbital theory has orbitals that cover the whole molecule. == History == In 1916, G. N. Lewis proposed that a chemical bond forms by the interaction of two shared bonding electrons, with the representation of molecules as Lewis structures. The chemist Charles Rugeley Bury suggested in 1921 that eight and eighteen electrons in a shell form stable configurations. Bury proposed that the electron configurations in transitional elements depended upon the valence electrons in their outer shell. In 1916, Kossel put forth his theory of the ionic chemical bond (octet rule), also independently advanced in the same year by Gilbert N. Lewis. Walther Kossel put forward a theory similar to Lewis' only his model assumed complete transfers of electrons between atoms, and was thus a model of ionic bonding. Both Lewis and Kossel structured their bonding models on that of Abegg's rule (1904). Although there is no mathematical formula either in chemistry or quantum mechanics for the arrangement of electrons in the atom, the hydrogen atom can be described by the Schrödinger equation and the Matrix Mechanics equation both derived in 1925. However, for hydrogen alone, in 1927 the Heitler–London theory was formulated which for the first time enabled the calculation of bonding properties of the hydrogen molecule H2 based on quantum mechanical considerations. Specifically, Walter Heitler determined how to use Schrödinger's wave equation (1926) to show how two hydrogen atom wavefunctions join together, with plus, minus, and exchange terms, to form a covalent bond. He then called up his associate Fritz London and they worked out the details of the theory over the course of the night. Later, Linus Pauling used the pair bonding ideas of Lewis together with Heitler–London theory to develop two other key concepts in VB theory: resonance (1928) and orbital hybridization (1930). According to Charles Coulson, author of the noted 1952 book Valence, this period marks the start of "modern valence bond theory", as contrasted with older valence bond theories, which are essentially electronic theories of valence couched in pre-wave-mechanical terms. Linus Pauling published in 1931 his landmark paper on valence bond theory: "On the Nature of the Chemical Bond". Building on this article, Pauling's 1939 textbook: On the Nature of the Chemical Bond would become what some have called the bible of modern chemistry. This book helped experimental chemists to understand the impact of quantum theory on chemistry. However, the later edition in 1959 failed to adequately address the problems that appeared to be better understood by molecular orbital theory. The impact of valence theory declined during the 1960s and 1970s as molecular orbital theory grew in usefulness as it was implemented in large digital computer programs. Since the 1980s, the more difficult problems, of implementing valence bond theory into computer programs, have been solved largely, and valence bond theory has seen a resurgence. == Theory == According to this theory a covalent bond is formed between two atoms by the overlap of half filled valence atomic orbitals of each atom containing one unpaired electron. 
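As a concrete illustration of this electron-pairing picture, the Heitler–London spatial wavefunction for H2 (a standard textbook form, added here for illustration rather than taken from the article) combines the two atomic 1s orbitals in symmetric and antisymmetric forms,

\[
\Psi_{\pm}(1,2) \;=\; N_{\pm}\,\bigl[\varphi_{A}(1)\,\varphi_{B}(2) \pm \varphi_{B}(1)\,\varphi_{A}(2)\bigr],
\qquad
E_{\pm} \;=\; \frac{Q \pm A}{1 \pm S^{2}},
\]

where Q is the Coulomb integral, A the exchange integral, and S the overlap integral between the two atomic orbitals. The symmetric (+) combination, paired with an antisymmetric singlet spin function, is the bound state; this is how the "plus, minus, and exchange terms" mentioned in the history above produce a covalent bond from two half-filled orbitals.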
Valence bond theory describes chemical bonding better than Lewis theory, which states that atoms share or transfer electrons so that they achieve the octet rule; Lewis theory does not take into account orbital interactions or bond angles, and treats all covalent bonds equally. A valence bond structure resembles a Lewis structure, but when a molecule cannot be fully represented by a single Lewis structure, multiple valence bond structures are used. Each of these VB structures represents a specific Lewis structure. This combination of valence bond structures is the main point of resonance theory. Valence bond theory considers that the overlapping atomic orbitals of the participating atoms form a chemical bond. Because of the overlap, the electrons are most likely to be found in the bond region. Valence bond theory views bonds as weakly coupled orbitals (small overlap). Valence bond theory is typically easier to employ in ground state molecules. The core orbitals and electrons remain essentially unchanged during the formation of bonds. The overlapping atomic orbitals can differ. The two types of overlapping orbitals are sigma and pi. Sigma bonds occur when the orbitals of two shared electrons overlap head-to-head, with the electron density most concentrated between the nuclei. Pi bonds occur when two orbitals overlap side-on, with their axes parallel. For example, a bond between two s-orbital electrons is a sigma bond, because two spheres are always coaxial. In terms of bond order, single bonds have one sigma bond, double bonds consist of one sigma bond and one pi bond, and triple bonds contain one sigma bond and two pi bonds. However, the atomic orbitals for bonding may be hybrids. Hybridization is a model that describes how atomic orbitals combine to form new orbitals that better match the geometry of molecules. Atomic orbitals that are similar in energy combine to make hybrid orbitals. For example, the carbon in methane (CH4) undergoes sp3 hybridization to form four equivalent orbitals, resulting in a tetrahedral shape. Different types of hybridization, such as sp, sp2, and sp3, correspond to specific molecular geometries (linear, trigonal planar, and tetrahedral), influencing the bond angles observed in molecules. Hybrid orbitals provide additional directionality to sigma bonds, accurately explaining molecular geometries (the four sp3 combinations for methane are written out in the sketch below). == Comparison with MO theory == Valence bond theory complements molecular orbital theory (MO), which does not adhere to the valence bond idea that electron pairs are localized between two specific atoms in a molecule, but holds that they are distributed in sets of molecular orbitals which can extend over the entire molecule. Although both theories describe chemical bonding, MO theory generally offers a clearer and more reliable framework for predicting magnetic and ionization properties (and therefore optical and IR spectra). In particular, molecular orbitals can effectively account for paramagnetism arising from unpaired electrons, whereas valence bond theory struggles to do so. Although the two theories are, in their complete forms, mathematically equivalent, MO theory became the more popular approach because it was easier to implement in the early days of computational chemistry. Valence bond theory views the aromatic properties of molecules as due to spin coupling of the π orbitals. This is essentially still the old idea of resonance between the Kekulé and Dewar structures, named after Friedrich August Kekulé von Stradonitz and James Dewar. In contrast, molecular orbital theory views aromaticity as delocalization of the π-electrons.
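Returning to the hybridization model described in the Theory section above, the four equivalent sp3 hybrids used for methane can be written out explicitly (a standard textbook construction, added here for illustration) as normalized combinations of one s and three p orbitals:

\[
\begin{aligned}
h_{1} &= \tfrac{1}{2}\,(s + p_{x} + p_{y} + p_{z}), &
h_{2} &= \tfrac{1}{2}\,(s + p_{x} - p_{y} - p_{z}), \\
h_{3} &= \tfrac{1}{2}\,(s - p_{x} + p_{y} - p_{z}), &
h_{4} &= \tfrac{1}{2}\,(s - p_{x} - p_{y} + p_{z}).
\end{aligned}
\]

These four orthonormal hybrids point towards the corners of a tetrahedron, about 109.5° apart, which is why sp3 hybridization reproduces the tetrahedral geometry and the four equivalent C–H bonds of methane.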
Valence bond treatments are restricted to relatively small molecules, largely due to the lack of orthogonality between valence bond orbitals and between valence bond structures. The molecular orbitals are always orthogonal. Valence bond theory cannot explain electronic transitions and spectroscopic properties as effectively as MO theory. While VB employs hybridization to explain bonding, it can oversimplify complex bonding situations, limiting its applicability in more intricate molecular geometries such as transition metal compounds. On the other hand, VB theory provides a much more intuitive picture of the reorganization of electronic charge that takes place when bonds are broken and formed during the course of a chemical reaction. Valence bond theory also correctly predicts the dissociation of homonuclear diatomic molecules into separate atoms even in the simplest models, while similarly crude MO approaches predict dissociation into a mixture of atoms and ions. For example, the MO function for dihydrogen is an equal mixture of the covalent and ionic valence bond structures and so predicts incorrectly that the molecule would dissociate into an equal mixture of hydrogen atoms and hydrogen positive and negative ions. == Computational approaches == Modern valence bond theory replaces the overlapping atomic orbitals by overlapping valence bond orbitals that are expanded over a large number of basis functions, either centered each on one atom to give a classical valence bond picture, or centered on all atoms in the molecule. The resulting energies are more competitive with energies from calculations where electron correlation is introduced based on a Hartree–Fock reference wavefunction. The most recent text is by Shaik and Hiberty. == Applications == An important aspect of the valence bond theory is the condition of maximum overlap, which leads to the formation of the strongest possible bonds. This theory is used to explain the covalent bond formation in many molecules. For example, in the case of the F2 molecule, the F−F bond is formed by the overlap of pz orbitals of the two F atoms, each containing an unpaired electron. Since the nature of the overlapping orbitals are different in H2 and F2 molecules, the bond strength and bond lengths differ between H2 and F2 molecules. In methane (CH4), the carbon atom undergoes sp3 hybridization, allowing it to form four equivalent sigma bonds with hydrogen atoms, resulting in a tetrahedral geometry. Hybridization also explains the equal C-H bond strengths. In an HF molecule the covalent bond is formed by the overlap of the 1s orbital of H and the 2pz orbital of F, each containing an unpaired electron. Mutual sharing of electrons between H and F results in a covalent bond in HF. == See also == Valence bond programs == References ==
Wikipedia/Valence_bond_theory
In electrical engineering and electronics, a network is a collection of interconnected components. Network analysis is the process of finding the voltages across, and the currents through, all network components. There are many techniques for calculating these values; however, for the most part, the techniques assume linear components. Except where stated, the methods described in this article are applicable only to linear network analysis. == Definitions == == Equivalent circuits == A useful procedure in network analysis is to simplify the network by reducing the number of components. This can be done by replacing physical components with other notional components that have the same effect. A particular technique might directly reduce the number of components, for instance by combining impedances in series. On the other hand, it might merely change the form into one in which the components can be reduced in a later operation. For instance, one might transform a voltage generator into a current generator using Norton's theorem in order to be able to later combine the internal resistance of the generator with a parallel impedance load. A resistive circuit is a circuit containing only resistors, ideal current sources, and ideal voltage sources. If the sources are constant (DC) sources, the result is a DC circuit. Analysis of a circuit consists of solving for the voltages and currents present in the circuit. The solution principles outlined here also apply to phasor analysis of AC circuits. Two circuits are said to be equivalent with respect to a pair of terminals if the voltage across the terminals and current through the terminals for one network have the same relationship as the voltage and current at the terminals of the other network. If V 2 = V 1 {\displaystyle V_{2}=V_{1}} implies I 2 = I 1 {\displaystyle I_{2}=I_{1}} for all (real) values of V1, then with respect to terminals ab and xy, circuit 1 and circuit 2 are equivalent. The above is a sufficient definition for a one-port network. For more than one port, then it must be defined that the currents and voltages between all pairs of corresponding ports must bear the same relationship. For instance, star and delta networks are effectively three port networks and hence require three simultaneous equations to fully specify their equivalence. === Impedances in series and in parallel === Some two terminal network of impedances can eventually be reduced to a single impedance by successive applications of impedances in series or impedances in parallel. Impedances in series: Z e q = Z 1 + Z 2 + ⋯ + Z n . {\displaystyle Z_{\mathrm {eq} }=Z_{1}+Z_{2}+\,\cdots \,+Z_{n}.} Impedances in parallel: 1 Z e q = 1 Z 1 + 1 Z 2 + ⋯ + 1 Z n . {\displaystyle {\frac {1}{Z_{\mathrm {eq} }}}={\frac {1}{Z_{1}}}+{\frac {1}{Z_{2}}}+\,\cdots \,+{\frac {1}{Z_{n}}}.} The above simplified for only two impedances in parallel: Z e q = Z 1 Z 2 Z 1 + Z 2 . {\displaystyle Z_{\mathrm {eq} }={\frac {Z_{1}Z_{2}}{Z_{1}+Z_{2}}}.} === Delta-wye transformation === A network of impedances with more than two terminals cannot be reduced to a single impedance equivalent circuit. An n-terminal network can, at best, be reduced to n impedances (at worst ( n 2 ) {\displaystyle {\tbinom {n}{2}}} ). For a three terminal network, the three impedances can be expressed as a three node delta (Δ) network or four node star (Y) network. These two networks are equivalent and the transformations between them are given below. 
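Before turning to those transformations, the series and parallel reductions above can be checked numerically. The following is a minimal Python sketch; the component values, frequency, and function names are illustrative choices, not taken from the article.

```python
# Minimal sketch: combining complex impedances in series and in parallel.
import math

def z_series(*zs):
    """Equivalent impedance of impedances connected in series."""
    return sum(zs)

def z_parallel(*zs):
    """Equivalent impedance of impedances connected in parallel."""
    return 1 / sum(1 / z for z in zs)

f = 50.0                        # frequency in hertz (illustrative)
w = 2 * math.pi * f             # angular frequency in rad/s
z_r = 100.0                     # 100 ohm resistor
z_l = 1j * w * 0.1              # 0.1 H inductor:  Z = jwL
z_c = 1 / (1j * w * 20e-6)      # 20 uF capacitor: Z = 1/(jwC)

# A resistor in series with a parallel LC branch.
z_eq = z_series(z_r, z_parallel(z_l, z_c))
print("Z_eq  =", z_eq, "ohm")                  # complex equivalent impedance
print("|Z_eq| =", round(abs(z_eq), 2), "ohm")
```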
A general network with an arbitrary number of nodes cannot be reduced to the minimum number of impedances using only series and parallel combinations. In general, Y-Δ and Δ-Y transformations must also be used. For some networks the extension of Y-Δ to star-polygon transformations may also be required. For equivalence, the impedances between any pair of terminals must be the same for both networks, resulting in a set of three simultaneous equations. The equations below are expressed as resistances but apply equally to the general case with impedances. ==== Delta-to-star transformation equations ==== R a = R a c R a b R a c + R a b + R b c R b = R a b R b c R a c + R a b + R b c R c = R b c R a c R a c + R a b + R b c {\displaystyle {\begin{aligned}R_{a}&={\frac {R_{\mathrm {ac} }R_{\mathrm {ab} }}{R_{\mathrm {ac} }+R_{\mathrm {ab} }+R_{\mathrm {bc} }}}\\R_{b}&={\frac {R_{\mathrm {ab} }R_{\mathrm {bc} }}{R_{\mathrm {ac} }+R_{\mathrm {ab} }+R_{\mathrm {bc} }}}\\R_{c}&={\frac {R_{\mathrm {bc} }R_{\mathrm {ac} }}{R_{\mathrm {ac} }+R_{\mathrm {ab} }+R_{\mathrm {bc} }}}\end{aligned}}} ==== Star-to-delta transformation equations ==== R a c = R a R b + R b R c + R c R a R b R a b = R a R b + R b R c + R c R a R c R b c = R a R b + R b R c + R c R a R a {\displaystyle {\begin{aligned}R_{\mathrm {ac} }&={\frac {R_{a}R_{b}+R_{b}R_{c}+R_{c}R_{a}}{R_{b}}}\\R_{\mathrm {ab} }&={\frac {R_{a}R_{b}+R_{b}R_{c}+R_{c}R_{a}}{R_{c}}}\\R_{\mathrm {bc} }&={\frac {R_{a}R_{b}+R_{b}R_{c}+R_{c}R_{a}}{R_{a}}}\end{aligned}}} === General form of network node elimination === The star-to-delta and series-resistor transformations are special cases of the general resistor network node elimination algorithm. Any node connected by N resistors (R1 … RN) to nodes 1 … N can be replaced by resistors interconnecting the remaining N nodes. The resistance between any two nodes x, y is given by: R x y = R x R y ∑ i = 1 N 1 R i {\displaystyle R_{\mathrm {xy} }=R_{x}R_{y}\sum _{i=1}^{N}{\frac {1}{R_{i}}}} For a star-to-delta (N = 3) this reduces to: R a b = R a R b ( 1 R a + 1 R b + 1 R c ) = R a R b ( R a R b + R a R c + R b R c ) R a R b R c = R a R b + R b R c + R c R a R c {\displaystyle {\begin{aligned}R_{\mathrm {ab} }&=R_{a}R_{b}\left({\frac {1}{R}}_{a}+{\frac {1}{R}}_{b}+{\frac {1}{R}}_{c}\right)={\frac {R_{a}R_{b}(R_{a}R_{b}+R_{a}R_{c}+R_{b}R_{c})}{R_{a}R_{b}R_{c}}}\\&={\frac {R_{a}R_{b}+R_{b}R_{c}+R_{c}R_{a}}{R_{c}}}\end{aligned}}} For a series reduction (N = 2) this reduces to: R a b = R a R b ( 1 R a + 1 R b ) = R a R b ( R a + R b ) R a R b = R a + R b {\displaystyle R_{\mathrm {ab} }=R_{a}R_{b}\left({\frac {1}{R}}_{a}+{\frac {1}{R}}_{b}\right)={\frac {R_{a}R_{b}(R_{a}+R_{b})}{R_{a}R_{b}}}=R_{a}+R_{b}} For a dangling resistor (N = 1) it results in the elimination of the resistor because ( 1 2 ) = 0 {\displaystyle {\tbinom {1}{2}}=0} . === Source transformation === A generator with an internal impedance (i.e. non-ideal generator) can be represented as either an ideal voltage generator or an ideal current generator plus the impedance. These two forms are equivalent and the transformations are given below. If the two networks are equivalent with respect to terminals ab, then V and I must be identical for both networks. Thus, V s = R I s {\displaystyle V_{\mathrm {s} }=RI_{\mathrm {s} }\,\!} or I s = V s R {\displaystyle I_{\mathrm {s} }={\frac {V_{\mathrm {s} }}{R}}} Norton's theorem states that any two-terminal linear network can be reduced to an ideal current generator and a parallel impedance. 
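Returning to the delta-to-star and star-to-delta equations above, they are easy to check numerically. A minimal sketch follows; the resistor values and function names are illustrative, not from the article.

```python
# Delta-to-star and star-to-delta resistance transformations, as given above.
def delta_to_star(r_ab, r_bc, r_ac):
    """Return (r_a, r_b, r_c) equivalent to the delta (r_ab, r_bc, r_ac)."""
    s = r_ab + r_bc + r_ac
    return r_ac * r_ab / s, r_ab * r_bc / s, r_bc * r_ac / s

def star_to_delta(r_a, r_b, r_c):
    """Return (r_ab, r_bc, r_ac) equivalent to the star (r_a, r_b, r_c)."""
    n = r_a * r_b + r_b * r_c + r_c * r_a
    return n / r_c, n / r_a, n / r_b

r_a, r_b, r_c = delta_to_star(30.0, 50.0, 20.0)
print(r_a, r_b, r_c)                    # (6.0, 15.0, 10.0) star equivalent
print(star_to_delta(r_a, r_b, r_c))     # recovers (30.0, 50.0, 20.0)
```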
Thévenin's theorem states that any two-terminal linear network can be reduced to an ideal voltage generator plus a series impedance. == Simple networks == Some very simple networks can be analysed without the need to apply the more systematic approaches. === Voltage division of series components === Consider n impedances that are connected in series. The voltage V i {\displaystyle V_{i}} across any impedance Z i {\displaystyle Z_{i}} is V i = Z i I = ( Z i Z 1 + Z 2 + ⋯ + Z n ) V {\displaystyle V_{i}=Z_{i}I=\left({\frac {Z_{i}}{Z_{1}+Z_{2}+\cdots +Z_{n}}}\right)V} for i = 1 , 2 , . . . , n . {\displaystyle i=1,2,...,n.} === Current division of parallel components === Consider n admittances that are connected in parallel. The current I i {\displaystyle I_{i}} through any admittance Y i {\displaystyle Y_{i}} is I i = Y i V = ( Y i Y 1 + Y 2 + ⋯ + Y n ) I {\displaystyle I_{i}=Y_{i}V=\left({\frac {Y_{i}}{Y_{1}+Y_{2}+\cdots +Y_{n}}}\right)I} for i = 1 , 2 , . . . , n . {\displaystyle i=1,2,...,n.} ==== Special case: Current division of two parallel components ==== I 1 = ( Z 2 Z 1 + Z 2 ) I {\displaystyle I_{1}=\left({\frac {Z_{2}}{Z_{1}+Z_{2}}}\right)I} I 2 = ( Z 1 Z 1 + Z 2 ) I {\displaystyle I_{2}=\left({\frac {Z_{1}}{Z_{1}+Z_{2}}}\right)I} == Nodal analysis == Nodal analysis uses the concept of a node voltage and considers the node voltages to be the unknown variables.: 2-8 - 2-9  For all nodes, except a chosen reference node, the node voltage is defined as the voltage drop from the node to the reference node. Therefore, there are N-1 node voltages for a circuit with N nodes.: 2-10  In principle, nodal analysis uses Kirchhoff's current law (KCL) at N-1 nodes to get N-1 independent equations. Since equations generated with KCL are in terms of currents going in and out of nodes, these currents, if their values are not known, need to be represented by the unknown variables (node voltages). For some elements (such as resistors and capacitors) getting the element currents in terms of node voltages is trivial. For some common elements where this is not possible, specialized methods are developed. For example, a concept called supernode is used for circuits with independent voltage sources.: 2-12 - 2-13  Label all nodes in the circuit. Arbitrarily select any node as reference. Define a voltage variable from every remaining node to the reference. These voltage variables must be defined as voltage rises with respect to the reference node. Write a KCL equation for every node except the reference. Solve the resulting system of equations. == Mesh analysis == Mesh — a loop that does not contain an inner loop. Count the number of “window panes” in the circuit. Assign a mesh current to each window pane. Write a KVL equation for every mesh whose current is unknown. Solve the resulting equations. == Superposition == In this method, the effect of each generator in turn is calculated. All the generators other than the one being considered are removed and either short-circuited in the case of voltage generators or open-circuited in the case of current generators. The total current through or the total voltage across a particular branch is then calculated by summing all the individual currents or voltages. There is an underlying assumption to this method that the total current or voltage is a linear superposition of its parts. Therefore, the method cannot be used if non-linear components are present.: 6–14  Superposition of powers cannot be used to find total power consumed by elements even in linear circuits. Power varies according to the square of total voltage or current and the square of the sum is not generally equal to the sum of the squares.
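A minimal numerical illustration of the last point (the values are invented for the example): superposition gives the total current in a branch by summing the per-source contributions, but summing the per-source powers gives the wrong answer.

```python
# Superposition sketch with arbitrary example values: a 100 ohm resistor is
# driven by two ideal current sources in parallel, considered one at a time.
R = 100.0
i_a, i_b = 0.03, 0.02               # contribution of each source acting alone (A)

i_total = i_a + i_b                 # currents superpose linearly
p_total = i_total**2 * R            # correct power, from the total current
p_wrong = i_a**2 * R + i_b**2 * R   # "superposition of powers" -- not valid

print(f"total current            : {i_total*1e3:.1f} mA")
print(f"power from total current : {p_total:.3f} W")
print(f"sum of per-source powers : {p_wrong:.3f} W  (differs, as expected)")
```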
Total power in an element can be found by applying superposition to the voltages and currents independently and then calculating power from the total voltage and current. == Choice of method == Choice of method: 112–113  is to some extent a matter of taste. If the network is particularly simple or only a specific current or voltage is required then ad-hoc application of some simple equivalent circuits may yield the answer without recourse to the more systematic methods. Nodal analysis: The number of voltage variables, and hence simultaneous equations to solve, equals the number of nodes minus one. Every voltage source connected to the reference node reduces the number of unknowns and equations by one. Mesh analysis: The number of current variables, and hence simultaneous equations to solve, equals the number of meshes. Every current source in a mesh reduces the number of unknowns by one. Mesh analysis can only be used with networks which can be drawn as a planar network, that is, with no crossing components.: 94  Superposition is possibly the most conceptually simple method but rapidly leads to a large number of equations and messy impedance combinations as the network becomes larger. Effective medium approximations: For a network consisting of a high density of random resistors, an exact solution for each individual element may be impractical or impossible. Instead, the effective resistance and current distribution properties can be modelled in terms of graph measures and geometrical properties of networks. == Transfer function == A transfer function expresses the relationship between an input and an output of a network. For resistive networks, this will always be a simple real number or an expression which boils down to a real number. Resistive networks are represented by a system of simultaneous algebraic equations. However, in the general case of linear networks, the network is represented by a system of simultaneous linear differential equations. In network analysis, rather than use the differential equations directly, it is usual practice to carry out a Laplace transform on them first and then express the result in terms of the Laplace parameter s, which in general is complex. This is described as working in the s-domain. Working with the equations directly would be described as working in the time (or t) domain because the results would be expressed as time varying quantities. The Laplace transform is the mathematical method of transforming between the s-domain and the t-domain. This approach is standard in control theory and is useful for determining stability of a system, for instance, in an amplifier with feedback. === Two terminal component transfer functions === For two terminal components the transfer function, or more generally for non-linear elements, the constitutive equation, is the relationship between the current input to the device and the resulting voltage across it. The transfer function, Z(s), will thus have units of impedance, ohms. For the three passive components found in electrical networks, the transfer functions are Z ( s ) = R {\displaystyle Z(s)=R} for a resistance R, Z ( s ) = s L {\displaystyle Z(s)=sL} for an inductance L, and Z ( s ) = 1 s C {\displaystyle Z(s)={\frac {1}{sC}}} for a capacitance C. For a network to which only steady ac signals are applied, s is replaced with jω and the more familiar values from ac network theory result. Finally, for a network to which only steady dc is applied, s is replaced with zero and dc network theory applies. === Two port network transfer function === Transfer functions, in general, in control theory are given the symbol H(s).
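As a concrete illustration of such a transfer function, the sketch below evaluates the output-to-input voltage ratio of a simple RC divider at s = jω, using the element impedances given above; the component values are invented for the example.

```python
# Illustrative sketch (example values): element impedances in the s-domain and
# the transfer function of an RC voltage divider evaluated at s = j*omega.
import cmath
import math

R, C = 1e3, 1e-6                      # 1 kOhm, 1 uF (arbitrary example values)

def Z_R(s):  return R                 # resistor:  Z(s) = R
def Z_C(s):  return 1 / (s * C)       # capacitor: Z(s) = 1/(sC)

def gain(s):
    """Vo/Vi for a series R feeding a shunt C (a low-pass divider)."""
    return Z_C(s) / (Z_R(s) + Z_C(s))

for f in (10.0, 159.2, 10e3):         # hertz; 159.2 Hz is roughly 1/(2*pi*R*C)
    s = 1j * 2 * math.pi * f
    g = gain(s)
    print(f"f = {f:8.1f} Hz   |Vo/Vi| = {abs(g):.3f}   "
          f"phase = {math.degrees(cmath.phase(g)):6.1f} deg")
```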
Most commonly in electronics, transfer function is defined as the ratio of output voltage to input voltage and given the symbol A(s), or more commonly (because analysis is invariably done in terms of sine wave response), A(jω), so that; A ( j ω ) = V o V i {\displaystyle A(j\omega )={\frac {V_{o}}{V_{i}}}} The A standing for attenuation, or amplification, depending on context. In general, this will be a complex function of jω, which can be derived from an analysis of the impedances in the network and their individual transfer functions. Sometimes the analyst is only interested in the magnitude of the gain and not the phase angle. In this case the complex numbers can be eliminated from the transfer function and it might then be written as; A ( ω ) = | V o V i | {\displaystyle A(\omega )=\left|{\frac {V_{o}}{V_{i}}}\right|} ==== Two port parameters ==== The concept of a two-port network can be useful in network analysis as a black box approach to analysis. The behaviour of the two-port network in a larger network can be entirely characterised without necessarily stating anything about the internal structure. However, to do this it is necessary to have more information than just the A(jω) described above. It can be shown that four such parameters are required to fully characterise the two-port network. These could be the forward transfer function, the input impedance, the reverse transfer function (i.e., the voltage appearing at the input when a voltage is applied to the output) and the output impedance. There are many others (see the main article for a full listing), one of these expresses all four parameters as impedances. It is usual to express the four parameters as a matrix; [ V 1 V 0 ] = [ z ( j ω ) 11 z ( j ω ) 12 z ( j ω ) 21 z ( j ω ) 22 ] [ I 1 I 0 ] {\displaystyle {\begin{bmatrix}V_{1}\\V_{0}\end{bmatrix}}={\begin{bmatrix}z(j\omega )_{11}&z(j\omega )_{12}\\z(j\omega )_{21}&z(j\omega )_{22}\end{bmatrix}}{\begin{bmatrix}I_{1}\\I_{0}\end{bmatrix}}} The matrix may be abbreviated to a representative element; [ z ( j ω ) ] {\displaystyle \left[z(j\omega )\right]} or just [ z ] {\displaystyle \left[z\right]} These concepts are capable of being extended to networks of more than two ports. However, this is rarely done in reality because, in many practical cases, ports are considered either purely input or purely output. If reverse direction transfer functions are ignored, a multi-port network can always be decomposed into a number of two-port networks. ==== Distributed components ==== Where a network is composed of discrete components, analysis using two-port networks is a matter of choice, not essential. The network can always alternatively be analysed in terms of its individual component transfer functions. However, if a network contains distributed components, such as in the case of a transmission line, then it is not possible to analyse in terms of individual components since they do not exist. The most common approach to this is to model the line as a two-port network and characterise it using two-port parameters (or something equivalent to them). Another example of this technique is modelling the carriers crossing the base region in a high frequency transistor. The base region has to be modelled as distributed resistance and capacitance rather than lumped components. ==== Image analysis ==== Transmission lines and certain types of filter design use the image method to determine their transfer parameters. 
In this method, the behaviour of an infinitely long cascade connected chain of identical networks is considered. The input and output impedances and the forward and reverse transmission functions are then calculated for this infinitely long chain. Although the theoretical values so obtained can never be exactly realised in practice, in many cases they serve as a very good approximation for the behaviour of a finite chain as long as it is not too short. == Time-based network analysis with simulation == Most analysis methods calculate the voltage and current values for static networks, which are circuits consisting of memoryless components only but have difficulties with complex dynamic networks. In general, the equations that describe the behaviour of a dynamic circuit are in the form of a differential-algebraic system of equations (DAEs). DAEs are challenging to solve and the methods for doing so are not yet fully understood and developed (as of 2010). Also, there is no general theorem that guarantees solutions to DAEs will exist and be unique. : 204–205  In special cases, the equations of the dynamic circuit will be in the form of an ordinary differential equations (ODE), which are easier to solve, since numerical methods for solving ODEs have a rich history, dating back to the late 1800s. One strategy for adapting ODE solution methods to DAEs is called direct discretization and is the method of choice in circuit simulation. : 204-205  Simulation-based methods for time-based network analysis solve a circuit that is posed as an initial value problem (IVP). That is, the values of the components with memories (for example, the voltages on capacitors and currents through inductors) are given at an initial point of time t0, and the analysis is done for the time t 0 ≤ t ≤ t f {\displaystyle t_{0}\leq t\leq t_{f}} . : 206-207  Since finding numerical results for the infinite number of time points from t0 to tf is not possible, this time period is discretized into discrete time instances, and the numerical solution is found for every instance. The time between the time instances is called the time step and can be fixed throughout the whole simulation or may be adaptive. In an IVP, when finding a solution for time tn+1, the solution for time tn is already known. Then, temporal discretization is used to replace the derivatives with differences, such as x ′ ( t n + 1 ) ≈ x n + 1 − x n h n + 1 {\displaystyle x'(t_{n+1})\approx {\frac {x_{n+1}-x_{n}}{h_{n+1}}}} for the backward Euler method, where hn+1 is the time step. : 266  If all circuit components were linear or the circuit was linearized beforehand, the equation system at this point is a system of linear equations and is solved with numerical linear algebra methods. Otherwise, it is a nonlinear algebraic equation system and is solved with nonlinear numerical methods such as Root-finding algorithms. === Comparison to other methods === Simulation methods are much more applicable than Laplace transform based methods, such as transfer functions, which only work for simple dynamic networks with capacitors and inductors. Also, the input signals to the network cannot be arbitrarily defined for Laplace transform based methods. == Non-linear networks == Most electronic designs are, in reality, non-linear. There are very few that do not include some semiconductor devices. 
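Returning briefly to the simulation method described above, the sketch below applies direct discretization with the backward Euler formula to a first-order RC charging circuit. The component values and time step are invented for the example, and the analytic solution is used only as a check.

```python
# Backward Euler sketch for an RC charging circuit (illustrative values).
# ODE: C dv/dt = (Vin - v)/R, i.e. v' = (Vin - v)/(R*C), with v(0) = 0.
import math

R, C, Vin = 1e3, 1e-6, 5.0        # ohms, farads, volts (example values)
tau = R * C
h = 1e-4                          # fixed time step (s)
t_final = 5 * tau

v, t = 0.0, 0.0
while t < t_final:
    t += h
    # Backward Euler: (v_new - v)/h = (Vin - v_new)/tau, solved for v_new.
    v = (v + h * Vin / tau) / (1 + h / tau)

exact = Vin * (1 - math.exp(-t / tau))
print(f"simulated v({t*1e3:.1f} ms) = {v:.4f} V,  exact = {exact:.4f} V")
```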
Semiconductor devices are invariably non-linear: the transfer function of an ideal semiconductor p-n junction is given by the very non-linear relationship i = I o ( e v / V T − 1 ) {\displaystyle i=I_{o}\left(e^{v/V_{T}}-1\right)} where: i and v are the instantaneous current and voltage. Io is an arbitrary parameter called the reverse leakage current whose value depends on the construction of the device. VT is a parameter proportional to temperature called the thermal voltage and equal to about 25 mV at room temperature. There are many other ways that non-linearity can appear in a network. All methods utilising linear superposition will fail when non-linear components are present. There are several options for dealing with non-linearity depending on the type of circuit and the information the analyst wishes to obtain. === Constitutive equations === The diode equation above is an example of an element constitutive equation of the general form, f ( v , i ) = 0 {\displaystyle f(v,i)=0} This can be thought of as a non-linear resistor. The corresponding constitutive equations for non-linear inductors and capacitors are respectively; f ( i , φ ) = 0 {\displaystyle f(i,\varphi )=0} f ( v , q ) = 0 {\displaystyle f(v,q)=0} where f is any arbitrary function, φ is the stored magnetic flux and q is the stored charge. === Existence, uniqueness and stability === An important consideration in non-linear analysis is the question of uniqueness. For a network composed of linear components there will always be one, and only one, unique solution for a given set of boundary conditions. This is not always the case in non-linear circuits. For instance, a linear resistor with a fixed current applied to it has only one solution for the voltage across it. On the other hand, the non-linear tunnel diode has up to three solutions for the voltage for a given current. That is, a particular solution for the current through the diode is not unique, there may be others, equally valid. In some cases there may not be a solution at all: the question of existence of solutions must be considered. Another important consideration is the question of stability. A particular solution may exist, but it may not be stable, rapidly departing from that point at the slightest stimulation. It can be shown that a network that is absolutely stable for all conditions must have one, and only one, solution for each set of conditions. === Methods === ==== Boolean analysis of switching networks ==== A switching device is one where the non-linearity is utilised to produce two opposite states. CMOS devices in digital circuits, for instance, have their output connected to either the positive or the negative supply rail and are never found at anything in between except during a transient period when the device is switching. Here the non-linearity is designed to be extreme, and the analyst can take advantage of that fact. These kinds of networks can be analysed using Boolean algebra by assigning the two states ("on"/"off", "positive"/"negative" or whatever states are being used) to the Boolean constants "0" and "1". The transients are ignored in this analysis, along with any slight discrepancy between the state of the device and the nominal state assigned to a Boolean value. For instance, Boolean "1" may be assigned to the state of +5V. The output of the device may be +4.5V but the analyst still considers this to be Boolean "1". Device manufacturers will usually specify a range of values in their data sheets that are to be considered undefined (i.e. the result will be unpredictable).
The transients are not entirely uninteresting to the analyst. The maximum rate of switching is determined by the speed of transition from one state to the other. Happily for the analyst, for many devices most of the transition occurs in the linear portion of the devices transfer function and linear analysis can be applied to obtain at least an approximate answer. It is mathematically possible to derive Boolean algebras that have more than two states. There is not too much use found for these in electronics, although three-state devices are passingly common. ==== Separation of bias and signal analyses ==== This technique is used where the operation of the circuit is to be essentially linear, but the devices used to implement it are non-linear. A transistor amplifier is an example of this kind of network. The essence of this technique is to separate the analysis into two parts. Firstly, the dc biases are analysed using some non-linear method. This establishes the quiescent operating point of the circuit. Secondly, the small signal characteristics of the circuit are analysed using linear network analysis. Examples of methods that can be used for both these stages are given below. ==== Graphical method of dc analysis ==== In a great many circuit designs, the dc bias is fed to a non-linear component via a resistor (or possibly a network of resistors). Since resistors are linear components, it is particularly easy to determine the quiescent operating point of the non-linear device from a graph of its transfer function. The method is as follows: from linear network analysis the output transfer function (that is output voltage against output current) is calculated for the network of resistor(s) and the generator driving them. This will be a straight line (called the load line) and can readily be superimposed on the transfer function plot of the non-linear device. The point where the lines cross is the quiescent operating point. Perhaps the easiest practical method is to calculate the (linear) network open circuit voltage and short circuit current and plot these on the transfer function of the non-linear device. The straight line joining these two point is the transfer function of the network. In reality, the designer of the circuit would proceed in the reverse direction to that described. Starting from a plot provided in the manufacturers data sheet for the non-linear device, the designer would choose the desired operating point and then calculate the linear component values required to achieve it. It is still possible to use this method if the device being biased has its bias fed through another device which is itself non-linear, a diode for instance. In this case however, the plot of the network transfer function onto the device being biased would no longer be a straight line and is consequently more tedious to do. ==== Small signal equivalent circuit ==== This method can be used where the deviation of the input and output signals in a network stay within a substantially linear portion of the non-linear devices transfer function, or else are so small that the curve of the transfer function can be considered linear. Under a set of these specific conditions, the non-linear device can be represented by an equivalent linear network. It must be remembered that this equivalent circuit is entirely notional and only valid for the small signal deviations. It is entirely inapplicable to the dc biasing of the device. 
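Both the graphical bias method and the small-signal idea can be put together in a short numerical sketch: the quiescent point of a resistor-biased diode is found where the load line meets the diode curve (here by bisection rather than graphically), and the slope of the curve at that point gives the dynamic resistance used in the small-signal equivalent circuit. The diode parameters and bias values below are invented for the illustration.

```python
# Illustrative sketch: quiescent point of a resistor-biased diode and the
# dynamic (small-signal) resistance at that point. Values are examples only.
import math

Io, VT = 1e-12, 0.025      # reverse leakage current (A) and thermal voltage (V)
Vsupply, R = 5.0, 1e3      # bias source and series resistor (example values)

def diode_current(v):
    """Ideal p-n junction model: i = Io*(exp(v/VT) - 1)."""
    return Io * (math.exp(v / VT) - 1)

def load_line_error(v):
    """Zero when the diode current equals the resistor (load line) current."""
    return diode_current(v) - (Vsupply - v) / R

# Bisection between 0 V and Vsupply to find the operating point.
lo, hi = 0.0, Vsupply
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if load_line_error(mid) > 0:
        hi = mid
    else:
        lo = mid
Vq = 0.5 * (lo + hi)
Iq = diode_current(Vq)
r_dynamic = VT / (Iq + Io)          # slope dv/di of the exponential curve at Q

print(f"Q-point: Vq = {Vq:.3f} V, Iq = {Iq*1e3:.3f} mA")
print(f"dynamic resistance at Q: {r_dynamic:.2f} ohm")
```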
For a simple two-terminal device, the small signal equivalent circuit may be no more than two components. A resistance equal to the slope of the v/i curve at the operating point (called the dynamic resistance), and tangent to the curve. A generator, because this tangent will not, in general, pass through the origin. With more terminals, more complicated equivalent circuits are required. A popular form of specifying the small signal equivalent circuit amongst transistor manufacturers is to use the two-port network parameters known as [h] parameters. These are a matrix of four parameters as with the [z] parameters but in the case of the [h] parameters they are a hybrid mixture of impedances, admittances, current gains and voltage gains. In this model the three terminal transistor is considered to be a two port network, one of its terminals being common to both ports. The [h] parameters are quite different depending on which terminal is chosen as the common one. The most important parameter for transistors is usually the forward current gain, h21, in the common emitter configuration. This is designated hfe on data sheets. The small signal equivalent circuit in terms of two-port parameters leads to the concept of dependent generators. That is, the value of a voltage or current generator depends linearly on a voltage or current elsewhere in the circuit. For instance the [z] parameter model leads to dependent voltage generators as shown in this diagram; There will always be dependent generators in a two-port parameter equivalent circuit. This applies to the [h] parameters as well as to the [z] and any other kind. These dependencies must be preserved when developing the equations in a larger linear network analysis. ==== Piecewise linear method ==== In this method, the transfer function of the non-linear device is broken up into regions. Each of these regions is approximated by a straight line. Thus, the transfer function will be linear up to a particular point where there will be a discontinuity. Past this point the transfer function will again be linear but with a different slope. A well known application of this method is the approximation of the transfer function of a pn junction diode. The transfer function of an ideal diode has been given at the top of this (non-linear) section. However, this formula is rarely used in network analysis, a piecewise approximation being used instead. It can be seen that the diode current rapidly diminishes to -Io as the voltage falls. This current, for most purposes, is so small it can be ignored. With increasing voltage, the current increases exponentially. The diode is modelled as an open circuit up to the knee of the exponential curve, then past this point as a resistor equal to the bulk resistance of the semiconducting material. The commonly accepted values for the transition point voltage are 0.7V for silicon devices and 0.3V for germanium devices. An even simpler model of the diode, sometimes used in switching applications, is short circuit for forward voltages and open circuit for reverse voltages. The model of a forward biased pn junction having an approximately constant 0.7V is also a much used approximation for transistor base-emitter junction voltage in amplifier design. The piecewise method is similar to the small signal method in that linear network analysis techniques can only be applied if the signal stays within certain bounds. If the signal crosses a discontinuity point then the model is no longer valid for linear analysis purposes. 
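A minimal sketch of such a piecewise-linear diode model, using the commonly quoted 0.7 V knee for silicon together with an assumed bulk resistance (both the resistance value and the sample points are invented for the illustration):

```python
# Piecewise-linear diode sketch: open circuit below the knee voltage, then a
# resistor equal to an assumed bulk resistance. Values are illustrative only.
V_KNEE = 0.7        # commonly quoted transition voltage for silicon (V)
R_BULK = 2.0        # assumed bulk resistance of the semiconductor (ohm)

def diode_current_pwl(v):
    """Piecewise-linear approximation of the diode's i-v curve."""
    if v < V_KNEE:
        return 0.0                  # modelled as an open circuit
    return (v - V_KNEE) / R_BULK    # past the knee: a series resistance

for v in (0.2, 0.65, 0.75, 0.9):
    print(f"v = {v:.2f} V  ->  i = {diode_current_pwl(v)*1e3:6.1f} mA")
```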
The model does have the advantage over small signal however, in that it is equally applicable to signal and dc bias. These can therefore both be analysed in the same operations and will be linearly superimposable. === Time-varying components === In linear analysis, the components of the network are assumed to be unchanging, but in some circuits this does not apply, such as sweep oscillators, voltage controlled amplifiers, and variable equalisers. In many circumstances the change in component value is periodic. A non-linear component excited with a periodic signal, for instance, can be represented as a periodically varying linear component. Sidney Darlington disclosed a method of analysing such periodic time varying circuits. He developed canonical circuit forms which are analogous to the canonical forms of Ronald M. Foster and Wilhelm Cauer used for analysing linear circuits. === Vector circuit theory === Generalization of circuit theory based on scalar quantities to vectorial currents is a necessity for newly evolving circuits such as spin circuits. Generalized circuit variables consist of four components: scalar current and vector spin current in x, y, and z directions. The voltages and currents each become vector quantities with conductance described as a 4x4 spin conductance matrix. == See also == Bartlett's bisection theorem Kirchhoff's circuit laws Millman's theorem Modified nodal analysis Ohm's law Reciprocity (electrical networks) Tellegen's theorem Symbolic circuit analysis == References == == External links == The Feynman Lectures on Physics Vol. II Ch. 22: AC Circuits
Wikipedia/Circuit_theory
In organic chemistry, ring strain is a type of instability that exists when bonds in a molecule form angles that are abnormal. Strain is most commonly discussed for small rings such as cyclopropanes and cyclobutanes, whose internal angles are substantially smaller than the idealized value of approximately 109°. Because of their high strain, the heat of combustion for these small rings is elevated. Ring strain results from a combination of angle strain, conformational strain or Pitzer strain (torsional eclipsing interactions), and transannular strain, also known as van der Waals strain or Prelog strain. The simplest examples of angle strain are small cycloalkanes such as cyclopropane and cyclobutane. Ring strain energy can be attributed to the energy required for the distortion of bond and bond angles in order to close a ring. Ring strain energy is believed to be the cause of accelerated rates in ring-altering reactions. Its interactions with traditional bond energies change the enthalpies of compounds, affecting the kinetics and thermodynamics of ring strain reactions. == History == Ring strain theory was first developed by German chemist Adolf von Baeyer in 1890. Previously, the only strains believed to exist were torsional and steric; Baeyer's theory was based on the interactions between the two strains, and on the assumption that ringed compounds were flat. Later, around the same time, Hermann Sachse postulated that compound rings were not flat and potentially existed in a "chair" formation. Ernst Mohr later combined the two theories to explain the stability of six-membered rings and their frequency in nature, as well as the energy levels of other ring structures. == Angle strain (Baeyer strain) == === Alkanes === In alkanes, optimum overlap of atomic orbitals is achieved at 109.5°. The most common cyclic compounds have five or six carbons in their ring. Adolf von Baeyer received a Nobel Prize in 1905 for the discovery of the Baeyer strain theory, which was an explanation of the relative stabilities of cyclic molecules in 1885. Angle strain occurs when bond angles deviate from the ideal bond angles to achieve maximum bond strength in a specific chemical conformation. Angle strain typically affects cyclic molecules, which lack the flexibility of acyclic molecules. Angle strain destabilizes a molecule, as manifested in higher reactivity and elevated heat of combustion. Maximum bond strength results from effective overlap of atomic orbitals in a chemical bond. A quantitative measure for angle strain is strain energy. Angle strain and torsional strain combine to create ring strain that affects cyclic molecules. C n H 2 n + 3 n 2 O 2 ⟶ n CO 2 + n H 2 O − Δ H combustion {\displaystyle {\ce {C}}_{n}{\ce {H}}_{2n}+{\tfrac {3n}{2}}{\ce {O2}}\longrightarrow n{\ce {CO2}}+n{\ce {H2O}}-\Delta H_{\text{combustion}}} Normalized energies that allow comparison of ring strains are obtained by measuring the molar heat of combustion per methylene group (CH2) in the cycloalkanes. ΔHcombustion per CH2 − 658.6 kJ = strain per CH2 The value 658.6 kJ per mole is obtained from an unstrained long-chain alkane. Cycloalkanes generally have less ring strain than cycloalkenes, which is seen when comparing cyclopropane and cyclopropene. === Angle strain in alkenes === Cyclic alkenes are subject to strain resulting from distortion of the sp2-hybridized carbon centers. Illustrative is C60 where the carbon centres are pyramidalized.
This distortion enhances the reactivity of this molecule. Angle strain also is the basis of Bredt's rule which dictates that bridgehead carbon centers are not incorporated in alkenes because the resulting alkene would be subject to extreme angle strain. Small trans-cycloalkenes have so much ring strain they cannot exist for extended periods of time. For instance, the smallest trans-cycloalkane that has been isolated is trans-cyclooctene. Trans-cycloheptene has been detected via spectrophotometry for minute time periods, and trans-cyclohexene is thought to be an intermediate in some reactions. No smaller trans-cycloalkenes are known. On the contrary, while small cis-cycloalkenes do have ring strain, they have much less ring strain than small trans-cycloalkenes. In general, the increased levels of unsaturation in alkenes leads to higher ring strain. Increasing unsaturation leads to greater ring strain in cyclopropene. Therefore, cyclopropene is an alkene that has the most ring strain between the two mentioned. The differing hybridizations and geometries between cyclopropene and cyclopropane contribute to the increased ring strain. Cyclopropene also has an increased angle strain, which also contributes to the greater ring strain. However, this trend does not always work for every alkane and alkene. == Torsional strain (Pitzer strain) == In some molecules, torsional strain can contribute to ring strain in addition to angle strain. One example of such a molecule is cyclopropane. Cyclopropane's carbon-carbon bonds form angles of 60°, far from the preferred angle of 109.5° angle in alkanes, so angle strain contributes most to cyclopropane's ring strain. However, as shown in the Newman projection of the molecule, the hydrogen atoms are eclipsed, causing some torsional strain as well. == Examples == In cycloalkanes, each carbon is bonded nonpolar covalently to two carbons and two hydrogen. The carbons have sp3 hybridization and should have ideal bond angles of 109.5°. Due to the limitations of cyclic structure, however, the ideal angle is only achieved in a six carbon ring — cyclohexane in chair conformation. For other cycloalkanes, the bond angles deviate from ideal. Molecules with a high amount of ring strain consist of three, four, and some five-membered rings, including: cyclopropanes, cyclopropenes, cyclobutanes, cyclobutenes, [1,1,1]propellanes, [2,2,2]propellanes, epoxides, aziridines, cyclopentenes, and norbornenes. These molecules have bond angles between ring atoms which are more acute than the optimal tetrahedral (109.5°) and trigonal planar (120°) bond angles required by their respective sp3 and sp2 bonds. Because of the smaller bond angles, the bonds have higher energy and adopt more p-character to reduce the energy of the bonds. In addition, the ring structures of cyclopropanes/enes and cyclclobutanes/enes offer very little conformational flexibility. Thus, the substituents of ring atoms exist in an eclipsed conformation in cyclopropanes and between gauche and eclipsed in cyclobutanes, contributing to higher ring strain energy in the form of van der Waals repulsion. monocycles cyclopropane (29 kcal/mol), C3H6 — the C-C-C bond angles are 60° whereas tetrahedral 109.5° bond angles are expected. The intense angle strain leads to nonlinear orbital overlap of its sp3 orbitals. Because of the bond's instability, cyclopropane is more reactive than other alkanes. Since any three points make a plane and cyclopropane has only three carbons, cyclopropane is planar. 
The H-C-H bond angle is 115° whereas 106° is expected as in the CH2 groups of propane. cyclobutane (26.3 kcal/mol), C4H8 — if cyclobutane were completely square planar, its bond angles would be 90° whereas tetrahedral 109.5° bond angles are expected. However, the actual C-C-C bond angle is 88° because it has a slightly folded form to relieve some torsional strain at the expense of slightly more angle strain. The high strain energy of cyclobutane is primarily from angle strain. cyclopentane (7.4 kcal/mol), C5H10 — if it were a completely regular planar pentagon its bond angles would be 108°, but tetrahedral 109.5° bond angles are expected. However, it has an unfixed puckered shape that undulates up and down. cyclohexane (1.3 kcal/mol), C6H12 — Although the chair conformation is able to achieve ideal angles, the unstable half-chair conformation has angle strain in the C-C-C angles which range from 109.86° to 119.07°. Bicyclics bicyclo[1.1.0]butane (66.3 kcal/mol), C4H6 bicyclo[2.1.0]pentane (54.7 kcal/mol), C5H8 bicyclo[3.1.0]hexane (26 kcal/mol), C6H10 norbornane (16.6 kcal/mol), C7H12 Ring strain can be considerably higher in bicyclic systems. For example, bicyclobutane, C4H6, is noted for being one of the most strained compounds that is isolatable on a large scale; its strain energy is estimated at 63.9 kcal mol−1 (267 kJ mol−1). Cyclopropane has a lesser amount of ring strain since it has the least amount of unsaturation; as a result, increasing the amount of unsaturation leads to greater ring strain. For example, cyclopropene has a greater amount of ring strain than cyclopropane because it has more unsaturation. == Applications == The potential energy and unique bonding structure contained in the bonds of molecules with ring strain can be used to drive reactions in organic synthesis. Examples of such reactions are ring opening metathesis polymerisation, photo-induced ring opening of cyclobutenes, and nucleophilic ring-opening of epoxides and aziridines. Increased potential energy from ring strain also can be used to increase the energy released by explosives or increase their shock sensitivity. For example, the shock sensitivity of the explosive 1,3,3-trinitroazetidine could be partially or primarily explained by its ring strain. == See also == Strain (chemistry) Alkane stereochemistry == References ==
Wikipedia/Baeyer_strain_theory
Supersymmetry is a theoretical framework in physics that suggests the existence of a symmetry between particles with integer spin (bosons) and particles with half-integer spin (fermions). It proposes that for every known particle, there exists a partner particle with different spin properties. There have been multiple experiments on supersymmetry that have failed to provide evidence that it exists in nature. If evidence is found, supersymmetry could help explain certain phenomena, such as the nature of dark matter and the hierarchy problem in particle physics. A supersymmetric theory is a theory in which the equations for force and the equations for matter are identical. In theoretical and mathematical physics, any theory with this property has the principle of supersymmetry (SUSY). Dozens of supersymmetric theories exist. In theory, supersymmetry is a type of spacetime symmetry between two basic classes of particles: bosons, which have an integer-valued spin and follow Bose–Einstein statistics, and fermions, which have a half-integer-valued spin and follow Fermi–Dirac statistics. The names of bosonic partners of fermions are prefixed with s-, because they are scalar particles. For example, if the electron existed in a supersymmetric theory, then there would be a particle called a selectron (superpartner electron), a bosonic partner of the electron. In supersymmetry, each particle from the class of fermions would have an associated particle in the class of bosons, and vice versa, known as a superpartner. The spin of a particle's superpartner is different by a half-integer. In the simplest supersymmetry theories, with perfectly "unbroken" supersymmetry, each pair of superpartners would share the same mass and internal quantum numbers besides spin. More complex supersymmetry theories have a spontaneously broken symmetry, allowing superpartners to differ in mass. Supersymmetry has various applications to different areas of physics, such as quantum mechanics, statistical mechanics, quantum field theory, condensed matter physics, nuclear physics, optics, stochastic dynamics, astrophysics, quantum gravity, and cosmology. Supersymmetry has also been applied to high-energy physics, where a supersymmetric extension of the Standard Model is a possible candidate for physics beyond the Standard Model. However, no supersymmetric extensions of the Standard Model have been experimentally verified, and some physicists are saying the theory is dead. == History == A supersymmetry relating mesons and baryons was first proposed, in the context of hadronic physics, by Hironari Miyazawa in 1966. This supersymmetry did not involve spacetime, that is, it concerned internal symmetry, and was broken badly. Miyazawa's work was largely ignored at the time. J. L. Gervais and B. Sakita (in 1971), Yu. A. Golfand and E. P. Likhtman (also in 1971), and D. V. Volkov and V. P. Akulov (1972), independently rediscovered supersymmetry in the context of quantum field theory, a radically new type of symmetry of spacetime and fundamental fields, which establishes a relationship between elementary particles of different quantum nature, bosons and fermions, and unifies spacetime and internal symmetries of microscopic phenomena. Supersymmetry with a consistent Lie-algebraic graded structure on which the Gervais−Sakita rediscovery was based directly first arose in 1971 in the context of an early version of string theory by Pierre Ramond, John H. Schwarz and André Neveu. 
In 1974, Julius Wess and Bruno Zumino identified the characteristic renormalization features of four-dimensional supersymmetric field theories, which identified them as remarkable QFTs, and they and Abdus Salam and their fellow researchers introduced early particle physics applications. The mathematical structure of supersymmetry (graded Lie superalgebras) has subsequently been applied successfully to other topics of physics, ranging from nuclear physics, critical phenomena, quantum mechanics to statistical physics, and supersymmetry remains a vital part of many proposed theories in many branches of physics. In particle physics, the first realistic supersymmetric version of the Standard Model was proposed in 1977 by Pierre Fayet and is known as the Minimal Supersymmetric Standard Model or MSSM for short. It was proposed to solve, amongst other things, the hierarchy problem. Supersymmetry was coined by Abdus Salam and John Strathdee in 1974 as a simplification of the term super-gauge symmetry used by Wess and Zumino, although Zumino also used the same term at around the same time. The term supergauge was in turn coined by Neveu and Schwarz in 1971 when they devised supersymmetry in the context of string theory. == Applications == === Extension of possible symmetry groups === One reason that physicists explored supersymmetry is because it offers an extension to the more familiar symmetries of quantum field theory. These symmetries are grouped into the Poincaré group and internal symmetries and the Coleman–Mandula theorem showed that under certain assumptions, the symmetries of the S-matrix must be a direct product of the Poincaré group with a compact internal symmetry group or if there is not any mass gap, the conformal group with a compact internal symmetry group. In 1971 Golfand and Likhtman were the first to show that the Poincaré algebra can be extended through introduction of four anticommuting spinor generators (in four dimensions), which later became known as supercharges. In 1975, the Haag–Łopuszański–Sohnius theorem analyzed all possible superalgebras in the general form, including those with an extended number of the supergenerators and central charges. This extended super-Poincaré algebra paved the way for obtaining a very large and important class of supersymmetric field theories. ==== The supersymmetry algebra ==== Traditional symmetries of physics are generated by objects that transform by the tensor representations of the Poincaré group and internal symmetries. Supersymmetries, however, are generated by objects that transform by the spin representations. According to the spin-statistics theorem, bosonic fields commute while fermionic fields anticommute. Combining the two kinds of fields into a single algebra requires the introduction of a Z2-grading under which the bosons are the even elements and the fermions are the odd elements. Such an algebra is called a Lie superalgebra. The simplest supersymmetric extension of the Poincaré algebra is the Super-Poincaré algebra. Expressed in terms of two Weyl spinors, has the following anti-commutation relation: { Q α , Q ¯ β ˙ } = 2 ( σ μ ) α β ˙ P μ {\displaystyle \{Q_{\alpha },{\bar {Q}}_{\dot {\beta }}\}=2(\sigma ^{\mu })_{\alpha {\dot {\beta }}}P_{\mu }} and all other anti-commutation relations between the Qs and commutation relations between the Qs and Ps vanish. In the above expression Pμ = −i ∂μ are the generators of translation and σμ are the Pauli matrices. 
There are representations of a Lie superalgebra that are analogous to representations of a Lie algebra. Each Lie algebra has an associated Lie group and a Lie superalgebra can sometimes be extended into representations of a Lie supergroup. === Supersymmetric quantum mechanics === Supersymmetric quantum mechanics adds the SUSY superalgebra to quantum mechanics as opposed to quantum field theory. Supersymmetric quantum mechanics often becomes relevant when studying the dynamics of supersymmetric solitons, and due to the simplified nature of having fields which are only functions of time (rather than space-time), a great deal of progress has been made in this subject and it is now studied in its own right. SUSY quantum mechanics involves pairs of Hamiltonians which share a particular mathematical relationship, which are called partner Hamiltonians. (The potential energy terms which occur in the Hamiltonians are then known as partner potentials.) An introductory theorem shows that for every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energy. This fact can be exploited to deduce many properties of the eigenstate spectrum. It is analogous to the original description of SUSY, which referred to bosons and fermions. We can imagine a "bosonic Hamiltonian", whose eigenstates are the various bosons of our theory. The SUSY partner of this Hamiltonian would be "fermionic", and its eigenstates would be the theory's fermions. Each boson would have a fermionic partner of equal energy. === Supersymmetry in quantum field theory === In quantum field theory, supersymmetry is motivated by solutions to several theoretical problems, for generally providing many desirable mathematical properties, and for ensuring sensible behavior at high energies. Supersymmetric quantum field theory is often much easier to analyze, as many more problems become mathematically tractable. When supersymmetry is imposed as a local symmetry, Einstein's theory of general relativity is included automatically, and the result is said to be a theory of supergravity. Another theoretically appealing property of supersymmetry is that it offers the only "loophole" to the Coleman–Mandula theorem, which prohibits spacetime and internal symmetries from being combined in any nontrivial way, for quantum field theories with very general assumptions. The Haag–Łopuszański–Sohnius theorem demonstrates that supersymmetry is the only way spacetime and internal symmetries can be combined consistently. While supersymmetry has not been discovered at high energy, see Section Supersymmetry in particle physics, supersymmetry was found to be effectively realized at the intermediate energy of hadronic physics where baryons and mesons are superpartners. An exception is the pion that appears as a zero mode in the mass spectrum and thus protected by the supersymmetry: It has no baryonic partner. The realization of this effective supersymmetry is readily explained in quark–diquark models: Because two different color charges close together (e.g., blue and red) appear under coarse resolution as the corresponding anti-color (e.g. anti-green), a diquark cluster viewed with coarse resolution (i.e., at the energy-momentum scale used to study hadron structure) effectively appears as an antiquark. Therefore, a baryon containing 3 valence quarks, of which two tend to cluster together as a diquark, behaves likes a meson. 
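Returning to supersymmetric quantum mechanics, the claim that partner Hamiltonians share their energy spectra can be checked numerically. The sketch below is illustrative only: it assumes the standard convention with partner potentials W^2 - W' and W^2 + W' (in units where hbar = 2m = 1) and the simple superpotential W(x) = x, and it diagonalises finite-difference approximations of the two Hamiltonians.

```python
# Numerical sketch (illustrative, units hbar = 2m = 1): the partner Hamiltonians
# H_minus = -d^2/dx^2 + W^2 - W' and H_plus = -d^2/dx^2 + W^2 + W', built from
# the superpotential W(x) = x, share all eigenvalues except the ground state.
import numpy as np

N, L = 1500, 10.0
x = np.linspace(-L, L, N)
h = x[1] - x[0]

# Second-derivative operator by central finite differences (Dirichlet box).
lap = (np.diag(np.full(N - 1, 1.0), -1)
       - 2.0 * np.eye(N)
       + np.diag(np.full(N - 1, 1.0), 1)) / h**2

W = x             # superpotential W(x) = x
dW = np.ones(N)   # its derivative W'(x) = 1

H_minus = -lap + np.diag(W**2 - dW)
H_plus = -lap + np.diag(W**2 + dW)

e_minus = np.linalg.eigvalsh(H_minus)[:5]
e_plus = np.linalg.eigvalsh(H_plus)[:4]
print("lowest eigenvalues of H_minus:", np.round(e_minus, 3))
print("lowest eigenvalues of H_plus :", np.round(e_plus, 3))
# Expected: H_minus ~ 0, 2, 4, 6, 8 and H_plus ~ 2, 4, 6, 8 (excited states match).
```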
=== Supersymmetry in condensed matter physics === SUSY concepts have provided useful extensions to the WKB approximation. Additionally, SUSY has been applied to disorder averaged systems both quantum and non-quantum (through statistical mechanics), the Fokker–Planck equation being an example of a non-quantum theory. The 'supersymmetry' in all these systems arises from the fact that one is modelling one particle and as such the 'statistics' do not matter. The supersymmetry method provides a mathematically rigorous alternative to the replica trick for addressing the so-called 'problem of the denominator' under disorder averaging, but it applies only to non-interacting systems. For more on the applications of supersymmetry in condensed matter physics see Efetov (1997). In 2021, a group of researchers showed that, in theory, N = ( 0 , 1 ) {\displaystyle N=(0,1)} SUSY could be realised at the edge of a Moore–Read quantum Hall state. However, to date, no experiment has realised it at the edge of a Moore–Read state. In 2022, a different group of researchers created a computer simulation of atoms in one dimension that had supersymmetric topological quasiparticles. === Supersymmetry in optics === In 2013, integrated optics was found to provide a fertile ground on which certain ramifications of SUSY can be explored in readily-accessible laboratory settings. Making use of the analogous mathematical structure of the quantum-mechanical Schrödinger equation and the wave equation governing the evolution of light in one-dimensional settings, one may interpret the refractive index distribution of a structure as a potential landscape in which optical wave packets propagate. In this manner, a new class of functional optical structures with possible applications in phase matching, mode conversion and space-division multiplexing becomes possible. SUSY transformations have also been proposed as a way to address inverse scattering problems in optics and as a form of one-dimensional transformation optics. === Supersymmetry in dynamical systems === All stochastic (partial) differential equations, the models for all types of continuous time dynamical systems, possess topological supersymmetry. In the operator representation of stochastic evolution, the topological supersymmetry is the exterior derivative which is commutative with the stochastic evolution operator defined as the stochastically averaged pullback induced on differential forms by SDE-defined diffeomorphisms of the phase space. The topological sector of the so-emerging supersymmetric theory of stochastic dynamics can be recognized as the Witten-type topological field theory. The meaning of the topological supersymmetry in dynamical systems is the preservation of the phase space continuity: infinitely close points will remain close during continuous time evolution even in the presence of noise. When the topological supersymmetry is broken spontaneously, this property is violated in the limit of the infinitely long temporal evolution and the model can be said to exhibit (the stochastic generalization of) the butterfly effect. From a more general perspective, spontaneous breakdown of the topological supersymmetry is the theoretical essence of the ubiquitous dynamical phenomenon variously known as chaos, turbulence, self-organized criticality etc.
The Goldstone theorem explains the associated emergence of the long-range dynamical behavior that manifests itself as ⁠1/f⁠ noise, butterfly effect, and the scale-free statistics of sudden (instantonic) processes, such as earthquakes, neuroavalanches, and solar flares, known as the Zipf's law and the Richter scale. ==== In finance ==== In 2021, supersymmetric quantum mechanics was applied to option pricing and the analysis of markets in finance, and to financial networks. === Supersymmetry in mathematics === SUSY is also sometimes studied mathematically for its intrinsic properties. This is because it describes complex fields satisfying a property known as holomorphy, which allows holomorphic quantities to be exactly computed. This makes supersymmetric models useful "toy models" of more realistic theories. A prime example of this has been the demonstration of S-duality in four-dimensional gauge theories that interchanges particles and monopoles. The proof of the Atiyah–Singer index theorem is much simplified by the use of supersymmetric quantum mechanics. === Supersymmetry in string theory === Supersymmetry is an integral part of string theory, a possible theory of everything. There are two types of string theory, supersymmetric string theory or superstring theory, and non-supersymmetric string theory. By definition of superstring theory, supersymmetry is required in superstring theory at some level. However, even in non-supersymmetric string theory, a type of supersymmetry called misaligned supersymmetry is still required in the theory in order to ensure no physical tachyons appear. Any string theories without some kind of supersymmetry, such as bosonic string theory and the E 7 × E 7 {\displaystyle E_{7}\times E_{7}} , S U ( 16 ) {\displaystyle SU(16)} , and E 8 {\displaystyle E_{8}} heterotic string theories, will have a tachyon and therefore the spacetime vacuum itself would be unstable and would decay into some tachyon-free string theory usually in a lower spacetime dimension. There is no experimental evidence that either supersymmetry or misaligned supersymmetry holds in our universe, and many physicists have moved on from supersymmetry and string theory entirely due to the non-detection of supersymmetry at the LHC. Despite the null results for supersymmetry at the LHC so far, some particle physicists have nevertheless moved to string theory in order to resolve the naturalness crisis for certain supersymmetric extensions of the Standard Model. According to the particle physicists, there exists a concept of "stringy naturalness" in string theory, where the string theory landscape could have a power law statistical pull on soft SUSY breaking terms to large values (depending on the number of hidden sector SUSY breaking fields contributing to the soft terms). If this is coupled with an anthropic requirement that contributions to the weak scale not exceed a factor between 2 and 5 from its measured value (as argued by Agrawal et al.), then the Higgs mass is pulled up to the vicinity of 125 GeV while most sparticles are pulled to values beyond the current reach of LHC. (The Higgs was determined to have a mass of 125 GeV ±0.15 GeV in 2022.) An exception occurs for higgsinos which gain mass not from SUSY breaking but rather from whatever mechanism solves the SUSY mu problem. Light higgsino pair production in association with hard initial state jet radiation leads to a soft opposite-sign dilepton plus jet plus missing transverse energy signal. 
== Supersymmetry in particle physics == In particle physics, a supersymmetric extension of the Standard Model is a possible candidate for undiscovered particle physics, and seen by some physicists as an elegant solution to many current problems in particle physics if confirmed correct, which could resolve various areas where current theories are believed to be incomplete and where limitations of current theories are well established. In particular, one supersymmetric extension of the Standard Model, the Minimal Supersymmetric Standard Model (MSSM), became popular in theoretical particle physics, as the Minimal Supersymmetric Standard Model is the simplest supersymmetric extension of the Standard Model that could resolve major hierarchy problems within the Standard Model, by guaranteeing that quadratic divergences of all orders will cancel out in perturbation theory. If a supersymmetric extension of the Standard Model is correct, superpartners of the existing elementary particles would be new and undiscovered particles and supersymmetry is expected to be spontaneously broken. There is no experimental evidence that a supersymmetric extension to the Standard Model is correct, or whether or not other extensions to current models might be more accurate. It is only since around 2010 that particle accelerators specifically designed to study physics beyond the Standard Model have become operational (i.e. the Large Hadron Collider (LHC)), and it is not known where exactly to look, nor the energies required for a successful search. However, the negative results from the LHC since 2010 have already ruled out some supersymmetric extensions to the Standard Model, and many physicists believe that the Minimal Supersymmetric Standard Model, while not ruled out, is no longer able to fully resolve the hierarchy problem. === Supersymmetric extensions of the Standard Model === Incorporating supersymmetry into the Standard Model requires doubling the number of particles since there is no way that any of the particles in the Standard Model can be superpartners of each other. With the addition of new particles, there are many possible new interactions. The simplest possible supersymmetric model consistent with the Standard Model is the Minimal Supersymmetric Standard Model (MSSM) which can include the necessary additional new particles that are able to be superpartners of those in the Standard Model. One of the original motivations for the Minimal Supersymmetric Standard Model came from the hierarchy problem. Due to the quadratically divergent contributions to the Higgs mass squared in the Standard Model, the quantum mechanical interactions of the Higgs boson causes a large renormalization of the Higgs mass and unless there is an accidental cancellation, the natural size of the Higgs mass is the greatest scale possible. Furthermore, the electroweak scale receives enormous Planck-scale quantum corrections. The observed hierarchy between the electroweak scale and the Planck scale must be achieved with extraordinary fine tuning. This problem is known as the hierarchy problem. Supersymmetry close to the electroweak scale, such as in the Minimal Supersymmetric Standard Model, would solve the hierarchy problem that afflicts the Standard Model. It would reduce the size of the quantum corrections by having automatic cancellations between fermionic and bosonic Higgs interactions, and Planck-scale quantum corrections cancel between partners and superpartners (owing to a minus sign associated with fermionic loops). 
The hierarchy between the electroweak scale and the Planck scale would be achieved in a natural manner, without extraordinary fine-tuning. If supersymmetry were restored at the weak scale, then the Higgs mass would be related to supersymmetry breaking which can be induced from small non-perturbative effects explaining the vastly different scales in the weak interactions and gravitational interactions. Another motivation for the Minimal Supersymmetric Standard Model comes from grand unification, the idea that the gauge symmetry groups should unify at high-energy. In the Standard Model, however, the weak, strong and electromagnetic gauge couplings fail to unify at high energy. In particular, the renormalization group evolution of the three gauge coupling constants of the Standard Model is somewhat sensitive to the present particle content of the theory. These coupling constants do not quite meet together at a common energy scale if we run the renormalization group using the Standard Model. After incorporating minimal SUSY at the electroweak scale, the running of the gauge couplings are modified, and joint convergence of the gauge coupling constants is projected to occur at approximately 1016 GeV. The modified running also provides a natural mechanism for radiative electroweak symmetry breaking. In many supersymmetric extensions of the Standard Model, such as the Minimal Supersymmetric Standard Model, there is a heavy stable particle (such as the neutralino) which could serve as a weakly interacting massive particle (WIMP) dark matter candidate. The existence of a supersymmetric dark matter candidate is related closely to R-parity. Supersymmetry at the electroweak scale (augmented with a discrete symmetry) typically provides a candidate dark matter particle at a mass scale consistent with thermal relic abundance calculations. The standard paradigm for incorporating supersymmetry into a realistic theory is to have the underlying dynamics of the theory be supersymmetric, but the ground state of the theory does not respect the symmetry and supersymmetry is broken spontaneously. The supersymmetry break can not be done permanently by the particles of the MSSM as they currently appear. This means that there is a new sector of the theory that is responsible for the breaking. The only constraint on this new sector is that it must break supersymmetry permanently and must give superparticles TeV scale masses. There are many models that can do this and most of their details do not matter. In order to parameterize the relevant features of supersymmetry breaking, arbitrary soft SUSY breaking terms are added to the theory which temporarily break SUSY explicitly but could never arise from a complete theory of supersymmetry breaking. All of these supersymmetric partners (sparticles) are hypothetical and have not been observed experimentally. They are predicted by various supersymmetric extensions of the Standard Model. === Searches and constraints for supersymmetry === SUSY extensions of the standard model are constrained by a variety of experiments, including measurements of low-energy observables – for example, the anomalous magnetic moment of the muon at Fermilab; the WMAP dark matter density measurement and direct detection experiments – for example, XENON-100 and LUX; and by particle collider experiments, including B-physics, Higgs phenomenology and direct searches for superpartners (sparticles), at the Large Electron–Positron Collider, Tevatron and the LHC. 
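Returning to the gauge-coupling unification mentioned above, the projected convergence near 10^16 GeV can be sketched with a one-loop calculation. The inputs below are approximate coupling values at the Z mass, the beta-function coefficients are the standard one-loop Standard Model and MSSM values (with supersymmetry idealised as entering at the Z mass), and the whole sketch is illustrative rather than a precision analysis.

```python
# One-loop running sketch of the three gauge couplings (illustrative only).
# Approximate inputs at the Z mass; alpha_1 uses the usual GUT normalisation.
import math

M_Z = 91.2                                  # GeV
alpha_inv_mz = [59.0, 29.6, 8.45]           # ~ alpha_1^-1, alpha_2^-1, alpha_3^-1
b_sm = [41 / 10, -19 / 6, -7.0]             # Standard Model one-loop coefficients
b_mssm = [33 / 5, 1.0, -3.0]                # MSSM one-loop coefficients

def alpha_inv(mu, b):
    """alpha_i^-1(mu) = alpha_i^-1(M_Z) - b_i/(2*pi) * ln(mu/M_Z)."""
    t = math.log(mu / M_Z)
    return [a - bi * t / (2 * math.pi) for a, bi in zip(alpha_inv_mz, b)]

def crossing_scale(b):
    """Scale where alpha_1 and alpha_2 meet, from the linear one-loop running."""
    t = (alpha_inv_mz[0] - alpha_inv_mz[1]) * 2 * math.pi / (b[0] - b[1])
    return M_Z * math.exp(t)

for name, b in (("SM", b_sm), ("MSSM", b_mssm)):
    mu = crossing_scale(b)
    a1, a2, a3 = alpha_inv(mu, b)
    print(f"{name:5s} alpha_1 = alpha_2 at ~{mu:.2e} GeV; "
          f"alpha_3^-1 there = {a3:.1f} (compare {a1:.1f})")
```

With the MSSM coefficients the three inverse couplings come out nearly equal at roughly 2 x 10^16 GeV, whereas with the Standard Model coefficients they miss one another, which is the qualitative point made above.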
In fact, CERN publicly states that if a supersymmetric extension of the Standard Model "is correct, supersymmetric particles should appear in collisions at the LHC." Historically, the tightest limits were from direct production at colliders. The first mass limits for squarks and gluinos were made at CERN by the UA1 experiment and the UA2 experiment at the Super Proton Synchrotron. LEP later set very strong limits, which in 2006 were extended by the D0 experiment at the Tevatron. From 2003 to 2015, WMAP's and Planck's dark matter density measurements have strongly constrained supersymmetric extensions of the Standard Model, which, if they explain dark matter, have to be tuned to invoke a particular mechanism to sufficiently reduce the neutralino density. Prior to the beginning of the LHC, in 2009, fits of available data to CMSSM and NUHM1 indicated that squarks and gluinos were most likely to have masses in the 500 to 800 GeV range, though values as high as 2.5 TeV were allowed with low probabilities. Neutralinos and sleptons were expected to be quite light, with the lightest neutralino and the lightest stau most likely to be found between 100 and 150 GeV. The first runs of the LHC surpassed existing experimental limits from the Large Electron–Positron Collider and Tevatron and partially excluded the aforementioned expected ranges. In 2011–12, the LHC discovered a Higgs boson with a mass of about 125 GeV, and with couplings to fermions and bosons which are consistent with the Standard Model. The MSSM predicts that the mass of the lightest Higgs boson should not be much higher than the mass of the Z boson, and, in the absence of fine tuning (with the supersymmetry breaking scale on the order of 1 TeV), should not exceed 135 GeV. The LHC found no previously unknown particles other than the Higgs boson, which was already suspected to exist as part of the Standard Model, and therefore no evidence for any supersymmetric extension of the Standard Model. Indirect methods include the search for a permanent electric dipole moment (EDM) in the known Standard Model particles, which can arise when the Standard Model particle interacts with the supersymmetric particles. The current best constraint on the electron electric dipole moment puts it at smaller than 10⁻²⁸ e·cm, equivalent to a sensitivity to new physics at the TeV scale and matching that of the current best particle colliders. A permanent EDM in any fundamental particle points towards time-reversal-violating physics, and therefore also CP-symmetry violation via the CPT theorem. Such EDM experiments are also much more scalable than conventional particle accelerators and offer a practical alternative to detecting physics beyond the Standard Model as accelerator experiments become increasingly costly and complicated to maintain. The current best limit for the electron's EDM has already reached a sensitivity to rule out so-called 'naive' versions of supersymmetric extensions of the Standard Model. Research in the late 2010s and early 2020s from experimental data on the cosmological constant, LIGO noise, and pulsar timing suggests that it is very unlikely that there are any new particles with masses much higher than those which can be found in the Standard Model or at the LHC. However, this research has also indicated that quantum gravity or perturbative quantum field theory will become strongly coupled before 1 PeV, leading to other new physics at TeV scales.
=== Current status === The negative findings in the experiments disappointed many physicists, who believed that supersymmetric extensions of the Standard Model (and other theories relying upon it) were by far the most promising theories for "new" physics beyond the Standard Model, and had hoped for signs of unexpected results from the experiments. In particular, the LHC result seems problematic for the Minimal Supersymmetric Standard Model, as the value of 125 GeV is relatively large for the model and can only be achieved with large radiative loop corrections from top squarks, which many theorists consider to be "unnatural" (see naturalness and fine tuning). In response to the so-called "naturalness crisis" in the Minimal Supersymmetric Standard Model, some researchers have abandoned naturalness and the original motivation to solve the hierarchy problem naturally with supersymmetry, while other researchers have moved on to other supersymmetric models such as split supersymmetry. Still others have moved to string theory as a result of the naturalness crisis. Former enthusiastic supporter Mikhail Shifman went as far as urging the theoretical community to search for new ideas and accept that supersymmetry was a failed theory in particle physics. However, some researchers suggested that this "naturalness" crisis was premature, because various calculations were too optimistic about the limits of masses which would allow a supersymmetric extension of the Standard Model as a solution. == General supersymmetry == Supersymmetry appears in many related contexts of theoretical physics. It is possible to have multiple supersymmetries and also have supersymmetric extra dimensions. === Extended supersymmetry === It is possible to have more than one kind of supersymmetry transformation. Theories with more than one supersymmetry transformation are known as extended supersymmetric theories. The more supersymmetry a theory has, the more constrained are the field content and interactions. Typically the number of copies of a supersymmetry is a power of 2 (1, 2, 4, 8...). In four dimensions, a spinor has four degrees of freedom, so the minimal number of supersymmetry generators in four dimensions is four, and having eight copies of supersymmetry means that there are 32 supersymmetry generators. The maximal number of supersymmetry generators possible is 32. Theories with more than 32 supersymmetry generators automatically have massless fields with spin greater than 2, and it is not known how to make massless fields with spin greater than two interact (this is the content of the Weinberg–Witten theorem), so 32 is the largest number of supersymmetry generators considered. This corresponds to an N = 8 supersymmetry theory. Theories with 32 supersymmetries automatically have a graviton. For four dimensions, the possible theories and their corresponding multiplets can be enumerated (CPT adds a copy whenever a multiplet is not invariant under that symmetry). === Supersymmetry in alternate numbers of dimensions === It is possible to have supersymmetry in dimensions other than four. Because the properties of spinors change drastically between different dimensions, each dimension has its own characteristics. In d dimensions, the size of spinors is approximately 2^(d/2) or 2^((d − 1)/2). Since the maximum number of supersymmetries is 32, the greatest number of dimensions in which a supersymmetric theory can exist is eleven.
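The dimension counting quoted above can be made concrete with a few lines of arithmetic. The sketch below uses the rough 2^(d/2) / 2^((d−1)/2) estimate for the number of spinor components and ignores Majorana and Weyl reduction factors, which is a simplifying assumption.

```python
# Rough count of real supercharges carried by one minimal spinor in d dimensions,
# using the 2^(d/2) / 2^((d-1)/2) estimate quoted above. Majorana/Weyl reduction
# factors are ignored - a simplifying assumption for illustration only.
MAX_SUPERCHARGES = 32

def minimal_spinor_components(d):
    return 2 ** (d // 2) if d % 2 == 0 else 2 ** ((d - 1) // 2)

for d in range(4, 13):
    q = minimal_spinor_components(d)
    status = "allowed" if q <= MAX_SUPERCHARGES else "exceeds 32"
    print(f"d = {d:2d}: minimal spinor ~ {q:3d} components -> {status}")
# With this naive counting the 32-supercharge bound is saturated at d = 11,
# consistent with eleven dimensions being the maximum quoted in the text.
```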
=== Fractional supersymmetry === Fractional supersymmetry is a generalization of the notion of supersymmetry in which the minimal positive amount of spin does not have to be 1/2 but can be an arbitrary 1/N for an integer value of N. Such a generalization is possible in two or fewer spacetime dimensions. == See also == == References == == Further reading == === Theoretical introductions, free and online === === Monographs === === On experiments === == External links == Supersymmetry – European Organization for Nuclear Research (CERN) The status of supersymmetry – Symmetry Magazine (Fermilab/SLAC), January 12, 2021 As Supersymmetry Fails Tests, Physicists Seek New Ideas – Quanta Magazine, November 20, 2012 What is Supersymmetry? – Fermilab, May 21, 2013 Why Supersymmetry? – Fermilab, May 31, 2013 The Standard Model and Supersymmetry – World Science Festival, March 4, 2015 SUSY running out of hiding places – BBC, December 11, 2012
Wikipedia/Supersymmetric_theory
In chemistry, frontier molecular orbital theory is an application of molecular orbital theory describing HOMO–LUMO interactions. == History == In 1952, Kenichi Fukui published a paper in the Journal of Chemical Physics titled "A molecular theory of reactivity in aromatic hydrocarbons." Though the paper was widely criticized at the time, Fukui later shared the Nobel Prize in Chemistry with Roald Hoffmann for his work on reaction mechanisms. Hoffmann's work focused on a set of rules, based on orbital symmetry, for four classes of pericyclic reactions in organic chemistry, set out in the text "The Conservation of Orbital Symmetry", which he coauthored with Robert Burns Woodward. Fukui's own work looked at the frontier orbitals, and in particular the effects of the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) on reaction mechanisms, which led to it being called frontier molecular orbital theory (FMO theory). He used these interactions to better understand the conclusions of the Woodward–Hoffmann rules. == Theory == Fukui realized that a good approximation for reactivity could be found by looking at the frontier orbitals (HOMO/LUMO). This was based on three main observations of molecular orbital theory as two molecules interact: The occupied orbitals of different molecules repel each other. Positive charges of one molecule attract the negative charges of the other. The occupied orbitals of one molecule and the unoccupied orbitals of the other (especially the HOMO and LUMO) interact with each other causing attraction. In general, the total energy change of the reactants on approach of the transition state is described by the Klopman–Salem equation, derived from perturbational MO theory. The first and second observations correspond to taking into consideration the filled–filled interaction and Coulombic interaction terms of the equation, respectively. With respect to the third observation, primary consideration of the HOMO–LUMO interaction is justified by the fact that the largest contribution in the filled–unfilled interaction term of the Klopman–Salem equation comes from molecular orbitals r and s that are closest in energy (i.e., smallest Er − Es value). From these observations, frontier molecular orbital (FMO) theory simplifies prediction of reactivity to analysis of the interaction between the more energetically matched HOMO–LUMO pairing of the two reactants. In addition to providing a unified explanation of diverse aspects of chemical reactivity and selectivity, it agrees with the predictions of the Woodward–Hoffmann orbital symmetry and Dewar–Zimmerman aromatic transition state treatments of thermal pericyclic reactions, which are summarized in the following selection rule: "A ground-state pericyclic change is symmetry-allowed when the total number of (4q+2)s and (4r)a components is odd" (4q+2)s refers to the number of aromatic, suprafacial electron systems; likewise, (4r)a refers to antiaromatic, antarafacial systems. It can be shown that if the total number of these systems is odd then the reaction is thermally allowed. == Applications == === Cycloadditions === A cycloaddition is a reaction that simultaneously forms at least two new bonds, and in doing so, converts two or more open-chain molecules into rings. The transition states for these reactions typically involve the electrons of the molecules moving in continuous rings, making it a pericyclic reaction. These reactions can be predicted by the Woodward–Hoffmann rules and thus are closely approximated by FMO theory.
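Because an FMO analysis needs only the energies and coefficient patterns of the frontier orbitals, a simple Hückel calculation is often enough for small π systems. The sketch below builds the Hückel matrices for ethene and 1,3-butadiene (with α set to 0 and β to −1) and prints the frontier-orbital coefficients, whose terminal-lobe phases show the symmetry pattern used in the cycloaddition example that follows; it is a minimal illustration, not a quantitative method.

```python
import numpy as np

def huckel_pi_system(n_atoms, bonds):
    """Hückel Hamiltonian in units where alpha = 0 and beta = -1."""
    H = np.zeros((n_atoms, n_atoms))
    for i, j in bonds:
        H[i, j] = H[j, i] = -1.0
    energies, coeffs = np.linalg.eigh(H)   # eigenvalues in ascending energy order
    return energies, coeffs

def frontier(n_electrons, coeffs):
    homo = n_electrons // 2 - 1            # index of the highest occupied MO
    return coeffs[:, homo], coeffs[:, homo + 1]

# Ethene: 2 p orbitals, 2 pi electrons; 1,3-butadiene: 4 p orbitals, 4 pi electrons
_, c_en = huckel_pi_system(2, [(0, 1)])
_, c_bu = huckel_pi_system(4, [(0, 1), (1, 2), (2, 3)])

homo_en, lumo_en = frontier(2, c_en)
homo_bu, lumo_bu = frontier(4, c_bu)

# The relative phase of the terminal coefficients reveals the orbital symmetry:
print("butadiene HOMO:", np.round(homo_bu, 2))  # terminal lobes out of phase (antisymmetric)
print("ethene    LUMO:", np.round(lumo_en, 2))  # lobes out of phase (antisymmetric)
print("ethene    HOMO:", np.round(homo_en, 2))  # lobes in phase (symmetric)
print("butadiene LUMO:", np.round(lumo_bu, 2))  # terminal lobes in phase (symmetric)
```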
The Diels–Alder reaction between maleic anhydride and cyclopentadiene is allowed by the Woodward–Hoffmann rules because there are six electrons moving suprafacially and no electrons moving antarafacially. Thus, there is one (4q + 2)s component and no (4r)a component, which means the reaction is allowed thermally. FMO theory also finds that this reaction is allowed and goes even further by predicting its stereoselectivity, which is unknown under the Woodward–Hoffmann rules. Since this is a [4 + 2] cycloaddition, the reaction can be simplified by considering the reaction between butadiene and ethene. The HOMO of butadiene and the LUMO of ethene are both antisymmetric (rotationally symmetric), meaning the reaction is allowed.* In terms of the stereoselectivity of the reaction between maleic anhydride and cyclopentadiene, the endo-product is favored, a result best explained through FMO theory. The maleic anhydride is an electron-withdrawing species that makes the dienophile electron deficient, forcing the regular (normal electron demand) Diels–Alder reaction. Thus, only the reaction between the HOMO of cyclopentadiene and the LUMO of maleic anhydride is allowed. Furthermore, though the exo-product is the more thermodynamically stable isomer, there are secondary (non-bonding) orbital interactions in the endo- transition state, lowering its energy and making the reaction towards the endo-product faster, and therefore more kinetically favorable. Since the exo-product has primary (bonding) orbital interactions, it can still form; but since the endo-product forms faster, it is the major product. *Note: The HOMO of ethene and the LUMO of butadiene are both symmetric, meaning the reaction between these species is allowed as well. This is referred to as the "inverse electron demand Diels–Alder." === Sigmatropic reactions === A sigmatropic rearrangement is a reaction in which a sigma bond moves across a conjugated pi system with a concomitant shift in the pi bonds. The shift in the sigma bond may be antarafacial or suprafacial. In the example of a [1,5] shift in pentadiene, if there is a suprafacial shift, there are six electrons moving suprafacially and none moving antarafacially, implying this reaction is allowed by the Woodward–Hoffmann rules. For an antarafacial shift, the reaction is not allowed. These results can be predicted with FMO theory by observing the interaction between the HOMO and LUMO of the species. To use FMO theory, the reaction should be considered as two separate ideas: (1) whether or not the reaction is allowed, and (2) which mechanism the reaction proceeds through. In the case of a [1,5] shift on pentadiene, the HOMO of the sigma bond (i.e., the bonding, in-phase combination) and the LUMO of butadiene on the remaining 4 carbons are considered. Assuming the reaction happens suprafacially, the shift results in the HOMO of butadiene on the four carbons that are not involved in the sigma bond of the product. Since the pi system changed from the LUMO to the HOMO, this reaction is allowed (though it would not be allowed if the pi system went from LUMO to LUMO). To explain why the reaction happens suprafacially, first notice that the terminal orbitals are in the same phase. For there to be a constructive sigma bond formed after the shift, the reaction would have to be suprafacial. If the species shifted antarafacially then it would form an antibonding orbital and there would not be a constructive sigma shift.
In propene the shift would have to be antarafacial, but since the molecule is very small, that twist is not possible and the reaction is not allowed. === Electrocyclic reactions === An electrocyclic reaction is a pericyclic reaction involving the net loss of a pi bond and creation of a sigma bond with formation of a ring. This reaction proceeds through either a conrotatory or disrotatory mechanism. In the conrotatory ring opening of cyclobutene, there are two electrons moving suprafacially (on the pi bond) and two moving antarafacially (on the sigma bond). This means there is one 4q + 2 suprafacial system and no 4r antarafacial system; thus, the conrotatory process is thermally allowed by the Woodward–Hoffmann rules. The HOMO of the sigma bond (i.e., a constructive bond) and the LUMO of the pi bond are important in the FMO theory consideration. If the ring opening uses a conrotatory process, then the reaction results with the HOMO of butadiene. As in the previous examples, the pi system moves from a LUMO species to a HOMO species, meaning this reaction is allowed. == See also == Addition to pi ligands Klopman–Salem equation Oxy Cope elimination pericyclic reaction == References ==
Wikipedia/Frontier_molecular_orbital_theory
The Brønsted–Lowry theory (also called proton theory of acids and bases) is an acid–base reaction theory which was developed independently in 1923 by physical chemists Johannes Nicolaus Brønsted (in Denmark) and Thomas Martin Lowry (in the United Kingdom). The basic concept of this theory is that when an acid and a base react with each other, the acid forms its conjugate base, and the base forms its conjugate acid by exchange of a proton (the hydrogen cation, or H+). This theory generalises the Arrhenius theory. == Definitions of acids and bases == In the Arrhenius theory, acids are defined as substances that dissociate in aqueous solutions to give H+ (hydrogen cations or protons), while bases are defined as substances that dissociate in aqueous solutions to give OH− (hydroxide ions). In 1923, physical chemists Johannes Nicolaus Brønsted in Denmark and Thomas Martin Lowry in England both independently proposed the theory named after them. In the Brønsted–Lowry theory acids and bases are defined by the way they react with each other, generalising them. This is best illustrated by an equilibrium equation. acid + base ⇌ conjugate base + conjugate acid. With an acid, HA, the equation can be written symbolically as: HA + B ⇌ A− + HB+ The equilibrium sign, ⇌, is used because the reaction can occur in both forward and backward directions (is reversible). The acid, HA, is a proton donor which can lose a proton to become its conjugate base, A−. The base, B, is a proton acceptor which can become its conjugate acid, HB+. Most acid–base reactions are fast, so the substances in the reaction are usually in dynamic equilibrium with each other. == Aqueous solutions == Consider the following acid–base reaction: CH 3 COOH + H 2 O ↽ − − ⇀ CH 3 COO − + H 3 O + {\displaystyle {\ce {CH3 COOH + H2O <=> CH3 COO- + H3O+}}} Acetic acid, CH3COOH, is an acid because it donates a proton to water (H2O) and becomes its conjugate base, the acetate ion (CH3COO−). H2O is a base because it accepts a proton from CH3COOH and becomes its conjugate acid, the hydronium ion, (H3O+). The reverse of an acid–base reaction is also an acid–base reaction, between the conjugate acid of the base in the first reaction and the conjugate base of the acid. In the above example, ethanoate is the base of the reverse reaction and hydronium ion is the acid. H 3 O + + CH 3 COO − ↽ − − ⇀ CH 3 COOH + H 2 O {\displaystyle {\ce {H3O+ + CH3 COO- <=> CH3COOH + H2O}}} One feature of the Brønsted–Lowry theory in contrast to Arrhenius theory is that it does not require an acid to dissociate. == Amphoteric substances == The essence of Brønsted–Lowry theory is that an acid is only such in relation to a base, and vice versa. Water is amphoteric as it can act as an acid or as a base. In the image shown at the right one molecule of H2O acts as a base and gains H+ to become H3O+ while the other acts as an acid and loses H+ to become OH−. Another example is illustrated by substances like aluminium hydroxide, Al(OH)3. 
Al ( OH ) 3 ( acid ) + OH − ↽ − − ⇀ Al ( OH ) 4 − {\displaystyle {\ce {{\overset {(acid)}{Al(OH)3}}{}+ OH- <=> Al(OH)4^-}}} 3 H + + Al ( OH ) 3 ( base ) ↽ − − ⇀ 3 H 2 O + Al ( aq ) 3 + {\displaystyle {\ce {3H+{}+ {\overset {(base)}{Al(OH)3}}<=> 3H2O{}+ Al_{(aq)}^3+}}} === Non-aqueous solutions === The hydrogen ion, or hydronium ion, is a Brønsted–Lowry acid when dissolved in H2O and the hydroxide ion is a base because of the autoionization of water reaction H 2 O + H 2 O ↽ − − ⇀ H 3 O + + OH − {\displaystyle {\ce {H2O + H2O <=> H3O+ + OH-}}} An analogous reaction occurs in liquid ammonia NH 3 + NH 3 ↽ − − ⇀ NH 4 + + NH 2 − {\displaystyle {\ce {NH3 + NH3 <=> NH4+ + NH2-}}} Thus, the ammonium ion, NH+4, in liquid ammonia corresponds to the hydronium ion in water and the amide ion, NH−2 in ammonia, to the hydroxide ion in water. Ammonium salts behave as acids, and metal amides behave as bases. Some non-aqueous solvents can behave as bases, i.e. accept protons, in relation to Brønsted–Lowry acids. HA + S ↽ − − ⇀ A − + SH + {\displaystyle {\ce {HA + S <=> A- + SH+}}} where S stands for a solvent molecule. The most important of such solvents are dimethylsulfoxide, DMSO, and acetonitrile, CH3CN, as these solvents have been widely used to measure the acid dissociation constants of carbon-containing molecules. Because DMSO accepts protons more strongly than H2O the acid becomes stronger in this solvent than in water. Indeed, many molecules behave as acids in non-aqueous solutions but not in aqueous solutions. An extreme case occurs with carbon acids, where a proton is extracted from a C−H bond. Some non-aqueous solvents can behave as acids. An acidic solvent will make dissolved substances more basic. For example, the compound CH3COOH is known as acetic acid since it behaves as an acid in water. However, it behaves as a base in liquid hydrogen fluoride, a much more acidic solvent. CH 3 COOH + 2 HF ↽ − − ⇀ CH 3 C ( OH ) 2 + + HF 2 − {\displaystyle {\ce {CH3COOH + 2HF <=> CH3C(OH)2+ + HF2-}}} == Comparison with Lewis acid–base theory == In the same year that Brønsted and Lowry published their theory, G. N. Lewis created an alternative theory of acid–base reactions. The Lewis theory is based on electronic structure. A Lewis base is a compound that can give an electron pair to a Lewis acid, a compound that can accept an electron pair. Lewis's proposal explains the Brønsted–Lowry classification using electronic structure. HA + B ↽ − − ⇀ A − + BH + {\displaystyle {\ce {HA + B <=> A- + BH+}}} In this representation both the base, B, and the conjugate base, A−, are shown carrying a lone pair of electrons and the proton, which is a Lewis acid, is transferred between them. Lewis later wrote "To restrict the group of acids to those substances that contain hydrogen interferes as seriously with the systematic understanding of chemistry as would the restriction of the term oxidizing agent to substances containing oxygen." In Lewis theory an acid, A, and a base, B, form an adduct, AB, where the electron pair forms a dative covalent bond between A and B. This is shown when the adduct H3N−BF3 forms from ammonia and boron trifluoride, a reaction that cannot occur in water because boron trifluoride hydrolizes in water. 4 BF 3 + 3 H 2 O ⟶ B ( OH ) 3 + 3 HBF 4 {\displaystyle {\ce {4BF3 + 3H2O -> B(OH)3 + 3HBF4}}} The reaction above illustrates that BF3 is an acid in both Lewis and Brønsted–Lowry classifications and shows that the theories agree with each other. 
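The acid dissociation constants mentioned above also quantify how far a Brønsted–Lowry proton-transfer equilibrium lies toward the conjugate species in a given solvent. As a minimal worked example, the sketch below solves the earlier CH3COOH + H2O equilibrium for the hydronium concentration; the Ka value and starting concentration are assumed, illustrative numbers.

```python
import math

# Equilibrium position for CH3COOH + H2O <=> CH3COO- + H3O+
# The Ka value and the starting concentration are illustrative assumptions.
Ka = 1.8e-5   # approximate acid dissociation constant of acetic acid in water
c0 = 0.10     # initial acid concentration, mol/L

# Ka = x^2 / (c0 - x), where x = [H3O+] = [CH3COO-] at equilibrium,
# which rearranges to x^2 + Ka*x - Ka*c0 = 0.
x = (-Ka + math.sqrt(Ka**2 + 4 * Ka * c0)) / 2

print(f"[H3O+]           = {x:.2e} mol/L")
print(f"pH               = {-math.log10(x):.2f}")
print(f"fraction ionized = {x / c0:.2%}")
# Only on the order of 1% of the acid transfers its proton to water,
# consistent with acetic acid being a weak Brønsted-Lowry acid.
```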
Boric acid is recognised as a Lewis acid because of the reaction B ( OH ) 3 + H 2 O ↽ − − ⇀ B ( OH ) 4 − + H + {\displaystyle {\ce {B(OH)3 + H2O <=> B(OH)4- + H+}}} In this case the acid does not split up but the base, H2O, does. A solution of B(OH)3 is acidic because hydrogen ions are given off in this reaction. There is strong evidence that dilute aqueous solutions of ammonia contain minute amounts of the ammonium ion H 2 O + NH 3 ⟶ OH − + NH 4 + {\displaystyle {\ce {H2O + NH3 -> OH- + NH+4}}} and that, when dissolved in water, ammonia functions as a Lewis base. == Comparison with the Lux–Flood theory == The reactions between oxides in the solid or liquid states are excluded in the Brønsted–Lowry theory. For example, the reaction 2 MgO + SiO 2 ⟶ Mg 2 SiO 4 {\displaystyle {\ce {2MgO + SiO2 -> Mg2 SiO4}}} is not covered in the Brønsted–Lowry definition of acids and bases. On the other hand, magnesium oxide acts as a base when it reacts with an aqueous solution of an acid (instead, the reaction can be considered a lewis acid-base reaction). 2 H + + MgO ( s ) ⟶ Mg 2 + ( aq ) + H 2 O {\displaystyle {\ce {2H+ + MgO(s) -> Mg^{2+}(aq) + H2O}}} Dissolved silicon dioxide, SiO2, has been predicted to be a weak acid in the Brønsted–Lowry sense. SiO 2 ( s ) + 2 H 2 O ↽ − − ⇀ Si ( OH ) 4 ( solution ) {\displaystyle {\ce {SiO2(s) + 2H2O <=> Si(OH)4 (solution)}}} Si ( OH ) 4 ↽ − − ⇀ Si ( OH ) 3 O − + H + {\displaystyle {\ce {Si(OH)4 <=> Si(OH)3O- + H+}}} According to the Lux–Flood theory, oxides like MgO and SiO2 in the solid state may be called acids or bases. For example, the mineral olivine may be known as a compound of a basic oxide, MgO, and silicon dioxide, SiO2, as an acidic oxide. This is important in geochemistry. == References == == Bibliography == Stoker, H. Stephen (2012). General, Organic, and Biological Chemistry. Cengage Learning. ISBN 978-1-133-10394-3. Myers, Richard (2003). The Basics of Chemistry. Greenwood Publishing Group. ISBN 978-0-313-31664-7. Patrick, Graham (2004). Instant Notes in Organic Chemistry. Taylor & Francis. ISBN 978-1-135-32125-3. Srivastava, H. C. (2003). Nootan - ISC Chemistry (7 ed.). India: Nageen Prakashan. ISBN 978-93-80088-89-1. Ramakrishna, A. (2014). Goyal's IIT FOUNDATION COURSE CHEMISTRY: For Class-10. Goyal Brothers Prakashan. p. 85. GGKEY:DKWFNS6PECF. Masterton, William; Hurley, Cecile; Neth, Edward (2011). Chemistry: Principles and Reactions. Cengage Learning. ISBN 978-1-133-38694-0. Whitten, Kenneth; Davis, Raymond; Peck, Larry; Stanley, George (2013). Chemistry. Cengage Learning. ISBN 978-1-133-61066-3. Ebbing, Darrell; Gammon, Steven D. (2010). General Chemistry, Enhanced Edition. Cengage Learning. pp. 644–645. ISBN 978-0-538-49752-7.
Wikipedia/Brønsted–Lowry_acid–base_theory
Justification (also called epistemic justification) is a property of beliefs that fulfill certain norms about what a person should believe. Epistemologists often identify justification as a component of knowledge that distinguishes it from mere true opinion. They study the reasons why someone holds a belief. Epistemologists are concerned with various features of belief, which include the ideas of warrant (a proper justification for holding a belief), knowledge, rationality, and probability, among others. Debates surrounding epistemic justification often involve the structure of justification, including whether there are foundational justified beliefs or whether mere coherence is sufficient for a system of beliefs to qualify as justified. Another major subject of debate is the sources of justification, which might include perceptual experience (the evidence of the senses), reason, and authoritative testimony, among others. == Justification and knowledge == "Justification" involves the reasons why someone holds a belief that one should hold based on one's current evidence. Justification is a property of beliefs insofar as they are held blamelessly. In other words, a justified belief is a belief that a person is entitled to hold. Many philosophers from Plato onward have treated "justified true belief" (JTB) as constituting knowledge. It is particularly associated with a theory discussed in his dialogues Meno and Theaetetus. While in fact Plato seems to disavow justified true belief as constituting knowledge at the end of Theaetetus, the claim that Plato unquestioningly accepted this view of knowledge stuck until the proposal of the Gettier problem. The subject of justification has played a major role in debates about the value of knowledge as "justified true belief". Some contemporary epistemologists, such as Jonathan Kvanvig, assert that justification is not necessary for getting to the truth and avoiding errors. Kvanvig attempts to show that knowledge is no more valuable than true belief, and in the process dismisses the necessity of justification on the grounds that justification is not connected to the truth. == Conceptions of justification == William P. Alston identifies two conceptions of justification.: 15–16  One conception is "deontological" justification, which holds that justification evaluates the obligation and responsibility of a person to have only true beliefs. This conception implies, for instance, that a person who has made his best effort but is incapable of concluding the correct belief from his evidence is still justified. The deontological conception of justification corresponds to epistemic internalism. Another conception is "truth-conducive" justification, which holds that justification is based on having sufficient evidence or reasons that entail that the belief is at least likely to be true. The truth-conducive conception of justification corresponds to epistemic externalism. == Theories of justification == There are several different views as to what entails justification, mostly focusing on the question "How are beliefs justified?". Different theories of justification require different conditions before a belief can be considered justified. Theories of justification generally include other aspects of epistemology, such as defining knowledge. Notable theories of justification include: Foundationalism – Basic beliefs justify other, non-basic beliefs.
Epistemic coherentism – Beliefs are justified if they cohere with other beliefs a person holds, each belief is justified if it coheres with the overall system of beliefs. Infinitism – Beliefs are justified by infinite chains of reasons. Foundherentism – Both fallible foundations and coherence are components of justification—proposed by Susan Haack. Internalism and externalism – The believer must be able to justify a belief through internal knowledge (internalism), or outside sources of knowledge (externalism). Reformed epistemology – Beliefs are warranted by proper cognitive function—proposed by Alvin Plantinga. Evidentialism – Beliefs depend solely on the evidence for them. Reliabilism – A belief is justified if it is the result of a reliable process. Infallibilism – Knowledge is incompatible with the possibility of being wrong. Fallibilism – Claims can be accepted even though they cannot be conclusively proven or justified. Non-justificationism – Knowledge is produced by attacking claims and refuting them instead of justifying them. Skepticism – Knowledge is impossible or undecidable. == Criticism of theories of justification == Robert Fogelin claims to detect a suspicious resemblance between the theories of justification and Agrippa's five modes leading to the suspension of belief. He concludes that the modern proponents have made no significant progress in responding to the ancient modes of Pyrrhonian skepticism. William P. Alston criticizes the very idea of a theory of justification. He claims: "There isn't any unique, epistemically crucial property of beliefs picked out by 'justified'. Epistemologists who suppose the contrary have been chasing a will-o'-the-wisp. What has really been happening is this. Different epistemologists have been emphasizing, concentrating on, "pushing" different epistemic desiderata, different features of belief that are positively valuable from the standpoint of the aims of cognition.": 22  == See also == Dream argument Regress argument (epistemology) Münchhausen trilemma == References == == External links == Stanford Encyclopedia of Philosophy entry on Foundationalist Theories of Epistemic Justification Stanford Encyclopedia of Philosophy entry on Epistemology, 2. What is Justification? Stanford Encyclopedia of Philosophy entry on Internalist vs. Externalist Conceptions of Epistemic Justification Stanford Encyclopedia of Philosophy entry on Coherentist Theories of Epistemic Justification === Internet Encyclopedia of Philosophy === Internet Encyclopedia of Philosophy entry on Epistemic Justification Internet Encyclopedia of Philosophy entry on Epistemic Entitlement Internet Encyclopedia of Philosophy entry on Internalism and Externalism in Epistemology Internet Encyclopedia of Philosophy entry on Epistemic Consequentialism Internet Encyclopedia of Philosophy entry on Coherentism in Epistemology Internet Encyclopedia of Philosophy entry on Contextualism in Epistemology Internet Encyclopedia of Philosophy entry on Knowledge-First Theories of Justification
Wikipedia/Theory_of_justification
Rubber elasticity is the ability of solid rubber to be stretched up to a factor of 10 from its original length and return to close to its original length upon release. This process can be repeated many times with no apparent degradation to the rubber. Rubber, like all materials, consists of molecules. Rubber's elasticity is produced by molecular processes that occur due to its molecular structure. Rubber's molecules are polymers, or large, chain-like molecules. Polymers are produced by a process called polymerization. This process builds polymers up by sequentially adding short molecular backbone units to the chain through chemical reactions. A rubber polymer follows a random winding path in three dimensions, intermingling with many other rubber polymers. Elastomers such as polybutadiene and polyisoprene (the latter extracted from plants as a fluid colloid, natural rubber latex) are solidified in a process called vulcanization. During the process, a small amount of a cross-linking molecule, usually sulfur, is added. When heat is applied, sections of rubber's polymer chains chemically bond to the cross-linking molecule. These bonds cause rubber polymers to become cross-linked, or joined to each other by the bonds made with the cross-linking molecules. Because each rubber polymer is very long, each one participates in many crosslinks with many other rubber molecules, forming a continuous network. The resulting molecular structure demonstrates elasticity, making rubber a member of the class of elastic polymers called elastomers. == History == Following its introduction to Europe from America in the late 15th century, natural rubber (polyisoprene) was regarded mostly as a curiosity. Its most useful application was its ability to erase pencil marks on paper by rubbing, hence its name. One of its most peculiar properties is a slight (but detectable) increase in temperature that occurs when a sample of rubber is stretched. If it is allowed to quickly retract, an equal amount of cooling is observed. This phenomenon caught the attention of the English physicist John Gough. In 1805 he published some qualitative observations on this characteristic as well as how the required stretching force increased with temperature. By the mid-nineteenth century, the theory of thermodynamics was being developed and within this framework, the English mathematician and physicist Lord Kelvin showed that the change in mechanical energy required to stretch a rubber sample should be proportional to the increase in temperature. This would later be associated with a change in entropy. The connection to thermodynamics was firmly established in 1859 when the English physicist James Joule published the first careful measurements of the temperature increase that occurred as a rubber sample was stretched. This work confirmed the theoretical predictions of Lord Kelvin. In 1838 the American inventor Charles Goodyear found that natural rubber's elastic properties could be immensely improved by adding a small amount of sulfur to produce chemical cross-links between adjacent polyisoprene molecules. Before it is cross-linked, the liquid natural rubber consists of very long polymer molecules, containing thousands of isoprene backbone units, connected head-to-tail (commonly referred to as chains). Every chain follows a random, three-dimensional path through the polymer liquid and is in contact with thousands of other nearby chains.
When heated to about 150C, reactive cross-linker molecules, such as sulfur or dicumyl peroxide, can decompose and the subsequent chemical reactions produce a chemical bond between adjacent chains. A crosslink can be visualized as the letter 'X' but with some of its arms pointing out of the plane. The result is a three dimensional molecular network. All of the polyisoprene molecules are connected together at multiple points by these chemical bonds (network nodes) resulting in a single giant molecule and all information about the original long polymers is lost. A rubber band is a single molecule, as is a latex glove. The sections of polyisoprene between two adjacent cross-links are called network chains and can contain up to several hundred isoprene units. In natural rubber, each cross-link produces a network node with four chains emanating from it. It is the network that gives rise to these elastic properties. Because of the enormous economic and technological importance of rubber, predicting how a molecular network responds to mechanical strains has been of enduring interest to scientists and engineers. To understand the elastic properties of rubber, theoretically, it is necessary to know both the physical mechanisms that occur at the molecular level and how the random-walk nature of the polymer chain defines the network. The physical mechanisms that occur within short sections of the polymer chains produce the elastic forces and the network morphology determines how these forces combine to produce the macroscopic stress that is observed when a rubber sample is deformed (e.g. subjected to tensile strain). == Molecular-level models == There are actually several physical mechanisms that produce the elastic forces within the network chains as a rubber sample is stretched. Two of these arise from entropy changes and one is associated with the distortion of the molecular bond angles along the chain backbone. These three mechanisms are immediately apparent when a moderately thick rubber sample is stretched manually. Initially, the rubber feels quite stiff (i.e. the force must be increased at a high rate with respect to the strain). At intermediate strains, the required increase in force is much lower to cause the same amount of stretch. Finally, as the sample approaches the breaking point, its stiffness increases markedly. What the observer is noticing are the changes in the modulus of elasticity that are due to the different molecular mechanisms. These regions can be seen in Fig. 1, a typical stress vs. strain measurement for natural rubber. The three mechanisms (labelled Ia, Ib, and II) predominantly correspond to the regions shown on the plot. The concept of entropy comes to us from the area of mathematical physics called statistical mechanics which is concerned with the study of large thermal systems, e.g. rubber networks at room temperature. Although the detailed behavior of the constituent chains are random and far too complex to study individually, we can obtain very useful information about their "average" behavior from a statistical mechanics analysis of a large sample. There are no other examples of how entropy changes can produce a force in our everyday experience. One may regard the entropic forces in polymer chains as arising from the thermal collisions that their constituent atoms experience with the surrounding material. It is this constant jostling that produces a resisting (elastic) force in the chains as they are forced to become straight. 
While stretching a rubber sample is the most common example of elasticity, it also occurs when rubber is compressed. Compression may be thought of as a two dimensional expansion as when a balloon is inflated. The molecular mechanisms that produce the elastic force are the same for all types of strain. When these elastic force models are combined with the complex morphology of the network, it is not possible to obtain simple analytic formulae to predict the macroscopic stress. It is only via numerical simulations on computers that it is possible to capture the complex interaction between the molecular forces and the network morphology to predict the stress and ultimate failure of a rubber sample as it is strained. === The Molecular Kink Paradigm for rubber elasticity === The Molecular Kink Paradigm proceeds from the intuitive notion that molecular chains that make up a natural rubber (polyisoprene) network are constrained by surrounding chains to remain within a "tube." Elastic forces produced in a chain, as a result of some applied strain, are propagated along the chain contour within this tube. Fig. 2 shows a representation of a four-carbon isoprene backbone unit with an extra carbon atom at each end to indicate its connections to adjacent units on a chain. It has three single C-C bonds and one double bond. It is principally by rotating about the C-C single bonds that a polyisoprene chain randomly explores its possible conformations. Sections of chain containing between two and three isoprene units have sufficient flexibility that they may be considered statistically de-correlated from one another. That is, there is no directional correlation along the chain for distances greater than this distance, referred to as a Kuhn length. These non-straight regions evoke the concept of "kinks" and are in fact a manifestation of the random-walk nature of the chain. Since a kink is composed of several isoprene units, each having three carbon-carbon single bonds, there are many possible conformations available to a kink, each with a distinct energy and end-to-end distance. Over time scales of seconds to minutes, only these relatively short sections of the chain (i.e. kinks) have sufficient volume to move freely amongst their possible rotational conformations. The thermal interactions tend to keep the kinks in a state of constant flux, as they make transitions between all of their possible rotational conformations. Because the kinks are in thermal equilibrium, the probability that a kink resides in any rotational conformation is given by a Boltzmann distribution and we may associate an entropy with its end-to-end distance. The probability distribution for the end-to-end distance of a Kuhn length is approximately Gaussian and is determined by the Boltzmann probability factors for each state (rotational conformation). As a rubber network is stretched, some kinks are forced into a restricted number of more extended conformations having a greater end-to-end distance and it is the resulting decrease in entropy that produces an elastic force along the chain. There are three distinct molecular mechanisms that produce these forces, two of which arise from changes in entropy that is referred to as the low chain extension regime, Ia and the moderate chain extension regime, Ib. The third mechanism occurs at high chain extension, as it is extended beyond its initial equilibrium contour length by the distortion of the chemical bonds along its backbone. 
In this case, the restoring force is spring-like and is referred to as regime II. The three force mechanisms are found to roughly correspond to the three regions observed in tensile stress vs. strain experiments, shown in Fig. 1. The initial morphology of the network, immediately after chemical cross-linking, is governed by two random processes: (1) The probability for a cross-link to occur at any isoprene unit and, (2) the random walk nature of the chain conformation. The end-to-end distance probability distribution for a fixed chain length (i.e. fixed number of isoprene units) is described by a random walk. It is the joint probability distribution of the network chain lengths and the end-to-end distances between their cross-link nodes that characterizes the network morphology. Because both the molecular physics mechanisms that produce the elastic forces and the complex morphology of the network must be treated simultaneously, simple analytic elasticity models are not possible; an explicit 3-dimensional numerical model is required to simulate the effects of strain on a representative volume element of a network. ==== Low chain extension regime, Ia ==== The Molecular Kink Paradigm envisions a representative network chain as a series of vectors that follow the chain contour within its tube. Each vector represents the equilibrium end-to-end distance of a kink. The actual 3-dimensional path of the chain is not pertinent, since all elastic forces are assumed to operate along the chain contour. In addition to the chain's contour length, the only other important parameter is its tortuosity, the ratio of its contour length to its end-to-end distance. As the chain is extended, in response to an applied strain, the induced elastic force is assumed to propagate uniformly along its contour. Consider a network chain whose end points (network nodes) are more or less aligned with the tensile strain axis. As the initial strain is applied to the rubber sample, the network nodes at the ends of the chain begin to move apart and all of the kink vectors along the contour are stretched simultaneously. Physically, the applied strain forces the kinks to stretch beyond their thermal equilibrium end-to-end distances, causing a decrease in their entropy. The increase in free energy associated with this change in entropy, gives rise to a (linear) elastic force that opposes the strain. The force constant for the low strain regime can be estimated by sampling molecular dynamics (MD) trajectories of a kink (i.e. short chains) composed of 2–3 isoprene units, at relevant temperatures (e.g. 300K). By taking many samples of the coordinates over the course of the simulations, the probability distributions of end-to-end distance for a kink can be obtained. Since these distributions (which turn out to be approximately Gaussian) are directly related to the number of states, they may be associated with the entropy of the kink at any end-to-end distance. By numerically differentiating the probability distribution, the change in entropy, and hence free energy, with respect to the kink end-to-end distance can be found. The force model for this regime is found to be linear and proportional to the temperature divided by the chain tortuosity. ==== Moderate chain extension regime, Ib ==== At some point in the low extension regime (i.e. 
as all of the kinks along the chain are being extended simultaneously) it becomes energetically more favourable to have one kink transition to an extended conformation in order to stretch the chain further. The applied strain can force a single isoprene unit within a kink into an extended conformation, slightly increasing the end-to-end distance of the chain, and the energy required to do this is less than that needed to continue extending all of the kinks simultaneously. Numerous experiments strongly suggest that stretching a rubber network is accompanied by a decrease in entropy. As shown in Fig. 2, an isoprene unit has three single C-C bonds and there are two or three preferred rotational angles (orientations) about these bonds that have energy minima. Of the 18 allowed rotational conformations, only 6 have extended end-to-end distances and forcing the isoprene units in a chain to reside in some subset of the extended states must reduce the number of rotational conformations available for thermal motion. It is this reduction in the number of available states that causes the entropy to decrease. As the chain continues to straighten, all of the isoprene units in the chain are eventually forced into extended conformations and the chain is considered to be "taut." A force constant for chain extension can be estimated from the resulting change in free energy associated with this entropy change. As with regime IA, the force model for this regime is linear and proportional to the temperature divided by the chain tortuosity. ==== High chain extension regime, II ==== When all of the isoprene units in a network chain have been forced to reside in just a few extended rotational conformations, the chain becomes taut. It may be regarded as sensibly straight, except for the zigzag path that the C-C bonds make along the chain contour. However, further extension is still possible by bond distortions (e.g. bond angle increases), bond stretches, and dihedral angle rotations. These forces are spring-like and are not associated with entropy changes. A taut chain can be extended by only about 40%. At this point the force along the chain is sufficient to mechanically rupture the C-C covalent bond. This tensile force limit has been calculated via quantum chemistry simulations and it is approximately 7 nN, about a factor of a thousand greater than the entropic chain forces at low strain. The angles between adjacent backbone C-C bonds in an isoprene unit vary between about 115–120 degrees and the forces associated with maintaining these angles are quite large, so within each unit, the chain backbone always follows a zigzag path, even at bond rupture. This mechanism accounts for the steep upturn in the elastic stress, observed at high strains (Fig. 1). ==== Network morphology ==== Although the network is completely described by only two parameters (the number of network nodes per unit volume and the statistical de-correlation length of the polymer, the Kuhn length), the way in which the chains are connected is actually quite complicated. There is a wide variation in the lengths of the chains and most of them are not connected to the nearest neighbor network node. Both the chain length and its end-to-end distance are described by probability distributions. The term "morphology" refers to this complexity. If the cross-linking agent is thoroughly mixed, there is an equal probability for any isoprene unit to become a network node. 
For dicumyl peroxide, the cross linking efficiency in natural rubber is unity, but this is not the case for sulfur. The initial morphology of the network is dictated by two random processes: the probability for a cross-link to occur at any isoprene unit and the Markov random walk nature of a chain conformation. The probability distribution function for how far one end of a chain end can ‘wander’ from the other is generated by a Markov sequence. This conditional probability density function relates the chain length n {\displaystyle n} in units of the Kuhn length b {\displaystyle b} to the end-to-end distance r {\displaystyle r} : The probability that any isoprene unit becomes part of a cross-link node is proportional to the ratio of the concentrations of the cross-linker molecules (e.g., dicumyl-peroxide) to the isoprene units: p x = 2 [cross-link] [isoprene] {\displaystyle p_{x}=2{\frac {\text{[cross-link]}}{\text{[isoprene]}}}} The factor of two comes about because two isoprene units (one from each chain) participate in the cross-link. The probability for finding a chain containing N {\displaystyle N} isoprene units is given by: where N ≥ 1 {\displaystyle N\geq 1} . The equation can be understood as simply the probability that an isoprene unit is NOT a cross-link (1−px) in N−1 successive units along a chain. Since P(N) decreases with N, shorter chains are more probable than longer ones. Note that the number of statistically independent backbone segments is not the same as the number of isoprene units. For natural rubber networks, the Kuhn length contains about 2.2 isoprene units, so N ∼ 2.2 n {\displaystyle N\sim 2.2n} . The product of equations (1) and (3) (the joint probability distribution) relates the network chain length ( N {\displaystyle N} ) and end-to-end distance ( r {\displaystyle r} ) between its terminating cross-link nodes: The complex morphology of a natural rubber network can be seen in Fig. 3, which shows the probability density vs. end-to-end distance (in units of mean node spacing) for an "average" chain. For the common experimental cross-link density of 4x1019 cm−3, an average chain contains about 116 isoprene units (52 Kuhn lengths) and has a contour length of about 50 nm. Fig. 3 shows that a significant fraction of chains span several node spacings, i.e., the chain ends overlap other network chains. Natural rubber, cross-linked with dicumyl peroxide, has tetra-functional cross-links (i.e. each cross-link node has 4 network chains emanating from it). Depending on their initial tortuosity and the orientation of their endpoints with respect to the strain axis, each chain associated with an active cross-link node can have a different elastic force constant as it resists the applied strain. To preserve force equilibrium (zero net force) on each cross-link node, a node may be forced to move in tandem with the chain having the highest force constant for chain extension. It is this complex node motion, arising from the random nature of the network morphology, that makes the study of the mechanical properties of rubber networks so difficult. As the network is strained, paths composed of these more extended chains emerge that span the entire sample, and it is these paths that carry most of the stress at high strains. ==== Numerical network simulation model ==== To calculate the elastic response of a rubber sample, the three chain force models (regimes Ia, Ib and II) and the network morphology must be combined in a micro-mechanical network model. 
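A small Monte Carlo sketch can illustrate how the morphology statistics described above feed into such a model. It samples chain lengths from the geometric distribution P(N) = px(1 − px)^(N−1) implied by the cross-link probability, with px chosen to reproduce the ~116-isoprene-unit average quoted above and the Kuhn length inferred from the quoted ~50 nm contour length of a 52-segment average chain; both are rough, illustrative inputs rather than values taken from the published simulations.

```python
import random, math

# Monte Carlo sketch of the network-chain statistics described above.
# px reproduces the ~116 isoprene units per average chain quoted for a
# cross-link density of 4e19 cm^-3; the Kuhn length is inferred from the
# quoted ~50 nm contour length of a 52-Kuhn-segment chain. Rough inputs only.
px = 1.0 / 116                 # probability that an isoprene unit is cross-linked
isoprene_per_kuhn = 2.2        # isoprene units per Kuhn length
b = 50.0 / 52                  # Kuhn length in nm (approximate)

def sample_chain_length():
    """Number of isoprene units N, distributed as P(N) = px*(1-px)**(N-1)."""
    N = 1
    while random.random() > px:
        N += 1
    return N

random.seed(1)
samples = [sample_chain_length() for _ in range(20_000)]
mean_N = sum(samples) / len(samples)
mean_kuhn = mean_N / isoprene_per_kuhn          # Kuhn segments in an average chain

# For an ideal (random-walk) chain of n Kuhn segments, <r^2> = n*b^2.
rms_end_to_end = math.sqrt(mean_kuhn) * b       # rms size of an average-length chain
contour = mean_kuhn * b

print(f"mean chain length                      : {mean_N:.0f} isoprene units")
print(f"mean contour length                    : {contour:.0f} nm")
print(f"rms end-to-end of an average-length chain: {rms_end_to_end:.1f} nm")
# Short chains dominate (the geometric distribution decreases monotonically),
# yet an average-length chain still spans several mean node spacings.
```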
Using the joint probability distribution in equation (4) and the force extension models, it is possible to devise numerical algorithms to both construct a faithful representative volume element of a network and to simulate the resulting mechanical stress as it is subjected to strain. An iterative relaxation algorithm is used to maintain approximate force equilibrium at each network node as strain is imposed. When the force constant obtained for kinks having 2 or 3 isoprene units (approximately one Kuhn length) is used in numerical simulations, the predicted stress is found to be consistent with experiments. The results of such a calculation are shown in Fig. 1 (dashed red line) for sulphur cross-linked natural rubber and compared with experimental data (solid blue line). These simulations also predict a steep upturn in the stress as network chains become taut and, ultimately, material failure due to bond rupture. In the case of sulphur cross-linked natural rubber, the S-S bonds in the cross-link are much weaker than the C-C bonds on the chain backbone and are the network failure points. The plateau in the simulated stress, starting at a strain of about 7, is the limiting value for the network. Stresses greater than about 7 MPa cannot be supported and the network fails. Near this stress limit, the simulations predict that less than 10% of the chains are taut, i.e. in the high chain extension regime and less than 0.1% of the chains have ruptured. While the very low rupture fraction may seem surprising, it is not inconsistent with the common experience of stretching a rubber band until it breaks. The elastic response of the rubber after breaking is not noticeably different from the original. == Experiments == === Variation of tensile stress with temperature === For molecular systems in thermal equilibrium, the addition of energy (e.g. by mechanical work) can cause a change in entropy. This is known from the theories of thermodynamics and statistical mechanics. Specifically, both theories assert that the change in energy must be proportional to the entropy change times the absolute temperature. This rule is only valid so long as the energy is restricted to thermal states of molecules. If a rubber sample is stretched far enough, energy may reside in nonthermal states such as the distortion of chemical bonds and the rule does not apply. At low to moderate strains, theory predicts that the required stretching force is due to a change in entropy in the network chains. It is therefore expected that the force necessary to stretch a sample to some value of strain should be proportional to the temperature of the sample. Measurements showing how the tensile stress in a stretched rubber sample varies with temperature are shown in Fig. 4. In these experiments, the strain of a stretched rubber sample was held fixed as the temperature was varied between 10 and 70 degrees Celsius. For each value of fixed strain, it is seen that the tensile stress varied linearly (to within experimental error). These experiments provide the most compelling evidence that entropy changes are the fundamental mechanism for rubber elasticity. The positive linear behaviour of the stress with temperature sometimes leads to the mistaken notion that rubber has a negative coefficient of thermal expansion (i.e. the length of a sample shrinks when heated). Experiments have shown conclusively that, like almost all other materials, the coefficient of thermal expansion natural rubber is positive. 
=== Snap-back velocity === When stretching a piece of rubber (e.g. a rubber band) it will deform lengthwise in a uniform manner. When one end of the sample is released, it snaps back to its original length too quickly for the naked eye to resolve the process. An intuitive expectation is that it returns to its original length in the same manner as when it was stretched (i.e. uniformly). Experimental observations by Mrowca et al. suggest that this expectation is inaccurate. To capture the extremely fast retraction dynamics, they utilized an experimental method devised by Exner and Stefan in 1874. Their method consisted of a rapidly rotating glass cylinder which, after being coated with lamp black, was placed next to the stretched rubber sample. Styli, attached to the mid-point and free end of the rubber sample, were held in contact with the glass cylinder. Then, as the free end of the rubber snapped back, the styli traced out helical paths in the lamp black coating of the rotating cylinder. By adjusting the rotation speed of the cylinder, they could record the position of the styli in less than one complete rotation. The trajectories were transferred to a graph by rolling the cylinder on a piece of damp blotter paper. The mark left by a stylus appeared as a white line (no lamp black) on the paper. Their data, plotted as the graph in Fig. 5, shows the position of the end and midpoint styli as the sample rapidly retracts to its original length. The sample was initially stretched 9.5 in (~24 cm) beyond its unstrained length and then released. The styli returned to their original positions (i.e. a displacement of 0 in) in a little over 6 ms. The linear behaviour of the displacement vs. time indicates that, after a brief acceleration, both the end and the midpoint of the sample snapped back at a constant velocity of about 50 m/s or 112 mph. However, the midpoint stylus did not start to move until about 3 ms after the end was released. Evidently, the retraction process travels as a wave, starting at the free end. At high extensions some of the energy stored in the stretched network chain is due to a change in its entropy, but most of the energy is stored in bond distortions (regime II, above) which do not involve an entropy change. If one assumes that all of the stored energy is converted to kinetic energy, the retraction velocity may be calculated directly from the familiar conservation equation E = ½mv². Numerical simulations, based on the molecular kink paradigm, predict velocities consistent with this experiment. == Historical approaches to elasticity theory == Eugene Guth and Hubert M. James proposed the entropic origins of rubber elasticity in 1941. === Thermodynamics === Temperature affects the elasticity of elastomers in an unusual way. When the elastomer is assumed to be in a stretched state, heating causes it to contract. Vice versa, cooling can cause expansion. This can be observed with an ordinary rubber band. Stretching a rubber band will cause it to release heat, while releasing it after it has been stretched will lead it to absorb heat, causing its surroundings to become cooler. This phenomenon can be explained with the Gibbs free energy. Rearranging ΔG = ΔH − TΔS, where G is the free energy, H is the enthalpy, and S is the entropy, we obtain TΔS = ΔH − ΔG. Since stretching is nonspontaneous, as it requires external work, TΔS must be negative.
Since T is always positive (it can never reach absolute zero), ΔS must be negative, implying that the rubber in its natural state is more entangled (with more microstates) than when it is under tension. Thus, when the tension is removed, the reaction is spontaneous, leading ΔG to be negative. Consequently, the cooling effect must result in a positive ΔH, so ΔS will be positive there. The result is that an elastomer behaves somewhat like an ideal monatomic gas, inasmuch as (to good approximation) elastic polymers do not store any potential energy in stretched chemical bonds or elastic work done in stretching molecules, when work is done upon them. Instead, all work done on the rubber is "released" (not stored) and appears immediately in the polymer as thermal energy. In the same way, all work that the elastic does on the surroundings results in the disappearance of thermal energy in order to do the work (the elastic band grows cooler, like an expanding gas). This last phenomenon is the critical clue that the ability of an elastomer to do work depends (as with an ideal gas) only on entropy-change considerations, and not on any stored (i.e. potential) energy within the polymer bonds. Instead, the energy to do work comes entirely from thermal energy, and (as in the case of an expanding ideal gas) only the positive entropy change of the polymer allows its internal thermal energy to be converted efficiently into work. === Polymer chain theories === Invoking the theory of rubber elasticity, a polymer chain in a cross-linked network may be seen as an entropic spring. When the chain is stretched, the entropy is reduced by a large margin because there are fewer conformations available. As such there is a restoring force which causes the polymer chain to return to its equilibrium or unstretched state, such as a high entropy random coil configuration, once the external force is removed. This is the reason why rubber bands return to their original state. Two common models for rubber elasticity are the freely-jointed chain model and the worm-like chain model. ==== Freely-jointed chain model ==== The freely-jointed chain, also called an ideal chain, follows the random walk model. Microscopically, the 3D random walk of a polymer chain assumes the overall end-to-end distance is expressed in terms of the x, y and z directions: R → = R x x ^ + R y y ^ + R z z ^ {\displaystyle {\vec {R}}=R_{x}{\hat {x}}+R_{y}{\hat {y}}+R_{z}{\hat {z}}} In the model, b {\displaystyle b} is the length of a rigid segment, N {\displaystyle N} is the number of segments of length b {\displaystyle b} , R {\displaystyle R} is the distance between the fixed and free ends, and L c {\displaystyle L_{\text{c}}} is the "contour length" or N b {\displaystyle Nb} . Above the glass transition temperature, the polymer chain oscillates and R {\displaystyle R} changes over time. The probability distribution of the chain is the product of the probability distributions of the individual components, given by the following Gaussian distribution: P ( R → ) = P ( R x ) P ( R y ) P ( R z ) = ( 2 N b 2 π 3 ) − 3 / 2 exp ⁡ ( − 3 R 2 2 N b 2 ) {\displaystyle P({\vec {R}})=P(R_{x})P(R_{y})P(R_{z})=\left({\frac {2Nb^{2}\pi }{3}}\right)^{-{3}/{2}}\exp \left({\frac {-3R^{2}}{2Nb^{2}}}\right)} Therefore, the ensemble average end-to-end distance is simply the standard integral of the probability distribution over all space. Note that the movement could be backwards or forwards, so the net average ⟨ R ⟩ {\displaystyle \langle R\rangle } will be zero.
However, the root mean square can be a useful measure of the distance. ⟨ R ⟩ = 0 ⟨ R 2 ⟩ = ∫ 0 ∞ R 2 4 π R 2 P ( R → ) d R = N b 2 ⟨ R 2 ⟩ 1 2 = N b {\displaystyle {\begin{aligned}\langle R\rangle &=0\\\langle R^{2}\rangle &=\int _{0}^{\infty }R^{2}4\pi R^{2}P({\vec {R}})dR=Nb^{2}\\\langle R^{2}\rangle ^{\frac {1}{2}}&={\sqrt {N}}b\end{aligned}}} The Flory theory of rubber elasticity suggests that rubber elasticity has primarily entropic origins. By using the following basic equations for Helmholtz free energy and its discussion about entropy, the force generated from the deformation of a rubber chain from its original unstretched conformation can be derived. Here, Ω {\displaystyle \Omega } is the number of conformations of the polymer chain. Since the deformation does not involve enthalpy change, the change in free energy can simply be calculated as the change in entropy − T Δ S {\displaystyle -T\Delta S} . Note that the force equation resembles the behaviour of a spring and follows Hooke's law: F = k x {\displaystyle F=kx} , where F is the force, k is the spring constant and x is the distance. Usually, the neo-Hookean model can be used on cross-linked polymers to predict their stress-strain relations: Ω = C exp ⁡ ( − 3 R → 2 2 N b 2 ) {\displaystyle \Omega =C\exp \left({\frac {-3{\vec {R}}^{2}}{2Nb^{2}}}\right)} S = k B ln ⁡ Ω ≈ − 3 k B R → 2 2 N b 2 {\displaystyle S=k_{\text{B}}\ln \Omega \,\approx {\frac {-3k_{\text{B}}{\vec {R}}^{2}}{2Nb^{2}}}} Δ F ( R → ) ≈ − T Δ S d ( R → 2 ) = C + 3 k B T 2 N b 2 R → 2 {\displaystyle \Delta F({\vec {R}})\approx -T\Delta S_{d}({\vec {R}}^{2})=C+{\frac {3k_{\text{B}}T}{2Nb^{2}}}{\vec {R}}^{2}} f = d F ( R → ) d R → = d d R → ( 3 k B T R → 2 2 N b 2 ) = 3 k B T N b 2 R → {\displaystyle f={\frac {dF({\vec {R}})}{d{\vec {R}}}}={\frac {d}{d{\vec {R}}}}\left({\frac {3k_{\text{B}}T{\vec {R}}^{2}}{2Nb^{2}}}\right)={\frac {3k_{\text{B}}T}{Nb^{2}}}{\vec {R}}} Note that the elastic coefficient 3 k B T / N b 2 {\displaystyle 3k_{\text{B}}T/Nb^{2}} is temperature dependent. If rubber temperature increases, the elastic coefficient increases as well. This is the reason why rubber under constant stress shrinks when its temperature increases. We can further expand the Flory theory into a macroscopic view, where bulk rubber material is discussed. Assume the original dimensions of the rubber material are L x {\displaystyle L_{x}} , L y {\displaystyle L_{y}} and L z {\displaystyle L_{z}} ; a deformed shape can then be expressed by applying an individual extension ratio λ i {\displaystyle \lambda _{i}} to the length ( λ x L x {\displaystyle \lambda _{x}L_{x}} , λ y L y {\displaystyle \lambda _{y}L_{y}} , λ z L z {\displaystyle \lambda _{z}L_{z}} ). So microscopically, the deformed polymer chain can also be expressed with the extension ratio: λ x R x {\displaystyle \lambda _{x}R_{x}} , λ y R y {\displaystyle \lambda _{y}R_{y}} , λ z R z {\displaystyle \lambda _{z}R_{z}} .
The free energy change due to deformation can then be expressed as follows: Δ F def ( R → ) = − 3 k B T R → 2 2 N b 2 = − 3 k B T ( ( R x 2 − R x 0 2 ) + ( R y 2 − R y 0 2 ) + ( R z 2 − R z 0 2 ) ) 2 N b 2 = − 3 k B T ( ( λ x 2 − 1 ) R x 0 2 + ( λ y 2 − 1 ) R y 0 2 + ( λ z 2 − 1 ) R z 0 2 ) 2 N b 2 {\displaystyle {\begin{aligned}\Delta F_{\text{def}}({\vec {R}})&=-{\frac {3k_{\text{B}}T{\vec {R}}^{2}}{2Nb^{2}}}=-{\frac {3k_{\text{B}}T\left(\left(R_{x}^{2}-R_{x0}^{2}\right)+\left(R_{y}^{2}-R_{y0}^{2}\right)+\left(R_{z}^{2}-R_{z0}^{2}\right)\right)}{2Nb^{2}}}\\&=-{\frac {3k_{\text{B}}T\left(\left(\lambda _{x}^{2}-1\right)R_{x0}^{2}+\left(\lambda _{y}^{2}-1\right)R_{y0}^{2}+\left(\lambda _{z}^{2}-1\right)R_{z0}^{2}\right)}{2Nb^{2}}}\end{aligned}}} Assuming that the rubber is cross-linked and isotropic, the random walk model gives that R x {\displaystyle R_{x}} , R y {\displaystyle R_{y}} and R z {\displaystyle R_{z}} are distributed according to a normal distribution. Therefore, they are statistically equivalent, and each direction accounts for 1/3 of the mean-square end-to-end distance of the chain: ⟨ R x 0 2 ⟩ = ⟨ R y 0 2 ⟩ = ⟨ R z 0 2 ⟩ = ⟨ R 2 ⟩ / 3 {\displaystyle \langle R_{x0}^{2}\rangle =\langle R_{y0}^{2}\rangle =\langle R_{z0}^{2}\rangle =\langle R^{2}\rangle /3} . Substituting this into the free energy change equation above gives: Δ F def ( R → ) = − k B T n s ⟨ R 2 ⟩ ( λ x 2 + λ y 2 + λ z 2 − 3 ) 2 N b 2 = − k B T n s ⟨ R 2 ⟩ ( λ x 2 + λ y 2 + λ z 2 − 3 ) 2 R 0 2 {\displaystyle {\begin{aligned}\Delta F_{\text{def}}({\vec {R}})&=-{\frac {k_{\text{B}}Tn_{s}\langle R^{2}\rangle \left(\lambda _{x}^{2}+\lambda _{y}^{2}+\lambda _{z}^{2}-3\right)}{2Nb^{2}}}\\&=-{\frac {k_{\text{B}}Tn_{s}\langle R^{2}\rangle \left(\lambda _{x}^{2}+\lambda _{y}^{2}+\lambda _{z}^{2}-3\right)}{2R_{0}^{2}}}\end{aligned}}} The free energy change per volume is just: Δ f def = Δ F def ( R → ) V = − k B T v s β ( λ x 2 + λ y 2 + λ z 2 − 3 ) 2 {\displaystyle \Delta f_{\text{def}}={\frac {\Delta F_{\text{def}}({\vec {R}})}{V}}=-{\frac {k_{\text{B}}Tv_{s}\beta \left(\lambda _{x}^{2}+\lambda _{y}^{2}+\lambda _{z}^{2}-3\right)}{2}}} where n s {\displaystyle n_{s}} is the number of strands in the network, the subscript "def" means "deformation", v s = n s / V {\displaystyle v_{s}=n_{s}/V} , which is the number density per volume of polymer chains, β = ⟨ R 2 ⟩ / R 0 2 {\displaystyle \beta =\langle R^{2}\rangle /R_{0}^{2}} which is the ratio between the end-to-end distance of the chain and the theoretical distance that obeys random walk statistics. If we assume incompressibility, the product of extension ratios is 1, implying no change in the volume: λ x λ y λ z = 1 {\displaystyle \lambda _{x}\lambda _{y}\lambda _{z}=1} . Case study: Uniaxial deformation: In a uniaxially deformed rubber, because λ x λ y λ z = 1 {\displaystyle \lambda _{x}\lambda _{y}\lambda _{z}=1} it is assumed that λ x = λ y = λ z − 1 / 2 {\displaystyle \lambda _{x}=\lambda _{y}=\lambda _{z}^{-1/2}} .
So the previous free energy per volume equation is: Δ f def = Δ F def ( R → ) V = − k B T v s β ( λ x 2 + λ y 2 + λ z 2 − 3 ) 2 = k B T v s β 2 ( λ z 2 + 2 λ z − 3 ) {\displaystyle \Delta f_{\text{def}}={\frac {\Delta F_{\text{def}}({\vec {R}})}{V}}=-{\frac {k_{\text{B}}Tv_{s}\beta \left(\lambda _{x}^{2}+\lambda _{y}^{2}+\lambda _{z}^{2}-3\right)}{2}}={\frac {k_{\text{B}}Tv_{s}\beta }{2}}\left(\lambda _{z}^{2}+{\frac {2}{\lambda _{z}}}-3\right)} The engineering stress (by definition) is the first derivative of the energy per volume with respect to the extension ratio, which is equivalent to the concept of strain: σ eng = d ( Δ f def ) d λ z = k B T v s β ( λ z − 1 λ z 2 ) {\displaystyle \sigma _{\text{eng}}={\frac {d(\Delta f_{\text{def}})}{d\lambda _{z}}}=k_{\text{B}}Tv_{s}\beta \left(\lambda _{z}-{\frac {1}{\lambda _{z}^{2}}}\right)} and the Young's modulus E {\displaystyle E} is defined as the derivative of the stress with respect to strain, which measures the stiffness of the rubber in laboratory experiments. E = d ( σ eng ) d λ z = k B T v s β ( 1 + 2 λ z 3 ) | λ z = 1 = 3 k B T v s β = 3 ρ β R T M s {\displaystyle E={\frac {d(\sigma _{\text{eng}})}{d\lambda _{z}}}=k_{\text{B}}Tv_{s}\beta \left.\left(1+{\frac {2}{\lambda _{z}^{3}}}\right)\right|_{\lambda _{z}=1}=3k_{\text{B}}Tv_{s}\beta ={\frac {3\rho \beta RT}{M_{s}}}} where v s = ρ N a / M s {\displaystyle v_{s}=\rho N_{a}/M_{s}} , ρ {\displaystyle \rho } is the mass density of the chain, M s {\displaystyle M_{s}} is the number average molecular weight of a network strand between crosslinks. Here, this type of analysis links the thermodynamic theory of rubber elasticity to experimentally measurable parameters. In addition, it gives insights into the cross-linking condition of the materials. ==== Worm-like chain model ==== The worm-like chain model (WLC) takes the energy required to bend a molecule into account. The variables are the same except that L p {\displaystyle L_{\text{p}}} , the persistence length, replaces b {\displaystyle b} . Then, the force follows this equation: F ≈ k B T L p ( 1 4 ( 1 − r L c ) 2 − 1 4 + r L c ) {\displaystyle F\approx {\frac {k_{\text{B}}T}{L_{\text{p}}}}\left({\frac {1}{4\left(1-{\frac {r}{L_{\rm {c}}}}\right)^{2}}}-{\frac {1}{4}}+{\frac {r}{L_{\text{c}}}}\right)} Therefore, when there is no distance between chain ends (r=0), the restoring force is zero, and to fully extend the polymer chain ( r = L c {\displaystyle r=L_{\text{c}}} ), an infinite force is required, which is intuitive. Graphically, the force begins at the origin and initially increases linearly with r {\displaystyle r} . The force then plateaus but eventually increases again and approaches infinity as the extension r {\displaystyle r} approaches L c {\displaystyle L_{\text{c}}} . == See also == Elasticity (physics) Hyperelastic material Polymers Thermodynamics == References ==
Wikipedia/Thermodynamic_theory_of_polymer_elasticity
Metaphysics (Greek: των μετὰ τὰ φυσικά, "those after the physics"; Latin: Metaphysica) is one of the principal works of Aristotle, in which he develops the doctrine that he calls First Philosophy. The work is a compilation of various texts treating abstract subjects, notably substance theory, different kinds of causation, form and matter, the existence of mathematical objects and the cosmos, which together constitute much of the branch of philosophy later known as metaphysics. == Date, style and composition == Many of Aristotle's works are extremely compressed, and many scholars believe that in their current form, they are likely lecture notes. Subsequent to the arrangement of Aristotle's works by Andronicus of Rhodes in the first century BC, a number of his treatises were referred to as the writings "after ("meta") the Physics", the origin of the current title for the collection Metaphysics. Some have interpreted the expression "meta" to imply that the subject of the work goes "beyond" that of Aristotle's Physics or that it is metatheoretical in relation to the Physics. But others believe that "meta" referred simply to the work's place in the canonical arrangement of Aristotle's writings, which is at least as old as Andronicus of Rhodes or even Hermippus of Smyrna. In other surviving works of Aristotle, the metaphysical treatises are referred to as "the [writings] concerning first philosophy"; which was the term Aristotle used for metaphysics. It is notoriously difficult to specify the date at which Aristotle wrote these treatises as a whole or even individually, especially because the Metaphysics is, in Jonathan Barnes' words, "a farrago, a hotch-potch", and more generally because of the difficulty of dating any of Aristotle's writings. The order in which the books were written is not known; their arrangement is due to later editors. In the manuscripts, books are referred to by Greek letters. For many scholars, it is customary to refer to the books by their letter names. Book 1 is called Alpha (Α); 2, little alpha (α); 3, Beta (Β); 4, Gamma (Γ); 5, Delta (Δ); 6, Epsilon (Ε); 7, Zeta (Ζ); 8, Eta (Η); 9, Theta (Θ); 10, Iota (Ι); 11, Kappa (Κ); 12, Lambda (Λ); 13, Mu (Μ); 14, Nu (Ν). == Outline == === Books I–VI: Alpha, little Alpha, Beta, Gamma, Delta and Epsilon === Book I or Alpha begins by discussing the nature of knowledge and compares knowledge gained from the senses and from memory, arguing that knowledge is acquired from memory through experience. It then defines "wisdom" (sophia) as a knowledge of the first principles (arche) or causes of things. Because those who are wise understand the first principles and causes, they know the why of things, unlike those who only know that things are a certain way based on their memory and sensations. The wise are able to teach because they know the why of things, and so they are better fitted to command, rather than to obey. He then surveys the first principles and causes of previous philosophers, starting with the material monists of the Ionian school and continuing up until Plato. Book II or "little alpha": Book II addresses a possible objection to the account of how we understand first principles and thus acquire wisdom, that attempting to discover the first principle would lead to an infinite series of causes. 
It argues in response that the idea of an infinite causal series is absurd, and argues that only things that are created or destroyed require a cause, and that thus there must be a primary cause that is eternal, an idea he develops later in Book Lambda. Book III or Beta lists the main problems or puzzles (aporia) of philosophy. Book IV or Gamma: Chapters 2 and 3 argue for its status as a subject in its own right. The rest is a defense of (a) what we now call the principle of contradiction, the principle that it is not possible for the same proposition to be (the case) and not to be (the case), and (b) what we now call the principle of excluded middle: tertium non datur — there cannot be an intermediary between contradictory statements. Book V or Delta ("philosophical lexicon") is a list of definitions of about thirty key terms such as cause, nature, one, and many. Book VI or Epsilon has two main concerns. The first concern is the hierarchy of the sciences: productive, practical or theoretical. Aristotle considers theoretical sciences superior because they study beings for their own sake—for example, Physics studies beings that can be moved—and do not have a target (τέλος telos, "end, goal"; τέλειος, "complete, perfect") beyond themselves. He argues that the study of being qua being, or First Philosophy, is superior to all the other theoretical sciences because it is concerned with the ultimate causes of all reality, not just the secondary causes of a part of reality. The second concern of Epsilon is the study of "accidents" (κατὰ συμβεβηκός), those attributes that do not depend on (τέχνη) or exist by necessity, which Aristotle believes do not deserve to be studied as a science. === Books VII–IX: Zeta, Eta, and Theta === Books Zeta, Eta, and Theta are generally considered the core of the Metaphysics. Book Zeta (VII) begins by stating that "being" has several senses, the purpose of philosophy is to understand the primary kind of being, called substance (ousia) and determine what substances there are, a concept that Aristotle develops in the Categories. Zeta goes on to consider four candidates for substance: (i) the 'essence' or 'what it is to be' of a thing (ii) the universal, (iii) the genus to which a substance belongs and (iv) the material substrate that underlies all the properties of a thing. He dismisses the idea that matter can be substance, for if we eliminate everything that is a property from what can have the property, such as matter and the shape, we are left with something that has no properties at all. Such 'ultimate matter' cannot be substance. Separability and 'this-ness' are fundamental to our concept of substance. Aristotle then describes his theory that essence is the criterion of substantiality. The essence of something is what is included in a secundum se ('according to itself') account of a thing, i.e. which tells what a thing is by its very nature. You are not musical by your very nature. But you are a human by your very nature. Your essence is what is mentioned in the definition of you. Aristotle then considers, and dismisses, the idea that substance is the universal or the genus, criticizing the Platonic theory of Ideas. Aristotle argues that if genus and species are individual things, then different species of the same genus contain the genus as individual thing, which leads to absurdities. Moreover, individuals are incapable of definition. Finally, he concludes book Zeta by arguing that substance is really a cause. 
Book Eta consists of a summary of what has been said so far (i.e., in Book Zeta) about substance, and adds a few further details regarding difference and unity. Book Theta sets out to define potentiality and actuality. Chapters 1–5 discuss potentiality, the potential of something to change: potentiality is "a principle of change in another thing or in the thing itself qua other." In chapter 6 Aristotle turns to actuality. We can only know actuality through observation or "analogy;" thus "as that which builds is to that which is capable of building, so is that which is awake to that which is asleep...or that which is separated from matter to matter itself". Actuality is the completed state of something that had the potential to be completed. The relationship between actuality and potentiality can be thought of as the relationship between form and matter, but with the added aspect of time. Actuality and potentiality are distinctions that occur over time (diachronic), whereas form and matter are distinctions that can be made at fixed points in time (synchronic). === Books X–XIV: Iota, Kappa, Lambda, Mu, and Nu === Book X or Iota: Discussion of unity, one and many, sameness and difference. Book XI or Kappa: Briefer versions of other chapters and of parts of the Physics. Book XII or Lambda: Further remarks on beings in general, first principles, and God or gods. This book includes Aristotle's famous description of the unmoved mover, "the most divine of things observed by us", as "the thinking of thinking". Books XIII and XIV, or Mu and Nu: Philosophy of mathematics, in particular how numbers exist. == Legacy == The Metaphysics is considered to be one of the greatest philosophical works. Its influence on the Greeks, the Muslim philosophers, Maimonides, thence the scholastic philosophers and even writers such as Dante was immense. In the 3rd century, Alexander of Aphrodisias wrote a commentary on the first five books of the Metaphysics, and a commentary transmitted under his name exists for the final nine, but modern scholars doubt that this part was written by him. Themistius wrote an epitome of the work, of which book 12 survives in a Hebrew translation. The Neoplatonists Syrianus and Asclepius of Tralles also wrote commentaries on the work, where they attempted to synthesize Aristotle's doctrines with Neoplatonic cosmology. Aristotle's works gained a reputation for complexity that is never more evident than with the Metaphysics — Avicenna said that he had read the Metaphysics of Aristotle forty times, but did not understand it until he also read al-Farabi's Purposes of the Metaphysics of Aristotle: "I read the Metaphysics [of Aristotle], but I could not comprehend its contents, and its author's object remained obscure to me, even when I had gone back and read it forty times and had got to the point where I had memorized it. In spite of this I could not understand it nor its object, and I despaired of myself and said, "This is a book which there is no way of understanding." But one day in the afternoon when I was at the booksellers' quarter a salesman approached with a book in his hand which he was calling out for sale. (...) So I bought it and, lo and behold, it was Abu Nasr al-Farabi's book on the objects of the Metaphysics. I returned home and was quick to read it, and in no time the objects of that book became clear to me because I had got to the point of having memorized it by heart."
The flourishing of Arabic Aristotelian scholarship reached its peak with the work of Ibn Rushd (Latinized: Averroes), whose extensive writings on Aristotle's work led to his later designation as "The Commentator" by future generations of scholars. Maimonides wrote the Guide to the Perplexed in the 12th century, to demonstrate the compatibility of Aristotelian science with Biblical revelation. The Fourth Crusade (1202–1204) facilitated the discovery and delivery of many original Greek manuscripts to Western Europe. William of Moerbeke's translations of the work formed the basis of the commentaries on the Metaphysics by Albert the Great, Thomas Aquinas and Duns Scotus. They were also used by modern scholars for Greek editions, as William had access to Greek manuscripts that are now lost. Werner Jaeger lists William's translation in his edition of the Greek text in the Scriptorum Classicorum Bibliotheca Oxoniensis (Oxford 1962). == Textual criticism == In the 19th century, with the rise of textual criticism, the Metaphysics was examined anew. Critics, noting the wide variety of topics and the seemingly illogical order of the books, concluded that it was actually a collection of shorter works thrown together haphazardly. In the 20th century two general editions have been produced by W. D. Ross (1924) and by W. Jaeger (1957). Based on a careful study of the content and of the cross-references within them, W. D. Ross concluded that books A, B, Γ, E, Z, H, Θ, M, N, and I "form a more or less continuous work", while the remaining books α, Δ, Κ and Λ were inserted into their present locations by later editors. However, Ross cautions that books A, B, Γ, E, Z, H, Θ, M, N, and I — with or without the insertion of the others — do not constitute "a complete work". Werner Jaeger further maintained that the different books were taken from different periods of Aristotle's life. Everyman's Library, for their 1000th volume, published the Metaphysics in a rearranged order that was intended to make the work easier for readers. Editing the Metaphysics has become an open issue in works and studies of the new millennium. New critical editions have been produced of books Gamma, Alpha, and Lambda. Differences from the more-familiar 20th Century critical editions of Ross and Jaeger mainly depend on the stemma codicum of Aristotle's Metaphysics, of which different versions have been proposed since 1970. == Editions and translations == Greek text with commentary: Aristotle's Metaphysics. W. D. Ross. 2 Vols. Oxford: Clarendon Press, 1924. Reprinted in 1953 with corrections. Greek text: Aristotelis Metaphysica. Ed. Werner Jaeger. Oxford Classical Texts. Oxford University Press, 1957. ISBN 978-0-19-814513-4. Greek text with English: Metaphysics. Trans. Hugh Tredennick. 2 vols. Loeb Classical Library 271, 287. Harvard U. Press, 1933–35. ISBN 0-674-99299-7, ISBN 0-674-99317-9. Aristotle's Metaphysics. Trans. Hippocrates Gorgias Apostle. Bloomington: Indiana U. Press, 1966. Aristotle - Metaphysics. Translated by Hope, Richard. Ann Arbor: U. Michigan P. 1960 [1952]. Aristotle's Metaphysics. Translated by Sachs, Joe (2 ed.). Santa Fe, NM: Green Lion P. 2002. ISBN 1-888009-03-9. Aristotle. The Metaphysics. Penguin Classics. Translated by Lawson-Tancred, Hugh. London: Penguin. 2004 [1998]. ISBN 978-0-140-44619-7. === Ancient and medieval commentaries === Commentary on Aristotle's Metaphysics (in Greek, Latin, and English). Vol. 3. Translated by Aquinas, Thomas; Rowan, John P. William of Moerbeke (1st ed.). 
Chicago: Henry Regnery Company (Library of Living Catholic Thought). 1961. OCLC 312731. (rpt. Notre Dame, Ind.: Dumb Ox, 1995). == Notes == == Citations == == References == Wolfgang Class: Aristotle's Metaphysics, A Philological Commentary: Volume I: Textual Criticism, ISBN 978-3-9815841-2-7, Saldenburg 2014; Volume II: The Composition of the Metaphysics, ISBN 978-3-9815841-3-4, Saldenburg 2015; Volume III: Sources and Parallels, ISBN 978-3-9815841-6-5, Saldenburg 2017; Volume IV: Reception and Criticism, ISBN 978-3-9820267-0-1, Saldenburg 2018. Copleston, Frederick, S.J. A History of Philosophy: Volume I Greece and Rome (Parts I and II) New York: Image Books, 1962. Aristotle's Metaphysics. Translated by Lawson-Tancred, Hugh. Penguin. 1998. ISBN 0140446192. == Further reading == Ackrill, J. L., 1963, Aristotle: Categories and De Interpretatione, Oxford: Clarendon Press. Alexandrou, S., 2014, Aristotle's Metaphysics Lambda: Annotated Critical Edition, Leiden: Brill. Anagnostopoulos, Georgios (ed.), 2009, A Companion to Aristotle, Chichester: Wiley-Blackwell. Elders, L., 1972, Aristotle's Theology: A Commentary on Book Λ of the Metaphysics, Assen: Van Gorcum. Gerson, Lloyd P. (ed.) and Joseph Owens, 2007, Aristotle's Gradations of Being in Metaphysics E-Z, South Bend: St Augustine's Press. Gill, Mary Louise, 1989, Aristotle on Substance: The Paradox of Unity, Princeton: Princeton University Press. == External links == Available bundled with Organon and other works – can be downloaded as .epub, .mobi and other formats. English translation and original Greek at Perseus. Translation by Hugh Tredennick from the Loeb Classical Library. English translation by W. D. Ross at MIT's Internet Classics Archive. Averroes' commentary on the Metaphysics, in Latin, together with the 'old' (Arabic) and new translation based on William of Moerbeke at Gallica. Aristotle: Metaphysics entry by Joe Sachs in the Internet Encyclopedia of Philosophy Cohen, S. Marc. "Aristotle's Metaphysics". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. A good summary of scholarly comments at: Theory and History of Ontology Metaphysics public domain audiobook at LibriVox
Wikipedia/Metaphysics_(Aristotle)
Rules of inference are ways of deriving conclusions from premises. They are integral parts of formal logic, serving as norms of the logical structure of valid arguments. If an argument with true premises follows a rule of inference then the conclusion cannot be false. Modus ponens, an influential rule of inference, connects two premises of the form "if P {\displaystyle P} then Q {\displaystyle Q} " and " P {\displaystyle P} " to the conclusion " Q {\displaystyle Q} ", as in the argument "If it rains, then the ground is wet. It rains. Therefore, the ground is wet." There are many other rules of inference for different patterns of valid arguments, such as modus tollens, disjunctive syllogism, constructive dilemma, and existential generalization. Rules of inference include rules of implication, which operate only in one direction from premises to conclusions, and rules of replacement, which state that two expressions are equivalent and can be freely swapped. Rules of inference contrast with formal fallacies—invalid argument forms involving logical errors. Rules of inference belong to logical systems, and distinct logical systems use different rules of inference. Propositional logic examines the inferential patterns of simple and compound propositions. First-order logic extends propositional logic by articulating the internal structure of propositions. It introduces new rules of inference governing how this internal structure affects valid arguments. Modal logics explore concepts like possibility and necessity, examining the inferential structure of these concepts. Intuitionistic, paraconsistent, and many-valued logics propose alternative inferential patterns that differ from the traditionally dominant approach associated with classical logic. Various formalisms are used to express logical systems. Some employ many intuitive rules of inference to reflect how people naturally reason while others provide minimalistic frameworks to represent foundational principles without redundancy. Rules of inference are relevant to many areas, such as proofs in mathematics and automated reasoning in computer science. Their conceptual and psychological underpinnings are studied by philosophers of logic and cognitive psychologists. == Definition == A rule of inference is a way of drawing a conclusion from a set of premises. Also called inference rule and transformation rule, it is a norm of correct inferences that can be used to guide reasoning, justify conclusions, and criticize arguments. As part of deductive logic, rules of inference are argument forms that preserve the truth of the premises, meaning that the conclusion is always true if the premises are true. An inference is deductively correct or valid if it follows a valid rule of inference. Whether this is the case depends only on the form or syntactical structure of the premises and the conclusion. As a result, the actual content or concrete meaning of the statements does not affect validity. For instance, modus ponens is a rule of inference that connects two premises of the form "if P {\displaystyle P} then Q {\displaystyle Q} " and " P {\displaystyle P} " to the conclusion " Q {\displaystyle Q} ", where P {\displaystyle P} and Q {\displaystyle Q} stand for statements. Any argument with this form is valid, independent of the specific meanings of P {\displaystyle P} and Q {\displaystyle Q} , such as the argument "If it rains, then the ground is wet. It rains. Therefore, the ground is wet". 
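The semantic idea behind validity can be made concrete with a short, informal illustration. The following Python sketch is not part of the article's formal apparatus: the function names are ad hoc, and the check is limited to argument forms built from two propositional letters. It searches for a counterexample (an assignment making all premises true and the conclusion false), confirming that modus ponens is valid while the superficially similar fallacy of affirming the consequent, discussed later in the article, is not.

```python
# Brute-force validity check over truth values for two-letter argument forms.
from itertools import product

def implies(p, q):
    return (not p) or q  # truth-functional "if p then q"

def is_valid(premises, conclusion):
    """premises: list of functions of (p, q); conclusion: a function of (p, q)."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False  # counterexample found
    return True

# Modus ponens: from "if p then q" and "p", infer "q".
print(is_valid([lambda p, q: implies(p, q), lambda p, q: p],
               lambda p, q: q))   # True: no counterexample exists

# Affirming the consequent: from "if p then q" and "q", infer "p".
print(is_valid([lambda p, q: implies(p, q), lambda p, q: q],
               lambda p, q: p))   # False: p false, q true is a counterexample
```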
In addition to modus ponens, there are many other rules of inference, such as modus tollens, disjunctive syllogism, hypothetical syllogism, constructive dilemma, and destructive dilemma. There are different formats to represent rules of inference. A common approach is to use a new line for each premise and separate the premises from the conclusion using a horizontal line. With this format, modus ponens is written as: P → Q P Q {\displaystyle {\begin{array}{l}P\to Q\\P\\\hline Q\end{array}}} Some logicians employ the therefore sign ( ∴ {\displaystyle \therefore } ) together with or instead of the horizontal line to indicate where the conclusion begins. The sequent notation, a different approach, uses a single line in which the premises are separated by commas and connected to the conclusion with the turnstile symbol ( ⊢ {\displaystyle \vdash } ), as in P → Q , P ⊢ Q {\displaystyle P\to Q,P\vdash Q} . The letters P {\displaystyle P} and Q {\displaystyle Q} in these formulas are so-called metavariables: they stand for any simple or compound proposition. Rules of inference belong to logical systems and distinct logical systems may use different rules of inference. For example, universal instantiation is a rule of inference in the system of first-order logic but not in propositional logic. Rules of inference play a central role in proofs as explicit procedures for arriving at a new line of a proof based on the preceding lines. Proofs involve a series of inferential steps and often use various rules of inference to establish the theorem they intend to demonstrate. Rules of inference are definitory rules—rules about which inferences are allowed. They contrast with strategic rules, which govern the inferential steps needed to prove a certain theorem from a specific set of premises. Mastering definitory rules by itself is not sufficient for effective reasoning since they provide little guidance on how to reach the intended conclusion. As standards or procedures governing the transformation of symbolic expressions, rules of inference are similar to mathematical functions taking premises as input and producing a conclusion as output. According to one interpretation, rules of inference are inherent in logical operators found in statements, making the meaning and function of these operators explicit without adding any additional information. Logicians distinguish two types of rules of inference: rules of implication and rules of replacement. Rules of implication, like modus ponens, operate only in one direction, meaning that the conclusion can be deduced from the premises but the premises cannot be deduced from the conclusion. Rules of replacement, by contrast, operate in both directions, stating that two expressions are equivalent and can be freely replaced with each other. In classical logic, for example, a proposition ( P {\displaystyle P} ) is equivalent to the negation of its negation ( ¬ ¬ P {\displaystyle \lnot \lnot P} ). As a result, one can infer one from the other in either direction, making it a rule of replacement. Other rules of replacement include De Morgan's laws as well as the commutative and associative properties of conjunction and disjunction. While rules of implication apply only to complete statements, rules of replacement can be applied to any part of a compound statement. One of the earliest discussions of formal rules of inference is found in antiquity in Aristotle's logic. His explanations of valid and invalid syllogisms were further refined in medieval and early modern philosophy.
The development of symbolic logic in the 19th century led to the formulation of many additional rules of inference belonging to classical propositional and first-order logic. In the 20th and 21st centuries, logicians developed various non-classical systems of logic with alternative rules of inference. == Basic concepts == Rules of inference describe the structure of arguments, which consist of premises that support a conclusion. Premises and conclusions are statements or propositions about what is true. For instance, the assertion "The door is open." is a statement that is either true or false, while the question "Is the door open?" and the command "Open the door!" are not statements and have no truth value. An inference is a step of reasoning from premises to a conclusion while an argument is the outward expression of an inference. Logic is the study of correct reasoning and examines how to distinguish good from bad arguments. Deductive logic is the branch of logic that investigates the strongest arguments, called deductively valid arguments, for which the conclusion cannot be false if all the premises are true. This is expressed by saying that the conclusion is a logical consequence of the premises. Rules of inference belong to deductive logic and describe argument forms that fulfill this requirement. In order to precisely assess whether an argument follows a rule of inference, logicians use formal languages to express statements in a rigorous manner, similar to mathematical formulas. They combine formal languages with rules of inference to construct formal systems—frameworks for formulating propositions and drawing conclusions. Different formal systems may employ different formal languages or different rules of inference. The basic rules of inference within a formal system can often be expanded by introducing new rules of inference, known as admissible rules. Admissible rules do not change which arguments in a formal system are valid but can simplify proofs. If an admissible rule can be expressed through a combination of the system's basic rules, it is called a derived or derivable rule. Statements that can be deduced in a formal system are called theorems of this formal system. Widely-used systems of logic include propositional logic, first-order logic, and modal logic. Rules of inference only ensure that the conclusion is true if the premises are true. An argument with false premises can still be valid, but its conclusion could be false. For example, the argument "If pigs can fly, then the sky is purple. Pigs can fly. Therefore, the sky is purple." is valid because it follows modus ponens, even though it contains false premises. A valid argument is called a sound argument if all premises are true. Rules of inference are closely related to tautologies. In logic, a tautology is a statement that is true only because of the logical vocabulary it uses, independent of the meanings of its non-logical vocabulary. For example, the statement "if the tree is green and the sky is blue then the tree is green" is true independently of the meanings of terms like tree and green, making it a tautology. Every argument following a rule of inference can be transformed into a tautology. This is achieved by forming a conjunction (and) of all premises and connecting it through implication (if ... then ...) to the conclusion, thereby combining all the individual statements of the argument into a single statement. For example, the valid argument "The tree is green and the sky is blue.
Therefore, the tree is green." can be transformed into the tautology "if the tree is green and the sky is blue then the tree is green". Rules of inference are also closely related to laws of thought, which are basic principles of logic that can take the form of tautologies. For example, the law of identity asserts that each entity is identical to itself. Other traditional laws of thought include the law of non-contradiction and the law of excluded middle. Rules of inference are not the only way to demonstrate that an argument is valid. Alternative methods include the use of truth tables, which applies to propositional logic, and truth trees, which can also be employed in first-order logic. == Systems of logic == === Classical === ==== Propositional logic ==== Propositional logic examines the inferential patterns of simple and compound propositions. It uses letters, such as P {\displaystyle P} and Q {\displaystyle Q} , to represent simple propositions. Compound propositions are formed by modifying or combining simple propositions with logical operators, such as ¬ {\displaystyle \lnot } (not), ∧ {\displaystyle \land } (and), ∨ {\displaystyle \lor } (or), and → {\displaystyle \to } (if ... then ...). For example, if P {\displaystyle P} stands for the statement "it is raining" and Q {\displaystyle Q} stands for the statement "the streets are wet", then ¬ P {\displaystyle \lnot P} expresses "it is not raining" and P → Q {\displaystyle P\to Q} expresses "if it is raining then the streets are wet". These logical operators are truth-functional, meaning that the truth value of a compound proposition depends only on the truth values of the simple propositions composing it. For instance, the compound proposition P ∧ Q {\displaystyle P\land Q} is only true if both P {\displaystyle P} and Q {\displaystyle Q} are true; in all other cases, it is false. Propositional logic is not concerned with the concrete meaning of propositions other than their truth values. Key rules of inference in propositional logic are modus ponens, modus tollens, hypothetical syllogism, disjunctive syllogism, and double negation elimination. Further rules include conjunction introduction, conjunction elimination, disjunction introduction, disjunction elimination, constructive dilemma, destructive dilemma, absorption, and De Morgan's laws. ==== First-order logic ==== First-order logic also employs the logical operators from propositional logic but includes additional devices to articulate the internal structure of propositions. Basic propositions in first-order logic consist of a predicate, symbolized with uppercase letters like P {\displaystyle P} and Q {\displaystyle Q} , which is applied to singular terms, symbolized with lowercase letters like a {\displaystyle a} and b {\displaystyle b} . For example, if a {\displaystyle a} stands for "Aristotle" and P {\displaystyle P} stands for "is a philosopher", the formula P ( a ) {\displaystyle P(a)} means that "Aristotle is a philosopher". Another innovation of first-order logic is the use of the quantifiers ∃ {\displaystyle \exists } and ∀ {\displaystyle \forall } , which express that a predicate applies to some or all individuals. For instance, the formula ∃ x P ( x ) {\displaystyle \exists xP(x)} expresses that philosophers exist while ∀ x P ( x ) {\displaystyle \forall xP(x)} expresses that everyone is a philosopher. The rules of inference from propositional logic are also valid in first-order logic.
Additionally, first-order logic introduces new rules of inference that govern the role of singular terms, predicates, and quantifiers in arguments. Key rules of inference are universal instantiation and existential generalization. Other rules of inference include universal generalization and existential instantiation. === Modal logics === Modal logics are formal systems that extend propositional logic and first-order logic with additional logical operators. Alethic modal logic introduces the operator ◊ {\displaystyle \Diamond } to express that something is possible and the operator ◻ {\displaystyle \Box } to express that something is necessary. For example, if P {\displaystyle P} means that "Parvati works", then ◊ P {\displaystyle \Diamond P} means that "It is possible that Parvati works" while ◻ P {\displaystyle \Box P} means that "It is necessary that Parvati works". These two operators are related by a rule of replacement stating that ◻ P {\displaystyle \Box P} is equivalent to ¬ ◊ ¬ P {\displaystyle \lnot \Diamond \lnot P} . In other words: if something is necessarily true then it is not possible that it is not true. Further rules of inference include the necessitation rule, which asserts that a statement is necessarily true if it is provable in a formal system without any additional premises, and the distribution axiom, which allows one to derive ◻ P → ◻ Q {\displaystyle \Box P\to \Box Q} from ◻ ( P → Q ) {\displaystyle \Box (P\to Q)} . These rules of inference belong to system K, a weak form of modal logic with only the most basic rules of inference. Many formal systems of alethic modal logic include additional rules of inference, such as system T, which allows one to deduce P {\displaystyle P} from ◻ P {\displaystyle \Box P} . Non-alethic systems of modal logic introduce operators that behave like ◊ {\displaystyle \Diamond } and ◻ {\displaystyle \Box } in alethic modal logic, following similar rules of inference but with different meanings. Deontic logic is one type of non-alethic logic. It uses the operator P {\displaystyle P} to express that an action is permitted and the operator O {\displaystyle O} to express that an action is required, where P {\displaystyle P} behaves similarly to ◊ {\displaystyle \Diamond } and O {\displaystyle O} behaves similarly to ◻ {\displaystyle \Box } . For instance, the rule of replacement in alethic modal logic asserting that ◻ Q {\displaystyle \Box Q} is equivalent to ¬ ◊ ¬ Q {\displaystyle \lnot \Diamond \lnot Q} also applies to deontic logic. As a result, one can deduce from O Q {\displaystyle OQ} (e.g. Quinn has an obligation to help) that ¬ P ¬ Q {\displaystyle \lnot P\lnot Q} (e.g. Quinn is not permitted not to help). Other systems of modal logic include temporal modal logic, which has operators for what is always or sometimes the case, as well as doxastic and epistemic modal logics, which have operators for what people believe and know. === Others === Many other systems of logic have been proposed. One of the earliest systems is Aristotelian logic, according to which each statement is made up of two terms, a subject and a predicate, connected by a copula. For example, the statement "all humans are mortal" has the subject "all humans", the predicate "mortal", and the copula "is". All rules of inference in Aristotelian logic have the form of syllogisms, which consist of two premises and a conclusion. For instance, the Barbara rule of inference describes the validity of arguments of the form "All men are mortal.
All Greeks are men. Therefore, all Greeks are mortal." Second-order logic extends first-order logic by allowing quantifiers to apply to predicates in addition to singular terms. For example, to express that the individuals Adam ( a {\displaystyle a} ) and Bianca ( b {\displaystyle b} ) share a property, one can use the formula ∃ X ( X ( a ) ∧ X ( b ) ) {\displaystyle \exists X(X(a)\land X(b))} . Second-order logic also comes with new rules of inference. For instance, one can infer P ( a ) {\displaystyle P(a)} (Adam is a philosopher) from ∀ X X ( a ) {\displaystyle \forall XX(a)} (every property applies to Adam). Intuitionistic logic is a non-classical variant of propositional and first-order logic. It shares with them many rules of inference, such as modus ponens, but excludes certain rules. For example, in classical logic, one can infer P {\displaystyle P} from ¬ ¬ P {\displaystyle \lnot \lnot P} using the rule of double negation elimination. However, in intuitionistic logic, this inference is invalid. As a result, every theorem that can be deduced in intuitionistic logic can also be deduced in classical logic, but some theorems provable in classical logic cannot be proven in intuitionistic logic. Paraconsistent logics revise classical logic to allow the existence of contradictions. In logic, a contradiction happens if the same proposition is both affirmed and denied, meaning that a formal system contains both P {\displaystyle P} and ¬ P {\displaystyle \lnot P} as theorems. Classical logic prohibits contradictions because classical rules of inference lead to the principle of explosion, an admissible rule of inference that makes it possible to infer Q {\displaystyle Q} from the premises P {\displaystyle P} and ¬ P {\displaystyle \lnot P} . Since Q {\displaystyle Q} is unrelated to P {\displaystyle P} , any arbitrary statement can be deduced from a contradiction, making the affected systems useless for deciding what is true and false. Paraconsistent logics solve this problem by modifying the rules of inference in such a way that the principle of explosion is not an admissible rule of inference. As a result, it is possible to reason about inconsistent information without deriving absurd conclusions. Many-valued logics modify classical logic by introducing additional truth values. In classical logic, a proposition is either true or false with nothing in between. In many-valued logics, some propositions are neither true nor false. Kleene logic, for example, is a three-valued logic that introduces the additional truth value undefined to describe situations where information is incomplete or uncertain. Many-valued logics have adjusted rules of inference to accommodate the additional truth values. For instance, the classical rule of replacement stating that P → Q {\displaystyle P\to Q} is equivalent to ¬ P ∨ Q {\displaystyle \lnot P\lor Q} is invalid in many three-valued systems. == Formalisms == Various formalisms or proof systems have been suggested as distinct ways of codifying reasoning and demonstrating the validity of arguments. Unlike different systems of logic, these formalisms do not impact what can be proven; they only influence how proofs are formulated. Influential frameworks include natural deduction systems, Hilbert systems, and sequent calculi. Natural deduction systems aim to reflect how people naturally reason by introducing many intuitive rules of inference to make logical derivations more accessible. 
They break complex arguments into simple steps, often using subproofs based on temporary premises. The rules of inference in natural deduction target specific logical operators, governing how an operator can be added with introduction rules or removed with elimination rules. For example, the rule of conjunction introduction asserts that one can infer P ∧ Q {\displaystyle P\land Q} from the premises P {\displaystyle P} and Q {\displaystyle Q} , thereby producing a conclusion with the conjunction operator from premises that do not contain it. Conversely, the rule of conjunction elimination asserts that one can infer P {\displaystyle P} from P ∧ Q {\displaystyle P\land Q} , thereby producing a conclusion that no longer includes the conjunction operator. Similar rules of inference are disjunction introduction and elimination, implication introduction and elimination, negation introduction and elimination, and biconditional introduction and elimination. As a result, systems of natural deduction usually include many rules of inference. Hilbert systems, by contrast, aim to provide a minimal and efficient framework of logical reasoning by including as few rules of inference as possible. Many Hilbert systems only have modus ponens as the sole rule of inference. To ensure that all theorems can be deduced from this minimal foundation, they introduce axiom schemes. An axiom scheme is a template to create axioms or true statements. It uses metavariables, which are placeholders that can be replaced by specific terms or formulas to generate an infinite number of true statements. For example, propositional logic can be defined with the following three axiom schemes: (1) P → ( Q → P ) {\displaystyle P\to (Q\to P)} , (2) ( P → ( Q → R ) ) → ( ( P → Q ) → ( P → R ) ) {\displaystyle (P\to (Q\to R))\to ((P\to Q)\to (P\to R))} , and (3) ( ¬ P → ¬ Q ) → ( Q → P ) {\displaystyle (\lnot P\to \lnot Q)\to (Q\to P)} . To formulate proofs, logicians create new statements from axiom schemes and then apply modus ponens to these statements to derive conclusions. Compared to natural deduction, this procedure tends to be less intuitive since its heavy reliance on symbolic manipulation can obscure the underlying logical reasoning. Sequent calculi, another approach, introduce sequents as formal representations of arguments. A sequent has the form A 1 , … , A m ⊢ B 1 , … , B n {\displaystyle A_{1},\dots ,A_{m}\vdash B_{1},\dots ,B_{n}} , where A i {\displaystyle A_{i}} and B i {\displaystyle B_{i}} stand for propositions. Sequents are conditional assertions stating that at least one B i {\displaystyle B_{i}} is true if all A i {\displaystyle A_{i}} are true. Rules of inference operate on sequents to produce additional sequents. Sequent calculi define two rules of inference for each logical operator: one to introduce it on the left side of a sequent and another to introduce it on the right side. For example, through the rule for introducing the operator ¬ {\displaystyle \lnot } on the left side, one can infer ¬ R , P ⊢ Q {\displaystyle \lnot R,P\vdash Q} from P ⊢ Q , R {\displaystyle P\vdash Q,R} . The cut rule, an additional rule of inference, makes it possible to simplify sequents by removing certain propositions. == Formal fallacies == While rules of inference describe valid patterns of deductive reasoning, formal fallacies are invalid argument forms that involve logical errors. The premises of a formal fallacy do not properly support its conclusion: the conclusion can be false even if all premises are true. 
Formal fallacies often mimic the structure of valid rules of inference and can thereby mislead people into unknowingly committing them and accepting their conclusions. The formal fallacy of affirming the consequent concludes P {\displaystyle P} from the premises P → Q {\displaystyle P\to Q} and Q {\displaystyle Q} , as in the argument "If Leo is a cat, then Leo is an animal. Leo is an animal. Therefore, Leo is a cat." This fallacy resembles valid inferences following modus ponens, with the key difference that the fallacy swaps the second premise and the conclusion. The formal fallacy of denying the antecedent concludes ¬ Q {\displaystyle \lnot Q} from the premises P → Q {\displaystyle P\to Q} and ¬ P {\displaystyle \lnot P} , as in the argument "If Laya saw the movie, then Laya had fun. Laya did not see the movie. Therefore, Laya did not have fun." This fallacy resembles valid inferences following modus tollens, with the key difference that the fallacy swaps the second premise and the conclusion. Other formal fallacies include affirming a disjunct, the existential fallacy, and the fallacy of the undistributed middle. == In various fields == Rules of inference are relevant to many fields, especially the formal sciences, such as mathematics and computer science, where they are used to prove theorems. Mathematical proofs often start with a set of axioms to describe the logical relationships between mathematical constructs. To establish theorems, mathematicians apply rules of inference to these axioms, aiming to demonstrate that the theorems are logical consequences. Mathematical logic, a subfield of mathematics and logic, uses mathematical methods and frameworks to study rules of inference and other logical concepts. Computer science also relies on deductive reasoning, employing rules of inference to establish theorems and validate algorithms. Logic programming frameworks, such as Prolog, allow developers to represent knowledge and use computation to draw inferences and solve problems. These frameworks often include an automated theorem prover, a program that uses rules of inference to generate or verify proofs automatically. Expert systems utilize automated reasoning to simulate the decision-making processes of human experts in specific fields, such as medical diagnosis, and assist in complex problem-solving tasks. They have a knowledge base to represent the facts and rules of the field and use an inference engine to extract relevant information and respond to user queries. Rules of inference are central to the philosophy of logic regarding the contrast between deductive-theoretic and model-theoretic conceptions of logical consequence. Logical consequence, a fundamental concept in logic, is the relation between the premises of a deductively valid argument and its conclusion. Conceptions of logical consequence explain the nature of this relation and the conditions under which it exists. The deductive-theoretic conception relies on rules of inference, arguing that logical consequence means that the conclusion can be deduced from the premises through a series of inferential steps. The model-theoretic conception, by contrast, focuses on how the non-logical vocabulary of statements can be interpreted. According to this view, logical consequence means that no counterexamples are possible: under no interpretation are the premises true and the conclusion false. Cognitive psychologists study mental processes, including logical reasoning. 
They are interested in how humans use rules of inference to draw conclusions, examining the factors that influence correctness and efficiency. They observe that humans are better at using some rules of inference than others. For example, the rate of successful inferences is higher for modus ponens than for modus tollens. A related topic focuses on biases that lead individuals to mistake formal fallacies for valid arguments. For instance, fallacies of the types affirming the consequent and denying the antecedent are often mistakenly accepted as valid. The assessment of arguments also depends on the concrete meaning of the propositions: individuals are more likely to accept a fallacy if its conclusion sounds plausible. == See also == Immediate inference Inference objection Law of thought List of rules of inference Logical truth Structural rule == References == === Notes === === Citations === === Sources ===
Wikipedia/Rules_of_inference
Critical pedagogy is a philosophy of education and social movement that developed and applied concepts from critical theory and related traditions to the field of education and the study of culture. It insists that issues of social justice and democracy are not distinct from acts of teaching and learning. The goal of critical pedagogy is emancipation from oppression through an awakening of the critical consciousness, based on the Portuguese term conscientização. When achieved, critical consciousness encourages individuals to effect change in their world through social critique and political action in order to self-actualize. Critical pedagogy was founded by the Brazilian philosopher and educator Paulo Freire, who promoted it through his 1968 book, Pedagogy of the Oppressed. It subsequently spread internationally, developing a particularly strong base in the United States, where proponents sought to develop means of using teaching to combat racism, sexism, and oppression. As it grew, it incorporated elements from fields like the human rights movement, the civil rights movement, the disability rights movement, the Indigenous rights movement, postmodern theory, feminist theory, postcolonial theory, and queer theory. == Background == Critical pedagogy is believed to have its roots in the critical theory of the Frankfurt School, which was established in 1923. An outgrowth of critical theory, it is intended to educate and work towards a realization of its emancipatory goals. The theory is influenced by Karl Marx, who believed that inequality is a result of socioeconomic differences and that all people need to work toward a socialized economy. More recently, critical pedagogy can also be traced back to Paulo Freire's best-known 1968 work, Pedagogy of the Oppressed. Freire, a professor of history and the philosophy of education at the Federal University of Pernambuco in Brazil, sought in this and other works to develop a philosophy of adult education that demonstrated a solidarity with the poor in their common struggle to survive by engaging them in a dialogue of greater awareness and analysis. Although his family had suffered loss and hunger during the Great Depression, the poor viewed him and his formerly middle-class family "as people from another world who happened to fall accidentally into their world". His intimate discovery of class and their borders "led, invariably, to Freire's radical rejection of a class-based society". While prominent figures within critical pedagogy include Paulo Freire, Henry Giroux, Peter McLaren, bell hooks, and others, it is important to note that their work on critical pedagogy varies in focus. For example, some approach critical pedagogy from a Marxist perspective with a focus on socioeconomic class. Paulo Freire, on the other hand, writes about how critical pedagogy can lead to liberty and freedom of the oppressed and marginalized. bell hooks applies a feminist perspective to critical pedagogy, and Ira Shor, for example, advocates for moving the theoretical framework of critical pedagogy toward a more practical one. The influential works of Freire made him arguably the most celebrated critical educator. He seldom used the term "critical pedagogy" himself when describing this philosophy. His initial focus was on adult literacy projects in Brazil; his approach was later adapted to deal with a wide range of social and educational issues.
Freire's pedagogy revolved around an anti-authoritarian and interactive approach aimed at examining issues of relational power for students and workers. At the center of the curriculum was the fundamental goal of social and political critique of everyday life. Freire's praxis required implementation of a range of educational practices and processes with the goal of creating not only a better learning environment but also a better world. Freire himself maintained that this was not merely an educational technique but a way of living in our educative practice. Freire endorses students' ability to think critically about their educational situation; this method of thinking is thought by practitioners of critical pedagogy to allow them to "recognize connections between their individual problems and experiences and the social contexts in which they are embedded". Realizing one's consciousness ("conscientization", "conscientização") is then a needed first step of "praxis", which is defined as the power and know-how to take action against oppression while stressing the importance of liberating education. "Praxis involves engaging in a cycle of theory, application, evaluation, reflection, and then back to theory. Social transformation is the product of praxis at the collective level." Critical pedagogue Ira Shor, who was mentored by and worked closely with Freire from 1980 until Freire's death in 1997, defines critical pedagogy as: Habits of thought, reading, writing, and speaking which go beneath surface meaning, first impressions, dominant myths, official pronouncements, traditional clichés, received wisdom, and mere opinions, to understand the deep meaning, root causes, social context, ideology, and personal consequences of any action, event, object, process, organization, experience, text, subject matter, policy, mass media, or discourse. (Empowering Education, 129) Critical pedagogy explores the dialogic relationships between teaching and learning. Its proponents claim that it is a continuous process of what they call "unlearning", "learning", and "relearning", "reflection", "evaluation", and the effect that these actions have on the students, in particular students who they believe have been historically and continue to be disenfranchised by what they call "traditional schooling". The educational philosophy has been developed by Henry Giroux and others since the 1980s as a praxis-oriented "educational movement, guided by passion and principle, to help students develop a consciousness of freedom, recognize authoritarian tendencies, and connect knowledge to power and the ability to take constructive action". Freire wrote the introduction to Giroux's 1988 work, Teachers as Intellectuals: Toward a Critical Pedagogy of Learning. Another leading critical pedagogy theorist, whom Freire called his "intellectual cousin", Peter McLaren, wrote the foreword. McLaren and Giroux co-edited one book on critical pedagogy and co-authored another in the 1990s. Among its other leading figures in no particular order are bell hooks (Gloria Jean Watkins), Joe L. Kincheloe, Patti Lather, Myles Horton, Antonia Darder, Gloria Ladson-Billings, Peter McLaren, Khen Lampert, Howard Zinn, Donaldo Macedo, Dermeval Saviani, Sandy Grande, Michael Apple, and Stephanie Ledesma. Educationalists including Jonathan Kozol and Parker Palmer are sometimes included in this category.
Other critical pedagogues known more for their anti-schooling, unschooling, or deschooling perspectives include Ivan Illich, John Holt, Ira Shor, John Taylor Gatto, and Matt Hern. Critical pedagogy has several other strands and foundations. Postmodern, anti-racist, feminist, postcolonial, queer, and environmental theories all play a role in further expanding and enriching Freire's original ideas about a critical pedagogy, broadening its original focus on social class to include issues pertaining to religion, military identification, race, gender, sexuality, nationality, ethnicity, and age. Much of the work also draws on anarchism, György Lukács, Wilhelm Reich, postcolonialism, and the discourse theories of Edward Said, Antonio Gramsci, Gilles Deleuze (rhizomatic learning) and Michel Foucault. Radical Teacher is a magazine dedicated to critical pedagogy and issues of interest to critical educators. Many contemporary critical pedagogues have embraced postmodern, anti-essentialist perspectives of the individual, of language, and of power, "while at the same time retaining the Freirean emphasis on critique, disrupting oppressive regimes of power/knowledge, and social change". == Developments and critiques == Like critical theory itself, the field of critical pedagogy continues to evolve. Contemporary critical educators, such as bell hooks and Peter McLaren, discuss in their criticisms the influences of many varied concerns, institutions, and social structures, "including globalization, the mass media, and race/spiritual relations", while citing reasons for resisting the possibilities to change. McLaren has developed a social-movement-based version of critical pedagogy that he calls revolutionary critical pedagogy, emphasizing critical pedagogy as a social movement for the creation of a democratic socialist alternative to capitalism. Curry Malott and Derek R. Ford's first collaborative book, Marx, Capital, and Education, built on McLaren's revolutionary pedagogy by connecting it to the global class struggle and the history of the actually-existing workers' movements. As Curry Malott noted, "Critical pedagogy was created as a break from the Marxism of Freire's Pedagogy of the Oppressed and Bowles and Gintis' Schooling in Capitalist America. Even though it is true that critical pedagogy has become increasingly domesticated and watered down, its birth was an act of counterrevolution itself." In particular, they argued for a critical pedagogy that simultaneously pursued communism and national liberation. Malott and Ford were the first authors to bring Harry Haywood's work into critical pedagogy. They believed that critical pedagogy had been divorced from its radical roots. Yet when Malott went to re-investigate those roots, he decided that they were not revolutionary at all. In fact, he argued that they were permeated by anti-communism and hostility to any actually-existing struggles of oppressed peoples. As a result, both Malott and Ford moved away from critical pedagogy. Ford developed a political pedagogy that built on McLaren's revolutionary critical pedagogy but took "a distanced and expository position" to link the project more explicitly to communism. Yet he later abandoned that as a starting point and instead turned his attention to educational forms. Joe L. Kincheloe and Shirley R. Steinberg have created the Paulo and Nita Freire Project for International Critical Pedagogy at McGill University.
In line with Kincheloe and Steinberg's contributions to critical pedagogy, the project attempts to move the field to the next phase of its evolution. In this second phase, critical pedagogy seeks to become a worldwide, decolonizing movement dedicated to listening to and learning from diverse discourses of people from around the planet. Kincheloe and Steinberg also embrace Indigenous knowledges in education as a way to expand critical pedagogy and to question educational hegemony. Joe L. Kincheloe, in expanding on Freire's notion that a pursuit of social change alone could promote anti-intellectualism, promotes a more balanced approach to education than postmodernists: "We cannot simply attempt to cultivate the intellect without changing the unjust social context in which such minds operate. Critical educators cannot just work to change the social order without helping to educate a knowledgeable and skillful group of students. Creating a just, progressive, creative, and democratic society demands both dimensions of this pedagogical progress." One of the major texts taking on the intersection between critical pedagogy and Indigenous knowledge(s) is Sandy Grande's Red Pedagogy: Native American Social and Political Thought (Rowman and Littlefield, 2004). In agreement with this perspective, Four Arrows, aka Don Trent Jacobs, challenges the anthropocentrism of critical pedagogy and writes that to achieve its transformative goals there are other differences between Western and Indigenous worldviews that must be considered. Approaching the intersection of Indigenous perspectives and pedagogy from another perspective, critical pedagogy of place examines the impacts of place. === In the classroom === Ira Shor, a professor at the City University of New York, provides an example of how critical pedagogy is used in the classroom. He develops these themes in looking at the use of Freirean teaching methods in the context of the everyday life of classrooms, in particular, institutional settings. He suggests that the whole curriculum of the classroom must be re-examined and reconstructed. He favors a change in the role of the student from object to active, critical subject. In doing so, he suggests that students undergo a struggle for ownership of themselves. He states that students have previously been lulled into a sense of complacency by the circumstances of everyday life and that through the processes of the classroom, they can begin to envision and strive for something different for themselves. Of course, achieving such a goal is neither automatic nor easy, as he suggests that the role of the teacher is critical to this process. Students need to be helped by teachers to separate themselves from unconditional acceptance of the conditions of their own existence. Once this separation is achieved, then students may be prepared for critical re-entry into an examination of everyday life. In a classroom environment that achieves such liberating intent, one of the potential outcomes is that the students themselves assume more responsibility for the class. Power is thus distributed amongst the group and the role of the teacher becomes much more mobile, not to mention more challenging. This encourages the growth of each student's intellectual character rather than a mere "mimicry of the professorial style." Teachers, however, do not simply abdicate their authority in a student-centered classroom.
In the later years of his life, Freire grew increasingly concerned with what he felt was a major misinterpretation of his work and insisted that teachers cannot deny their position of authority. Critical teachers, therefore, must admit that they are in a position of authority and then demonstrate that authority in their actions in support of students... [A]s teachers relinquish the authority of truth providers, they assume the mature authority of facilitators of student inquiry and problem-solving. In relation to such teacher authority, students gain their freedom: they gain the ability to become self-directed human beings capable of producing their own knowledge. And due to the student-centeredness that critical pedagogy insists upon, there are inherent conflicts associated with the "large collections of top-down content standards in their disciplines". Critical pedagogy advocates insist that teachers themselves are vital to the discussion about Standards-based education reform in the United States because a pedagogy that requires a student to learn or a teacher to teach externally imposed information exemplifies the banking model of education outlined by Freire where the structures of knowledge are left unexamined. To the critical pedagogue, the teaching act must incorporate social critique alongside the cultivation of intellect. Joe L. Kincheloe argues that this is in direct opposition to the epistemological concept of positivism, where "social actions should proceed with law-like predictability". In this philosophy, a teacher and their students would be served by Standards-based education where there would "only be one correct way to teach" as "[e]veryone is assumed to be the same regardless of race, class, or gender". Donald Schön's concept of "indeterminate zones of practice" illustrates how any practice, especially one with human subjects at its center, is infinitely complex and highly contested, which amplifies the critical pedagogue's unwillingness to apply universal practices. Furthermore, bell hooks, who is greatly influenced by Freire, points out the importance of engaged pedagogy and the responsibility that teachers, as well as students, must have in the classroom: Teachers must be aware of themselves as practitioners and as human beings if they wish to teach students in a non-threatening, anti-discriminatory way. Self-actualisation should be the goal of the teacher as well as the students. == Resistance from students == Students sometimes resist critical pedagogy. Student resistance to critical pedagogy can be attributed to a variety of reasons. Student objections may be due to ideological reasons, religious or moral convictions, fear of criticism, or discomfort with controversial issues. Kristen Seas argues: "Resistance in this context thus occurs when students are asked to shift not only their perspectives, but also their subjectivities as they accept or reject assumptions that contribute to the pedagogical arguments being constructed." Karen Kopelson asserts that resistance to new information or ideologies, introduced in the classroom, is a natural response to persuasive messages that are unfamiliar.
Resistance is often, at the least, understandably protective: As anyone who can remember her or his own first uneasy encounters with particularly challenging new theories or theorists can attest, resistance serves to shield us from uncomfortable shifts or all-out upheavals in perception and understanding, shifts in perception which, if honored, force us to inhabit the world in fundamentally new and different ways. Kristen Seas further explains: "Students [often] reject the teacher's message because they see it as coercive, they do not agree with it, or they feel excluded by it." Karen Kopelson concludes "that many if not most students come to the university in order to gain access to and eventual enfranchisement in 'the establishment,' not to critique and reject its privileges." == Critical pedagogy of teaching == The rapidly changing demographics of the classroom in the United States have resulted in an unprecedented amount of linguistic and cultural diversity. In order to respond to these changes, advocates of critical pedagogy call into question the focus of teacher credential programs on practical skills. "[T]his practical focus far too often occurs without examining teachers' own assumptions, values, and beliefs and how this ideological posture informs, often unconsciously, their perceptions and actions when working with linguistic-minority and other politically, socially, and economically subordinated students." As teaching is considered an inherently political act by the critical pedagogue, a more critical element of teacher education becomes addressing implicit biases (also known as implicit cognition or implicit stereotypes) that can subconsciously affect a teacher's perception of a student's ability to learn. Advocates of critical pedagogy insist that teachers, then, must become learners alongside their students, as well as students of their students. They must become experts beyond their field of knowledge, and immerse themselves in the culture, customs, and lived experiences of the students they aim to teach. == Criticism == Critical pedagogy has been the subject of varied debates inside and outside the field of education. Philosopher John Searle characterized the goal of Giroux's form of critical pedagogy as being "to create political radicals", thus highlighting the antagonistic moral and political grounds of the ideals of citizenship and "public wisdom." These varying moral perspectives of what is right are to be found in what John Dewey has referred to as the tensions between traditional and progressive education. Searle argued that critical pedagogy's objections to the Western canon are misplaced and/or disingenuous: Precisely by inculcating a critical attitude, the "canon" served to demythologize the conventional pieties of the American bourgeoisie and provided the student with a perspective from which to critically analyze American culture and institutions. Ironically, the same tradition is now regarded as oppressive. The texts once served an unmasking function; now we are told that it is the texts which must be unmasked. In 1992, Maxine Hairston took a hard line against critical pedagogy in the first-year college composition classroom and argued, "everywhere I turn I find composition faculty, both leaders in the profession and new voices, asserting that they have not only the right, but the duty, to put ideology and radical politics at the center of their teaching."
Hairston further contends: When classes focus on complex issues such as racial discrimination, economic injustices, and inequities of class and gender, they should be taught by qualified faculty who have the depth of information and historical competence that such critical social issues warrant. Our society's deep and tangled cultural conflicts can neither be explained nor resolved by simplistic ideological formulas. Sharon O'Dair (2003) said that compositionists "focus [...] almost exclusively on ideological matters", and further argued that this focus comes at the expense of students' proficiency in writing in the composition classroom. To this end, O'Dair explained that "recently advocated working-class pedagogies privilege activism over 'language instruction'". Jeff Smith argued that students want to gain positions of privilege, rather than critique them as critical pedagogues encourage. Scholars who have worked in the field of critical pedagogy have also critiqued the movement from various angles. In 2016, Curry Stephenson Malott, who had written several books about critical pedagogy and identified as a critical pedagogue, renounced and critiqued his previous work. In History and Education: Engaging the Global Class War, he writes about his "long journey of self-reflection and de-indoctrination" that culminated in the break. Malott writes that "the term critical pedagogy was created by Henry Giroux (1981) as an attempt to dismiss socialism and the legacy of Karl Marx." During the same period, Derek R. Ford also broke with critical pedagogy, claiming that it was "at a dead end." While Ford is not concerned with "proficiency" like O'Dair, he agrees that the focus on critique at the expense of imagination and actual political engagement serves to produce the critical pedagogue as "the enlightened and isolated researcher that reveals the truth behind the curtain." Both Malott and Ford, however, note exceptions to their critiques within the field, such as the work of Peter McLaren. == See also == == Further reading == Gottesman, Isaac (2016), The Critical Turn in Education: From Marxist Critique to Poststructuralist Feminism to Critical Theories of Race (New York: Routledge) Salmani Nodoushan, M. A., & Pashapour, A. (2016). Critical pedagogy, rituals of distinction, and true professionalism. Journal of Educational Technology, 13(1), 29–43. == References ==
Wikipedia/Critical_pedagogy_theory
In chemistry the polyhedral skeletal electron pair theory (PSEPT) provides electron counting rules useful for predicting the structures of clusters such as borane and carborane clusters. The electron counting rules were originally formulated by Kenneth Wade, and were further developed by others including Michael Mingos; they are sometimes known as Wade's rules or the Wade–Mingos rules. The rules are based on a molecular orbital treatment of the bonding. These rules have been extended and unified in the form of the Jemmis mno rules. == Predicting structures of cluster compounds == Different rules (4n, 5n, or 6n) are invoked depending on the number of electrons per vertex. The 4n rules are reasonably accurate in predicting the structures of clusters having about 4 electrons per vertex, as is the case for many boranes and carboranes. For such clusters, the structures are based on deltahedra, which are polyhedra in which every face is triangular. The 4n clusters are classified as closo-, nido-, arachno- or hypho-, based on whether they represent a complete (closo-) deltahedron, or a deltahedron that is missing one (nido-), two (arachno-) or three (hypho-) vertices. However, hypho clusters are relatively uncommon because the electron count is high enough to start to fill antibonding orbitals and destabilize the 4n structure. If the electron count is close to 5 electrons per vertex, the structure often changes to one governed by the 5n rules, which are based on 3-connected polyhedra. As the electron count increases further, the structures of clusters with 5n electron counts become unstable, so the 6n rules apply. The 6n clusters have structures that are based on rings. A molecular orbital treatment can be used to rationalize the bonding of cluster compounds of the 4n, 5n, and 6n types. === 4n rules === The following polyhedra are closo polyhedra, and are the basis for the 4n rules; each of these has triangular faces. The number of vertices in the cluster determines what polyhedron the structure is based on. Using the electron count, the predicted structure can be found. n is the number of vertices in the cluster. The 4n rules are enumerated in the following table. When counting electrons for each cluster, the number of valence electrons is enumerated. For each transition metal present, 10 electrons are subtracted from the total electron count. For example, in Rh6(CO)16 the total number of electrons would be 6 × 9 + 16 × 2 − 6 × 10 = 86 − 60 = 26. Therefore, the cluster is a closo polyhedron because n = 6, with 4n + 2 = 26. Other rules may be considered when predicting the structure of clusters: For clusters consisting mostly of transition metals, any main group elements present are often best counted as ligands or interstitial atoms, rather than vertices. Larger and more electropositive atoms tend to occupy vertices of high connectivity and smaller, more electronegative atoms tend to occupy vertices of low connectivity. In the special case of boron hydride clusters, each boron atom connected to 3 or more vertices has one terminal hydride, while a boron atom connected to two other vertices has two terminal hydrogen atoms. If more hydrogen atoms are present, they are placed in open face positions to even out the coordination number of the vertices.
For the special case of transition metal clusters, ligands are added to the metal centers to give the metals reasonable coordination numbers, and if any hydrogen atoms are present they are placed in bridging positions to even out the coordination numbers of the vertices. In general, closo structures with n vertices are n-vertex polyhedra. To predict the structure of a nido cluster, the closo cluster with n + 1 vertices is used as a starting point; if the cluster is composed of small atoms a high connectivity vertex is removed, while if the cluster is composed of large atoms a low connectivity vertex is removed. To predict the structure of an arachno cluster, the closo polyhedron with n + 2 vertices is used as the starting point, and the n + 1 vertex nido complex is generated by following the rule above; a second vertex adjacent to the first is removed if the cluster is composed of mostly small atoms, while a second vertex not adjacent to the first is removed if the cluster is composed mostly of large atoms. Example: [Pb10]2− Electron count: 10 × Pb + 2 (for the negative charge) = 10 × 4 + 2 = 42 electrons. Since n = 10, 4n + 2 = 42, so the cluster is a closo bicapped square antiprism. Example: [S4]2+ Electron count: 4 × S − 2 (for the positive charge) = 4 × 6 − 2 = 22 electrons. Since n = 4, 4n + 6 = 22, so the cluster is arachno. Starting from an octahedron, a vertex of high connectivity is removed, and then a non-adjacent vertex is removed. Example: Os6(CO)18 Electron count: 6 × Os + 18 × CO − 60 (for 6 osmium atoms) = 6 × 8 + 18 × 2 − 60 = 24. Since n = 6, 4n = 24, so the cluster is capped closo. Starting from a trigonal bipyramid, a face is capped. The carbonyls have been omitted for clarity. Example: [B5H5]4− Electron count: 5 × B + 5 × H + 4 (for the negative charge) = 5 × 3 + 5 × 1 + 4 = 24. Since n = 5, 4n + 4 = 24, so the cluster is nido. Starting from an octahedron, one of the vertices is removed. The rules are also useful in predicting the structure of carboranes. Example: C2B7H13 Electron count = 2 × C + 7 × B + 13 × H = 2 × 4 + 7 × 3 + 13 × 1 = 42. Since n in this case is 9, 4n + 6 = 42, the cluster is arachno. The bookkeeping for deltahedral clusters is sometimes carried out by counting skeletal electrons instead of the total number of electrons. The skeletal orbital (electron pair) and skeletal electron counts for the four types of deltahedral clusters are: n-vertex closo: n + 1 skeletal orbitals, 2n + 2 skeletal electrons n-vertex nido: n + 2 skeletal orbitals, 2n + 4 skeletal electrons n-vertex arachno: n + 3 skeletal orbitals, 2n + 6 skeletal electrons n-vertex hypho: n + 4 skeletal orbitals, 2n + 8 skeletal electrons The skeletal electron counts are determined by summing the total of the following number of electrons: 2 from each BH unit 3 from each CH unit 1 from each additional hydrogen atom (over and above the ones on the BH and CH units) the anionic charge electrons
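The 4n bookkeeping illustrated by these examples is mechanical enough to be scripted. The short Python sketch below is only an illustrative aid (the function name classify_4n and its interface are invented here, not part of the nomenclature); it reproduces the electron counts worked out above.

def classify_4n(total_valence_electrons, n_vertices, n_transition_metals=0):
    # Subtract 10 electrons for each transition-metal vertex, then compare
    # the remaining count with the 4n series described in the text.
    count = total_valence_electrons - 10 * n_transition_metals
    n = n_vertices
    labels = {
        4 * n: "capped closo",
        4 * n + 2: "closo",
        4 * n + 4: "nido",
        4 * n + 6: "arachno",
        4 * n + 8: "hypho",
    }
    return labels.get(count, "outside the 4n series")

# [Pb10]2-: 10 * 4 + 2 = 42 electrons, n = 10 -> "closo"
print(classify_4n(10 * 4 + 2, 10))
# Rh6(CO)16: 6 * 9 + 16 * 2 = 86 electrons, 6 transition metals, n = 6 -> "closo"
print(classify_4n(6 * 9 + 16 * 2, 6, n_transition_metals=6))
# C2B7H13: 2 * 4 + 7 * 3 + 13 * 1 = 42 electrons, n = 9 -> "arachno"
print(classify_4n(2 * 4 + 7 * 3 + 13 * 1, 9))

The dictionary lookup simply mirrors the closo/nido/arachno/hypho series described above; borderline or capped cases still call for chemical judgment.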
=== 5n rules === As discussed previously, the 4n rules mainly deal with clusters with electron counts of 4n + k, in which approximately 4 electrons are on each vertex. As more electrons are added per vertex, the number of electrons per vertex approaches 5. Rather than adopting structures based on deltahedra, the 5n-type clusters have structures based on a different series of polyhedra known as the 3-connected polyhedra, in which each vertex is connected to 3 other vertices. The 3-connected polyhedra are the duals of the deltahedra. The common types of 3-connected polyhedra are listed below. The 5n rules are as follows. Example: P4 Electron count: 4 × P = 4 × 5 = 20. It is a 5n structure with n = 4, so it is tetrahedral. Example: P4S3 Electron count: 4 × P + 3 × S = 4 × 5 + 3 × 6 = 38. It is a 5n + 3 structure with n = 7. Three vertices are inserted into edges. Example: P4O6 Electron count: 4 × P + 6 × O = 4 × 5 + 6 × 6 = 56. It is a 5n + 6 structure with n = 10. Six vertices are inserted into edges. === 6n rules === As more electrons are added to a 5n cluster, the number of electrons per vertex approaches 6. Instead of adopting structures based on 4n or 5n rules, the clusters tend to have structures governed by the 6n rules, which are based on rings. The rules for the 6n structures are as follows. Example: S8 Electron count = 8 × S = 8 × 6 = 48 electrons. Since n = 8, 6n = 48, so the cluster is an 8-membered ring. Hexane (C6H14) Electron count = 6 × C + 14 × H = 6 × 4 + 14 × 1 = 38. Since n = 6, 6n = 36 and 6n + 2 = 38, so the cluster is a 6-membered chain. === Isolobal vertex units === Provided a vertex unit is isolobal with BH then it can, in principle at least, be substituted for a BH unit, even though BH and CH are not isoelectronic. The CH+ unit is isolobal, hence the rules are applicable to carboranes. This can be explained by a frontier orbital treatment. Additionally, there are isolobal transition-metal units. For example, Fe(CO)3 provides 2 electrons. The derivation of this is briefly as follows: Fe has 8 valence electrons. Each carbonyl group is a net 2 electron donor after the internal σ- and π-bonding are taken into account, making 14 electrons. 3 pairs are considered to be involved in Fe–CO σ-bonding and 3 pairs are involved in π-backbonding from Fe to CO, reducing the 14 to 2. == Bonding in cluster compounds == closo-[B6H6]2− The boron atoms lie on each vertex of the octahedron and are sp hybridized. One sp-hybrid radiates away from the structure forming the bond with the hydrogen atom. The other sp-hybrid radiates into the center of the structure forming a large bonding molecular orbital at the center of the cluster. The remaining two unhybridized orbitals lie along the tangent of the sphere-like structure creating more bonding and antibonding orbitals between the boron vertices. The orbital diagram breaks down as follows: The 18 framework molecular orbitals (MOs) derived from the 18 boron atomic orbitals are: 1 bonding MO at the center of the cluster and 5 antibonding MOs from the 6 sp-radial hybrid orbitals 6 bonding MOs and 6 antibonding MOs from the 12 tangential p-orbitals. The total number of skeletal bonding orbitals is therefore 7, i.e. n + 1. === Transition metal clusters === Transition metal clusters use the d orbitals for bonding. Thus, they have up to nine bonding orbitals, instead of only the four present in boron and main group clusters. PSEPT also applies to metallaboranes. === Clusters with interstitial atoms === Owing to their large radii, transition metals generally form clusters that are larger than those of main group elements. One consequence of their increased size is that these clusters often contain atoms at their centers. A prominent example is [Fe6C(CO)16]2−. In such cases, the rules of electron counting assume that the interstitial atom contributes all valence electrons to cluster bonding. In this way, [Fe6C(CO)16]2− is equivalent to [Fe6(CO)16]6− or [Fe6(CO)18]2−. == See also == Styx rule == References == == General references == Greenwood, Norman N.; Earnshaw, Alan (1997). Chemistry of the Elements (2nd ed.). Butterworth-Heinemann. ISBN 978-0-08-037941-8. Cotton, F.
Albert; Wilkinson, Geoffrey; Murillo, Carlos A.; Bochmann, Manfred (1999), Advanced Inorganic Chemistry (6th ed.), New York: Wiley-Interscience, ISBN 0-471-19957-5
Wikipedia/Polyhedral_skeletal_electron_pair_theory
In harmonic analysis, a field within mathematics, Littlewood–Paley theory is a theoretical framework used to extend certain results about L2 functions to Lp functions for 1 < p < ∞. It is typically used as a substitute for orthogonality arguments which only apply to Lp functions when p = 2. One implementation involves studying a function by decomposing it in terms of functions with localized frequencies, and using the Littlewood–Paley g-function to compare it with its Poisson integral. The 1-variable case was originated by J. E. Littlewood and R. Paley (1931, 1937, 1938) and developed further by Polish mathematicians A. Zygmund and J. Marcinkiewicz in the 1930s using complex function theory (Zygmund 2002, chapters XIV, XV). E. M. Stein later extended the theory to higher dimensions using real variable techniques. == The dyadic decomposition of a function == Littlewood–Paley theory uses a decomposition of a function f into a sum of functions fρ with localized frequencies. There are several ways to construct such a decomposition; a typical method is as follows. If f(x) is a function on R, and ρ is a measurable set (in the frequency space) with characteristic function χρ(ξ), then fρ is defined via its Fourier transform {\displaystyle {\hat {f}}_{\rho }:=\chi _{\rho }{\hat {f}}}. Informally, fρ is the piece of f whose frequencies lie in ρ. If Δ is a collection of measurable sets which (up to measure 0) are disjoint and have as their union the whole real line, then a well behaved function f can be written as a sum of functions fρ for ρ ∈ Δ. When Δ consists of the sets of the form {\displaystyle \rho =[-2^{k+1},-2^{k}]\cup [2^{k},2^{k+1}]} for k an integer, this gives a so-called "dyadic decomposition" of f : Σρ fρ. There are many variations of this construction; for example, the characteristic function of a set used in the definition of fρ can be replaced by a smoother function. A key estimate of Littlewood–Paley theory is the Littlewood–Paley theorem, which bounds the size of the functions fρ in terms of the size of f. There are many versions of this theorem corresponding to the different ways of decomposing f. A typical estimate is to bound the Lp norm of (Σρ |fρ|²)^(1/2) by a multiple of the Lp norm of f. In higher dimensions it is possible to generalize this construction by replacing intervals with rectangles with sides parallel to the coordinate axes. Unfortunately these are rather special sets, which limits the applications to higher dimensions.
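On a computer, a dyadic decomposition of a sampled signal can be imitated with the discrete Fourier transform, keeping in each piece only the frequencies from one dyadic block. The following NumPy sketch is purely illustrative (the function name dyadic_pieces and the sampling conventions are assumptions, not part of the theory); it checks that the pieces sum back to the original signal and forms the square function (Σρ |fρ|²)^(1/2) that appears in the Littlewood–Paley theorem.

import numpy as np

def dyadic_pieces(f):
    # Split a real signal into pieces whose discrete frequencies lie in
    # the dyadic blocks 2**k <= |xi| < 2**(k+1), plus the zero frequency.
    N = len(f)
    f_hat = np.fft.fft(f)
    freqs = np.abs(np.fft.fftfreq(N, d=1.0 / N))  # integer frequencies 0, 1, ..., N/2
    pieces = [np.fft.ifft(np.where(freqs == 0, f_hat, 0)).real]  # zero-frequency (mean) piece
    k = 0
    while 2 ** k <= freqs.max():
        mask = (freqs >= 2 ** k) & (freqs < 2 ** (k + 1))
        pieces.append(np.fft.ifft(np.where(mask, f_hat, 0)).real)
        k += 1
    return pieces

rng = np.random.default_rng(0)
f = rng.standard_normal(256)
pieces = dyadic_pieces(f)
print(np.allclose(sum(pieces), f))                      # True: the pieces sum back to f
square_function = np.sqrt(sum(p ** 2 for p in pieces))  # (sum over rho of |f_rho|^2)^(1/2)
print(square_function.shape)                            # (256,)

In practice smoother frequency cut-offs are often preferred, as noted above, but sharp characteristic functions suffice to illustrate the decomposition.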
== The Littlewood–Paley g function == The g function is a non-linear operator on Lp(Rn) that can be used to control the Lp norm of a function f in terms of its Poisson integral. The Poisson integral u(x,y) of f is defined for y > 0 by {\displaystyle u(x,y)=\int _{\mathbb {R} ^{n}}P_{y}(t)f(x-t)\,dt} where the Poisson kernel P on the upper half space {\displaystyle \{(y;x)\in \mathbf {R} ^{n+1}\mid y>0\}} is given by {\displaystyle P_{y}(x)=\int _{\mathbb {R} ^{n}}e^{-2\pi it\cdot x-2\pi |t|y}\,dt={\frac {\Gamma ((n+1)/2)}{\pi ^{(n+1)/2}}}{\frac {y}{(|x|^{2}+y^{2})^{(n+1)/2}}}.} The Littlewood–Paley g function g(f) is defined by {\displaystyle g(f)(x)=\left(\int _{0}^{\infty }|\nabla u(x,y)|^{2}y\,dy\right)^{1/2}} A basic property of g is that it approximately preserves norms. More precisely, for 1 < p < ∞, the ratio of the Lp norms of f and g(f) is bounded above and below by fixed positive constants depending on n and p but not on f. == Applications == One early application of Littlewood–Paley theory was the proof that if Sn are the partial sums of the Fourier series of a periodic Lp function (p > 1) and nj is a sequence satisfying nj+1/nj > q for some fixed q > 1, then the sequence Snj converges almost everywhere. This was later superseded by the Carleson–Hunt theorem showing that Sn itself converges almost everywhere. Littlewood–Paley theory can also be used to prove the Marcinkiewicz multiplier theorem. == References == Coifman, R. R.; Weiss, Guido (1978), "Book Review: Littlewood-Paley and multiplier theory", Bulletin of the American Mathematical Society, 84 (2): 242–250, doi:10.1090/S0002-9904-1978-14464-4, ISSN 0002-9904, MR 1567040 Edwards, R. E.; Gaudry, G. I. (1977), Littlewood-Paley and multiplier theory, Berlin, New York: Springer-Verlag, ISBN 978-3-540-07726-8, MR 0618663 Frazier, Michael; Jawerth, Björn; Weiss, Guido (1991), Littlewood-Paley theory and the study of function spaces, CBMS Regional Conference Series in Mathematics, vol. 79, Published for the Conference Board of the Mathematical Sciences, Washington, DC, doi:10.1090/cbms/079, ISBN 978-0-8218-0731-6, MR 1107300 Littlewood, J. E.; Paley, R. E. A. C. (1931), "Theorems on Fourier Series and Power Series", J. London Math. Soc., 6 (3): 230–233, doi:10.1112/jlms/s1-6.3.230 Littlewood, J. E.; Paley, R. E. A. C. (1937), "Theorems on Fourier Series and Power Series (II)", Proc. London Math. Soc., 42 (1): 52–89, doi:10.1112/plms/s2-42.1.52 Littlewood, J. E.; Paley, R. E. A. C. (1938), "Theorems on Fourier Series and Power Series (III)", Proc. London Math. Soc., 43 (2): 105–126, doi:10.1112/plms/s2-43.2.105 Stein, Elias M. (1970), Topics in harmonic analysis related to the Littlewood-Paley theory., Annals of Mathematics Studies, No. 63, Princeton University Press, MR 0252961 Zygmund, A. (2002) [1935], Trigonometric series. Vol. I, II, Cambridge Mathematical Library (3rd ed.), Cambridge University Press, ISBN 978-0-521-89053-3, MR 1963498
Wikipedia/Littlewood–Paley_theory
Engaged theory is a methodological framework for understanding the social complexity of a society, by using social relations as the base category of study, with the social always understood as grounded in the natural, including people as embodied beings. Engaged theory progresses from detailed, empirical analysis of the people, things, and processes of the world to abstract theory about the constitution and social framing of people, things, and processes. As a type of critical theory, engaged theory is cross-disciplinary, drawing from sociology, anthropology, political studies, history, philosophy, and global studies to engage with the world while seeking to change it. Examples of engaged theory are the constitutive abstraction approach of writers, such as John Hinkson, Geoff Sharp, and Simon Cooper, who published in Arena Journal; and the approach developed at the Centre for Global Research of the Royal Melbourne Institute of Technology, Australia, by scholars such as Manfred Steger, Paul James and Damian Grenfell, who draw from the works of Pierre Bourdieu, Benedict Anderson, and Charles Taylor, et al. == Politics of engagement == Engaged theory research is in the world and of the world, in that a theory somehow affects what occurs in the world; but engaged theory does not always include itself in a theory about the constitution of ideas and practices, which the sociologist Anthony Giddens identifies as a double hermeneutic movement. Engaged theory is explicit about its political standpoint; thus, in Species Matters: Human Advocacy and Cultural Theory, Carol J. Adams explained: "Engaged theory ... arises from anger about what is, theory that envisions what is possible. Engaged theory makes change possible." Moreover, in the praxis of engaged theory, theoreticians must be aware of their own tendencies to be ideologically driven by the dominant concerns of the time in which the theory is presented; for example, the ideology of liberalism is reductive in its advocacy of and for 'freedom', and fails to reflect upon the influence of the ideology of the liberal advocate. == Grounding of analysis == All social theories are dependent upon a process of abstraction. This is what philosophers call epistemological abstraction. However, such theories do not characteristically theorize their own bases for establishing their standpoint. Engaged theory does. By comparison, grounded theory, a very different approach, suggests that empirical data collection is a neutral process that gives rise to theoretical claims out of that data. Engaged theory, to the contrary, treats such a claim to value neutrality as naively unsustainable. Engaged theory is thus reflexive in a number of ways: Firstly, it recognises that doing something as basic as collecting data already entails making theoretical presuppositions. Secondly, it names the levels of analysis from which theoretical claims are made. Engaged theory works across four levels of theoretical abstraction. (See below: § Modes of analysis.) Thirdly, it makes a clear distinction between theory and method, suggesting that a social theory is an argument about a social phenomenon, while an analytical method or set of methods is defined as a means of substantiating that theory. Engaged theory in these terms works as a 'grand method', but not a 'grand theory'. It provides an integrated set of methodological tools for developing different theories of things and processes in the world.
Fourthly, it seeks to understand its own epistemological basis while treating knowledge formation as one of the basic ontological categories of human practice. Fifthly, it treats history as a modern way of understanding temporal change, and therefore as ontologically different from a tribal saga or cosmological narrative. In other words, it provides a meta-standpoint on its own capacity to historicize. == Modes of analysis == In the version of engaged theory developed by an Australian-based group of writers, analysis moves from the most concrete form of analysis—empirical generalization—to more abstract modes of analysis. Each subsequent mode of analysis is more abstract than the previous one, moving across the following themes: 1. doing, 2. acting, 3. relating, 4. being. This leads to the 'levels' approach as set out below: === 1. Empirical analysis (ways of doing) === The method begins by emphasizing the importance of a first-order abstraction, here called empirical analysis. It entails drawing out and generalizing from on-the-ground detailed descriptions of history and place. This first level involves generating empirical description based on observation, experience, recording or experiment—in other words, abstracting evidence from that which exists or occurs in the world—or it involves drawing upon the empirical research of others. The first level of analytical abstraction is an ordering of ‘things in the world’, in a way that does not depend upon any kind of further analysis being applied to those ‘things’. For example, the Circles of Sustainability approach is a form of engaged theory distinguishing (at the level of empirical generalization) between different domains of social life. It can be used for understanding and assessing quality of life. Although that approach is also analytically defended through more abstract theory, the claim that economics, ecology, politics and culture can be distinguished as central domains of social practice has to be defensible at an empirical level. It needs to be useful in analysing situations on the ground. The success or otherwise of the method can be assessed by examining how it is used. One example of the use of the method was a project on Papua New Guinea called Sustainable Communities, Sustainable Development. === 2. Conjunctural analysis (ways of acting) === This second level of analysis, conjunctural analysis, involves identifying and, more importantly, examining the intersection (the conjunctures) of various patterns of action (practice and meaning). Here the method draws upon established sociological, anthropological and political categories of analysis such as production, exchange, communication, organization and inquiry. === 3. Integrational analysis (ways of relating) === This third level of entry into discussing the complexity of social relations examines the intersecting modes of social integration and differentiation. These different modes of integration are expressed here in terms of different ways of relating to and distinguishing oneself from others—from the face-to-face to the disembodied. Here we see a break with the dominant emphases of classical social theory and a movement towards a post-classical sensibility.
In relation to the nation-state, for example, we can ask how it is possible to explain a phenomenon that, at least in its modern variant, subjectively explains itself by reference to face-to-face metaphors of blood and place—ties of genealogy, kinship and ethnicity—when the objective 'reality' of all nation-states is that they are disembodied communities of abstracted strangers who will never meet. This accords with Benedict Anderson's conception of 'imagined communities', but recognizes the contradictory formation of that kind of community. === 4. Categorical analysis (ways of being) === This level of enquiry is based upon an exploration of the ontological categories (categories of being such as time and space). If the previous form of analysis emphasizes the different modes through which people live their commonalities with or differences from others, those same themes are examined through more abstract analytical lenses of different grounding forms of life: respectively, embodiment, spatiality, temporality, performativity and epistemology. At this level, generalizations can be made about the dominant modes of categorization in a social formation or in its fields of practice and discourse. It is only at this level that it makes sense to generalize across modes of being and to talk of ontological formations, societies as formed in the uneven dominance of formations of tribalism, traditionalism, modernism or postmodernism. == See also == == References == == Further reading == Cooper, Simon (2002). Technoculture and Critical Theory: In Service to the Machine. London: Routledge. Grenfell, Damian (2012). "Remembering the Dead from the Customary to the Modern in Timor-Leste". Local-Global: Identity, Security, Community. 11: 86–108. James, Paul; with Magee, Liam; Scerri, Andy; Steger, Manfred B. (2015). Urban Sustainability in Theory and Practice: Circles of Sustainability. London: Routledge. ISBN 978-1-315-76574-7. James, Paul (2006). Globalism, Nationalism, Tribalism: Bringing Theory Back In—Volume 2 of Towards a Theory of Abstract Community. London: Sage Publications.
Wikipedia/Engaged_theory
A sociological theory is a supposition that intends to consider, analyze, and/or explain objects of social reality from a sociological perspective,: 14  drawing connections between individual concepts in order to organize and substantiate sociological knowledge. Hence, such knowledge is composed of complex theoretical frameworks and methodology. These theories range in scope, from concise, yet thorough, descriptions of a single social process to broad, inconclusive paradigms for analysis and interpretation. Some sociological theories explain aspects of the social world and enable prediction about future events, while others function as broad perspectives which guide further sociological analyses. Prominent sociological theorists include Talcott Parsons, Robert K. Merton, Randall Collins, James Samuel Coleman, Peter Blau, Niklas Luhmann, Immanuel Wallerstein, George Homans, Theda Skocpol, Gerhard Lenski, Pierre van den Berghe and Jonathan H. Turner. == Sociological theory vs. social theory == Kenneth Allan (2006) distinguishes sociological theory from social theory, in that the former consists of abstract and testable propositions about society, heavily relying on the scientific method which aims for objectivity and to avoid passing value judgments. In contrast, social theory, according to Allan, focuses less on explanation and more on commentary and critique of modern society. As such, social theory is generally closer to continental philosophy insofar as it is less concerned with objectivity and derivation of testable propositions, thus more likely to propose normative judgments. Sociologist Robert K. Merton (1949) argued that sociological theory deals with social mechanisms, which are essential in exemplifying the 'middle ground' between social law and description.: 43–4  Merton believed these social mechanisms to be "social processes having designated consequences for designated parts of the social structure." Prominent social theorists include: Jürgen Habermas, Anthony Giddens, Michel Foucault, Dorothy Smith, Roberto Unger, Alfred Schütz, Jeffrey Alexander, and Jacques Derrida. There are also prominent scholars who could be seen as being in-between social and sociological theories, such as: Harold Garfinkel, Herbert Blumer, Claude Lévi-Strauss, Pierre Bourdieu, and Erving Goffman. == Classical theoretical traditions == The field of sociology itself is a relatively new discipline and so, by extension, is the field of sociological theory. Both date back to the 18th and 19th centuries, periods of drastic social change, where societies would begin to see, for example, the emergence of industrialization, urbanization, democracy, and early capitalism, provoking (particularly Western) thinkers to start becoming considerably more aware of society. As such, the field of sociology initially dealt with broad historical processes relating to these changes. Through a well-cited survey of sociological theory, Randall Collins (1994) retroactively labels various theorists as belonging to four theoretical traditions: functionalism, conflict, symbolic interactionism, and utilitarianism. While modern sociological theory descends predominately from functionalist (Durkheim) and conflict-oriented (Marx and Weber) perspectives of social structure, it also takes great influence from the symbolic interactionist tradition, accounting for theories of pragmatism (Mead, Cooley) and micro-level structure (Simmel). 
Likewise, utilitarian theories of rational choice (equivalent here to "social exchange theory"), although often associated with either ethics or economics, form an established tradition within sociological theory. Lastly, as argued by Raewyn Connell (2007), a tradition that is often forgotten is that of social Darwinism, which applies the logic of biological evolution to the social world. This tradition often aligns with classical functionalism and is associated with several founders of sociology, primarily Herbert Spencer, Lester F. Ward and William Graham Sumner. Contemporary sociological theory retains traces of each of these traditions, which are by no means mutually exclusive. === Structural functionalism === A broad historical paradigm in sociology, structural functionalism addresses social structures in their entirety and in terms of the necessary functions possessed by their constituent elements. A common parallel used by functionalists, known as the organic or biological analogy (popularized by Herbert Spencer), is to regard norms and institutions as 'organs' that work toward the proper functioning of the entire 'body' of society. The perspective was implicit in the original sociological positivism of Auguste Comte, but was theorized in full by Durkheim, again with respect to observable, structural laws. Functionalism also has an anthropological basis in the work of theorists such as Marcel Mauss, Bronisław Malinowski, and Alfred Radcliffe-Brown, the latter of whom, through explicit usage, introduced the "structural" prefix to the concept. Classical functionalist theory is generally united by its tendency towards the biological analogy and notions of social evolutionism. As Giddens states: "Functionalist thought, from Comte onwards, has looked particularly towards biology as the science providing the closest and most compatible model for social science. Biology has been taken to provide a guide to conceptualizing the structure and the function of social systems and to analyzing processes of evolution via mechanisms of adaptation…functionalism strongly emphasizes the pre-eminence of the social world over its individual parts (i.e. its constituent actors, human subjects)." === Conflict theory === Conflict theory is a method that attempts, in a scientific manner, to provide causal explanations for the existence of conflict in society. Thus, conflict theorists look at the ways in which conflict arises and is resolved in society, as well as how every conflict is unique. Such theories hold that the origins of conflict in societies are founded in the unequal distribution of resources and power. Though there is no universal definition of what "resources" necessarily includes, most theorists follow Max Weber's point of view. Weber viewed conflict as the result of class, status, and power being ways of defining individuals in any given society. In this sense, power defines standards; thus, people abide by societal rules and expectations due to an inequality of power. Karl Marx is believed to be the father of social conflict theory, in which social conflict refers to the struggle between segments of society over valued resources. By the 19th century, a small population in the West had become capitalists: individuals who own and operate factories and other businesses in pursuit of profits, owning virtually all large-scale means of production.
However, theorists believe that capitalism turned most other people into industrial workers, or, in Marx's terms, proletarians: individuals who, because of the structure of capitalist economies, must sell their labor for wages. It is through this notion that conflict theories challenge historically dominant ideologies, drawing attention to such power differentials as class, gender and race. Conflict theory is therefore a macrosociological approach, in which society is interpreted as an arena of inequality that generates conflict and social change.: 15  Other important sociologists associated with social conflict theory include Harriet Martineau, Jane Addams, and W. E. B. Du Bois. Rather than observing the ways in which social structures help societies to operate, this sociological approach looks at how "social patterns" cause certain individuals to become dominant in society, while causing others to be oppressed. Accordingly, some criticisms of this theory are that it disregards how shared values and the way in which people rely on each other help to unify society. === Symbolic interactionism === Symbolic interaction—often associated with interactionism, phenomenological sociology, dramaturgy, and interpretivism—is a sociological approach that places emphasis on subjective meanings and, usually through analysis, on the empirical unfolding of social processes.: 16  Such processes are believed to rely on individuals and their actions, which are ultimately necessary for society to exist. This phenomenon was first theorized by George Herbert Mead, who described it as the outcome of collaborative joint action. The approach focuses on creating a theoretical framework that observes society as the product of everyday interactions of individuals. In other words, society in its most basic form is nothing more than the shared reality constructed by individuals as they interact with one another. In this sense, individuals interact within countless situations through symbolic interpretations of their given reality, whereby society is a complex, ever-changing mosaic of subjective meanings.: 19  Some critics of this approach argue that it focuses only on ostensible characteristics of social situations while disregarding the effects of culture, race, or gender (i.e. social-historical structures). Important sociologists traditionally associated with this approach include George Herbert Mead, Herbert Blumer, and Erving Goffman. New contributions to the perspective, meanwhile, include those of Howard Becker, Gary Alan Fine, David Altheide, Robert Prus, Peter M. Hall, David R. Maines, as well as others. It is also in this tradition that the radical-empirical approach of ethnomethodology emerged from the work of Harold Garfinkel. === Utilitarianism === Utilitarianism is often referred to as exchange theory or rational choice theory in the context of sociology. This tradition tends to privilege the agency of individual rational actors, assuming that, within interactions, individuals always seek to maximize their own self-interest. As argued by Josh Whitford (2002), rational actors can be characterized as possessing four basic elements: "a knowledge of alternatives;" "a knowledge of, or beliefs about the consequences of the various alternatives;" "an ordering of preferences over outcomes;" and "a decision rule, to select amongst the possible alternatives." Exchange theory is specifically attributed to the work of George C. Homans, Peter Blau, and Richard Emerson. Organizational sociologists James G.
March and Herbert A. Simon noted that an individual's rationality is bounded by the context or organizational setting. The utilitarian perspective in sociology was, most notably, revitalized in the late 20th century by the work of former ASA president James Samuel Coleman. == Basic theory == Overall, there is a strong consensus regarding the central theoretical questions and the key problems that emerge from explicating such questions in sociology. In general, sociological theory attempts to answer the following three questions: (1) What is action?; (2) What is social order?; and (3) What determines social change? In the myriad of attempts to answer these questions, three predominantly theoretical (i.e. not empirical) issues emerge, largely inherited from classical theoretical traditions. The consensus on the central theoretical problems is how to link, transcend or cope with the following "big three" dichotomies:
Subjectivity and objectivity: deals with knowledge.
Structure and agency: deals with agency.
Synchrony and diachrony: deals with time.
Lastly, sociological theory often grapples with a subset of all three central problems through the problem of integrating or transcending the divide between micro-, meso- and macro-level social phenomena. These problems are not altogether empirical. Rather, they are epistemological: they arise from the conceptual imagery and analytical analogies that sociologists use to describe the complexity of social processes. === Objectivity and subjectivity === The issue of subjectivity and objectivity can be divided into a concern over (a) the general possibilities of social actions; and (b) the specific problem of social scientific knowledge. In regard to the former, the subjective is often equated (though not necessarily) with "the individual" and the individual's intentions and interpretations of the "objective". The objective, on the other hand, is usually considered to be any public/external action or outcome, on up to society writ large. A primary question for social theorists is how knowledge reproduces along the chain of subjective-objective-subjective. That is to say, how is intersubjectivity achieved? While, historically, qualitative methods have attempted to tease out subjective interpretations, quantitative survey methods also attempt to capture individual subjectivities. Moreover, some qualitative methods take a radical approach to objective description in situ. Insofar as subjectivity and objectivity are concerned with (b) the specific problem of social scientific knowledge, such concern results from the fact that a sociologist is part of the very object they seek to explain, as expressed by Bourdieu: How can the sociologist effect in practice this radical doubting which is indispensable for bracketing all the presuppositions inherent in the fact that she is a social being, that she is therefore socialized and led to feel "like a fish in water" within that social world whose structures she has internalized? How can she prevent the social world itself from carrying out the construction of the object, in a sense, through her, through these unself-conscious operations or operations unaware of themselves of which she is the apparent subject? === Structure and agency === Structure and agency (or determinism and voluntarism) form an enduring ontological debate in social theory: "Do social structures determine an individual's behaviour or does human agency?"
In this context, agency refers to the capacity of an individual to act independently and make free choices, whereas structure relates to factors that limit or affect the choices and actions of the individual (e.g. social class, religion, gender, ethnicity, etc.). Discussions over the primacy of either structure or agency relate to the core of sociological ontology, i.e. "what is the social world made of?", "what is a cause in the social world?", and "what is an effect?". A perennial question within this debate is that of "social reproduction": how are structures (specifically structures that produce inequality) reproduced through the choices of individuals? === Synchrony and diachrony === Synchrony and diachrony (or statics and dynamics) within social theory are terms that refer to a distinction emerging out of the work of Lévi-Strauss, who inherited it from the linguistics of Ferdinand de Saussure. Synchrony slices moments of time for analysis; it is thus an analysis of static social reality. Diachrony, on the other hand, attempts to analyze dynamic sequences. Following Saussure, synchrony would describe social phenomena at a specific point in time, while diachrony would refer to unfolding processes in time. In Anthony Giddens' introduction to Central Problems in Social Theory, he states that, "in order to show the interdependence of action and structure...we must grasp the time space relations inherent in the constitution of all social interaction." And like structure and agency, time is integral to discussion of social reproduction. In terms of sociology, historical sociology is often better positioned to analyze social life as diachronic, while survey research takes a snapshot of social life and is thus better equipped to understand social life as synchronic. Some argue that the synchrony of social structure is a methodological perspective rather than an ontological claim. Nonetheless, the problem for theory is how to integrate the two manners of recording and thinking about social data. == Contemporary theories == The contemporary discipline of sociology is theoretically multi-paradigmatic, encompassing a greater range of subjects, including communities, organizations, and relationships, than when the discipline first began. === Strain theory / Anomie theory === Strain theory is a theoretical perspective that identifies anomie (i.e. normlessness) as the result of a society that provides little moral guidance to individuals.: 134  Émile Durkheim (1893) first described anomie as one of the results of an inequitable division of labour within a society, observing that periods of social disruption resulted in greater anomie and higher rates of suicide and crime. In this sense, broadly speaking, during times of great upheaval, increasing numbers of individuals "cease to accept the moral legitimacy of society," as noted by sociologist Anthony R. Mawson (1970). Robert K. Merton would go on to theorize that anomie, as well as some forms of deviant behavior, derive largely from a disjunction between "culturally prescribed aspirations" of a society and "socially structured avenues for realizing those aspirations." === Dramaturgy === Developed by Erving Goffman, dramaturgy (also known as the dramaturgical perspective) is a particularized paradigm of symbolic interactionism that interprets life as a performance (i.e. a drama). As "actors," we have a status, i.e.
the part that we play, by which we are given various roles.: 16  These roles serve as a script, supplying dialogue and action for the characters (i.e. the people in reality).: 19  Roles also involve props and certain settings. For example, a doctor (the role) uses instruments like a heart monitor (the prop), all the while using medical terms (the script), while in their doctor's office (the setting).: 134  In addition, our performance is the "presentation of self," which is how people perceive us, based on the ways in which we portray ourselves.: 134  This process, known as impression management, begins with the idea of personal performance. === Mathematical theory === Mathematical theory (also known as formal theory) refers to the use of mathematics in constructing social theories. Mathematical sociology aims to express sociological theory in formal terms, a precision that such theories can otherwise be understood to lack. The benefits of this approach not only include increased clarity, but also, through mathematics, the ability to derive theoretical implications that could not be arrived at intuitively. As such, models typically used in mathematical sociology allow sociologists to understand how predictable local interactions are often able to elicit global patterns of social structure (see the illustrative sketch below). === Positivism === Positivism is a philosophy, developed in the middle of the 19th century by Auguste Comte, that states that the only authentic knowledge is scientific knowledge, and that such knowledge can only come from positive affirmation of theories through a strict scientific method. Society operates according to laws just like the physical world, thus introspective or intuitional attempts to gain knowledge are rejected. The positivist approach has been a recurrent theme in the history of western thought, from antiquity to the present day. === Postmodernism === Postmodernism, adhering to anti-theory and anti-method, holds that, due to human subjectivity, discovering objective truth is impossible or unachievable.: 10  In essence, the postmodernist perspective is one that exists as a counter to modernist thought, especially through its mistrust of grand theories and ideologies. The objective truth that is touted by modernist theory is believed by postmodernists to be impossible due to the ever-changing nature of society, whereby truth is also constantly subject to change. A postmodernist's purpose, therefore, is to achieve understanding through observation, rather than data collection, using both micro- and macro-level analyses.: 53  Questions that are asked by this approach include: "How do we understand societies or interpersonal relations, while rejecting the theories and methods of the social sciences, and our assumptions about human nature?" and "How does power permeate social relations or society, and change with the circumstances?": 19  One of the most prominent postmodernists in the approach's history is the French philosopher Michel Foucault.
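To make concrete the claim in the mathematical theory subsection above—that predictable local interactions can elicit global patterns of social structure—the following toy model may help. It is an illustrative sketch only, not a model drawn from the theorists cited here; the ring size, neighbourhood, and update rule are arbitrary choices. Each actor repeatedly adopts the opinion held by the local majority of itself and its two neighbours, and contiguous blocs of agreement emerge from a random start.

import random

# Toy model: actors on a ring hold opinion 0 or 1 and repeatedly adopt
# the majority opinion among themselves and their immediate neighbours.
# Illustrative only; all parameters are arbitrary.
N, SWEEPS = 60, 30
random.seed(1)
state = [random.randint(0, 1) for _ in range(N)]

for _ in range(SWEEPS):
    # synchronous update: each actor takes the local majority of three
    state = [1 if state[i - 1] + state[i] + state[(i + 1) % N] >= 2 else 0
             for i in range(N)]

# The ring settles into contiguous blocs of agreement: a global pattern
# that no individual actor aimed at.
print("".join("#" if s else "." for s in state))

Running the sketch prints a pattern such as "...####....###..." in which locally driven choices have produced stable, structure-like blocs.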
=== Other theories ===
Antipositivism (or interpretive sociology) is a theoretical perspective, based on the work of Max Weber, which proposes that social, economic and historical research can never be fully empirical or descriptive, as one must always approach it with a conceptual apparatus.: 132
Critical theory is a lineage of sociological theory, with reference to such groups as the Frankfurt School, that aims to critique and change society and culture, not simply to document and understand it.: 16
Engaged theory is an approach that seeks to understand the complexity of social life through synthesizing empirical research with more abstract layers of analysis, including analysis of modes of practice, and analysis of basic categories of existence such as time, space, embodiment, and knowledge.
Feminism is a collection of movements aimed at defining, establishing, and defending equal political, economic, and social rights for women. The theory focuses on how gender inequality shapes social life. This approach shows how sexuality both reflects patterns of social inequality and helps to perpetuate them. Feminism, from a social conflict perspective, focuses on gender inequality and links sexuality to the domination of women by men.: 185
Intersectionality is a sociological framework used to analyze how individuals' and groups' social and political identities combine to produce unique experiences of discrimination and privilege. This approach expands upon the perspectives of first- and second-wave feminism, which primarily focused on the experiences of white, middle-class women, by incorporating the distinct experiences of women of color, economically disadvantaged women, immigrant women, and other marginalized groups.
Field theory examines social fields, which are social environments in which competition takes place (e.g., the field of electronics manufacturers). It is concerned with how individuals construct such fields, with how the fields are structured, and with the effects the field has on people occupying different positions in it.
Grounded theory is a systematic methodology in the social sciences involving the generation of theory from data. A largely qualitative method, its goal is to discover and analyze data through comparative analyses, though it is quite flexible in its use of techniques.
Middle-range theory is an approach to sociological theorizing aimed at integrating theory and empirical research. It is currently the de facto dominant approach to sociological theory construction, especially in the United States. Middle-range theory starts with an empirical phenomenon (as opposed to a broad abstract entity like the social system) and abstracts from it to create general statements that can be verified by data.
Network theory is a structural approach to sociology that is most closely associated with the work of Harrison White, who views norms and behaviors as embedded in chains of social relations.: 132
Phenomenology is an approach within the field of sociology that aims to reveal what role human awareness plays in the production of social action, social situations and social worlds. In essence, phenomenology is the belief that society is a human construction. Originally developed by Edmund Husserl, it entered sociology through the social phenomenology of Alfred Schütz, which influenced the development of social constructionism and ethnomethodology.
Postcolonialism is a postmodern approach that consists of the reactions to and the analysis of colonialism.
Pure sociology is a theoretical paradigm, developed by Donald Black, that explains variation in social life through social geometry, meaning through locations in social space. A recent extension of this idea is that fluctuations in social space—i.e., social time—are the cause of social conflict.
Rational choice theory models social behavior as the interaction of utility-maximizing individuals. "Rational" implies that benefit is balanced against cost in accomplishing a utility-maximizing interaction. Costs are extrinsic, meaning intrinsic values such as feelings of guilt will not be accounted for in the cost to commit a crime.
Social constructionism is a sociological theory of knowledge that considers how social phenomena develop in particular social contexts.
The Thomas theorem holds that situations defined as real are real in their consequences. It suggests that the reality people construct in their interaction has real consequences for the future. For example, a teacher who believes a certain student to be intellectually gifted may well encourage exceptional academic performance.
Socialization refers to the lifelong social experience by which people develop their human potential and learn culture. Unlike other living species, humans need socialization within their cultures for survival. Adopting this concept, theorists may seek to understand the means by which human infants begin to acquire the skills necessary to perform as functional members of their society.
Social exchange theory proposes that the interactions that occur between people can be partly based on what can be gained or lost by being with others. For example, when people think about whom they may date, they will look to see whether the other person will offer just as much (or perhaps more) than they do. This can include judging an individual's looks and appearance, or their social status.
=== Theories of social movements ===
Collective action / collective behavior
Relative deprivation
Value-added theory
Resource mobilization / political opportunity
Framing (frame analysis theory)
New social movements
New culture
=== Theories of science and technology ===
Institutional sociology of science
Social construction of technology
Actor-network theory
Normalization process theory
Theories of technology
== Theories of crime ==
The general theory of crime refers to the proposition by Michael R. Gottfredson and Travis Hirschi (1990) that the main factor in criminal behaviour is the individual's lack of self-control. Theorists who do not distinguish the differences that exist between criminals and noncriminals are considered to be classical or control theorists. Such theorists believe that those who perform deviant acts do so out of enjoyment, without care for consequences. Positivists, by contrast, view criminal actions as a result of the nature of the person rather than of free choice. === Labeling theory === The essential notion of labeling theory is that deviance and conformity result not so much from what people do as from how others respond to these actions.: 203  It also states that a society's reaction to specific behaviors is a major determinant of how a person may come to adopt a "deviant" label.: 204  This theory stresses the relativity of deviance, the idea that people may define the same behavior in any number of ways. Thus labeling theory is a micro-level analysis and is often classified in the social-interactionist approach.
=== Hate crimes === A hate crime can be defined as a criminal act against a person or a person's property by an offender motivated by racial, ethnic, religious or other bias. Hate crimes may relate to race, ancestry, religion, sexual orientation and physical disability. According to Statistics Canada, the Jewish community was the most likely to be the victim of hate crimes in Canada in 2001–2002. Overall, about 57% of hate crimes are motivated by ethnicity and race, targeting mainly Blacks and Asians, while 43% target religion, mainly Judaism and Islam. A relatively small 9% are motivated by sexual orientation, targeting gays and lesbians.: 208–9  Physical traits do not distinguish criminals from noncriminals, but genetic factors together with environmental factors are strong predictors of adult crime and violence.: 198–9  Most psychologists see deviance as the result of "unsuccessful" socialization and abnormality in an individual personality.: 198–9  === Psychopathy === A psychopath can be defined as a serious criminal who does not feel shame or guilt from their actions, as they have little (if any) sympathy for the people they harm, nor do they fear punishment.: 199  Individuals of such nature may also be known to have an antisocial personality disorder. Robert D. Hare, one of the world's leading experts on psychopathy, developed an important assessment device for psychopathy, known as the Psychopathy Checklist (Revised). For many, this measure is the single most important advancement to date toward what will hopefully become our ultimate understanding of psychopathy.: 641  Psychopaths exhibit a variety of maladaptive traits; for instance, they rarely experience genuine affection for others. Moreover, they are skilled at faking affection; they are irresponsible, impulsive, and barely tolerant of frustration; and they pursue immediate gratification.: 614  Likewise, containment theory suggests that those with a stronger conscience will be more tolerant of frustration, and thus less likely to be involved in criminal activities.: 198–9  === White-collar crime === Sutherland and Cressey (1978) define white-collar crime as crime committed by persons of high social position in the course of their occupation. White-collar crime involves people making use of their occupational position to enrich themselves and others illegally, which often causes public harm. In white-collar crime, the public harm wreaked by false advertising, marketing of unsafe products, embezzlement, and bribery of public officials is more extensive than most people think, and most of it goes unnoticed and unpunished.: 206  Likewise, corporate crime refers to the illegal actions of a corporation or people acting on its behalf. Corporate crime ranges from knowingly selling faulty or dangerous products to purposely polluting the environment.
Like white-collar crime, most cases of corporate crime go unpunished, and many are never even known to the public.: 206
=== Other theories of crime ===
Differential association: Developed by Edwin Sutherland, this theory examines criminal acts from the perspective that they are learned behaviours.: 204
Control theory: Developed by Travis Hirschi, this theory states that a weak bond between an individual and society itself allows the individual to defy societal norms and adopt behaviors that are deviant in nature.: 204–5
Rational choice theory: States that people commit crimes when it is rational for them to do so according to analyses of costs and benefits, and that crime can be reduced by minimizing the benefits and maximizing the costs to the "would-be" criminal.
Social disorganization theory: States that crime is more likely to occur in areas where social institutions are unable to directly control groups of individuals.
Social learning theory: States that people adopt new behaviors through observational learning in their environments.
Strain theory: States that a social structure within a society may cause people to commit crimes. Specifically, the extent and type of deviance people engage in depend on whether a society provides the means to achieve cultural goals.: 197
Subcultural theory: States that behavior is influenced by factors such as class, ethnicity, and family status. This theory's primary focus is on juvenile delinquency.
Organized crime: A business that supplies illegal goods or services, including sex, drugs, and gambling.: 206  This type of crime expanded among immigrants, who found that society was not always willing to share its opportunities with them. A famous example of organized crime is the Italian Mafia.
== See also ==
Sociological imagination
Index of sociology articles
List of sociologists
Bibliography of sociology
List of sociology journals
Branches of sociology
Timeline of sociology
History of the social sciences
== References ==
=== Notes ===
=== Citations ===
== Introductory reading ==
Adams, B. N., and R. A. Sydie. 2001. Sociological Theory. Pine Forge Press.
Bilton, T., K. Bonnett, and P. Jones. 2002. Introductory Sociology. Palgrave Macmillan. ISBN 0-333-94571-9.
Babbie, Earl R. 2003. The Practice of Social Research (10th ed.). Wadsworth: Thomson Learning. ISBN 0-534-62029-9.
Goodman, D. J., and G. Ritzer. 2004. Sociological Theory (6th ed.). McGraw Hill.
Hughes, M., C. J. Kroehler, and J. W. Vander Zanden. 2001. Sociology: The Core. McGraw-Hill. ISBN 0-07-240535-X.
Germov, J. 2001. "A Class Above the Rest? Education and the Reproduction of Class Inequality." Pp. 233–48 in Sociology of Education: Possibilities and Practices, edited by J. Allen. Tuggerah, NSW: Social Science Press. ISBN 1-876633-23-9.
== External links ==
American Sociological Association - Section on Theory
European Sociological Association: Social Theory Research Network (RN29)
International Sociological Association: Research Committee on Sociological Theory (RC16)
Sociological Theory [academic journal]
Teng Wang, Social Phenomena
Wikipedia/Sociological_theory
The circumscription theory is a theory of the role of warfare in state formation in political anthropology, created by anthropologist Robert Carneiro. The theory has been summarized in one sentence by Schacht: "In areas of circumscribed agricultural land, population pressure led to warfare that resulted in the evolution of the state". The more circumscribed an agricultural area is, Carneiro argues, the sooner it politically unifies. == The theory in brief == The theory begins with some assumptions. Warfare usually disperses people rather than uniting them. Environmental circumscription occurs when an area of productive agricultural land is surrounded by a less productive area such as mountains, desert, or sea, where the application of extensive agriculture would bring severely diminishing returns. If there is no environmental circumscription, then the losers in a war can migrate out of the region and settle somewhere else. If there is environmental circumscription, then the losers in warfare are forced to submit to their conquerors, because migration is not an option, and the populations of the conquered and conqueror are united. The new state organization strives to alleviate the population pressure by increasing the productive capacity of agricultural land through, for instance, more intensive cultivation using irrigation. == Primary and secondary state development == Primary state development occurred in the six original states of the Nile Valley, Peru, Mesoamerica, the Yellow River Valley in China, the Indus River Valley, and Mesopotamia. Secondary state development occurred in states that developed from contact with already existing states. Primary state development occurred in areas with environmental circumscription. The presumption, under the Carneiro hypothesis, is that agricultural intensification, and the social coordination and coercion necessary to achieve this end, were a result of warfare in which vanquished populations could not disperse; the coercive coordination necessary for increased production of surplus is, under Carneiro's hypothesis, a causal factor in the origins of the State. For example, the mountainous river valleys of Peru which descend to the Pacific coast were severely environmentally circumscribed. Amazonian populations could always disperse and maintain sparse contact with other, potentially hostile, neighbors, whereas Andean coastal populations could not. == Criticism == Carneiro's theory has been criticized by the Dutch "early state school" that emerged in the 1970s around cultural anthropologist Henri J. M. Claessen, on the grounds that considerable evidence contrary to Carneiro's theory can be found. There are also cases of circumscribed environments and violent cultures which have failed to develop states, for example in the narrow highland valleys of interior Papua New Guinea, or the northwest Pacific coastlines of North America. Likewise, the formation of some early states in East Africa, Sri Lanka, and Polynesia does not easily fit Carneiro's model. Hence Claessen's school developed a "complex interaction model" to explain early state formation, in which factors such as ecology, social and demographic structures, economic conditions, conflicts, and ideology become aligned in ways which favour state organisation. == Later development and revision == Carneiro has since revised his theory in various ways. He has argued that population concentration can act as a lower-level impetus for tribal conflict than geographic circumscription.
He has also argued that, in addition to the necessities of conquest, a more important reason for the creation of chiefdoms was the rise of war chiefs who used their military loyalists to take over a group of villages and become paramount chiefs. The theory has also since been applied to many other contexts, such as the Zulu kingdom. One of the leading experts on world-system theory, Christopher Chase-Dunn, noted in 1990 that the circumscription theory is applicable to the global system. Since the modern world system, being global, is completely circumscribed, the factor of circumscription is supposed to bring about the political unification of the world, as it had done on regional scales on numerous occasions in the past. The thesis was further developed by historian Max Ostrovsky, who made wide use of the circumscription theory in his book. The works of Chase-Dunn and Ostrovsky linked the circumscription theory with Carneiro's other theory of the political unification of the world. In the "Foreword" to Ostrovsky's book, Carneiro acknowledges that he unjustly "abandoned" the circumscription theory in the Bronze Age. A later interview with Carneiro contains his answer to the intriguing question, "Are we circumscribed now?"
== References ==
== Bibliography ==
Carneiro, R. L. (1970). "A Theory of the Origin of the State". Science. 169 (3947): 733–738. Bibcode:1970Sci...169..733C. doi:10.1126/science.169.3947.733. PMID 17820299. S2CID 11536431.
Carneiro, R. L. (2000). The Muse of History and the Science of Culture. New York: Kluwer Academic/Plenum Publishers.
Lewellen, Ted C. (1992). Political Anthropology: An Introduction (2nd ed.). Westport, Connecticut / London: Bergin and Garvey. pp. 54–55.
Claessen, H. J. M. (2000). Structural Change; Evolution and Evolutionism in Cultural Anthropology. Leyden: CNWS.
== External links ==
Video on Carneiro's Circumscription Theory
Wikipedia/Carneiro's_circumscription_theory
In biology, cell theory is a scientific theory, first formulated in the mid-nineteenth century, that living organisms are made up of cells, that cells are the basic structural/organizational unit of all organisms, and that all cells come from pre-existing cells. Cells are the basic unit of structure in all living organisms and also the basic unit of reproduction. Cell theory has traditionally been accepted as the governing theory of all life, but some biologists consider non-cellular entities such as viruses to be living organisms and thus disagree with the universal application of cell theory to all forms of life. == History == With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under his microscope, Hooke was able to see pores. This was shocking at the time, as it was believed that no one had seen these before. Later, Matthias Schleiden and Theodor Schwann both studied the cells of animals and plants. What they discovered were significant differences between the two types of cells. This put forth the idea that cells were fundamental not only to plants, but to animals as well. == Microscopes == The discovery of the cell was made possible through the invention of the microscope. In the first century BC, Romans were able to make glass. They discovered that objects appeared to be larger under the glass. The expanded use of lenses in eyeglasses in the 13th century probably led to more widespread use of simple microscopes (magnifying glasses) with limited magnification. Compound microscopes, which combine an objective lens with an eyepiece to view a real image achieving much higher magnification, first appeared in Europe around 1620. In 1665, Robert Hooke used a microscope about six inches long with two convex lenses inside and examined specimens under reflected light for the observations in his book Micrographia. Hooke also used a simpler microscope with a single lens for examining specimens with directly transmitted light, because this allowed for a clearer image. An extensive microscopic study was done by Anton van Leeuwenhoek, a draper who took an interest in microscopes after seeing one while on an apprenticeship in Amsterdam in 1648. At some point before 1668, he learned how to grind lenses, which eventually led to Leeuwenhoek making his own unique microscopes. Each used a single lens: a small glass sphere that nonetheless allowed for a magnification of 270x. This was a large advance, since the maximum magnification before then had been only about 50x. After Leeuwenhoek, there was not much progress in microscope technology until the 1850s, two hundred years later. Carl Zeiss, a German engineer who manufactured microscopes, began to make changes to the lenses used. But the optical quality did not improve until the 1880s, when he hired Otto Schott and eventually Ernst Abbe. Optical microscopes can resolve only objects the size of a wavelength of light or larger, which still restricted discoveries involving objects smaller than the wavelengths of visible light. The development of the electron microscope in the 1920s made it possible to view objects that are smaller than optical wavelengths, once again opening up new possibilities in science.
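Two of the quantitative claims above can be sanity-checked with textbook optics. A simple magnifier gives an angular magnification of roughly M ≈ 250 mm / f (taking the standard 250 mm near point), a glass ball lens of diameter D and refractive index n has focal length f ≈ nD / (4(n − 1)), and the Abbe diffraction limit d = λ / (2·NA) bounds what any visible-light microscope can resolve. The sketch below applies these standard formulas; the refractive index and the inferred size of Leeuwenhoek's sphere are illustrative assumptions, not historical measurements.

# Back-of-the-envelope optics for early microscopes. Standard textbook
# formulas; the specific numbers are illustrative assumptions.

n = 1.5    # refractive index of ordinary glass (assumed)
M = 270    # magnification attributed to Leeuwenhoek's sphere lens

f_mm = 250.0 / M                    # simple-magnifier relation M ~ 250 mm / f
D_mm = 4.0 * (n - 1.0) * f_mm / n   # ball-lens focal length f = n*D / (4*(n-1))
print(f"focal length ~ {f_mm:.2f} mm, sphere diameter ~ {D_mm:.2f} mm")
# -> a sphere barely over a millimetre across, consistent with the
#    "small glass sphere" described above

wavelength_nm, NA = 550.0, 1.4      # green light; good oil-immersion objective
d_nm = wavelength_nm / (2.0 * NA)   # Abbe diffraction limit d = lambda / (2 NA)
print(f"best optical resolution ~ {d_nm:.0f} nm")
# ~200 nm: resolving anything smaller had to wait for the electron microscope

The ~200 nm figure is why the wavelength of visible light, rather than lens quality, ultimately capped optical microscopy.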
== Discovery of cells == The cell was first discovered by Robert Hooke in 1665, as described in his book Micrographia. In this book, he gave 60 detailed observations of various objects under a coarse, compound microscope. One observation was from very thin slices of bottle cork. Hooke discovered a multitude of tiny pores that he named "cells". The name came from the Latin word cella, meaning 'a small room', such as monks lived in, and also cellulae, which meant the six-sided cell of a honeycomb. However, Hooke did not know their real structure or function. What Hooke had thought were cells were actually empty cell walls of plant tissues. Because microscopes at this time had low magnification, Hooke was unable to see that there were other internal components to the cells he was observing. Therefore, he did not think the "cellulae" were alive. His cell observations gave no indication of the nucleus and other organelles found in most living cells. In Micrographia, Hooke also observed mould, bluish in color, found on leather. After studying it under his microscope, he was unable to observe "seeds" that would have indicated how the mould was multiplying in quantity. This led to Hooke suggesting that spontaneous generation, from either natural or artificial heat, was the cause. Since this was an old Aristotelian theory still accepted at the time, others did not reject it, and it was not disproved until Leeuwenhoek later discovered that generation was achieved otherwise. Anton van Leeuwenhoek is another scientist who saw these cells soon after Hooke did. He made use of a microscope containing improved lenses that could magnify objects 270-fold. Under these microscopes, Leeuwenhoek found motile objects. In a letter to The Royal Society on October 9, 1676, he stated that motility is a quality of life, and that these were therefore living organisms. Over time, he wrote many more papers which described many specific forms of microorganisms. Leeuwenhoek named these "animalcules," which included protozoa and other unicellular organisms, like bacteria. Though he did not have much formal education, he was able to produce the first accurate description of red blood cells, and he discovered bacteria after gaining an interest in the sense of taste, which led him to observe the tongue of an ox and then to study "pepper water" in 1676. He also found, for the first time, the sperm cells of animals and humans. Once he had discovered these types of cells, Leeuwenhoek saw that the fertilization process requires the sperm cell to enter the egg cell. This put an end to the previous theory of spontaneous generation. After reading letters by Leeuwenhoek, Hooke was the first to confirm his observations, which were thought to be unlikely by other contemporaries. Cells in animal tissues were observed later than those in plants because their tissues are fragile and difficult to study. Biologists believed that there was a fundamental unit to life, but, until Henri Dutrochet, it was unclear what that unit was. Besides stating that "the cell is the fundamental element of organization", Dutrochet claimed that cells were also a physiological unit. In 1804, Karl Rudolphi and J. H. F. Link were awarded a prize by the Königliche Societät der Wissenschaft (Royal Society of Science), Göttingen, for "solving the problem of the nature of cells": they were the first to prove that cells had independent cell walls. Previously, it had been thought that cells shared walls and that fluid passed between them this way.
== Cell theory == Credit for developing cell theory is usually given to two scientists: Theodor Schwann and Matthias Jakob Schleiden. While Rudolf Virchow contributed to the theory, he is given less credit for his contributions to it. In 1839, Schleiden suggested that every structural part of a plant was made up of cells or the result of cells. He also suggested that cells were made by a crystallization process, either within other cells or from the outside. However, this was not an original idea of Schleiden's. He claimed this theory as his own, though Barthelemy Dumortier had stated it years before him. This crystallization process is no longer accepted with modern cell theory. In 1839, Theodor Schwann stated that, along with plants, animals are composed of cells or the products of cells in their structures. This was a major advance in the field of biology, since little was known about animal structure up to this point compared to plants. From these conclusions about plants and animals, two of the three tenets of cell theory were postulated:
1. All living organisms are composed of one or more cells.
2. The cell is the most basic unit of life.
Schleiden's theory of free cell formation through crystallization was refuted in the 1850s by Robert Remak, Rudolf Virchow, and Albert Kolliker. In 1855, Rudolf Virchow added the third tenet to cell theory. In Latin, this tenet states Omnis cellula e cellula. This translates to:
3. All cells arise only from pre-existing cells.
However, the idea that all cells come from pre-existing cells had already been proposed by Robert Remak; it has been suggested that Virchow plagiarized Remak. Remak published observations in 1852 on cell division, claiming that Schleiden and Schwann were incorrect about generation schemes. He said instead that binary fission, which was first introduced by Dumortier, was how new animal cells were made. Once this tenet was added, classical cell theory was complete. == Modern interpretation == The generally accepted parts of modern cell theory include:
All known living things are made up of one or more cells.
All living cells arise from pre-existing cells by division.
The cell is the fundamental unit of structure and function in all living organisms.
The activity of an organism depends on the total activity of independent cells.
Energy flow (metabolism and biochemistry) occurs within cells.
Cells contain DNA, which is found specifically in the chromosome, and RNA, which is found in the cell nucleus and cytoplasm.
All cells are basically the same in chemical composition in organisms of similar species.
== Opposing concepts == The cell was first discovered by Robert Hooke in 1665 using a microscope. The first cell theory is credited to the work of Theodor Schwann and Matthias Jakob Schleiden in the 1830s. In this theory the internal contents of cells were called protoplasm and described as a jelly-like substance, sometimes called living jelly. At about the same time, colloidal chemistry began its development, and the concept of bound water emerged. A colloid is something between a solution and a suspension, in which Brownian motion is sufficient to prevent sedimentation. The idea of a semipermeable membrane, a barrier that is permeable to solvent but impermeable to solute molecules, was developed at about the same time. The term osmosis originated in 1827, and its importance to physiological phenomena was soon realized, but it was not until 1877 that the botanist Pfeffer proposed the membrane theory of cell physiology.
In this view, the cell was seen to be enclosed by a thin surface, the plasma membrane, and cell water and solutes such as potassium ions existed in a physical state like that of a dilute solution. In 1889, Hamburger used hemolysis of erythrocytes to determine the permeability of various solutes. By measuring the time required for the cells to swell past their elastic limit, the rate at which solutes entered the cells could be estimated from the accompanying change in cell volume. He also found that there was an apparent nonsolvent volume of about 50% in red blood cells, and later showed that this includes water of hydration in addition to the protein and other nonsolvent components of the cells. === Membrane and bulk phase theories === Two opposing concepts developed within the context of studies on osmosis, permeability, and electrical properties of cells. The first held that these properties all belonged to the plasma membrane, whereas the other predominant view was that the protoplasm was responsible for these properties. The membrane theory developed as a succession of ad-hoc additions and changes to the theory to overcome experimental hurdles. Overton (a distant cousin of Charles Darwin) first proposed the concept of a lipid (oil) plasma membrane in 1899. The major weakness of the lipid membrane was the lack of an explanation of the high permeability to water, so Nathansohn (1904) proposed the mosaic theory. In this view, the membrane is not a pure lipid layer, but a mosaic of areas with lipid and areas with semipermeable gel. Ruhland refined the mosaic theory to include pores to allow additional passage of small molecules. Since membranes are generally less permeable to anions, Leonor Michaelis concluded that ions are adsorbed to the walls of the pores, changing the permeability of the pores to ions by electrostatic repulsion. Michaelis demonstrated the membrane potential (1926) and proposed that it was related to the distribution of ions across the membrane. Harvey and Danielli (1939) proposed a lipid bilayer membrane covered on each side with a layer of protein to account for measurements of surface tension. In 1941, Boyle and Conway showed that the membrane of frog muscle was permeable to both K+ and Cl−, but apparently not to Na+, so the idea of electrical charges in the pores was unnecessary, since a single critical pore size would explain the permeability to K+, H+, and Cl− as well as the impermeability to Na+, Ca2+, and Mg2+. Over the same time period, it was shown (Procter and Wilson, 1916) that gels, which do not have a semipermeable membrane, would swell in dilute solutions. Jacques Loeb (1920) also studied gelatin extensively, with and without a membrane, showing that more of the properties attributed to the plasma membrane could be duplicated in gels without a membrane. In particular, he found that an electrical potential difference between the gelatin and the outside medium could be developed, based on the H+ concentration. Some criticisms of the membrane theory developed in the 1930s, based on observations such as the ability of some cells to swell and increase their surface area by a factor of 1000. A lipid layer cannot stretch to that extent without becoming a patchwork (thereby losing its barrier properties). Such criticisms stimulated continued studies on protoplasm as the principal agent determining cell permeability properties.
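Hamburger's osmotic experiments described above have a simple quantitative core. Treating the red cell as an ideal osmometer with a nonsolvent (osmotically inactive) volume fraction b, the Boyle–van 't Hoff relation gives the equilibrium relative volume V/V0 = b + (1 − b)·C_iso/C at external osmolarity C. The sketch below applies that textbook relation with b ≈ 0.5, echoing the ~50% nonsolvent volume quoted above; the numbers are schematic, not a reconstruction of Hamburger's data.

def relative_volume(C, C_iso=300.0, b=0.5):
    """Equilibrium cell volume relative to isotonic, by Boyle-van 't Hoff.

    C     -- external osmolarity (mOsm)
    C_iso -- isotonic osmolarity (assumed ~300 mOsm)
    b     -- osmotically inactive ("nonsolvent") volume fraction
    """
    return b + (1.0 - b) * C_iso / C

for C in (300, 200, 150, 100):
    print(f"{C:>3} mOsm -> V/V0 = {relative_volume(C):.2f}")
# Halving the osmolarity swells the cell by only 50%, not 100%, because
# about half the cell volume does not behave as free solvent water.

This is also why an impermeant external solute is needed to hold a cell shrunken at equilibrium, the point on which Troshin's galactose and urea experiments, discussed below, put pressure.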
In 1938, Fischer and Suer proposed that water in the protoplasm is not free but in a chemically combined form—the protoplasm represents a combination of protein, salt and water—and demonstrated the basic similarity between swelling in living tissues and the swelling of gelatin and fibrin gels. Dimitri Nasonov (1944) viewed proteins as the central components responsible for many properties of the cell, including electrical properties. By the 1940s, the bulk phase theories were not as well developed as the membrane theories. In 1941, Brooks and Brooks published a monograph, "The Permeability of Living Cells", which rejects the bulk phase theories. === Steady-state membrane pump concept === With the development of radioactive tracers, it was shown that cells are not impermeable to Na+. This was difficult to explain with the membrane barrier theory, so the sodium pump was proposed to continually remove Na+ as it permeates cells. This drove the concept that cells are in a state of dynamic equilibrium, constantly using energy to maintain ion gradients. In 1929, Karl Lohmann discovered ATP and its role as a source of energy for cells, so the concept of a metabolically-driven sodium pump was proposed. The success of Hodgkin, Huxley, and Katz in the development of the membrane theory of cellular membrane potentials, with differential equations that modeled the phenomena correctly, provided further support for the membrane pump hypothesis. The modern view of the plasma membrane is of a fluid lipid bilayer that has protein components embedded within it. The structure of the membrane is now known in great detail, including 3D models of many of the hundreds of different proteins that are bound to the membrane. These major developments in cell physiology placed the membrane theory in a position of dominance and stimulated the imagination of most physiologists, who now apparently accept the theory as fact—there are, however, a few dissenters. === Reemergence of bulk phase theories === In 1956, Afanasy S. Troshin published a book, The Problems of Cell Permeability, in Russian, in which he showed that permeability was of secondary importance in determining the patterns of equilibrium between the cell and its environment. Troshin showed that cell water decreased in solutions of galactose or urea, although these compounds did slowly permeate cells. Since the membrane theory requires an impermeant solute to sustain cell shrinkage, these experiments cast doubt on the theory. Others questioned whether the cell has enough energy to sustain the sodium/potassium pump. Such questions became even more urgent as dozens of new metabolic pumps were added as new chemical gradients were discovered. In 1962, Gilbert Ling became the champion of the bulk phase theories and proposed his association-induction hypothesis of living cells.
== See also ==
Cell adhesion
Cytoskeleton
Cell biology
Cellular differentiation
Germ theory of disease
Membrane models
== References ==
== Bibliography ==
Tavassoli, M. (1980). "The cell theory: a foundation to the edifice of biology". American Journal of Pathology. 98 (1): 44. PMC 1903404. PMID 6985772.
Turner, W. (January 1890). "The Cell Theory Past and Present". Journal of Anatomy and Physiology. 24 (Pt 2): 253–87. PMC 1328050. PMID 17231856.
Wolfe, Stephen L. (1972). Biology of the Cell. Wadsworth Pub. Co. ISBN 978-0-534-00106-3.
== External links ==
Mallery, C. (2008-02-11). "Cell Theory". Archived from the original on 2018-12-25. Retrieved 2008-11-25.
"Studying Cells Tutorial". 2004.
Retrieved 2008-11-25.
Wikipedia/Cell_theory
Scientific modelling is an activity that produces models representing empirical objects, phenomena, and physical processes, to make a particular part or feature of the world easier to understand, define, quantify, visualize, or simulate. It requires selecting and identifying relevant aspects of a situation in the real world and then developing a model to replicate a system with those features. Different types of models may be used for different purposes, such as conceptual models to better understand, operational models to operationalize, mathematical models to quantify, computational models to simulate, and graphical models to visualize the subject. Modelling is an essential and inseparable part of many scientific disciplines, each of which has its own ideas about specific types of modelling. The following was said by John von Neumann: ... the sciences do not try to explain, they hardly even try to interpret, they mainly make models. By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected to work—that is, correctly to describe phenomena from a reasonably wide area. There is also increasing attention to scientific modelling in fields such as science education, philosophy of science, systems theory, and knowledge visualization. There is a growing collection of methods, techniques and meta-theory about all kinds of specialized scientific modelling. == Overview == A scientific model seeks to represent empirical objects, phenomena, and physical processes in a logical and objective way. All models are in simulacra, that is, simplified reflections of reality that, despite being approximations, can be extremely useful. Building and disputing models is fundamental to the scientific enterprise. Complete and true representation may be impossible, but scientific debate often concerns which is the better model for a given task, e.g., which is the more accurate climate model for seasonal forecasting. Attempts to formalize the principles of the empirical sciences use an interpretation to model reality, in the same way logicians axiomatize the principles of logic. The aim of these attempts is to construct a formal system that will not produce theoretical consequences that are contrary to what is found in reality. Predictions or other statements drawn from such a formal system mirror or map the real world only insofar as these scientific models are true. For the scientist, a model is also a way in which the human thought processes can be amplified. For instance, models that are rendered in software allow scientists to leverage computational power to simulate, visualize, manipulate and gain intuition about the entity, phenomenon, or process being represented. Such computer models are in silico. Other types of scientific models are in vivo (living models, such as laboratory rats) and in vitro (in glassware, such as tissue culture). == Basics == === Modelling as a substitute for direct measurement and experimentation === Models are typically used when it is either impossible or impractical to create experimental conditions in which scientists can directly measure outcomes. Direct measurement of outcomes under controlled conditions (see Scientific method) will always be more reliable than modeled estimates of outcomes.
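Where direct experimentation is impractical, a model can instead be executed as a simulation, a notion developed below. As a minimal, purely illustrative sketch (the cooling constant and temperatures are arbitrary placeholders that would in practice be calibrated against data), the following steps Newton's law of cooling, dT/dt = −k(T − T_env), forward in time with Euler's method to predict an outcome rather than measure it:

# Minimal dynamic simulation: Newton's law of cooling integrated with
# Euler steps. Parameter values are illustrative placeholders only.
k, T_env = 0.1, 20.0   # cooling constant (per minute, assumed); ambient temp (degC)
T, dt = 90.0, 1.0      # initial temperature (degC); time step (minutes)

for minute in range(31):
    if minute % 10 == 0:
        print(f"t = {minute:2d} min  T = {T:5.1f} degC")
    T += dt * (-k * (T - T_env))  # Euler update of the continuous-time model

This is a continuous-state, dynamic simulation in the sense defined under Systems and Simulation below; a discrete model would instead change its variables only at separate points in time.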
Within modeling and simulation, a model is a task-driven, purposeful simplification and abstraction of a perception of reality, shaped by physical, legal, and cognitive constraints. It is task-driven because a model is captured with a certain question or task in mind. Simplification leaves out all the known and observed entities and their relations that are not important for the task. Abstraction aggregates information that is important but not needed in the same detail as the object of interest. Both activities, simplification and abstraction, are done purposefully. However, they are done based on a perception of reality. This perception is already a model in itself, as it comes with a physical constraint. There are also constraints on what we are able to legally observe with our current tools and methods, and cognitive constraints that limit what we are able to explain with our current theories. This model comprises the concepts, their behavior, and their relations in informal form and is often referred to as a conceptual model. In order to execute the model, it needs to be implemented as a computer simulation. This requires more choices, such as numerical approximations or the use of heuristics. Despite all these epistemological and computational constraints, simulation has been recognized as the third pillar of scientific methods: theory building, simulation, and experimentation. === Simulation === A simulation is a way to implement the model, often employed when the model is too complex for an analytical solution. A steady-state simulation provides information about the system at a specific instant in time (usually at equilibrium, if such a state exists). A dynamic simulation provides information over time. A simulation shows how a particular object or phenomenon will behave. Such a simulation can be useful for testing, analysis, or training in those cases where real-world systems or concepts can be represented by models. === Structure === Structure is a fundamental and sometimes intangible notion covering the recognition, observation, nature, and stability of patterns and relationships of entities. From a child's verbal description of a snowflake to the detailed scientific analysis of the properties of magnetic fields, the concept of structure is an essential foundation of nearly every mode of inquiry and discovery in science, philosophy, and art. === Systems === A system is a set of interacting or interdependent entities, real or abstract, forming an integrated whole. In general, a system is a construct or collection of different elements that together can produce results not obtainable by the elements alone. The concept of an 'integrated whole' can also be stated in terms of a system embodying a set of relationships which are differentiated from relationships of the set to other elements, and from relationships between an element of the set and elements not a part of the relational regime. There are two types of system models: 1) discrete, in which the variables change instantaneously at separate points in time, and 2) continuous, where the state variables change continuously with respect to time. === Generating a model === Modelling is the process of generating a model as a conceptual representation of some phenomenon. Typically a model will deal with only some aspects of the phenomenon in question, and two models of the same phenomenon may be essentially different—that is to say, the differences between them comprise more than just a simple renaming of components.
Such differences may be due to differing requirements of the model's end users, or to conceptual or aesthetic differences among the modelers, and to contingent decisions made during the modelling process. Considerations that may influence the structure of a model might be the modeler's preference for a reduced ontology, preferences regarding statistical models versus deterministic models, discrete versus continuous time, etc. In any case, users of a model need to understand the assumptions made that are pertinent to its validity for a given use. Building a model requires abstraction. Assumptions are used in modelling in order to specify the domain of application of the model. For example, the special theory of relativity assumes an inertial frame of reference. This assumption was contextualized and further explained by the general theory of relativity. A model makes accurate predictions when its assumptions are valid, and might well not make accurate predictions when its assumptions do not hold. Such assumptions are often the point with which older theories are succeeded by new ones (the general theory of relativity works in non-inertial reference frames as well). === Evaluating a model === A model is evaluated first and foremost by its consistency with empirical data; any model inconsistent with reproducible observations must be modified or rejected. One way to modify the model is by restricting the domain over which it is credited with having high validity. A case in point is Newtonian physics, which is highly useful except for the very small, the very fast, and the very massive phenomena of the universe. However, a fit to empirical data alone is not sufficient for a model to be accepted as valid. Factors important in evaluating a model include:
Ability to explain past observations
Ability to predict future observations
Cost of use, especially in combination with other models
Refutability, enabling estimation of the degree of confidence in the model
Simplicity, or even aesthetic appeal
People may attempt to quantify the evaluation of a model using a utility function. === Visualization === Visualization is any technique for creating images, diagrams, or animations to communicate a message. Visualization through visual imagery has been an effective way to communicate both abstract and concrete ideas since the dawn of man. Examples from history include cave paintings, Egyptian hieroglyphs, Greek geometry, and Leonardo da Vinci's revolutionary methods of technical drawing for engineering and scientific purposes. === Space mapping === Space mapping refers to a methodology that employs a "quasi-global" modelling formulation to link companion "coarse" (ideal or low-fidelity) models with "fine" (practical or high-fidelity) models of different complexities. In engineering optimization, space mapping aligns (maps) a very fast coarse model with its related expensive-to-compute fine model so as to avoid direct expensive optimization of the fine model. The alignment process iteratively refines a "mapped" coarse model (surrogate model). == Types == == Applications == === Modelling and simulation === One application of scientific modelling is the field of modelling and simulation, generally referred to as "M&S". M&S has a spectrum of applications which range from concept development and analysis, through experimentation, measurement, and verification, to disposal analysis. Projects and programs may use hundreds of different simulations, simulators and model analysis tools.
Modelling and simulation can be used, for example, as a central part of an integrated program in a defence capability development process.
== See also ==
Abductive reasoning – Inference seeking the simplest and most likely explanation
All models are wrong – Aphorism in statistics
Data and information visualization – Visual representation of data
Heuristic – Problem-solving method
Inverse problem – Process of calculating the causal factors that produced a set of observations
Scientific visualization – Interdisciplinary branch of science concerned with presenting scientific data visually
Statistical model – Type of mathematical model
== References ==
== Further reading ==
Nowadays there are some 40 magazines about scientific modelling which offer all kinds of international forums. Since the 1960s there has been a strongly growing number of books and magazines about specific forms of scientific modelling. There is also a lot of discussion about scientific modelling in the philosophy-of-science literature. A selection:
Rainer Hegselmann, Ulrich Müller and Klaus Troitzsch (eds.) (1996). Modelling and Simulation in the Social Sciences from the Philosophy of Science Point of View. Theory and Decision Library. Dordrecht: Kluwer.
Paul Humphreys (2004). Extending Ourselves: Computational Science, Empiricism, and Scientific Method. Oxford: Oxford University Press.
Johannes Lenhard, Günter Küppers and Terry Shinn (eds.) (2006). Simulation: Pragmatic Constructions of Reality. Berlin: Springer.
Tom Ritchey (2012). "Outline for a Morphology of Modelling Methods: Contribution to a General Theory of Modelling". Acta Morphologica Generalis, Vol. 1, No. 1, pp. 1–20.
William Silvert (2001). "Modelling as a Discipline". Int. J. General Systems, Vol. 30(3), p. 261.
Sergio Sismondo and Snait Gissis (eds.) (1999). Modeling and Simulation. Special issue of Science in Context 12.
Eric Winsberg (2018). Philosophy and Climate Science. Cambridge: Cambridge University Press.
Eric Winsberg (2010). Science in the Age of Computer Simulation. Chicago: University of Chicago Press.
Eric Winsberg (2003). "Simulated Experiments: Methodology for a Virtual World". Philosophy of Science 70: 105–125.
Tomáš Helikar and Jim A. Rogers (2009). "ChemChains: a platform for simulation and analysis of biochemical networks aimed to laboratory scientists". BioMed Central.
== External links ==
Models – entry in the Internet Encyclopedia of Philosophy
Models in Science – entry in the Stanford Encyclopedia of Philosophy
The World as a Process: Simulations in the Natural and Social Sciences, in: R. Hegselmann et al. (eds.), Modelling and Simulation in the Social Sciences from the Philosophy of Science Point of View, Theory and Decision Library. Dordrecht: Kluwer, 1996, pp. 77–100.
Research in simulation and modelling of various physical systems
Modelling Water Quality Information Center, U.S. Department of Agriculture
Ecotoxicology & Models
A Morphology of Modelling Methods. Acta Morphologica Generalis, Vol. 1, No. 1, pp. 1–20.
Wikipedia/Scientific_models
In radio-frequency engineering, an antenna (American English) or aerial (British English) is an electronic device that converts an alternating electric current into radio waves (transmitting), or radio waves into an electric current (receiving). It is the interface between radio waves propagating through space and electric currents moving in metal conductors, used with a transmitter or receiver. In transmission, a radio transmitter supplies an electric current to the antenna's terminals, and the antenna radiates the energy from the current as electromagnetic waves (radio waves). In reception, an antenna intercepts some of the power of a radio wave in order to produce an electric current at its terminals, that is applied to a receiver to be amplified. Antennas are essential components of all radio equipment. An antenna is an array of conductor segments (elements), electrically connected to the receiver or transmitter. Antennas can be designed to transmit and receive radio waves in all horizontal directions equally (omnidirectional antennas), or preferentially in a particular direction (directional, or high-gain, or "beam" antennas). An antenna may include components not connected to the transmitter, parabolic reflectors, horns, or parasitic elements, which serve to direct the radio waves into a beam or other desired radiation pattern. Strong directivity and good efficiency when transmitting are hard to achieve with antennas with dimensions that are much smaller than a half wavelength. The first antennas were built in 1886 by German physicist Heinrich Hertz in his pioneering experiments to prove the existence of electromagnetic waves predicted by the 1867 electromagnetic theory of James Clerk Maxwell. Hertz placed dipole antennas at the focal point of parabolic reflectors for both transmitting and receiving. Starting in 1895, Guglielmo Marconi began development of antennas practical for long-distance wireless telegraphy and opened a factory in Chelmsford, England, to manufacture his invention in 1898. == Terminology == The words antenna and aerial are used interchangeably. Occasionally the equivalent term "aerial" is used to specifically mean an elevated horizontal wire antenna. The origin of the word antenna relative to wireless apparatus is attributed to Italian radio pioneer Guglielmo Marconi. In the summer of 1895, Marconi began testing his wireless system outdoors on his father's estate near Bologna and soon began to experiment with long wire "aerials" suspended from a pole. In Italian a tent pole is known as l'antenna centrale, and the pole with the wire was simply called l'antenna. Until then wireless radiating transmitting and receiving elements were known simply as "terminals". Because of his prominence, Marconi's use of the word antenna spread among wireless researchers and enthusiasts, and later to the general public. Antenna may refer broadly to an entire assembly including support structure, enclosure (if any), etc., in addition to the actual RF current-carrying components. A receiving antenna may include not only the passive metal receiving elements, but also an integrated preamplifier or mixer, especially at and above microwave frequencies. == Overview == Antennas are required by any radio receiver or transmitter to couple its electrical connection to the electromagnetic field. Radio waves are electromagnetic waves which carry signals through space at the speed of light with almost no transmission loss. 
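As a rough, illustrative sketch of the scale involved (the 95% end-effect shortening factor below is a common rule of thumb for thin wire elements, not a figure taken from this article), the free-space wavelength follows directly from the operating frequency, and a half-wave dipole is nominally half that length:

# Rough antenna sizing from operating frequency (illustrative sketch only).
C = 299_792_458.0  # speed of light in metres per second

def wavelength_m(freq_hz):
    """Free-space wavelength in metres for a frequency in hertz."""
    return C / freq_hz

def half_wave_dipole_m(freq_hz, end_effect=0.95):
    """Approximate tip-to-tip length of a resonant half-wave dipole.
    The ~5% shortening (end_effect) is a typical rule of thumb for thin wires."""
    return 0.5 * wavelength_m(freq_hz) * end_effect

for f in (1.0e6, 100.0e6, 2.4e9):  # roughly AM broadcast, FM broadcast, Wi-Fi
    print(f"{f / 1e6:8.1f} MHz: wavelength {wavelength_m(f):9.3f} m, "
          f"half-wave dipole about {half_wave_dipole_m(f):7.3f} m")

At 100 MHz this gives a dipole of roughly 1.4 m, which illustrates why structures much smaller than a half wavelength struggle to combine strong directivity with good efficiency, as noted above.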
Antennas can be classified as omnidirectional, radiating energy approximately equally in all horizontal directions, or directional, where radio waves are concentrated in some direction(s). A so-called beam antenna is unidirectional, designed for maximum response in the direction of the other station, whereas many other antennas are intended to accommodate stations in various directions but are not truly omnidirectional. Since antennas obey reciprocity the same radiation pattern applies to transmission as well as reception of radio waves. A hypothetical antenna that radiates equally in all directions (vertical as well as all horizontal angles) is called an isotropic radiator; however, these cannot exist in practice nor would they be particularly desired. For most terrestrial communications, rather, there is an advantage in reducing radiation toward the sky or ground in favor of horizontal direction(s). A dipole antenna oriented horizontally sends no energy in the direction of the conductor – this is called the antenna null – but is usable in most other directions. A number of such dipole elements can be combined into an antenna array such as the Yagi–Uda in order to favor a single horizontal direction, thus termed a beam antenna. The dipole antenna, which is the basis for most antenna designs, is a balanced component, with equal but opposite voltages and currents applied at its two terminals. The vertical antenna is a monopole antenna, not balanced with respect to ground. The ground (or any large conductive surface) plays the role of the second conductor of a monopole. Since monopole antennas rely on a conductive surface, they may be mounted with a ground plane to approximate the effect of being mounted on the Earth's surface. More complex antennas increase the directivity of the antenna. Additional elements in the antenna structure, which need not be directly connected to the receiver or transmitter, increase its directionality. Antenna "gain" describes the concentration of radiated power into a particular solid angle of space. "Gain" is perhaps an unfortunately chosen term, by comparison with amplifier "gain" which implies a net increase in power. In contrast, for antenna "gain", the power increased in the desired direction is at the expense of power reduced in undesired directions. Unlike amplifiers, antennas are electrically "passive" devices which conserve total power, and there is no increase in total power above that delivered from the power source (the transmitter), only improved distribution of that fixed total. A phased array consists of two or more simple antennas which are connected together through an electrical network. This often involves a number of parallel dipole antennas with a certain spacing. Depending on the relative phase introduced by the network, the same combination of dipole antennas can operate as a "broadside array" (directional normal to a line connecting the elements) or as an "end-fire array" (directional along the line connecting the elements). Antenna arrays may employ any basic (omnidirectional or weakly directional) antenna type, such as dipole, loop or slot antennas. These elements are often identical. Log-periodic and frequency-independent antennas employ self-similarity in order to be operational over a wide range of bandwidths. 
The most familiar example is the log-periodic dipole array which can be seen as a number (typically 10 to 20) of connected dipole elements with progressive lengths in an endfire array making it rather directional; it finds use especially as a rooftop antenna for television reception. On the other hand, a Yagi–Uda antenna (or simply "Yagi"), with a somewhat similar appearance, has only one dipole element with an electrical connection; the other parasitic elements interact with the electromagnetic field in order to realize a highly directional antenna but with a narrow bandwidth. Even greater directionality can be obtained using aperture antennas such as the parabolic reflector or horn antenna. Since high directivity in an antenna depends on it being large compared to the wavelength, highly directional antennas (thus with high antenna gain) become more practical at higher frequencies (UHF and above). At low frequencies (such as AM broadcast), arrays of vertical towers are used to achieve directionality and they will occupy large areas of land. For reception, a long Beverage antenna can have significant directivity. For non directional portable use, a short vertical antenna or small loop antenna works well, with the main design challenge being that of impedance matching. With a vertical antenna a loading coil at the base of the antenna may be employed to cancel the reactive component of impedance; small loop antennas are tuned with parallel capacitors for this purpose. An antenna lead-in is the transmission line, or feed line, which connects the antenna to a transmitter or receiver. The "antenna feed" may refer to all components connecting the antenna to the transmitter or receiver, such as an impedance matching network in addition to the transmission line. In a so-called "aperture antenna", such as a horn or parabolic dish, the "feed" may also refer to a basic radiating antenna embedded in the entire system of reflecting elements (normally at the focus of the parabolic dish or at the throat of a horn) which could be considered the one active element in that antenna system. A microwave antenna may also be fed directly from a waveguide in place of a (conductive) transmission line. An antenna counterpoise, or ground plane, is a structure of conductive material which improves or substitutes for the ground. It may be connected to or insulated from the natural ground. In a monopole antenna, this aids in the function of the natural ground, particularly where variations (or limitations) of the characteristics of the natural ground interfere with its proper function. Such a structure is normally connected to the return connection of an unbalanced transmission line such as the shield of a coaxial cable. An electromagnetic wave refractor in some aperture antennas is a component which due to its shape and position functions to selectively delay or advance portions of the electromagnetic wavefront passing through it. The refractor alters the spatial characteristics of the wave on one side relative to the other side. It can, for instance, bring the wave to a focus or alter the wave front in other ways, generally in order to maximize the directivity of the antenna system. This is the radio equivalent of an optical lens. An antenna coupling network is a passive network (generally a combination of inductive and capacitive circuit elements) used for impedance matching in between the antenna and the transmitter or receiver. 
This may be used to minimize losses on the feed line, by reducing transmission line's standing wave ratio, and to present the transmitter or receiver with a standard resistive impedance needed for its optimum operation. The feed point location(s) is selected, and antenna elements electrically similar to tuner components may be incorporated in the antenna structure itself, to improve the match. == Reciprocity == It is a fundamental property of antennas that most of the electrical characteristics of an antenna, such as those described in the next section (e.g. gain, radiation pattern, impedance, bandwidth, resonant frequency and polarization), are the same whether the antenna is transmitting or receiving. For example, the "receiving pattern" (sensitivity to incoming signals as a function of direction) of an antenna when used for reception is identical to the radiation pattern of the antenna when it is driven and functions as a radiator, even though the current and voltage distributions on the antenna itself are different for receiving and sending. This is a consequence of the reciprocity theorem of electromagnetics. Therefore, in discussions of antenna properties no distinction is usually made between receiving and transmitting terminology, and the antenna can be viewed as either transmitting or receiving, whichever is more convenient. A necessary condition for the aforementioned reciprocity property is that the materials in the antenna and transmission medium are linear and reciprocal. Reciprocal (or bilateral) means that the material has the same response to an electric current or magnetic field in one direction, as it has to the field or current in the opposite direction. Most materials used in antennas meet these conditions, but some microwave antennas use high-tech components such as isolators and circulators, made of nonreciprocal materials such as ferrite. These can be used to give the antenna a different behavior on receiving than it has on transmitting, which can be useful in applications like radar. == Resonant antennas == The majority of antenna designs are based on the resonance principle. This relies on the behaviour of moving electrons, which reflect off surfaces where the dielectric constant changes, in a fashion similar to the way light reflects when optical properties change. In these designs, the reflective surface is created by the end of a conductor, normally a thin metal wire or rod, which in the simplest case has a feed point at one end where it is connected to a transmission line. The conductor, or element, is aligned with the electrical field of the desired signal, normally meaning it is perpendicular to the line from the antenna to the source (or receiver in the case of a broadcast antenna). The radio signal's electric component induces a voltage in the conductor. This causes an electrical current to begin flowing in the direction of the signal's instantaneous field. When the resulting current reaches the end of the conductor, it reflects, which is equivalent to a 180 degree change in phase. If the conductor is ⁠ 1 /4⁠ of a wavelength long, current from the feed point will undergo 90 degree phase change by the time it reaches the end of the conductor, reflect through 180 degrees, and then another 90 degrees as it travels back. That means it has undergone a total 360 degree phase change, returning it to the original signal. The current in the element thus adds to the current being created from the source at that instant. 
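That phase bookkeeping can be sketched numerically. The snippet below (a simplified, lossless picture rather than a full antenna model) adds the outbound travel, the 180 degree reflection at the open end, and the return travel, and flags the element lengths for which the reflected current arrives back at the feed in phase:

# Round-trip phase of the reflected current at the feed, in the simple
# lossless picture described above: out + 180 deg open-end reflection + back.
def round_trip_phase_deg(length_in_wavelengths):
    travel = 360.0 * length_in_wavelengths  # one-way phase change along the element
    return (travel + 180.0 + travel) % 360.0

for length in (0.10, 0.25, 0.50, 0.75, 1.25):
    phase = round_trip_phase_deg(length)
    note = "-> adds in phase (resonant)" if phase == 0.0 else ""
    print(f"element {length:4.2f} wavelengths: reflected current at {phase:5.1f} deg {note}")

Only the odd quarter-wave multiples (1/4, 3/4, 5/4, and so on) return the current in phase, which is the resonance condition developed below.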
This process creates a standing wave in the conductor, with the maximum current at the feed. The ordinary half-wave dipole is probably the most widely used antenna design. This consists of two ⁠ 1 /4⁠ wavelength elements arranged end-to-end, and lying along essentially the same axis (or collinear), each feeding one side of a two-conductor transmission wire. The physical arrangement of the two elements places them 180 degrees out of phase, which means that at any given instant one of the elements is driving current into the transmission line while the other is pulling it out. The monopole antenna is essentially one half of the half-wave dipole, a single ⁠ 1 /4⁠ wavelength element with the other side connected to ground or an equivalent ground plane (or counterpoise). Monopoles, which are one-half the size of a dipole, are common for long-wavelength radio signals where a dipole would be impractically large. Another common design is the folded dipole which consists of two (or more) half-wave dipoles placed side by side and connected at their ends but only one of which is driven. The standing wave forms with this desired pattern at the design operating frequency, fo, and antennas are normally designed to be this size. However, feeding that element with 3 fo (whose wavelength is ⁠ 1 /3⁠ that of fo) will also lead to a standing wave pattern. Thus, an antenna element is also resonant when its length is ⁠ 3 /4⁠ of a wavelength. This is true for all odd multiples of ⁠ 1 /4⁠ wavelength. This allows some flexibility of design in terms of antenna lengths and feed points. Antennas used in such a fashion are known to be harmonically operated. Resonant antennas usually use a linear conductor (or element), or pair of such elements, each of which is about a quarter of the wavelength in length (an odd multiple of quarter wavelengths will also be resonant). Antennas that are required to be small compared to the wavelength sacrifice efficiency and cannot be very directional. Since wavelengths are so small at higher frequencies (UHF, microwaves) trading off performance to obtain a smaller physical size is usually not required. === Current and voltage distribution === The quarter-wave elements imitate a series-resonant electrical element due to the standing wave present along the conductor. At the resonant frequency, the standing wave has a current peak and voltage node (minimum) at the feed. In electrical terms, this means that at that position, the element has minimum impedance magnitude, generating the maximum current for minimum voltage. This is the ideal situation, because it produces the maximum output for the minimum input, producing the highest possible efficiency. Contrary to an ideal (lossless) series-resonant circuit, a finite resistance remains (corresponding to the relatively small voltage at the feed-point) due to the antenna's resistance to radiating, as well as any conventional electrical losses from producing heat. Recall that a current will reflect when there are changes in the electrical properties of the material. In order to efficiently transfer the received signal into the transmission line, it is important that the transmission line has the same impedance as its connection point on the antenna, otherwise some of the signal will be reflected backwards into the body of the antenna; likewise part of the transmitter's signal power will be reflected back to transmitter, if there is a change in electrical impedance where the feedline joins the antenna. 
This leads to the concept of impedance matching, the design of the overall system of antenna and transmission line so that the antenna's impedance is as close as possible to that of the transmission line, thereby reducing these losses. Impedance matching is accomplished by a circuit called an antenna tuner or impedance matching network between the transmitter and antenna. The impedance match between the feedline and antenna is measured by a parameter called the standing wave ratio (SWR) on the feedline. Consider a half-wave dipole designed to work with signals with wavelength 1 m, meaning the antenna would be approximately 50 cm from tip to tip. If the element has a length-to-diameter ratio of 1000, it will have an inherent impedance of about 63 ohms resistive. Using the appropriate transmission wire or balun, we match that resistance to ensure minimum signal reflection. Feeding that antenna with a current of 1 Ampere will require 63 Volts, and the antenna will radiate 63 Watts (ignoring losses) of radio frequency power. Now consider the case when the antenna is fed a signal with a wavelength of 1.25 m; in this case the current induced by the signal would arrive at the antenna's feedpoint out-of-phase with the signal, causing the net current to drop while the voltage remains the same. Electrically this appears to be a very high impedance. The antenna and transmission line no longer have the same impedance, and the signal will be reflected back into the antenna, reducing output. This could be addressed by changing the matching system between the antenna and transmission line, but that solution only works well at the new design frequency. The result is that the resonant antenna will efficiently feed a signal into the transmission line only when the source signal's frequency is close to that of the design frequency of the antenna, or one of the resonant multiples. This makes resonant antenna designs inherently narrow-band: Only useful for a small range of frequencies centered around the resonance(s). === Electrically short antennas === It is possible to use simple impedance matching techniques to allow the use of monopole or dipole antennas substantially shorter than the 1/4 or 1/2 wave, respectively, at which they are resonant. As these antennas are made shorter (for a given frequency) their impedance becomes dominated by a series capacitive (negative) reactance; by adding an appropriate size "loading coil" – a series inductance with equal and opposite (positive) reactance – the antenna's capacitive reactance may be cancelled leaving only a pure resistance. Sometimes the resulting (lower) electrical resonant frequency of such a system (antenna plus matching network) is described using the concept of electrical length, so an antenna used at a lower frequency than its resonant frequency is called an electrically short antenna. For example, at 30 MHz (10 m wavelength) a true resonant 1/4 wave monopole would be almost 2.5 meters long, and using an antenna only 1.5 meters tall would require the addition of a loading coil. Then it may be said that the coil has lengthened the antenna to achieve an electrical length of 2.5 meters. However, the resulting resistive impedance achieved will be quite a bit lower than that of a true 1/4 wave (resonant) monopole, often requiring further impedance matching (a transformer) to the desired transmission line.
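As a hedged numerical sketch of that loading-coil idea for the 30 MHz example above, suppose the shortened 1.5 m monopole presents a feedpoint reactance of roughly -j400 ohms (an assumed, illustrative figure, not a value given in this article); the series inductance needed to cancel it follows from X_L = 2*pi*f*L:

import math

f = 30e6            # operating frequency in hertz (10 m wavelength, as in the text)
x_antenna = -400.0  # assumed capacitive feedpoint reactance of the short monopole, in ohms

x_coil = -x_antenna                      # the coil must supply an equal and opposite reactance
inductance = x_coil / (2 * math.pi * f)  # X_L = 2*pi*f*L, so L = X_L / (2*pi*f)

print(f"loading coil reactance: {x_coil:+.0f} ohms")
print(f"required inductance:    {inductance * 1e6:.2f} microhenries")

The feedpoint is then purely resistive, but, as noted, with a resistance well below that of a full-size quarter-wave monopole, so further matching is often still needed.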
For ever shorter antennas (requiring greater "electrical lengthening") the radiation resistance plummets (approximately according to the square of the antenna length), so that the mismatch due to a net reactance away from the electrical resonance worsens. Or one could as well say that the equivalent resonant circuit of the antenna system has a higher Q factor and thus a reduced bandwidth, which can even become inadequate for the transmitted signal's spectrum. Resistive losses due to the loading coil, relative to the decreased radiation resistance, entail a reduced electrical efficiency, which can be of great concern for a transmitting antenna, but bandwidth is the major factor that sets the size of antennas at 1 MHz and lower frequencies. === Arrays and reflectors === The radiant flux as a function of the distance from the transmitting antenna varies according to the inverse-square law, since that describes the geometrical divergence of the transmitted wave. For a given incoming flux, the power acquired by a receiving antenna is proportional to its effective area. This parameter compares the amount of power captured by a receiving antenna in comparison to the flux of an incoming wave (measured in terms of the signal's power density in watts per square metre). A half-wave dipole has an effective area of about 0.13 λ2 seen from the broadside direction. If higher gain is needed one cannot simply make the antenna larger. Due to the constraint on the effective area of a receiving antenna detailed below, one sees that for an already-efficient antenna design, the only way to increase gain (effective area) is by reducing the antenna's gain in another direction. If a half-wave dipole is not connected to an external circuit but rather shorted out at the feedpoint, then it becomes a resonant half-wave element which efficiently produces a standing wave in response to an impinging radio wave. Because there is no load to absorb that power, it retransmits all of that power, possibly with a phase shift which is critically dependent on the element's exact length. Thus such a conductor can be arranged in order to transmit a second copy of a transmitter's signal in order to affect the radiation pattern (and feedpoint impedance) of the element electrically connected to the transmitter. Antenna elements used in this way are known as passive radiators. A Yagi–Uda array uses passive elements to greatly increase gain in one direction (at the expense of other directions). A number of parallel approximately half-wave elements (of very specific lengths) are situated parallel to each other, at specific positions, along a boom; the boom is only for support and not involved electrically. Only one of the elements is electrically connected to the transmitter or receiver, while the remaining elements are passive. The Yagi produces a fairly large gain (depending on the number of passive elements) and is widely used as a directional antenna with an antenna rotor to control the direction of its beam. It suffers from having a rather limited bandwidth, restricting its use to certain applications. Rather than using one driven antenna element along with passive radiators, one can build an array antenna in which multiple elements are all driven by the transmitter through a system of power splitters and transmission lines in relative phases so as to concentrate the RF power in a single direction. 
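A minimal sketch of how the relative feed phases set the direction of reinforcement, assuming two ideal isotropic elements and lossless feeds (which real dipoles and splitters are not):

import cmath, math

def two_element_array_factor(theta_deg, spacing_wl, phase_deg):
    """Magnitude of the array factor for two equal-amplitude elements.
    theta_deg is measured from the line joining the elements; spacing_wl is the
    spacing in wavelengths; phase_deg is the feed phase of the second element."""
    psi = 2 * math.pi * spacing_wl * math.cos(math.radians(theta_deg)) + math.radians(phase_deg)
    return abs(1 + cmath.exp(1j * psi))

cases = {
    "broadside (half-wave spacing, fed in phase)": (0.5, 0.0),
    "end-fire (quarter-wave spacing, 90 deg lag)": (0.25, -90.0),
}
for name, (spacing, phase) in cases.items():
    print(name)
    for theta in (0, 45, 90, 135, 180):
        af = two_element_array_factor(theta, spacing, phase)
        print(f"  theta = {theta:3d} deg: relative field {af:.2f}")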
What's more, a phased array can be made "steerable", that is, by changing the phases applied to each element the radiation pattern can be shifted without physically moving the antenna elements. Another common array antenna is the log-periodic dipole array which has an appearance similar to the Yagi (with a number of parallel elements along a boom) but is totally dissimilar in operation as all elements are connected electrically to the adjacent element with a phase reversal; using the log-periodic principle it obtains the unique property of maintaining its performance characteristics (gain and impedance) over a very large bandwidth. When a radio wave hits a large conducting sheet it is reflected (with the phase of the electric field reversed) just as a mirror reflects light. Placing such a reflector behind an otherwise non-directional antenna will ensure that the power that would have gone in its direction is redirected toward the desired direction, increasing the antenna's gain by a factor of at least 2. Likewise, a corner reflector can ensure that all of the antenna's power is concentrated in only one quadrant of space (or less) with a consequent increase in gain. Practically speaking, the reflector need not be a solid metal sheet, but can consist of a curtain of rods aligned with the antenna's polarization; this greatly reduces the reflector's weight and wind load. Specular reflection of radio waves is also employed in a parabolic reflector antenna, in which a curved reflecting surface effects focussing of an incoming wave toward a so-called feed antenna; this results in an antenna system with an effective area comparable to the size of the reflector itself. Other concepts from geometrical optics are also employed in antenna technology, such as with the lens antenna. == Characteristics == The antenna's power gain (or simply "gain") also takes into account the antenna's efficiency, and is often the primary figure of merit. Antennas are characterized by a number of performance measures which a user would be concerned with in selecting or designing an antenna for a particular application. A plot of the directional characteristics in the space surrounding the antenna is its radiation pattern. === Bandwidth === The frequency range or bandwidth over which an antenna functions well can be very wide (as in a log-periodic antenna) or narrow (as in a small loop antenna); outside this range the antenna impedance becomes a poor match to the transmission line and transmitter (or receiver). Use of the antenna well away from its design frequency affects its radiation pattern, reducing its directive gain. Generally an antenna will not have a feed-point impedance that matches that of a transmission line; a matching network between antenna terminals and the transmission line will improve power transfer to the antenna. A non-adjustable matching network will most likely place further limits on the usable bandwidth of the antenna system. It may be desirable to use tubular elements, instead of thin wires, to make an antenna; these will allow a greater bandwidth. Or, several thin wires can be grouped in a cage to simulate a thicker element. This widens the bandwidth of the resonance. Amateur radio antennas that operate at several frequency bands which are widely separated from each other may connect elements resonant at those different frequencies in parallel. Most of the transmitter's power will flow into the resonant element while the others present a high impedance.
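As an illustrative sketch of cutting such parallel elements (the band choices and the 5% end-effect shortening are assumed, typical values rather than figures from this article), each element is simply a half-wave dipole for its own band:

C = 299_792_458.0  # speed of light, m/s

def half_wave_element_m(freq_mhz, end_effect=0.95):
    """Approximate tip-to-tip length of a resonant half-wave element."""
    return 0.5 * (C / (freq_mhz * 1e6)) * end_effect

# Hypothetical parallel elements for three widely separated amateur bands.
for f_mhz in (7.1, 14.2, 28.4):
    print(f"{f_mhz:5.1f} MHz element: about {half_wave_element_m(f_mhz):5.2f} m tip to tip")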
Another solution uses traps, parallel resonant circuits which are strategically placed in breaks created in long antenna elements. When used at the trap's particular resonant frequency the trap presents a very high impedance (parallel resonance) effectively truncating the element at the location of the trap; if positioned correctly, the truncated element makes a proper resonant antenna at the trap frequency. At substantially higher or lower frequencies the trap allows the full length of the broken element to be employed, but with a resonant frequency shifted by the net reactance added by the trap. The bandwidth characteristics of a resonant antenna element can be characterized according to its Q where the resistance involved is the radiation resistance, which represents the emission of energy from the resonant antenna to free space. The Q of a narrow band antenna can be as high as 15. On the other hand, the reactance at the same off-resonant frequency of one using thick elements is much less, consequently resulting in a Q as low as 5. These two antennas may perform equivalently at the resonant frequency, but the second antenna will perform over a bandwidth 3 times as wide as the antenna consisting of a thin conductor. Antennas for use over much broader frequency ranges are achieved using further techniques. Adjustment of a matching network can, in principle, allow for any antenna to be matched at any frequency. Thus the small loop antenna built into most AM broadcast (medium wave) receivers has a very narrow bandwidth, but is tuned using a parallel capacitance which is adjusted according to the receiver tuning. On the other hand, log-periodic antennas are not resonant at any single frequency but can (in principle) be built to attain similar characteristics (including feedpoint impedance) over any frequency range. These are therefore commonly used (in the form of directional log-periodic dipole arrays) as television antennas. === Gain === Gain is a parameter which measures the degree of directivity of the antenna's radiation pattern. A high-gain antenna will radiate most of its power in a particular direction, while a low-gain antenna will radiate over a wide angle. The antenna gain, or power gain of an antenna is defined as the ratio of the intensity (power per unit surface area) I {\displaystyle I} radiated by the antenna in the direction of its maximum output, at an arbitrary distance, divided by the intensity I iso {\displaystyle I_{\text{iso}}} radiated at the same distance by a hypothetical isotropic antenna which radiates equal power in all directions. This dimensionless ratio is usually expressed logarithmically in decibels, these units are called decibels-isotropic (dBi) G dBi = 10 log ⁡ I I iso {\displaystyle G_{\text{dBi}}=10\log {I \over I_{\text{iso}}}\,} A second unit used to measure gain is the ratio of the power radiated by the antenna to the power radiated by a half-wave dipole antenna I dipole {\displaystyle I_{\text{dipole}}} ; these units are called decibels-dipole (dBd) G dBd = 10 log ⁡ I I dipole {\displaystyle G_{\text{dBd}}=10\log {I \over I_{\text{dipole}}}\,} Since the gain of a half-wave dipole is 2.15 dBi and the logarithm of a product is additive, the gain in dBi is just 2.15 decibels greater than the gain in dBd G dBi ≈ G dBd + 2.15 {\displaystyle G_{\text{dBi}}\approx G_{\text{dBd}}+2.15\,} High-gain antennas have the advantage of longer range and better signal quality, but must be aimed carefully at the other antenna. 
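A small sketch of the unit bookkeeping just described, converting dBd to dBi and then to a plain power ratio (the gain values are arbitrary examples):

DIPOLE_GAIN_DBI = 2.15  # gain of a half-wave dipole relative to an isotropic radiator

def dbd_to_dbi(gain_dbd):
    return gain_dbd + DIPOLE_GAIN_DBI

def dbi_to_power_ratio(gain_dbi):
    """Convert a gain in dBi to a dimensionless power ratio."""
    return 10 ** (gain_dbi / 10)

for g_dbd in (0.0, 6.0, 12.0):  # e.g. a plain dipole, a small Yagi, a longer Yagi
    g_dbi = dbd_to_dbi(g_dbd)
    print(f"{g_dbd:4.1f} dBd = {g_dbi:5.2f} dBi = {dbi_to_power_ratio(g_dbi):5.1f} times isotropic")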
An example of a high-gain antenna is a parabolic dish such as a satellite television antenna. Low-gain antennas have shorter range, but the orientation of the antenna is relatively unimportant. An example of a low-gain antenna is the whip antenna found on portable radios and cordless phones. Antenna gain should not be confused with amplifier gain, a separate parameter measuring the increase in signal power due to an amplifying device placed at the front-end of the system, such as a low-noise amplifier. === Effective area or aperture === The effective area or effective aperture of a receiving antenna expresses the portion of the power of a passing electromagnetic wave which the antenna delivers to its terminals, expressed in terms of an equivalent area. For instance, if a radio wave passing a given location has a flux of 1 pW / m2 (10−12 Watts per square meter) and an antenna has an effective area of 12 m2, then the antenna would deliver 12 pW of RF power to the receiver (30 microvolts RMS at 75 ohms). Since the receiving antenna is not equally sensitive to signals received from all directions, the effective area is a function of the direction to the source. Due to reciprocity (discussed above) the gain of an antenna used for transmitting must be proportional to its effective area when used for receiving. Consider an antenna with no loss, that is, one whose electrical efficiency is 100%. It can be shown that its effective area averaged over all directions must be equal to λ2/4π, the wavelength squared divided by 4π. Gain is defined such that the average gain over all directions for an antenna with 100% electrical efficiency is equal to 1. Therefore, the effective area Aeff in terms of the gain G in a given direction is given by: A e f f = λ 2 4 π G {\displaystyle A_{\mathrm {eff} }={\lambda ^{2} \over 4\pi }\,G} For an antenna with an efficiency of less than 100%, both the effective area and gain are reduced by that same amount. Therefore, the above relationship between gain and effective area still holds. These are thus two different ways of expressing the same quantity. Aeff is especially convenient when computing the power that would be received by an antenna of a specified gain, as illustrated by the above example. === Radiation pattern === The radiation pattern of an antenna is a plot of the relative field strength of the radio waves emitted by the antenna at different angles in the far field. It is typically represented by a three-dimensional graph, or polar plots of the horizontal and vertical cross sections. The pattern of an ideal isotropic antenna, which radiates equally in all directions, would look like a sphere. Many nondirectional antennas, such as monopoles and dipoles, emit equal power in all horizontal directions, with the power dropping off at higher and lower angles; this is called an omnidirectional pattern and when plotted looks like a torus or donut. The radiation of many antennas shows a pattern of maxima or "lobes" at various angles, separated by "nulls", angles where the radiation falls to zero. This is because the radio waves emitted by different parts of the antenna typically interfere, causing maxima at angles where the radio waves arrive at distant points in phase, and zero radiation at other angles where the radio waves arrive out of phase. In a directional antenna designed to project radio waves in a particular direction, the lobe in that direction is designed larger than the others and is called the "main lobe". 
The other lobes usually represent unwanted radiation and are called "sidelobes". The axis through the main lobe is called the "principal axis" or "boresight axis". The polar diagrams (and therefore the efficiency and gain) of Yagi antennas are tighter if the antenna is tuned for a narrower frequency range, e.g. the grouped antenna compared to the wideband. Similarly, the polar plots of horizontally polarized yagis are tighter than for those vertically polarized. === Field regions === The space surrounding an antenna can be divided into three concentric regions: The reactive near-field (also called the inductive near-field), the radiating near-field (Fresnel region) and the far-field (Fraunhofer) regions. These regions are useful to identify the field structure in each, although the transitions between them are gradual; there are no clear boundaries. The far-field region is far enough from the antenna to ignore its size and shape: It can be assumed that the electromagnetic wave is purely a radiating plane wave (electric and magnetic fields are in phase and perpendicular to each other and to the direction of propagation). This simplifies the mathematical analysis of the radiated field. === Efficiency === Efficiency of a transmitting antenna is the ratio of power actually radiated (in all directions) to the power absorbed by the antenna terminals. The power supplied to the antenna terminals which is not radiated is converted into heat. This is usually through loss resistance in the antenna's conductors, or loss between the reflector and feed horn of a parabolic antenna. Antenna efficiency is separate from impedance matching, which may also reduce the amount of power radiated using a given transmitter. If an SWR meter reads 150 W of incident power and 50 W of reflected power, that means 100 W have actually been absorbed by the antenna (ignoring transmission line losses). How much of that power has actually been radiated cannot be directly determined through electrical measurements at (or before) the antenna terminals, but would require (for instance) careful measurement of field strength. The loss resistance and efficiency of an antenna can be calculated once the field strength is known, by comparing it to the power supplied to the antenna. The loss resistance will generally affect the feedpoint impedance, adding to its resistive component. That resistance will consist of the sum of the radiation resistance Rrad and the loss resistance Rloss. If a current I is delivered to the terminals of an antenna, then a power of I2 Rrad will be radiated and a power of I2 Rloss will be lost as heat. Therefore, the efficiency of an antenna is equal to ⁠Rrad/(Rrad + Rloss)⁠. Only the total resistance Rrad + Rloss can be directly measured. According to reciprocity, the efficiency of an antenna used as a receiving antenna is identical to its efficiency as a transmitting antenna, described above. The power that an antenna will deliver to a receiver (with a proper impedance match) is reduced by the same amount. In some receiving applications, the very inefficient antennas may have little impact on performance. At low frequencies, for example, atmospheric or man-made noise can mask antenna inefficiency. For example, CCIR Rep. 258-3 indicates man-made noise in a residential setting at 40 MHz is about 28 dB above the thermal noise floor. Consequently, an antenna with a 20 dB loss (due to inefficiency) would have little impact on system noise performance. 
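A hedged numerical check of that claim, using a deliberately simplified model in which the receiver's own noise is taken equal to the thermal floor and the antenna loss attenuates the desired signal and the external noise alike (the 28 dB figure is the CCIR value quoted above):

import math

def snr_penalty_db(antenna_loss_db, external_noise_above_thermal_db):
    """SNR penalty caused by antenna loss when external noise dominates.
    Simplification: the receiver adds noise equal to the thermal floor (1 unit)."""
    ext = 10 ** (external_noise_above_thermal_db / 10)   # external noise reaching the antenna
    loss = 10 ** (antenna_loss_db / 10)                  # antenna loss as a power ratio
    snr_lossless = 1.0 / (ext + 1.0)                     # desired signal taken as 1 unit
    snr_lossy = (1.0 / loss) / (ext / loss + 1.0)        # loss scales signal and external noise
    return 10 * math.log10(snr_lossless / snr_lossy)

for loss_db in (0, 10, 20):
    print(f"antenna loss {loss_db:2d} dB -> SNR penalty {snr_penalty_db(loss_db, 28):.2f} dB")

Under these assumptions even a 20 dB inefficiency costs well under 1 dB of signal-to-noise ratio in this externally noise-limited regime.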
The loss within the antenna will affect the intended signal and the noise/interference identically, leading to no reduction in signal to noise ratio (SNR). Antennas which are not a significant fraction of a wavelength in size are inevitably inefficient due to their small radiation resistance. AM broadcast radios include a small loop antenna for reception which has an extremely poor efficiency. This has little effect on the receiver's performance, but simply requires greater amplification by the receiver's electronics. Contrast this tiny component to the massive and very tall towers used at AM broadcast stations for transmitting at the very same frequency, where every percentage point of reduced antenna efficiency entails a substantial cost. The definition of antenna gain or power gain already includes the effect of the antenna's efficiency. Therefore, if one is trying to radiate a signal toward a receiver using a transmitter of a given power, one need only compare the gain of various antennas rather than considering the efficiency as well. This is likewise true for a receiving antenna at very high (especially microwave) frequencies, where the point is to receive a signal which is strong compared to the receiver's noise temperature. However, in the case of a directional antenna used for receiving signals with the intention of rejecting interference from different directions, one is no longer concerned with the antenna efficiency, as discussed above. In this case, rather than quoting the antenna gain, one would be more concerned with the directive gain, or simply directivity, which does not include the effect of antenna (in)efficiency. The directive gain of an antenna can be computed from the published gain divided by the antenna's efficiency. In equation form, gain = directivity × efficiency. === Polarization === The orientation and physical structure of an antenna determine the polarization of the electric field of the radio wave transmitted by it. For instance, an antenna composed of a linear conductor (such as a dipole or whip antenna) oriented vertically will result in vertical polarization; if turned on its side the same antenna's polarization will be horizontal. Reflections generally affect polarization. Radio waves reflected off the ionosphere can change the wave's polarization. For line-of-sight communications or ground wave propagation, horizontally or vertically polarized transmissions generally remain in about the same polarization state at the receiving location. Using a vertically polarized antenna to receive a horizontally polarized wave (or vice versa) results in relatively poor reception. An antenna's polarization can sometimes be inferred directly from its geometry. When the antenna's conductors viewed from a reference location appear along one line, then the antenna's polarization will be linear in that very direction. In the more general case, the antenna's polarization must be determined through analysis. For instance, a turnstile antenna mounted horizontally (as is usual), from a distant location on Earth, appears as a horizontal line segment, so its radiation received there is horizontally polarized. But viewed at a downward angle from an airplane, the same antenna does not meet this requirement; in fact its radiation is elliptically polarized when viewed from that direction. In some antennas the state of polarization will change with the frequency of transmission. The polarization of a commercial antenna is an essential specification.
In the most general case, polarization is elliptical, meaning that over each cycle the electric field vector traces out an ellipse. Two special cases are linear polarization (the ellipse collapses into a line) as discussed above, and circular polarization (in which the two axes of the ellipse are equal). In linear polarization the electric field of the radio wave oscillates along one direction. In circular polarization, the electric field of the radio wave rotates around the axis of propagation. Circular or elliptically polarized radio waves are designated as right-handed or left-handed using the "thumb in the direction of the propagation" rule. Note that for circular polarization, optical researchers use the opposite right-hand rule from the one used by radio engineers. It is best for the receiving antenna to match the polarization of the transmitted wave for optimum reception. Otherwise there will be a loss of signal strength: when a linearly polarized antenna receives linearly polarized radiation at a relative angle of θ, then there will be a power loss of cos2θ . A circularly polarized antenna can be used to equally well match vertical or horizontal linear polarizations, suffering a 3 dB signal reduction. However it will be blind to a circularly polarized signal of the opposite orientation. === Impedance matching === Maximum power transfer requires matching the impedance of an antenna system (as seen looking into the transmission line) to the complex conjugate of the impedance of the receiver or transmitter. In the case of a transmitter, however, the desired matching impedance might not exactly correspond to the dynamic output impedance of the transmitter as analyzed as a source impedance but rather the design value (typically 50 Ohms) required for efficient and safe operation of the transmitting circuitry. The intended impedance is normally resistive, but a transmitter (and some receivers) may have limited additional adjustments to cancel a certain amount of reactance, in order to "tweak" the match. When a transmission line is used in between the antenna and the transmitter (or receiver) one generally would like an antenna system whose impedance is resistive and nearly the same as the characteristic impedance of that transmission line, in addition to matching the impedance that the transmitter (or receiver) expects. The match is sought to minimize the amplitude of standing waves (measured via the standing wave ratio; SWR) that a mismatch raises on the line, and the increase in transmission line losses it entails. ==== Antenna tuning at the antenna ==== Antenna tuning, in the strict sense of modifying the antenna itself, generally refers only to cancellation of any reactance seen at the antenna terminals, leaving only a resistive impedance which might or might not be exactly the desired impedance (that of the available transmission line). Although an antenna may be designed to have a purely resistive feedpoint impedance (such as a dipole 97% of a half wavelength long) at just one frequency, this will very likely not be exactly true at other frequencies that the antenna is eventually used for. In most cases, in principle the physical length of the antenna can be "trimmed" to obtain a pure resistance, although this is rarely convenient. 
On the other hand, the addition of a contrary inductance or capacitance can be used to cancel a residual capacitive or inductive reactance, respectively, and may be more convenient than lowering and trimming or extending the antenna, then hoisting it back. Antenna reactance may be removed using lumped elements, such as capacitors or inductors in the main path of current traversing the antenna, often near the feedpoint, or by incorporating capacitive or inductive structures into the conducting body of the antenna to cancel the feedpoint reactance – such as open-ended "spoke" radial wires, or looped parallel wires – hence genuinely tune the antenna to resonance. In addition to those reactance-neutralizing add-ons, antennas of any kind may include a transformer and / or transformer balun at their feedpoint, to change the resistive part of the impedance to more nearly match the feedline's characteristic impedance. ==== Line matching at the radio ==== Antenna tuning in the loose sense, performed by an impedance matching device (somewhat inappropriately named an "antenna tuner", or the older, more appropriate term transmatch) goes beyond merely removing reactance and includes transforming the remaining resistance to match the feedline and radio. An additional problem is matching the remaining resistive impedance to the characteristic impedance of the transmission line: A general impedance matching network (an "antenna tuner" or ATU) will have at least two adjustable elements to correct both components of impedance. Any matching network will have both power losses and power restrictions when used for transmitting. Commercial antennas are generally designed to approximately match standard 50 Ohm coaxial cables, at standard frequencies; the design expectation is that a matching network will be merely used to 'tweak' any residual mismatch. ==== Extreme examples of loaded small antennas ==== In some cases matching is done in a more extreme manner, not simply to cancel a small amount of residual reactance, but to resonate an antenna whose resonance frequency is quite different from the intended frequency of operation. Short vertical "whip" For instance, for practical reasons a "whip antenna" can be made significantly shorter than a quarter-wavelength and then resonated, using a so-called loading coil. The physically large inductor at the base of the antenna has an inductive reactance which is the opposite of the capacitative reactance that the short vertical antenna has at the desired operating frequency. The result is a pure resistance seen at feedpoint of the loading coil; although, without further measures, the resistance will be somewhat lower than would be desired to match commercial coax. Small "magnetic" loop Another extreme case of impedance matching occurs when using a small loop antenna (usually, but not always, for receiving) at a relatively low frequency, where it appears almost as a pure inductor. When such an inductor is resonated via a capacitor attached in parallel across its feedpoint, the capacitor not only cancels the reactance but also greatly magnifies the very small radiation resistance of a small loop to produce a better-matched feedpoint resistance. This is the type of antenna used in most portable AM broadcast receivers (other than car radios): The standard AM antenna is a loop of wire wound around a ferrite rod (a "loopstick antenna"). 
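A rough sketch of the resonance arithmetic for such a loopstick (the 600 microhenry inductance is an assumed, typical value, not one given here):

import math

L_LOOPSTICK = 600e-6  # assumed inductance of a ferrite-rod loopstick, in henries

def tuning_capacitance_f(freq_hz, inductance_h):
    """Parallel capacitance that resonates the loop: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / ((2 * math.pi * freq_hz) ** 2 * inductance_h)

for f_khz in (530, 1000, 1700):  # bottom, middle and top of the AM broadcast band
    c = tuning_capacitance_f(f_khz * 1e3, L_LOOPSTICK)
    print(f"{f_khz:4d} kHz: about {c * 1e12:6.1f} pF")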
The loop is resonated by a coupled tuning capacitor, which is configured to match the receiver's tuning, in order to keep the antenna resonant at the chosen receive frequency over the AM broadcast band. == Effect of ground == Ground reflections is one of the common types of multipath. The radiation pattern and even the driving point impedance of an antenna can be influenced by the dielectric constant and especially conductivity of nearby objects. For a terrestrial antenna, the ground is usually one such object of importance. The antenna's height above the ground, as well as the electrical properties (permittivity and conductivity) of the ground, can then be important. Also, in the particular case of a monopole antenna, the ground (or an artificial ground plane) serves as the return connection for the antenna current thus having an additional effect, particularly on the impedance seen by the feed line. When an electromagnetic wave strikes a plane surface such as the ground, part of the wave is transmitted into the ground and part of it is reflected, according to the Fresnel coefficients. If the ground is a very good conductor then almost all of the wave is reflected (180° out of phase), whereas a ground modeled as a (lossy) dielectric can absorb a large amount of the wave's power. The power remaining in the reflected wave, and the phase shift upon reflection, strongly depend on the wave's angle of incidence and polarization. The dielectric constant and conductivity (or simply the complex dielectric constant) is dependent on the soil type and is a function of frequency. For very low frequencies to high frequencies (< 30 MHz), the ground behaves as a lossy dielectric, thus the ground is characterized both by a conductivity and permittivity (dielectric constant) which can be measured for a given soil (but is influenced by fluctuating moisture levels) or can be estimated from certain maps. At lower mediumwave frequencies the ground acts mainly as a good conductor, which AM broadcast (0.5–1.7 MHz) antennas depend on. At frequencies between 3–30 MHz, a large portion of the energy from a horizontally polarized antenna reflects off the ground, with almost total reflection at the grazing angles important for ground wave propagation. That reflected wave, with its phase reversed, can either cancel or reinforce the direct wave, depending on the antenna height in wavelengths and elevation angle (for a sky wave). On the other hand, vertically polarized radiation is not well reflected by the ground except at grazing incidence or over very highly conducting surfaces such as sea water. However the grazing angle reflection important for ground wave propagation, using vertical polarization, is in phase with the direct wave, providing a boost of up to 6 dB, as is detailed below. At VHF and above (> 30 MHz) the ground becomes a poorer reflector. However, for shortwave frequencies, especially below ~15 MHz, it remains a good reflector especially for horizontal polarization and grazing angles of incidence. That is important as these higher frequencies usually depend on horizontal line-of-sight propagation (except for satellite communications), the ground then behaving almost as a mirror. The net quality of a ground reflection depends on the topography of the surface. When the irregularities of the surface are much smaller than the wavelength, the dominant regime is that of specular reflection, and the receiver sees both the real antenna and an image of the antenna under the ground due to reflection. 
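A sketch of those Fresnel coefficients for a flat, lossy ground; the soil constants (relative permittivity 13, conductivity 5 mS/m) and the 10 MHz frequency are assumed, illustrative values of the kind such estimates use, not figures from this article:

import cmath, math

EPS0 = 8.854e-12  # permittivity of free space, F/m

def ground_reflection(eps_r, sigma_s_per_m, freq_hz, incidence_deg):
    """Fresnel reflection coefficients off a flat lossy ground.
    incidence_deg is measured from the vertical (the normal); returns the
    coefficients for horizontal and vertical polarization."""
    w = 2 * math.pi * freq_hz
    eps = eps_r - 1j * sigma_s_per_m / (w * EPS0)  # complex relative permittivity of the ground
    th = math.radians(incidence_deg)
    root = cmath.sqrt(eps - math.sin(th) ** 2)
    r_h = (math.cos(th) - root) / (math.cos(th) + root)              # horizontal polarization
    r_v = (eps * math.cos(th) - root) / (eps * math.cos(th) + root)  # vertical polarization
    return r_h, r_v

for angle in (30, 60, 85):  # 85 degrees from the vertical is close to grazing
    r_h, r_v = ground_reflection(13.0, 5e-3, 10e6, angle)
    print(f"incidence {angle:2d} deg: |R_h| = {abs(r_h):.2f}, |R_v| = {abs(r_v):.2f}")

Toward grazing incidence the horizontal-polarization coefficient approaches a full, phase-reversed reflection, which is the behaviour the following paragraphs build on.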
But if the ground has irregularities not small compared to the wavelength, reflections will not be coherent but shifted by random phases. With shorter wavelengths (higher frequencies), this is generally the case. Whenever the receiving or transmitting antenna is placed at a significant height above the ground (relative to the wavelength), waves reflected specularly by the ground will travel a longer distance than direct waves, inducing a phase shift which can sometimes be significant. When a sky wave is launched by such an antenna, that phase shift is always significant unless the antenna is very close to the ground (compared to the wavelength). The phase of reflection of electromagnetic waves depends on the polarization of the incident wave. Given the larger refractive index of the ground (typically n ≈ 2) compared to air (n = 1), the phase of horizontally polarized radiation is reversed upon reflection (a phase shift of π radians, or 180°). On the other hand, the vertical component of the wave's electric field is reflected at grazing angles of incidence approximately in phase. These phase shifts apply as well to a ground modeled as a good electrical conductor. This means that a receiving antenna "sees" an image of the emitting antenna but with 'reversed' currents (opposite in direction and phase) if the emitting antenna is horizontally oriented (and thus horizontally polarized). However, the received current will be in the same absolute direction and phase if the emitting antenna is vertically polarized. The actual antenna which is transmitting the original wave then also may receive a strong signal from its own image from the ground. This will induce an additional current in the antenna element, changing the current at the feedpoint for a given feedpoint voltage. Thus the antenna's impedance, given by the ratio of feedpoint voltage to current, is altered due to the antenna's proximity to the ground. This can be quite a significant effect when the antenna is within a wavelength or two of the ground. But as the antenna height is increased, the reduced power of the reflected wave (due to the inverse square law) allows the antenna to approach its asymptotic feedpoint impedance given by theory. At lower heights, the effect on the antenna's impedance is very sensitive to the exact distance from the ground, as this affects the phase of the reflected wave relative to the currents in the antenna. Changing the antenna's height by a quarter wavelength then changes the phase of the reflection by 180°, with a completely different effect on the antenna's impedance. The ground reflection has an important effect on the net far field radiation pattern in the vertical plane, that is, as a function of elevation angle, which is thus different between a vertically and horizontally polarized antenna. Consider an antenna at a height h above the ground, transmitting a wave considered at the elevation angle θ. For a vertically polarized transmission the magnitude of the electric field of the electromagnetic wave produced by the direct ray plus the reflected ray is: {\displaystyle \textstyle {\left|E_{V}\right|=2\left|E_{0}\right|\,\left|\cos \left({2\pi h \over \lambda }\sin \theta \right)\right|}} Thus the power received can be as high as 4 times that due to the direct wave alone (such as when θ = 0), following the square of the cosine.
The sign inversion for the reflection of horizontally polarized emission instead results in: {\displaystyle \textstyle {\left|E_{H}\right|=2\left|E_{0}\right|\,\left|\sin \left({2\pi h \over \lambda }\sin \theta \right)\right|}} where: {\displaystyle E_{0}} is the electrical field that would be received by the direct wave if there were no ground. θ is the elevation angle of the wave being considered. {\displaystyle \lambda } is the wavelength. {\displaystyle h} is the height of the antenna (half the distance between the antenna and its image). For horizontal propagation between transmitting and receiving antennas situated near the ground reasonably far from each other, the distances traveled by the direct and reflected rays are nearly the same. There is almost no relative phase shift. If the emission is polarized vertically, the two fields (direct and reflected) add and there is a maximum of received signal. If the signal is polarized horizontally, the two signals subtract and the received signal is largely cancelled. The vertical plane radiation patterns are shown in the image at right. With vertical polarization there is always a maximum for θ = 0, horizontal propagation (left pattern). For horizontal polarization, there is cancellation at that angle. The above formulae and these plots assume the ground as a perfect conductor. These plots of the radiation pattern correspond to a distance between the antenna and its image of 2.5 λ. As the antenna height is increased, the number of lobes increases as well. The difference in the above factors for the case of θ = 0 is the reason that most broadcasting (transmissions intended for the public) uses vertical polarization. For receivers near the ground, horizontally polarized transmissions suffer cancellation. For best reception the receiving antennas for these signals are likewise vertically polarized. In some applications where the receiving antenna must work in any position, as in mobile phones, the base station antennas use mixed polarization, such as linear polarization at an angle (with both vertical and horizontal components) or circular polarization. On the other hand, analog television transmissions are usually horizontally polarized, because in urban areas buildings can reflect the electromagnetic waves and create ghost images due to multipath propagation. Using horizontal polarization, ghosting is reduced because the amount of reflection in the horizontal polarization off the side of a building is generally less than in the vertical direction. Vertically polarized analog television has been used in some rural areas. In digital terrestrial television such reflections are less problematic, due to the robustness of binary transmissions and error correction. == Modeling antennas with line equations == In the first approximation, the current in a thin antenna is distributed exactly as in a transmission line. — Schelkunoff & Friis (1952)(p 217 (§8.4)) The flow of current in wire antennas is identical to the solution of counter-propagating waves in a single conductor transmission line, which can be solved using the telegrapher's equations.
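A sketch of that approximation for a centre-fed thin dipole: the current is taken to be sinusoidal along each arm with a node at each open end, which is the standing-wave picture the transmission-line view gives (an approximation rather than an exact solution, as noted below):

import math

def dipole_current_profile(total_length_wl, points=9):
    """Approximate (transmission-line) current distribution on a centre-fed thin dipole.
    total_length_wl is the dipole length in wavelengths; returns (position, relative
    current) pairs, with position measured from the centre feed in wavelengths."""
    half = total_length_wl / 2.0
    k = 2 * math.pi  # phase constant, radians per wavelength
    samples = []
    for i in range(points):
        z = -half + i * total_length_wl / (points - 1)
        samples.append((z, math.sin(k * (half - abs(z)))))  # forced to zero at the open ends
    return samples

for z, current in dipole_current_profile(0.5):  # a half-wave dipole
    print(f"z = {z:+.3f} wavelengths: relative current {current:+.2f}")

For the half-wave case the current peaks at the centre feed and falls to zero at the ends, matching the node structure described next.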
== Modeling antennas with line equations == In the first approximation, the current in a thin antenna is distributed exactly as in a transmission line. — Schelkunoff & Friis (1952)(p 217 (§8.4)) The flow of current in wire antennas is identical to the solution of counter-propagating waves in a single-conductor transmission line, which can be solved using the telegrapher's equations. Solutions of currents along antenna elements are more conveniently and accurately obtained by numerical methods, so transmission-line techniques have largely been abandoned for precision modelling, but they continue to be a widely used source of useful, simple approximations that describe well the impedance profiles of antennas.(pp 7–10)(p 232) Unlike transmission lines, currents in antennas contribute power to the radiated electromagnetic field, which can be modeled using a radiation resistance. The end of an antenna element corresponds to an unterminated (open) end of a single-conductor transmission line, resulting in a reflected wave identical to the incident wave, with its voltage in phase with the incident wave and its current in the opposite phase (thus net zero current, where there is, after all, no further conductor). The combination of the incident and reflected wave, just as in a transmission line, forms a standing wave with a current node at the conductor's end, and a voltage node one-quarter wavelength from the end (if the element is at least that long). In a resonant antenna, the feedpoint of the antenna is at one of those voltage nodes. Due to discrepancies from the simplified version of the transmission line model, the voltage one quarter wavelength from the current node is not exactly zero, but it is near a minimum, and small compared to the much larger voltage at the conductor's end. Hence, feeding the antenna at that point requires a relatively small voltage but a large current (the currents from the two waves add in phase there), thus a relatively low feedpoint impedance. Feeding the antenna at other points involves a large voltage, thus a large impedance, and usually one that is primarily reactive (low power factor), which is a poor impedance match to available transmission lines. Therefore, it is usually desired for an antenna to operate as a resonant element with each conductor having a length of one quarter wavelength (or some other odd multiple of a quarter wavelength). For instance, a half-wave dipole has two such elements (one connected to each conductor of a balanced transmission line) about one quarter wavelength long. Depending on the conductors' diameters, a small deviation from this length is adopted in order to reach the point where the antenna current and the (small) feedpoint voltage are exactly in phase. Then the antenna presents a purely resistive impedance, and ideally one close to the characteristic impedance of an available transmission line. Despite these useful properties, resonant antennas have the disadvantage that they achieve resonance (purely resistive feedpoint impedance) only at a fundamental frequency, and perhaps some of its harmonics, and the feedpoint resistance is larger at higher-order resonances. Therefore, resonant antennas can only achieve their good performance within a limited bandwidth, depending on the Q at the resonance.
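As a rough numerical illustration of this transmission-line picture, the sketch below treats each arm of a thin, center-fed dipole as an open-circuited stub and adds the stub reactance −Z0·cot(2πl/λ) to a fixed radiation resistance. The 300 Ω average characteristic impedance and the 73 Ω resistance are assumed, typical textbook magnitudes rather than values from this article, and the model ignores the end effects mentioned above, so it places the zero-reactance (resonant) point exactly at a quarter-wavelength arm.

```python
import numpy as np

Z0 = 300.0      # assumed average characteristic impedance of the thin element (illustrative)
R_rad = 73.0    # assumed radiation resistance near half-wave resonance (textbook value, held fixed)

print(" arm length    reactance       approximate feedpoint impedance")
for arm in np.arange(0.20, 0.301, 0.01):    # arm length in wavelengths
    beta_l = 2 * np.pi * arm                # electrical length of one arm
    X = -Z0 / np.tan(beta_l)                # open-stub reactance: -Z0 * cot(beta * l)
    print(f"  {arm:4.2f} wl    {X:+9.1f} ohm    Z ~ {R_rad:.0f} {X:+.1f}j ohm")

# The reactance runs from capacitive (arm < 0.25 wl) through zero to inductive (arm > 0.25 wl);
# a real dipole resonates slightly below 0.25 wl per arm because of the end effects this
# simple line model leaves out.
```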
== Mutual impedance and interaction between antennas == The electric and magnetic fields emanating from a driven antenna element will generally affect the voltages and currents in nearby antennas, antenna elements, or other conductors. This is particularly true when the affected conductor is a resonant element (a multiple of half a wavelength in length) at about the same frequency, as is the case where the conductors are all part of the same active or passive antenna array. Because the affected conductors are in the near-field, one cannot simply treat two antennas as transmitting and receiving a signal according to the Friis transmission formula, for instance, but must calculate the mutual impedance matrix which takes into account both voltages and currents (interactions through both the electric and magnetic fields). Thus using the mutual impedances calculated for a specific geometry, one can solve for the radiation pattern of a Yagi–Uda antenna or the currents and voltages for each element of a phased array. Such an analysis can also describe in detail the reflection of radio waves by a ground plane or by a corner reflector and their effect on the impedance (and radiation pattern) of an antenna in its vicinity. Often such near-field interactions are undesired and pernicious. Currents in random metal objects near a transmitting antenna will often flow in poor conductors, causing loss of RF power in addition to unpredictably altering the characteristics of the antenna. By careful design, it is possible to reduce the electrical interaction between nearby conductors. For instance, the 90 degree angle between the two dipoles composing the turnstile antenna ensures no interaction between them, allowing them to be driven independently (but actually with the same signal in quadrature phases in the turnstile antenna design). == Antenna types == Antennas can be classified by operating principles or by their application. Different authorities place antennas in narrower or broader categories. These antenna types and others are summarized in greater detail in the overview article, Antenna types, as well as in each of the linked articles, and in even more detail in the articles which those link to. == See also == == Footnotes == == References == The dictionary definition of antenna at Wiktionary
Wikipedia/Antenna_theory
The theory of multiple intelligences (MI) posits that human intelligence is not a single general ability but comprises various distinct modalities, such as linguistic, logical-mathematical, musical, and spatial intelligences. Introduced in Howard Gardner's book Frames of Mind: The Theory of Multiple Intelligences (1983), this framework has gained popularity among educators who accordingly develop varied teaching strategies purported to cater to different student strengths. Despite its educational impact, MI has faced criticism from the psychological and scientific communities. A primary point of contention is Gardner's use of the term "intelligences" to describe these modalities. Critics argue that labeling these abilities as separate intelligences expands the definition of intelligence beyond its traditional scope, leading to debates over its scientific validity. While empirical research often supports a general intelligence factor (g-factor), Gardner contends that his model offers a more nuanced understanding of human cognitive abilities. This difference in defining and interpreting "intelligence" has fueled ongoing discussions about the theory's scientific robustness. == Separation criteria == Beginning in the late 1970s, using a pragmatic definition, Howard Gardner surveyed several disciplines and cultures around the world to determine skills and abilities essential to human development and culture building. He subjected candidate abilities to evaluation using eight criteria that must be substantively met to warrant their identification as an intelligence. Furthermore, the intelligences need to be relatively autonomous from each other, and composed of subsets of skills that are highly correlated and coherently organized. In 1983, the field of cognitive neuroscience was embryonic but Gardner was one of the early psychological theorists to describe direct links between brain systems and intelligence. Likewise the field of educational neuroscience was yet to be conceived. Since Frames of Mind was published (1983) the terms cognitive science and cognitive neuroscience have become standard in the field with extensive libraries of scholarly and scientific papers and textbooks. Thus it is essential to examine neuroscience evidence as it pertains to MI validity. Gardner defined intelligence as "a biopsychological potential to process information that can be activated in a cultural setting to solve problems or create products that are of value in a culture." This definition is unique for several reasons that account for MI theory's broad appeal to educators as well as its rejection by mainstream psychologists who are rooted in the traditional conception of intelligence as an abstract, logical capacity. A fundamental element for each intelligence is a framework of clearly defined levels of skill, complexity and accomplishment. One model that fits with the MI framework is Bloom’s taxonomy where each intelligence can be delineated along different levels, ranging from basic knowledge up to their highest levels of analysis / synthesis. MI is also unique because it gives full appreciation for the impact and interactions - via symbol systems - between the individual’s cognitions and their particular culture. As Gardner states, The multiple intelligences commence as a set of uncommitted neurobiological potentials. They become crystallized and mobilized by the communication that takes place among human beings and, especially, by the systems of meaning-making that already exist in a given culture. 
Unlike traditional practices beginning in the 19th century, MI theory is not built on the statistical analyses of psychometric test data searching for factors that account for academic achievement. Instead, Gardner employs a multi-disciplinary, cross-cultural methodology to evaluate which human capacities fit into a comprehensive model of intelligence. Eight criteria accounting for advances in neuroscience and the influence of cultural factors are used to qualify a capacity as an intelligence. These criteria are drawn from a more extensive database than what was acceptable and available to researchers in the late 19th and 20th centuries. Evidence is gathered from a variety of disciplines including psychology, neurology, biology, sociology, and anthropology as well as the arts and humanities. If a candidate faculty meets this set of criteria reasonably well then it can qualify as an intelligence. If it does not, then it is set aside or reconceptualized. === Criteria for each type of intelligence === The eight criteria can be grouped into four general categories: biology (neuroscience and evolution), analysis (core operations and symbol systems), psychology (skill development, individual differences), and psychometrics (psychological experiments and test evidence). The criteria, briefly described, are: (1) the potential for isolation by brain damage; (2) a place in evolutionary history; (3) the presence of core operations; (4) susceptibility to encoding (symbolic expression); (5) a distinct developmental progression; (6) the existence of savants, prodigies and other exceptional people; (7) support from experimental psychology; and (8) support from psychometric findings. This scientific method resembles the process used by astronomers to determine which celestial bodies to classify as a planet versus a dwarf planet, star, comet, etc. == Forms of intelligences == In Frames of Mind and its sequels, Howard Gardner describes eight intelligences that can be expressed in everyday life in a variety of ways referred to as domains, skills, competencies, or talents. Like describing a multi-layer cake, the complexity depends upon how you slice the cake. One model integrates the eight intelligences with Sternberg's triarchic theory, so that each intelligence is actively expressed in three ways: (1) creative, (2) academic / analytical and (3) practical thinking. In this analogy each of the eight cake layers is divided into three segments with different expressions sharing a central core. Exemplar professions and adult roles requiring specific intelligences are described along with their core skills and potential deficits. Several references to exemplar neuroscientific studies are also provided for each of the eight intelligences. Furthermore, some have suggested that the 'intelligences' refer to talents, personality, or ability rather than a distinct form of intelligence. The two intelligences that are most associated with the traditional I.Q. or general intelligence are the linguistic and logical-mathematical intelligences. Some intelligence models and tests also include visual-spatial intelligence as a third element. === Musical === This area of intelligence includes sensitivity to the sounds, rhythms, pitch, and tones of music. People with musical intelligence are normally able to sing, play musical instruments, or compose music. They have high sensitivity to pitch, meter, melody and timbre. Musical intelligence includes cognitive elements that contribute to a person's success and quality of life.
There is a strong relationship between music and emotions, as evidenced in both the popular and classical music spheres. Neuroscience researchers continue to investigate the interaction between music and cognitive performance. Music is deeply rooted in human evolutionary history (the Paleolithic bone flute), in culture (every country on Earth has a national anthem), and in our personal lives (important life events are associated with particular types of music, e.g., birthday songs, wedding songs, funeral dirges). Deficits in musical processing and abilities include congenital amusia, tone deafness, musical hallucinations, musical anhedonia, acquired music agnosia, and arrhythmia (beat deafness). Professions requiring essential musical skills include vocalist, instrumentalist, lyricist, dancer, sound engineer and composer. Musical intelligence combines with the kinesthetic intelligence in instrumentalists and dancers, and with linguistic intelligence in music critics and lyricists. Music combined with interpersonal intelligence is required for success as a music therapist or teacher. === Visual-spatial === This area deals with spatial awareness / judgment and the ability to visualize with the mind's eye. It is composed of two main dimensions: A) mental visualization and B) perception of the physical world (spatial arrangements and objects). It includes both practical problem-solving and artistic creation. Spatial ability is one of the three factors beneath g (general intelligence) in the hierarchical model of intelligence. Many I.Q. tests include a measure of spatial problem-solving skills, e.g., block design and mental rotation of objects. Visual-spatial intelligence can be expressed in practical (e.g., drafting and building) or artistic (e.g., fine art, crafts, floral arrangements) ways, or the two can be combined in fields such as architecture, industrial design, landscape design, and fashion design. Visual-spatial processing is often combined with the kinesthetic intelligence and referred to as eye-hand or visual-motor integration for tasks such as hitting a baseball (see the Babe Ruth example under Bodily-kinesthetic), sewing, golf or skiing. Professions that emphasize visual-spatial processing include carpentry, engineering, design, piloting, firefighting, surgery, and the commercial and fine arts and crafts. Spatial intelligence combined with linguistic intelligence is required for success as an art critic or textbook graphic designer. Spatial artistic skills combined with naturalist sensitivity are found in pet groomers, clothing designers and costumers. === Linguistic === The core linguistic ability is sensitivity to words and their meanings. People with high verbal-linguistic intelligence display a facility with expressive language and verbal comprehension. They are typically good at reading, writing, telling stories, rhetoric and memorizing words along with dates. Verbal ability is one of the most g-loaded abilities. The academic aspect of linguistic intelligence is measured with the Verbal Intelligence Quotient (IQ) in the Wechsler Adult Intelligence Scale (WAIS-IV). Deficits in linguistic abilities include expressive and receptive aphasia, agraphia, specific language impairment, written language disorder and word recognition deficit (dyslexia).
Linguistic ability can be expressed according to Triarchic theory in three main ways: analytical-academic (reading, writing, definitions); practical (verbal or written directions, explanations, narration); and creative (story telling, poetry, lyrics, imaginative word play, science fiction). Professions that require linguistic skills include teaching, sales, management, counselors, leaders, childcare, journalists, academics and politicians (debating and creating support for particular sets of values). Linguistic intelligence combines with all other intelligences to facilitate communication either via the spoken or written word. It is frequently highly correlated with the interpersonal intelligence to facilitate social interactions for education, business and human relations. Successful sports coaches combine three intelligences: kinesthetic, interpersonal and linguistic. Corporate managers require skills in the interpersonal, linguistic and logical-mathematical intelligences. === Logical-mathematical === This area has to do with logic, abstractions, reasoning, calculations, strategic and critical thinking. This intelligence includes the capacity to understand underlying principles of some kind of causal system. Logical reasoning is closely linked to fluid intelligence as well as to general intelligence (g factor). This capacity is most often associated with convergent problem-solving but it also includes divergent thinking associated with “problem-finding”. This intelligence is most closely associated with the cognitive development theory described by Jean Piaget (1983). The four main types of logical-mathematical intelligence include logical reasoning, calculations, practical thinking (common sense) and discovery. Deficits in logical-mathematical thinking include acalculia, dyscalculia, mild cognitive impairment, dementia and intellectual disability. Some critics believe that the logical and mathematics domains should be separate entities. However, Gardner argues that they both spring from the same source—abstractions taken from real world elements, e.g., logic from words and calculations from the manipulation from objects. This is not dissimilar from the relationship between musical intelligence and vocal or instrumental skills where they are very different expressions springing from a shared musical source. Professions most closely associated with this intelligence include accounting, bookkeeping, banking, finance, engineering and the sciences. Logic-mathematical skills combine with all the other intelligences to facilitate complex problem solving and creation such as environmental engineering and scientists (naturalist); symphonies (music); public sculptures (visual-spatial) and choreography/ movement analysis (kinesthetic). === Bodily-kinesthetic === The core elements of the bodily-kinesthetic intelligence are control of one's bodily movements and fine motor control to handle objects skillfully. Gardner elaborates to say that this also includes a sense of timing, a clear sense of the goal of a physical action, along with the ability to train responses. Kinesthetic ability can be displayed in goal-directed activities (athletics, handcrafts, etc.) as well as in more expressive movements (drama, dance, mime and gestures). Expressive movements can be for either concepts or feelings. For example, saluting, shaking hands or facial expressions can convey both ideas and emotions. Two major kinesthetic categories are gross and fine motor skills. 
Deficits in kinesthetic ability are described as proprioception disorders affecting body awareness, coordination, balance, dexterity and motor control. Gardner believes that careers that suit those with high bodily-kinesthetic intelligence include: athletes, dancers, musicians, actors, craftspeople, builders, technicians, and firefighters. Although these careers can be duplicated through virtual simulation, they will not produce the actual physical learning that is needed in this intelligence. Often people with high physical intelligence combined with visual motion acuity will have excellent hand-eye coordination and be very agile; they are precise and accurate in movement (surgeons) and can express themselves using their body (actors and dancers). Gardner referred to the idea of natural skill and innate kinesthetic intelligence within his discussion of the autobiographical story of Babe Ruth – a legendary baseball player who, at 15, felt that he had been 'born' on the pitcher's mound. Seeing the pitched ball and coordinating one’s swing to meet it over the plate requires highly developed visual-motor integration. Each sport requires its own distinctive combination of specific skills associated with the kinesthetic and visual-spatial intelligences. ==== Physical ability ==== Physical intelligence, also known as bodily-kinesthetic intelligence, is any intelligence derived through physical and practiced learning such as sports, dance, or craftsmanship. It may refer to the ability to use one's hands to create, to express oneself with one's body, a reliance on tactile mechanisms and movement, and accuracy in controlling body movement. An individual with high physical intelligence is someone who is adept at using their physical body to solve problems and express ideas and emotions. The ability to control the physical body and the mind-body connection is part of a much broader range of human potential as set out in Gardner's theory of multiple intelligences. ==== Characteristics ==== Exhibiting well developed bodily kinesthetic intelligence will be reflected in a person's movements and how they use their physical body. Often people with high physical intelligence will have excellent hand-eye coordination and be very agile; they are precise and accurate in movement and can express themselves using their body. Gardner referred to the idea of natural skill and innate physical intelligence within his discussion of the autobiographical story of Babe Ruth – a legendary baseball player who, at 15, felt that he has been 'born' on the pitcher's mound. Individuals with a high body-kinesthetic, or physical intelligence, are likely to be successful in physical careers, including athletes, dancers, musicians, police officers, and soldiers. === Interpersonal === In MI theory, individuals who have high interpersonal intelligence are characterized by their sensitivity to others' moods, feelings, temperaments, motivations, and their ability to cooperate or to lead a group. According to Thomas Armstrong in How Are Kids Smart: Multiple Intelligences in the Classroom, "Interpersonal intelligence is often misunderstood with being extroverted or liking other people”. Those with high interpersonal intelligence communicate effectively and empathize easily with others, and may be either leaders or followers. They often enjoy discussion and debate." They have insightful understanding of other peoples’ point of view. 
Daniel Goleman based his concept of emotional intelligence in part on the feeling aspects of the intrapersonal and interpersonal intelligences. Interpersonal skill can be displayed in either one-on-one and group interactions. Deficits in interpersonal understanding are described as ego centrism, narcissism, socio-pathology, Asperger’s Syndrome and autism. Gardner believes that careers that suit those with high interpersonal intelligence include leaders, politicians, managers, teachers, clergy, counselors, social workers and sales persons. Mother Teresa, Martin Luther King and Lyndon Johnson are cited as historical leaders with exceptional interpersonal intelligence. Interpersonal combined with intrapersonal management are required for successful leaders, psychologists, life coaches and conflict negotiators. And obviously, team sports require specific combinations of the interpersonal and kinesthetic intelligences while individual sports emphasize the kinesthetic and intrapersonal intelligences (i.e., Tiger Woods and gymnasts). In theory, individuals who have high interpersonal intelligence are characterized by their sensitivity to others' moods, feelings, temperaments, motivations, and their ability to cooperate to work as part of a group. According to Gardner in How Are Kids Smart: Multiple Intelligences in the Classroom, "Inter- and Intra- personal intelligence is often misunderstood with being extroverted or liking other people". "Those with high interpersonal intelligence communicate effectively and empathize easily with others, and may be either leaders or followers. They often enjoy discussion and debate." Gardner has equated this with emotional intelligence of Goleman. === Intrapersonal === This refers to having a deep and accurate understanding of the self; what one's strengths and weaknesses are, what makes one unique, being able to predict and manage one's own reactions, emotions and behaviors. Activities associated with this intelligence include introspection and self-reflection. Intrapersonal skills can be categorized in at least four areas: metacognition, awareness of thoughts, management of feelings and emotions, behavior, self-management, decision-making and judgment. Deficits in intrapersonal understanding are described as anosognosia, depersonalization, dissociation and self-dysregulation (ADHD). Leaders and people in high stress occupations need well developed intrapersonal skills, e.g., pilots, police and firefighters, entrepreneurs, middle managers, first responders and health care providers. Mahatma Gandhi, Jesus and Martin Luther King Jr. are all noted for their strong self-awareness. Deficits in intrapersonal understanding may be correlated with ADHD, substance abuse and emotional disturbances (mid-life crisis, etc.). Intrapersonal intelligence may be correlated with concepts such as self-confidence, introspection and self-efficacy but it should not be confused with personality styles/preferences such as narcissism, self-esteem, introversion or shyness. High level performance in many demanding professions and roles requires exceptional intrapersonal intelligence: Olympic athletes, professional golfers, stage performers, CEOs, crisis managers. === Naturalistic === Not part of Gardner's original seven, naturalistic intelligence was proposed by him in 1995. "If I were to rewrite Frames of Mind today, I would probably add an eighth intelligence – the intelligence of the naturalist. 
It seems to me that the individual who is readily able to recognize flora and fauna, to make other consequential distinctions in the natural world, and to use this ability productively (in hunting, in farming, in biological science) is exercising an important intelligence and one that is not adequately encompassed in the current list." This area has to do with nurturing and relating information to one's natural surroundings. Examples include classifying natural forms such as animal and plant species and rocks and mountain types. Essential cognitive skills include pattern recognition, taxonomy and empathy for living beings. Nature deficit disorder describes a recent hypothesis that mental health is negatively impacted by a lack of attention to and understanding of nature. This sort of ecological receptiveness is deeply rooted in a "sensitive, ethical, and holistic understanding" of the world and its complexities – including the role of humanity within the greater ecosphere. This ability remains central in roles such as veterinarian, ecological scientist and botanist. === Proposed additional intelligences === From the beginning Howard Gardner has stated that there may be more intelligences beyond the original seven identified in 1983. That is why the naturalist was added to the list in 1999. Several other human capacities were rejected because they do not meet enough of the criteria, including personality characteristics such as humor, sexuality and extroversion. === Pedagogical and digital === In January 2016, Gardner mentioned in an interview with Big Think that he was considering adding the teaching–pedagogical intelligence "which allows us to be able to teach successfully to other people". In the same interview, he explicitly rejected some other suggested intelligences such as humour, cooking and sexual intelligence. Professor Nan B. Adams argues that, based on Gardner's definition of multiple intelligences, digital intelligence – a meta-intelligence composed of many other identified intelligences and stemming from human interactions with digital computers – now exists. == Use in education == Within his Theory of Multiple Intelligences, Gardner stated that our "educational system is heavily biased towards linguistic modes of instruction and assessment and, to a somewhat lesser degree, toward logical-quantitative modes as well". His work went on to shape educational pedagogy and influence relevant policy and legislation across the world, with particular reference to how teachers must assess students' progress to establish the most effective teaching methods for the individual learner. Gardner's research into the field of learning regarding bodily-kinesthetic intelligence has resulted in the use of activities that require physical movement and exertion, with students exhibiting a high level of physical intelligence reported to benefit from 'learning through movement' in the classroom environment. Although the distinction between intelligences has been set out in great detail, Gardner opposes the idea of labelling learners with a specific intelligence. Gardner maintains that his theory should "empower learners", not restrict them to one modality of learning. According to Gardner, an intelligence is "a biopsychological potential to process information that can be activated in a cultural setting to solve problems or create products that are of value in a culture".
According to a 2006 study, each of the domains proposed by Gardner involves a blend of the general g factor, cognitive abilities other than g, and, in some cases, non-cognitive abilities or personality characteristics. Gardner defines an intelligence as "bio-psychological potential to process information that can be activated in a cultural setting to solve problems or create products that are of value in a culture". According to Gardner, there are more ways to do this than just through logical and linguistic intelligence. Gardner believes that the purpose of schooling "should be to develop intelligences and to help people reach vocational and avocational goals that are appropriate to their particular spectrum of intelligences. People who are helped to do so, [he] believe[s], feel more engaged and competent and therefore more inclined to serve society in a constructive way." Gardner contends that Intelligence Quotient (IQ) tests focus mostly on logical and linguistic intelligence. Upon doing well on these tests, the chances of attending a prestigious college or university increase, which in turn creates contributing members of society. While many students function well in this environment, there are those who do not. Gardner's theory argues that students will be better served by a broader vision of education, wherein teachers use different methodologies, exercises and activities to reach all students, not just those who excel at linguistic and logical intelligence. It challenges educators to find "ways that will work for this student learning this topic". James Traub's article in The New Republic notes that Gardner's system has not been accepted by most academics in intelligence or teaching. Gardner states that "while Multiple Intelligences theory is consistent with much empirical evidence, it has not been subjected to strong experimental tests ... Within the area of education, the applications of the theory are currently being examined in many projects. Our hunches will have to be revised many times in light of actual classroom experience." Jerome Bruner agreed with Gardner that the intelligences were "useful fictions", and went on to state that "his approach is so far beyond the data-crunching of mental testers that it deserves to be cheered." George Miller, a prominent cognitive psychologist, wrote in The New York Times Book Review that Gardner's argument consisted of "hunch and opinion" and Charles Murray and Richard J. Herrnstein in The Bell Curve (1994) called Gardner's theory "uniquely devoid of psychometric or other quantitative evidence". === Distinction to learning styles === The notion of learning styles is problematic, and their educational use is suspect. Gardner has regularly explained the distinction between Theory of multiple intelligences and various learning style models. A big problem is that there are more than 80 different learning styles models so it is difficult to know which model is being referred to when making a comparison or planning instruction. A key difference is that learning styles typically refer to sensory modalities, preferences, personality characteristics, attitudes, and interests while the multiple intelligences are cognitive abilities with defined levels of skill. It is easy to see why they are confused given the popularity of VAK (Visual, Auditory and Kinesthetic) and Introversion, Extroversion models. 
Their names sound alike and they share sensory systems (vision, hearing, physicality) but the eight intelligences are much more than the senses or personal preferences. While learning style theories are fundamentally different from the eight intelligences, there is a model proposed by Richard Strong and others that integrates a person’s preference with the eight intelligences to produce a descriptive tapestry of a person’s intellectual dispositions. The four styles are Mastery, Understanding, Interpersonal, and Self-Expressive. For the visual-spatial intelligence expressed artistically, a person may have a distinct pattern of preferences for realistic imagery (Mastery), conceptual art (Understanding), portraiture (Interpersonal) or abstract expression (Self-Expressive). This model has not been tested empirically. === Talents and aptitudes === Intelligences not typically associated with academic achievement have been traditionally delegated to the status of talents or aptitudes—e.g., musical, visual-spatial, kinesthetic and naturalist. Gardner takes issue with this hierarchy because it lowers the importance of these “non-academic” intelligences and devalues their contribution to human thought, individual development and culture. Gardner is fine with calling them all talents (or aptitudes) (including logical-mathematical and linguistic) so long as they are seen to be of equal value. In spite of its lack of general acceptance in the psychological community, Gardner's theory has been adopted by many schools, where it is often conflated with learning styles, and hundreds of books have been written about its applications in education. Some of the applications of Gardner's theory have been described as "simplistic" and Gardner himself has said he is "uneasy" with the way his theory has been used in schools. Gardner has denied that multiple intelligences are learning styles and agrees that the idea of learning styles is incoherent and lacking in empirical evidence. Gardner summarizes his approach with three recommendations for educators: individualize the teaching style (to suit the most effective method for each student), pluralize the teaching (teach important materials in multiple ways), and avoid the term "styles" as being confusing. == Criticism == Gardner argues that there is a wide range of cognitive abilities, but that there are only very weak correlations among them. For example, the theory postulates that a child who learns to multiply easily is not necessarily more intelligent than a child who has more difficulty on this task. The child who takes more time to master multiplication may best learn to multiply through a different approach, may excel in a field outside mathematics, or may be looking at and understanding the multiplication process at a fundamentally deeper level. Intelligence tests and psychometrics have generally found high correlations between different aspects of intelligence, rather than the low correlations which Gardner's theory predicts, supporting the prevailing theory of general intelligence rather than multiple intelligences (MI). The theory has been criticized by mainstream psychology for its lack of empirical evidence, and its dependence on subjective judgement. 
=== Definition of intelligence === A major criticism of the theory is that it is ad hoc: that Gardner is not expanding the definition of the word "intelligence", but rather denies the existence of intelligence as traditionally understood, and instead uses the word "intelligence" where other people have traditionally used words like "ability" and "aptitude". This practice has been criticized by Robert J. Sternberg, Michael Eysenck, and Sandra Scarr. White (2006) points out that Gardner's selection and application of criteria for his "intelligences" is subjective and arbitrary, and that a different researcher would likely have come up with different criteria. Defenders of MI theory argue that the traditional definition of intelligence is too narrow, and thus a broader definition more accurately reflects the differing ways in which humans think and learn. Some criticisms arise from the fact that Gardner has not provided a test of his multiple intelligences. He originally defined it as the ability to solve problems that have value in at least one culture, or as something that a student is interested in. He then added a disclaimer that he has no fixed definition, and his classification is more of an artistic judgment than fact: Ultimately, it would certainly be desirable to have an algorithm for the selection of intelligence, such that any trained researcher could determine whether a candidate's intelligence met the appropriate criteria. At present, however, it must be admitted that the selection (or rejection) of a candidate's intelligence is reminiscent more of an artistic judgment than of a scientific assessment. Generally, linguistic and logical-mathematical abilities are called intelligence, but artistic, musical, athletic, etc. abilities are not. Gardner argues this causes the former to be needlessly aggrandized. Certain critics are wary of this widening of the definition, saying that it ignores "the connotation of intelligence ... [which] has always connoted the kind of thinking skills that makes one successful in school." Gardner writes "I balk at the unwarranted assumption that certain human abilities can be arbitrarily singled out as intelligence while others cannot." Critics hold that given this statement, any interest or ability can be redefined as "intelligence". Thus, studying intelligence becomes difficult, because it diffuses into the broader concept of ability or talent. Gardner's addition of the naturalistic intelligence and conceptions of the existential and moral intelligence are seen as the fruits of this diffusion. Defenders of the MI theory would argue that this is simply a recognition of the broad scope of inherent mental abilities and that such an exhaustive scope by nature defies a one-dimensional classification such as an IQ value. The theory and definitions have been critiqued by Perry D. Klein as being so unclear as to be tautologous and thus unfalsifiable. Having a high musical ability means being good at music while at the same time being good at music is explained by having high musical ability. Henri Wallon argues that "We can not distinguish intelligence from its operations". Yves Richez distinguishes 10 Natural Operating Modes (Modes Opératoires Naturels – MoON). Richez's studies are premised on a gap between Chinese thought and Western thought. In China, the notion of "being" (self) and the notion of "intelligence" do not exist. These are claimed to be Graeco-Roman inventions derived from Plato. 
Instead of intelligence, Chinese thought refers to "operating modes", which is why Yves Richez does not speak of "intelligence" but of "natural operating modes" (MoON). === Validity === Critics argue that MI cannot be taken seriously as a scientific theory of intelligence for a number of reasons; the most common are given below: It is not scientific, in the sense of a body of knowledge acquired by performing replicated experiments in the laboratory. There is conceptual confusion about exactly what intelligence is and what it isn't, e.g., MI conflates personality, talent and learning styles with intelligence. MI does not value reasoning and academic skills. There are no empirical, experimental studies using psychometrics to establish validity. The proposed intelligences have not been shown to be sufficiently independent to warrant separate identification. There is no evidence for educational efficacy, and its use may undermine school effectiveness. === Neo-Piagetian criticism === Andreas Demetriou suggests that theories which overemphasize the autonomy of the domains are as simplistic as the theories that overemphasize the role of general intelligence and ignore the domains. He agrees with Gardner that there are indeed domains of intelligence that are relevantly autonomous of each other. Some of the domains, such as verbal, spatial, mathematical, and social intelligence, are identified by most lines of research in psychology. In Demetriou's theory, one of the neo-Piagetian theories of cognitive development, Gardner is criticized for underestimating the effects exerted on the various domains of intelligence by the various subprocesses that define overall processing efficiency, such as speed of processing, executive functions, working memory, and meta-cognitive processes underlying self-awareness and self-regulation. All of these processes are integral components of general intelligence that regulate the functioning and development of different domains of intelligence. The domains are to a large extent expressions of the condition of the general processes, and may vary because of their constitutional differences but also because of differences in individual preferences and inclinations. Their functioning both channels and influences the operation of the general processes. Thus, one cannot satisfactorily specify the intelligence of an individual or design effective intervention programs unless both the general processes and the domains of interest are evaluated. === Human adaptation to multiple environments === The premise of the multiple intelligences hypothesis, that human intelligence is a collection of specialist abilities, has been criticized for not being able to explain human adaptation to most if not all environments in the world. In this context, humans are contrasted with social insects that indeed have a distributed "intelligence" of specialists; such insects may spread to climates resembling that of their origin, but the same species never adapts to a wide range of climates from tropical to temperate by building different types of nests and learning what is edible and what is poisonous. While some, such as the leafcutter ant, grow fungi on leaves, they do not cultivate different species in different environments with different farming techniques as human agriculture does.
It is therefore argued that human adaptability stems from a general ability to falsify hypotheses and make more generally accurate predictions and adapt behavior thereafter, and not a set of specialized abilities which would only work under specific environmental conditions. === IQ tests === Gardner argues that IQ tests only measure linguistic and logical-mathematical abilities. He argues the importance of assessing in an "intelligence-fair" manner. While traditional paper-and-pen examinations favor linguistic and logical skills, there is a need for intelligence-fair measures that value the distinct modalities of thinking and learning that uniquely define each intelligence. Psychologist Alan S. Kaufman points out that IQ tests have measured spatial abilities for 70 years. Modern IQ tests are greatly influenced by the Cattell–Horn–Carroll theory which incorporates a general intelligence but also many more narrow abilities. While IQ tests do give an overall IQ score, they now also give scores for many more narrow abilities. === Lack of empirical evidence === Many of Gardner's "intelligences" correlate with the g factor, supporting the idea of a single dominant type of intelligence. Each of the domains proposed by Gardner involved a blend of g, of cognitive abilities other than g, and, in some cases, of non-cognitive abilities or of personality characteristics. The Johnson O'Connor Research Foundation has tested hundreds of thousands of people to determine their "aptitudes" ("intelligences"), such as manual dexterity, musical ability, spatial visualization, and memory for numbers. There is correlation of these aptitudes with the g factor, but not all are strongly correlated; correlation between the g factor and "inductive speed" ("quickness in seeing relationships among separate facts, ideas, or observations") is only 0.5, considered a moderate correlation. A critical review of MI theory argues that there is little empirical evidence to support it: To date, there have been no published studies that offer evidence of the validity of the multiple intelligences. In 1994 Sternberg reported finding no empirical studies. In 2000 Allix reported finding no empirical validating studies, and at that time Gardner and Connell conceded that there was "little hard evidence for MI theory" (2000, p. 292). In 2004 Sternberg and Grigerenko stated that there were no validating studies for multiple intelligences, and in 2004 Gardner asserted that he would be "delighted were such evidence to accrue", and admitted that "MI theory has few enthusiasts among psychometricians or others of a traditional psychological background" because they require "psychometric or experimental evidence that allows one to prove the existence of the several intelligences". The same review presents evidence to demonstrate that cognitive neuroscience research does not support the theory of multiple intelligences: ... the human brain is unlikely to function via Gardner's multiple intelligences. Taken together the evidence for the intercorrelations of subskills of IQ measures, the evidence for a shared set of genes associated with mathematics, reading, and g, and the evidence for shared and overlapping "what is it?" and "where is it?" neural processing pathways, and shared neural pathways for language, music, motor skills, and emotions suggest that it is unlikely that each of Gardner's intelligences could operate "via a different set of neural mechanisms" (1999, p. 99). Equally important, the evidence for the "what is it?" 
and "where is it?" processing pathways, for Kahneman's two decision-making systems, and for adapted cognition modules suggests that these cognitive brain specializations have evolved to address very specific problems in our environment. Because Gardner claimed that the intelligences are innate potentialities related to a general content area, MI theory lacks a rationale for the phylogenetic emergence of the intelligences. However, more recent research from Branton Shearer in 2017 was able to identify both structures that activate in common, as well as separately, across Gardner's 8 intelligences. == See also == Charles Spearman – English psychologist (1863–1945) == Notes == == References == === Works cited === == Further reading == == External links == Multiple Intelligences Oasis, Howard Gardner's official website for MI Theory Multiple Intelligences, Future Minds and Educating The App Generation: A discussion with Dr Howard Gardner, Bridging the Gaps: A Portal for Curious Minds
Wikipedia/Multiple_intelligence_theory
In theoretical chemistry, Marcus theory is a theory originally developed by Rudolph A. Marcus, starting in 1956, to explain the rates of electron transfer reactions – the rate at which an electron can move or jump from one chemical species (called the electron donor) to another (called the electron acceptor). It was originally formulated to address outer sphere electron transfer reactions, in which the two chemical species only change in their charge with an electron jumping (e.g. the oxidation of an ion like Fe2+/Fe3+), but do not undergo large structural changes. It was extended to include inner sphere electron transfer contributions, in which a change of distances or geometry in the solvation or coordination shells of the two chemical species is taken into account (the Fe–O distances in [Fe(H2O)6]2+ and [Fe(H2O)6]3+ are different). For electron transfer reactions without making or breaking bonds Marcus theory takes the place of Eyring's transition state theory, which has been derived for reactions with structural changes. Both theories lead to rate equations of the same exponential form. However, whereas in Eyring theory the reaction partners become strongly coupled in the course of the reaction to form a structurally defined activated complex, in Marcus theory they are weakly coupled and retain their individuality. It is the thermally induced reorganization of the surroundings, the solvent (outer sphere) and the solvent sheath or the ligands (inner sphere), which creates the geometrically favourable situation prior to and independent of the electron jump. The original classical Marcus theory for outer sphere electron transfer reactions demonstrates the importance of the solvent and leads the way to the calculation of the Gibbs free energy of activation, using the polarization properties of the solvent, the size of the reactants, the transfer distance and the Gibbs free energy ΔG° of the redox reaction. The most startling result of Marcus' theory was the "inverted region": whereas reaction rates usually become higher with increasing exergonicity of the reaction, electron transfer should, according to Marcus theory, become slower in the very negative ΔG° domain. Scientists searched the inverted region for proof of a slower electron transfer rate for 30 years, until it was unequivocally verified experimentally in 1984. R. A. Marcus received the Nobel Prize in Chemistry in 1992 for this theory. Marcus theory is used to describe a number of important processes in chemistry and biology, including photosynthesis, corrosion, certain types of chemiluminescence, charge separation in some types of solar cells and more. Besides the inner and outer sphere applications, Marcus theory has been extended to address heterogeneous electron transfer. == Outer vs inner ET == In a redox reaction an electron donor D must diffuse to the acceptor A, forming a precursor complex, which is labile but allows electron transfer to give the successor complex. The pair then dissociates. For a one-electron transfer the reaction is {\displaystyle {\ce {{D}+A<=>[k_{12}][k_{21}][D{\dotsm }A]<=>[k_{23}][k_{32}][D+{\dotsm }A^{-}]->[k_{30}]{D+}+{A^{-}}}}} (D and A may already carry charges). Here k12, k21 and k30 are diffusion constants, while k23 and k32 are rate constants of the activated reactions.
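The kinetic scheme above is not worked out further in the text, but a standard steady-state treatment of the precursor complex reproduces the two limiting regimes described in the next paragraph. The sketch below is a minimal illustration under the assumption that separation of the successor complex is fast (so the back reaction k32 can be neglected); the numerical rate constants are purely illustrative, not values from the article.

```python
# Steady state on the precursor complex [D...A] (k32 neglected):
#   k12 [D][A] = (k21 + k23) [D...A]   =>   rate = k23 [D...A] = k_obs [D][A]
def k_obs(k12, k21, k23):
    """Observed second-order rate constant for the scheme D + A -> products."""
    return k12 * k23 / (k21 + k23)

k12 = 1.0e10   # encounter (diffusion) rate constant, M^-1 s^-1 (typical order of magnitude)
k21 = 1.0e10   # dissociation of the precursor complex, s^-1 (illustrative)

for k23 in (1.0e13, 1.0e7):   # fast vs. slow electron-transfer step (illustrative)
    print(f"k23 = {k23:.0e} s^-1  ->  k_obs = {k_obs(k12, k21, k23):.2e} M^-1 s^-1")

# k23 >> k21: k_obs -> k12              (diffusion controlled: every encounter reacts)
# k23 << k21: k_obs -> (k12 / k21) k23  (activation controlled: association pre-equilibrium)
```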
The total reaction may be diffusion controlled (the electron transfer step is faster than diffusion, and every encounter leads to reaction) or activation controlled (the "equilibrium of association" is reached, the electron transfer step is slow, and the separation of the successor complex is fast). The ligand shells around A and D are retained. This process is called outer sphere electron transfer. Outer sphere ET is the main focus of traditional Marcus theory. The other kind of redox reaction is inner sphere, where A and D are covalently linked by a bridging ligand. Rates for such ET reactions depend on ligand exchange rates. == The problem == In outer sphere redox reactions no bonds are formed or broken; only an electron transfer (ET) takes place. A quite simple example is the Fe2+/Fe3+ redox reaction, the self-exchange reaction which is known to be always occurring in an aqueous solution containing the aquo complexes [Fe(H2O)6]2+ and [Fe(H2O)6]3+. Redox occurs with a Gibbs free reaction energy ΔG° = 0. From the reaction rate's temperature dependence an activation energy is determined, and this activation energy is interpreted as the energy of the transition state in a reaction diagram. The latter is drawn, according to Arrhenius and Eyring, as an energy diagram with the reaction coordinate as the abscissa. The reaction coordinate describes the minimum energy path from the reactants to the products, and the points of this coordinate are combinations of distances and angles between and in the reactants in the course of the formation and/or cleavage of bonds. The maximum of the energy diagram, the transition state, is characterized by a specific configuration of the atoms. Moreover, in Eyring's TST a quite specific change of the nuclear coordinates is responsible for crossing the maximum point; a vibration in this direction is consequently treated as a translation. For outer sphere redox reactions there cannot be such a reaction path, but nevertheless one does observe an activation energy. The rate equation for activation-controlled reactions has the same exponential form as the Eyring equation, {\displaystyle k_{\text{act}}=A\cdot e^{-{\frac {\Delta G^{\ddagger }}{RT}}}} where ΔG‡ is the Gibbs free energy of the formation of the transition state; the exponential term represents the probability of its formation, and A contains the probability of crossing from the precursor to the successor complex. == The Marcus model == The consequence of an electron transfer is the rearrangement of charges, and this greatly influences the solvent environment. The dipolar solvent molecules rearrange in the direction of the field of the charges (this is called orientation polarization), and the atoms and electrons in the solvent molecules are slightly displaced (atomic and electron polarization, respectively). It is this solvent polarization which determines the free energy of activation and thus the reaction rate. Substitution, elimination and isomerization reactions differ from the outer sphere redox reaction not only in the structural changes outlined above, but also in the fact that the movements of the nuclei and the shift of charges (charge transfer, CT) on the reaction path take place in a continuous and concerted way: nuclear configurations and charge distribution are always "in equilibrium".
This is illustrated by the SN2 substitution of the saponification of an alkyl halide where the rear side attack of the OH− ion pushes out a halide ion and where a transition state with a five-coordinated carbon atom must be visualized. The system of the reactants becomes coupled so tightly during the reaction that they form the activated complex as an integral entity. The solvent here has a minor effect. By contrast, in outer sphere redox reactions the displacement of nuclei in the reactants are small, here the solvent has the dominant role. Donor-acceptor coupling is weak, both keep their identity during the reaction. Therefore, the electron, being an elementary particle, can only "jump" as a whole (electron transfer, ET). If the electron jumps, the transfer is much faster than the movement of the large solvent molecules, with the consequence that the nuclear positions of the reaction partners and the solvent molecules are the same before and after the electron jump (Franck–Condon principle). The jump of the electron is governed by quantum mechanical rules, it is only possible if also the energy of the ET system does not change "during" the jump. The arrangement of solvent molecules depends on the charge distribution on the reactants. If the solvent configuration must be the same before and after the jump and the energy may not change, then the solvent cannot be in the solvation state of the precursor nor in that of the successor complex as they are different, it has to be somewhere in between. For the self-exchange reaction for symmetry reasons an arrangement of the solvent molecules exactly in the middle of those of precursor and successor complex would meet the conditions. This means that the solvent arrangement with half of the electron on both donor and acceptor would be the correct environment for jumping. Also, in this state the energy of precursor and successor in their solvent environment would be the same. However, the electron as an elementary particle cannot be divided, it resides either on the donor or the acceptor and arranges the solvent molecules accordingly in an equilibrium. The "transition state", on the other hand, requires a solvent configuration which would result from the transfer of half an electron, which is impossible. This means that real charge distribution and required solvent polarization are not in an "equilibrium". Yet it is possible that the solvent takes a configuration corresponding to the "transition state", even if the electron sits on the donor or acceptor. This, however, requires energy. This energy may be provided by the thermal energy of the solvent and thermal fluctuations can produce the correct polarization state. Once this has been reached the electron can jump. The creation of the correct solvent arrangement and the electron jump are decoupled and do not happen in a synchronous process. Thus the energy of the transition state is mostly polarization energy of the solvent. == Marcus theory == === The macroscopic system: two conducting spheres === On the basis of his reasoning R.A. Marcus developed a classical theory with the aim of calculating the polarization energy of the said non-equilibrium state. From thermodynamics it is well known that the energy of such a state can be determined if a reversible path to that state is found. Marcus was successful in finding such a path via two reversible charging steps for the preparation of the "transition state" from the precursor complex. 
Four elements are essential for the model on which the theory is based: Marcus employs a classical, purely electrostatic model. The charge (many elementary charges) may be transferred in any portion from one body to another. Marcus separates the fast electron polarisation Pe and the slow atom and orientation polarisation Pu of the solvent on grounds of their time constants differing several orders of magnitude. Marcus separates the inner sphere (reactant + tightly bound solvent molecules, in complexes + ligands) and the outer sphere (free solvent ) In this model Marcus confines himself to calculating the outer sphere energy of the non-equilibrium polarization of the "transition state". The outer sphere energy is often much larger than the inner sphere contribution because of the far reaching electrostatic forces (compare the Debye–Hückel theory of electrochemistry). Marcus' tool is the theory of dielectric polarization in solvents. He solved the problem in a general way for a transfer of charge between two bodies of arbitrary shape with arbitrary surface and volume charge. For the self-exchange reaction, the redox pair (e.g. Fe(H2O)63+ / Fe(H2O)62+) is substituted by two macroscopic conducting spheres at a defined distance carrying specified charges. Between these spheres a certain amount of charge is reversibly exchanged. In the first step the energy WI of the transfer of a specific amount of charge is calculated, e.g. for the system in a state when both spheres carry half of the amount of charge which is to be transferred. This state of the system can be reached by transferring the respective charge from the donor sphere to the vacuum and then back to the acceptor sphere. Then the spheres in this state of charge give rise to a defined electric field in the solvent which creates the total solvent polarization Pu + Pe. By the same token this polarization of the solvent interacts with the charges. In a second step the energy WII of the reversible (back) transfer of the charge to the first sphere, again via the vacuum, is calculated. However, the atom and orientation polarization Pu is kept fixed, only the electron polarization Pe may adjust to the field of the new charge distribution and the fixed Pu. After this second step the system is in the desired state with an electron polarization corresponding to the starting point of the redox reaction and an atom and orientation polarization corresponding to the "transition state". The energy WI + WII of this state is, thermodynamically speaking, a Gibbs free energy G. Of course, in this classical model the transfer of any arbitrary amount of charge Δe is possible. So the energy of the non-equilibrium state, and consequently of the polarization energy of the solvent, can be probed as a function of Δe. Thus Marcus has lumped together, in a very elegant way, the coordinates of all solvent molecules into a single coordinate of solvent polarization Δp which is determined by the amount of transferred charge Δe. So he reached a simplification of the energy representation to only two dimensions: G = f(Δe). 
The result for two conducting spheres in a solvent is the formula of Marcus G = ( 1 2 r 1 + 1 2 r 2 − 1 R ) ⋅ ( 1 ϵ opt − 1 ϵ s ) ⋅ ( Δ e ) 2 {\displaystyle G=\left({\frac {1}{2r_{1}}}+{\frac {1}{2r_{2}}}-{\frac {1}{R}}\right)\cdot \left({\frac {1}{\epsilon _{\text{opt}}}}-{\frac {1}{\epsilon _{\text{s}}}}\right)\cdot (\Delta e)^{2}} where r1 and r2 are the radii of the spheres and R is their separation, εs and εopt are the static and high frequency (optical) dielectric constants of the solvent, and Δe is the amount of charge transferred. The graph of G vs. Δe is a parabola (Fig. 1). In Marcus theory the energy belonging to the transfer of a unit charge (Δe = 1) is called the (outer sphere) reorganization energy λo, i.e. the energy of a state where the polarization would correspond to the transfer of a unit amount of charge, but the real charge distribution is that before the transfer. In terms of exchange direction the system is symmetric. === The microscopic system: the donor-acceptor pair === Shrinking the two-sphere model to the molecular level creates the problem that in the self-exchange reaction the charge can no longer be transferred in arbitrary amounts, but only as a single electron. However, the polarization is still determined by the total ensemble of the solvent molecules and therefore can still be treated classically, i.e. the polarization energy is not subject to quantum limitations. Therefore, the energy of solvent reorganization can be calculated as being due to a hypothetical transfer and back transfer of a partial elementary charge according to the Marcus formula. Thus the reorganization energy for chemical redox reactions, which is a Gibbs free energy, is also a parabolic function of Δe of this hypothetical transfer. For the self-exchange reaction, where for symmetry reasons Δe = 0.5, the Gibbs free energy of activation is ΔG(0)‡ = λo/4 (see Fig. 1 and Fig. 2: the intersection of the parabolas i and f, or f(0), respectively). Up to now all was physics; now some chemistry enters. The self-exchange reaction is a very specific redox reaction; most redox reactions are between different partners, e.g. [ Fe II ( CN ) 6 ] 4 − + [ Ir IV Cl 6 ] 2 − ↽ − − ⇀ [ Fe III ( CN ) 6 ] 3 − + [ Ir III Cl 6 ] 3 − {\displaystyle {\ce {{[Fe^{II}(CN)6]^{4-}}+{[Ir^{IV}Cl6]^{2-}}<=>{[Fe^{III}(CN)6]^{3-}}+{[Ir^{III}Cl6]^{3-}}}}} and they have positive (endergonic) or negative (exergonic) Gibbs free energies of reaction Δ G ∘ {\displaystyle \Delta G^{\circ }} . As Marcus' calculations refer exclusively to the electrostatic properties in the solvent (outer sphere), Δ G ∘ {\displaystyle \Delta G^{\circ }} and λ 0 {\displaystyle \lambda _{0}} are independent of one another and therefore can just be added up. This means that the Marcus parabolas in systems with different Δ G ∘ {\displaystyle \Delta G^{\circ }} are shifted just up or down in the G {\displaystyle G} vs. Δ e {\displaystyle \Delta e} diagram (Fig. 2). Variation of Δ G ∘ {\displaystyle \Delta G^{\circ }} can be effected in experiments by offering different acceptors to the same donor. 
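The two-sphere formula lends itself to a quick numerical estimate. The sketch below is only an illustration: it evaluates the outer-sphere reorganization energy λo (the value of G for Δe = 1) in SI units, which requires the electrostatic prefactor e²/4πε0 that is implicit in the Gaussian-style form given above; the radii, separation, and dielectric constants are assumed, order-of-magnitude values for an aqueous couple, not fitted data.

```python
# Illustrative sketch only: outer-sphere reorganization energy from the
# two-sphere Marcus formula. All numerical inputs are assumed values.
from scipy.constants import e, epsilon_0, pi, N_A

def lambda_outer(r1, r2, R, eps_opt, eps_s):
    """lambda_o in joules per transferred electron for two conducting
    spheres of radii r1, r2 (m) at centre-to-centre distance R (m)."""
    geometric = 1.0/(2*r1) + 1.0/(2*r2) - 1.0/R      # 1/m
    pekar = 1.0/eps_opt - 1.0/eps_s                  # Pekar factor of the solvent
    return (e**2 / (4*pi*epsilon_0)) * geometric * pekar

# Assumed inputs, roughly appropriate for an aqueous redox couple:
r1 = r2 = 3.5e-10            # sphere radii ~3.5 Angstrom (assumption)
R = 7.0e-10                  # spheres in contact, R = r1 + r2 (assumption)
eps_opt, eps_s = 1.78, 78.4  # optical and static dielectric constants of water

lam = lambda_outer(r1, r2, R, eps_opt, eps_s)
print(f"lambda_o ≈ {lam/e:.2f} eV ≈ {lam*N_A/1e3:.0f} kJ/mol")
# With these assumed inputs the result is on the order of 1 eV (~100 kJ/mol).
```

With these assumptions the result comes out on the order of 1 eV, the magnitude usually quoted for outer-sphere reorganization energies in water.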
Simple calculation of the intersection point between the parabolas i ( y = x 2 ) {\displaystyle (y=x^{2})} and f i {\displaystyle f_{i}} ( y = ( x − d ) 2 + c ) {\displaystyle (y=(x-d)^{2}+c)} gives the Gibbs free energy of activation Δ G ‡ = ( λ 0 + Δ G ∘ ) 2 4 λ 0 {\displaystyle \Delta G^{\ddagger }={\frac {(\lambda _{0}+\Delta G^{\circ })^{2}}{4\lambda _{0}}}} , where λ 0 {\displaystyle \lambda _{0}} = d 2 {\displaystyle d^{2}} and Δ G ∘ {\displaystyle \Delta G^{\circ }} = c (equating x2 = (x − d)2 + c places the intersection at x = (d2 + c)/2d, and ΔG‡ is the value of y there). The intersection of those parabolas represents an activation energy and not the energy of a transition state of fixed configuration of all nuclei in the system as is the case in the substitution and other reactions mentioned. The transition state of the latter reactions has to meet structural and energetic conditions; redox reactions have only to comply with the energy requirement. Whereas the geometry of the transition state in the other reactions is the same for all pairs of reactants, for redox pairs many polarization environments may meet the energetic conditions. Marcus' formula shows a quadratic dependence of the Gibbs free energy of activation on the Gibbs free energy of reaction. It is general knowledge from a host of chemical experience that reactions usually become faster the more negative Δ G ∘ {\displaystyle \Delta G^{\circ }} is. In many cases even a linear free energy relation is found. According to the Marcus formula the rates also increase when the reactions become more exergonic, however only as long as Δ G ∘ {\displaystyle \Delta G^{\circ }} is positive or slightly negative. It is surprising that for redox reactions according to the Marcus formula the activation energy should increase for very exergonic reactions, i.e. in the cases where Δ G ∘ {\displaystyle \Delta G^{\circ }} is negative and its absolute value is greater than that of λ 0 {\displaystyle \lambda _{0}} . This realm of Gibbs free energy of reaction is called the "Marcus inverted region". In Fig. 2 it becomes obvious that the intersection of the parabolas i and f moves upwards in the left part of the graph when Δ G ∘ {\displaystyle \Delta G^{\circ }} continues to become more negative, and this means increasing activation energy. Thus the total graph of ln ⁡ k {\displaystyle \ln k} vs. Δ G ∘ {\displaystyle \Delta G^{\circ }} should have a maximum. The maximum of the ET rate is expected at Δ G ‡ = 0. {\displaystyle \Delta G^{\ddagger }=0.} Here Δ e = 0 {\displaystyle \Delta e=0} and q = 0 {\displaystyle q=0} (Fig. 2), which means that the electron may jump in the precursor complex at its equilibrium polarization. No thermal activation is necessary: the reaction is barrierless. In the inverted region the polarization corresponds to the difficult-to-imagine notion of a charge distribution where the donor has received and the acceptor given off charge. Of course, in the real world this does not happen; it is not a real charge distribution which creates this critical polarization, but thermal fluctuation in the solvent. The polarization necessary for transfer in the inverted region can be created – with some probability – just like any other one; the electron is simply waiting for it in order to jump. == Inner sphere electron transfer == In the outer sphere model the donor or acceptor and the tightly bound solvation shells or the complex' ligands were considered to form rigid structures which do not change in the course of electron transfer. However, the distances in the inner sphere are dependent on the charge of donor and acceptor, e.g. 
the central ion-ligand distances are different in complexes carrying different charges, and again the Franck–Condon principle must be obeyed: for the electron jump to occur, the nuclei have to adopt a configuration identical for both the precursor and the successor complex, which is of course highly distorted. In this case the energy requirement is fulfilled automatically. In this inner sphere case the Arrhenius concept holds: the transition state of definite geometric structure is reached along a geometrical reaction coordinate determined by nuclear motions. No further nuclear motion is necessary to form the successor complex, only the electron jumps, which is a difference from TST. The reaction coordinate for the inner sphere reorganization is governed by vibrations, and these differ in the oxidized and reduced species. For the self-exchange system Fe2+/Fe3+ only the symmetrical breathing vibration of the six water molecules around the iron ions is considered. Assuming harmonic conditions this vibration has frequencies ν D {\displaystyle \nu _{D}} and ν A {\displaystyle \nu _{A}} , the force constants fD and fA are f = 4 π 2 ν 2 μ {\displaystyle f=4\pi ^{2}\nu ^{2}\mu } and the energies are E D = E D ( q 0 , D ) + 3 f D ( Δ q D ) 2 E A = E A ( q 0 , A ) + 3 f A ( Δ q A ) 2 {\displaystyle {\begin{aligned}E_{D}&=E_{D}(q_{0,D})+3f_{D}(\Delta q_{D})^{2}\\E_{A}&=E_{A}(q_{0,A})+3f_{A}(\Delta q_{A})^{2}\end{aligned}}} where q0 is the equilibrium normal coordinate and Δ q = ( q − q 0 ) {\displaystyle \Delta q=(q-q_{0})} the displacement along the normal coordinate; the factor 3 stems from 6 (H2O)·1⁄2. As for the outer-sphere reorganization energy, the potential energy curve is quadratic; here, however, this is a consequence of the vibrations. The equilibrium normal coordinates differ in Fe(H2O)62+ and Fe(H2O)63+. By thermal excitation of the breathing vibration a geometry can be reached which is common to both donor and acceptor, i.e. the potential energy curves of the breathing vibrations of D and A intersect here. This is the situation where the electron may jump. The energy of this transition state is the inner sphere reorganization energy λin. For the self-exchange reaction the metal-water distance in the transition state can be calculated: q ∗ = f D q 0 , D + f A q 0 , A f D + f A {\displaystyle q^{*}={\frac {f_{D}q_{0,D}+f_{A}q_{0,A}}{f_{D}+f_{A}}}} This gives the inner sphere reorganization energy λ in = Δ E ∗ = 3 f D f A f D + f A ( q 0 , D − q 0 , A ) 2 {\displaystyle \lambda _{\text{in}}=\Delta E^{*}={\frac {3f_{D}f_{A}}{f_{D}+f_{A}}}(q_{0,D}-q_{0,A})^{2}} It is fortunate that the expressions for the energies for outer and inner reorganization have the same quadratic form. Inner sphere and outer sphere reorganization energies are independent, so they can be added to give λ = λ in + λ o {\displaystyle \lambda =\lambda _{\text{in}}+\lambda _{o}} and inserted in the Arrhenius equation k act = A ⋅ e − Δ G in ‡ + Δ G o ‡ k T {\displaystyle k_{\text{act}}=A\cdot e^{-{\frac {\Delta {G_{\text{in}}^{\ddagger }}+\Delta {G_{o}}^{\ddagger }}{kT}}}} Here, A can be seen to represent the probability of electron jump, exp[-ΔGin‡/kT] that of reaching the transition state of the inner sphere and exp[-ΔGo‡/kT] that of outer sphere adjustment. 
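A minimal numerical sketch, assuming illustrative force constants, bond-length changes and an outer-sphere term, shows how the inner-sphere expression and the quadratic activation-energy formula of the previous section fit together; the prefactor A is simply treated as an assumed constant here, not derived from the theory.

```python
# Illustrative sketch: inner-sphere term, total reorganization energy and the
# resulting activation behaviour. All numerical inputs are assumptions.
import numpy as np
from scipy.constants import e as q_e, k as k_B

# Inner sphere: lambda_in = 3*fD*fA/(fD+fA)*(q0D - q0A)^2, factor 3 = 6*(1/2)
f_D, f_A = 3.9e2, 5.4e2   # breathing-mode force constants, N/m (assumed)
dq = 0.13e-10             # difference of equilibrium M-O distances, m (assumed)
lam_in = 3 * f_D*f_A/(f_D + f_A) * dq**2

lam_out = 1.0 * q_e       # assumed outer-sphere term, 1.0 eV expressed in joules
lam = lam_in + lam_out    # the two contributions simply add

T = 298.0                 # temperature, K
A = 1e12                  # assumed prefactor, s^-1 (probability of the jump)
print(f"lambda_in ≈ {lam_in/q_e:.2f} eV, lambda_total ≈ {lam/q_e:.2f} eV")

for dG0_eV in (0.0, -0.5, -lam/q_e, -2.5):
    dG0 = dG0_eV * q_e
    dG_act = (lam + dG0)**2 / (4*lam)          # quadratic Marcus expression
    k = A * np.exp(-dG_act / (k_B*T))
    print(f"dG0 = {dG0_eV:+.2f} eV   dG‡ = {dG_act/q_e:.2f} eV   k ≈ {k:.2e} s^-1")
# k peaks where dG0 = -lambda (barrierless) and falls again in the inverted region.
```

The loop makes the normal and inverted regions visible numerically: the rate rises as ΔG0 approaches −λ and decreases again for still more exergonic (assumed) values.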
For unsymmetrical (cross) reactions like [ Fe ( H 2 O ) 6 ] 2 + + [ Co ( H 2 O ) 6 ] 3 + ↽ − − ⇀ [ Fe ( H 2 O ) 6 ] 3 + + [ Co ( H 2 O ) 6 ] 2 + {\displaystyle {\ce {{[Fe(H2O)6]^{2+}}+{[Co(H2O)6]^{3+}}<=>{[Fe(H2O)6]^{3+}}+{[Co(H2O)6]^{2+}}}}} the expression for λ i n {\displaystyle \lambda _{in}} can also be derived, but it is more complicated. These reactions have a Gibbs free energy of reaction ΔG0 which is independent of the reorganization energy and determined by the different redox potentials of the iron and cobalt couple. Consequently, the quadratic Marcus equation holds also for the inner sphere reorganization energy, including the prediction of an inverted region. One may visualize this as follows: (a) in the normal region both the initial state and the final state have to have stretched bonds, (b) in the Δ G‡ = 0 case the equilibrium configuration of the initial state is the stretched configuration of the final state, and (c) in the inverted region the initial state has compressed bonds whereas the final state has largely stretched bonds. Similar considerations hold for metal complexes where the ligands are larger than solvent molecules and also for ligand-bridged polynuclear complexes. == The probability of the electron jump == The strength of the electronic coupling of the donor and acceptor decides whether the electron transfer reaction is adiabatic or non-adiabatic. In the non-adiabatic case the coupling is weak, i.e. HAB in Fig. 3 is small compared to the reorganization energy and donor and acceptor retain their identity. The system has a certain probability to jump from the initial to the final potential energy curves. In the adiabatic case the coupling is considerable, the gap of 2 HAB is larger, and the system stays on the lower potential energy curve. Marcus theory, as laid out above, represents the non-adiabatic case. Consequently, the semi-classical Landau-Zener theory can be applied, which gives the probability of interconversion of donor and acceptor for a single passage of the system through the region of the intersection of the potential energy curves P i f = 1 − exp ⁡ [ − 4 π 2 H i f 2 h v | s i − s f | ] {\displaystyle P_{if}=1-\exp \left[-{\frac {4\pi ^{2}{H_{if}^{2}}}{hv\left|s_{i}-s_{f}\right|}}\right]} where Hif is the interaction energy at the intersection, v the velocity of the system through the intersection region, and si and sf the slopes there. Working this out, one arrives at the basic equation of Marcus theory k e t = 2 π ℏ | H A B | 2 1 4 π λ k B T exp ⁡ ( − ( λ + Δ G ∘ ) 2 4 λ k B T ) {\displaystyle k_{et}={\frac {2\pi }{\hbar }}|H_{AB}|^{2}{\frac {1}{\sqrt {4\pi \lambda k_{\rm {B}}T}}}\exp \left(-{\frac {(\lambda +\Delta G^{\circ })^{2}}{4\lambda k_{\rm {B}}T}}\right)} where k e t {\displaystyle k_{et}} is the rate constant for electron transfer, | H A B | {\displaystyle |H_{AB}|} is the electronic coupling between the initial and final states, λ {\displaystyle \lambda } is the reorganization energy (both inner and outer-sphere), and Δ G ∘ {\displaystyle \Delta G^{\circ }} is the total Gibbs free energy change for the electron transfer reaction ( k B {\displaystyle k_{\rm {B}}} is the Boltzmann constant and T {\displaystyle T} is the absolute temperature). Thus Marcus's theory builds on the traditional Arrhenius equation for the rates of chemical reactions in two ways: 1. It provides a formula for the activation energy, based on a parameter called the reorganization energy, as well as the Gibbs free energy. 
The reorganization energy is defined as the energy required to "reorganize" the system structure from initial to final coordinates, without making the charge transfer. 2. It provides a formula for the pre-exponential factor in the Arrhenius equation, based on the electronic coupling between the initial and final state of the electron transfer reaction (i.e., the overlap of the electronic wave functions of the two states). == Experimental results == Marcus published his theory in 1956. For many years there was an intensive search for the inverted region, which would be a proof of the theory. But all experiments with series of reactions of more and more negative ΔG0 revealed only an increase of the reaction rate up to the diffusion limit, i.e. to a value indicating that every encounter led to electron transfer, and that limit held also for very negative ΔG0 values (Rehm-Weller behaviour). It took about 30 years until the inverted region was unequivocally substantiated by Miller, Calcaterra and Closs for an intramolecular electron transfer in a molecule where donor and acceptor are kept at a constant distance by means of a stiff spacer (Fig. 4). A posteriori one may presume that in systems where the reaction partners can diffuse freely the optimum distance for the electron jump is sought out, i.e. the distance for which ΔG‡ = 0 and ΔG0 = −λo. Since λo depends on R, λo increases for larger R and the opening of the parabola becomes smaller. It is formally always possible to close the parabola in Fig. 2 to such an extent that the f-parabola intersects the i-parabola in the apex. Then ΔG‡ = 0 always, and the rate k reaches the maximum diffusional value for all very negative ΔG0. There are, however, other concepts for the phenomenon, e.g. the participation of excited states, or that the decrease of the rate constants would be so far in the inverted region that it escapes measurement. R. A. Marcus and his coworkers have further developed the theory outlined here in several aspects. They have included, inter alia, statistical aspects and quantum effects, and they have applied the theory to chemiluminescence and electrode reactions. R. A. Marcus received the Nobel Prize in Chemistry in 1992, and his Nobel Lecture gives an extensive view of his work. == See also == Hammond's postulate Solvated electron Free-energy relationship == References == == Marcus's key papers == Marcus, R.A (1956). "On the Theory of Oxidation-Reduction Reactions Involving Electron Transfer. I" (PDF). J. Chem. Phys. 24 (5): 966–978. Bibcode:1956JChPh..24..966M. doi:10.1063/1.1742723. S2CID 16579694. Marcus, R.A (1956). "Electrostatic Free Energy and Other Properties of States Having Nonequilibrium Polarization. I" (PDF). J. Chem. Phys. 24 (5): 979–989. Bibcode:1956JChPh..24..979M. doi:10.1063/1.1742724. Marcus, R.A (1957). "On the Theory of Oxidation-Reduction Reactions Involving Electron Transfer. II. Applications to Data on the Rates of Isotopic Exchange Reactions" (PDF). J. Chem. Phys. 26 (4): 867–871. Bibcode:1957JChPh..26..867M. doi:10.1063/1.1743423. Marcus, R.A (1957). "On the Theory of Oxidation-Reduction Reactions Involving Electron Transfer. III. Applications to Data on the Rates of Organic Redox Reactions" (PDF). J. Chem. Phys. 26 (4): 872–877. Bibcode:1957JChPh..26..872M. doi:10.1063/1.1743424. Marcus, R.A (1960). "Exchange reactions and electron transfer reactions including isotopic exchange. Theory of oxidation-reduction reactions involving electron transfer. 
Part 4.—A statistical-mechanical basis for treating contributions from solvent, ligands, and inert salt" (PDF). Discuss. Faraday Soc. 29: 21–31. doi:10.1039/df9602900021. Marcus, R.A (1963). "On The Theory Of Oxidation--Reduction Reactions Involving Electron Transfer. V. Comparison And Properties Of Electrochemical And Chemical Rate Constants". J. Phys. Chem. 67 (4): 853–857. doi:10.1021/j100798a033. OSTI 4712863. Marcus, R.A (1964). "Chemical and Electrochemical Electron-Transfer Theory". Annu. Rev. Phys. Chem. 15 (1): 155–196. Bibcode:1964ARPC...15..155M. doi:10.1146/annurev.pc.15.100164.001103. Marcus, R.A (1965). "On the Theory of Electron-Transfer Reactions. VI. Unified Treatment for Homogeneous and Electrode Reactions" (PDF). J. Chem. Phys. 43 (2): 679–701. Bibcode:1965JChPh..43..679M. doi:10.1063/1.1696792. Marcus, R.A.; Sutin N (1985). "Electron transfers in chemistry and biology". Biochim. Biophys. Acta. 811 (3): 265. doi:10.1016/0304-4173(85)90014-X.
Wikipedia/Marcus_theory
The multiverse is the hypothetical set of all universes. Together, these universes are presumed to comprise everything that exists: the entirety of space, time, matter, energy, information, and the physical laws and constants that describe them. The different universes within the multiverse are called "parallel universes", "flat universes", "other universes", "alternate universes", "multiple universes", "plane universes", "parent and child universes", "many universes", or "many worlds". One common assumption is that the multiverse is a "patchwork quilt of separate universes all bound by the same laws of physics." The concept of multiple universes, or a multiverse, has been discussed throughout history. It has evolved and has been debated in various fields, including cosmology, physics, and philosophy. Some physicists have argued that the multiverse is a philosophical notion rather than a scientific hypothesis, as it cannot be empirically falsified. In recent years, there have been proponents and skeptics of multiverse theories within the physics community. Although some scientists have analyzed data in search of evidence for other universes, no statistically significant evidence has been found. Critics argue that the multiverse concept lacks testability and falsifiability, which are essential for scientific inquiry, and that it raises unresolved metaphysical issues. Max Tegmark and Brian Greene have proposed different classification schemes for multiverses and universes. Tegmark's four-level classification consists of Level I: an extension of our universe, Level II: universes with different physical constants, Level III: many-worlds interpretation of quantum mechanics, and Level IV: ultimate ensemble. Brian Greene's nine types of multiverses include quilted, inflationary, brane, cyclic, landscape, quantum, holographic, simulated, and ultimate. The ideas explore various dimensions of space, physical laws, and mathematical structures to explain the existence and interactions of multiple universes. Some other multiverse concepts include twin-world models, cyclic theories, M-theory, and black-hole cosmology. The anthropic principle suggests that the existence of a multitude of universes, each with different physical laws, could explain the asserted appearance of fine-tuning of our own universe for conscious life. The weak anthropic principle posits that we exist in one of the few universes that support life. Debates around Occam's razor and the simplicity of the multiverse versus a single universe arise, with proponents like Max Tegmark arguing that the multiverse is simpler and more elegant. The many-worlds interpretation of quantum mechanics and modal realism, the belief that all possible worlds exist and are as real as our world, are also subjects of debate in the context of the anthropic principle. == History of the concept == According to some, the idea of infinite worlds was first suggested by the pre-Socratic Greek philosopher Anaximander in the sixth century BCE. However, there is debate as to whether he believed in multiple worlds, and if he did, whether those worlds were co-existent or successive. The first to whom we can definitively attribute the concept of innumerable worlds are the Ancient Greek Atomists, beginning with Leucippus and Democritus in the 5th century BCE, followed by Epicurus (341–270 BCE) and Lucretius (1st century BCE). 
In the third century BCE, the philosopher Chrysippus suggested that the world eternally expired and regenerated, effectively suggesting the existence of multiple universes across time. The concept of multiple universes became more defined in the Middle Ages. The American philosopher and psychologist William James used the term "multiverse" in 1895, but in a different context. The concept first appeared in the modern scientific context in the course of the debate between Boltzmann and Zermelo in 1895. In Dublin in 1952, Erwin Schrödinger gave a lecture in which he jocularly warned his audience that what he was about to say might "seem lunatic". He said that when his equations seemed to describe several different histories, these were "not alternatives, but all really happen simultaneously". This sort of duality is called "superposition". == Search for evidence == In the 1990s, after works of fiction about the concept gained popularity, scientific discussions about the multiverse and journal articles about it gained prominence. Around 2010, scientists such as Stephen M. Feeney analyzed Wilkinson Microwave Anisotropy Probe (WMAP) data and claimed to find evidence suggesting that this universe collided with other (parallel) universes in the distant past. However, a more thorough analysis of data from the WMAP and from the Planck satellite, which has a resolution three times higher than WMAP, did not reveal any statistically significant evidence of such a bubble universe collision. In addition, there was no evidence of any gravitational pull of other universes on ours. In 2015, an astrophysicist may have found evidence of alternate or parallel universes by looking back to a time immediately after the Big Bang, although it is still a matter of debate among physicists. Dr. Ranga-Ram Chary, after analyzing the cosmic radiation spectrum, found a signal 4,500 times brighter than it should have been, based on the number of protons and electrons scientists believe existed in the very early universe. This signal—an emission line that arose from the formation of atoms during the era of recombination—is more consistent with a universe whose ratio of matter particles to photons is about 65 times greater than our own. There is a 30% chance that this signal is noise, and not really a signal at all; however, it is also possible that it exists because a parallel universe dumped some of its matter particles into our universe. If additional protons and electrons had been added to our universe during recombination, more atoms would have formed, more photons would have been emitted during their formation, and the signature line that arose from all of these emissions would be greatly enhanced. Chary himself is skeptical: Many other regions beyond our observable universe would exist with each such region governed by a different set of physical parameters than the ones we have measured for our universe. Chary also noted: Unusual claims like evidence for alternate universes require a very high burden of proof. The signature that Chary has isolated may be a consequence of incoming light from distant galaxies, or even from clouds of dust surrounding our own galaxy. 
== Proponents and skeptics == Modern proponents of one or more of the multiverse hypotheses include Lee Smolin, Don Page, Brian Greene, Max Tegmark, Alan Guth, Andrei Linde, Michio Kaku, David Deutsch, Leonard Susskind, Alexander Vilenkin, Yasunori Nomura, Raj Pathria, Laura Mersini-Houghton, Neil deGrasse Tyson, Sean Carroll and Stephen Hawking. Scientists who are generally skeptical of the concept of a multiverse or popular multiverse hypotheses include Sabine Hossenfelder, David Gross, Paul Steinhardt, Anna Ijjas, Abraham Loeb, David Spergel, Neil Turok, Viatcheslav Mukhanov, Michael S. Turner, Roger Penrose, George Ellis, Joe Silk, Carlo Rovelli, Adam Frank, Marcelo Gleiser, Jim Baggott and Paul Davies. == Arguments against multiverse hypotheses == In his 2003 New York Times opinion piece, "A Brief History of the Multiverse", author and cosmologist Paul Davies offered a variety of arguments that multiverse hypotheses are non-scientific: For a start, how is the existence of the other universes to be tested? To be sure, all cosmologists accept that there are some regions of the universe that lie beyond the reach of our telescopes, but somewhere on the slippery slope between that and the idea that there is an infinite number of universes, credibility reaches a limit. As one slips down that slope, more and more must be accepted on faith, and less and less is open to scientific verification. Extreme multiverse explanations are therefore reminiscent of theological discussions. Indeed, invoking an infinity of unseen universes to explain the unusual features of the one we do see is just as ad hoc as invoking an unseen Creator. The multiverse theory may be dressed up in scientific language, but in essence, it requires the same leap of faith. George Ellis, writing in August 2011, provided a criticism of the multiverse, and pointed out that it is not a traditional scientific theory. He accepts that the multiverse is thought to exist far beyond the cosmological horizon. He emphasized that it is theorized to be so far away that it is unlikely any evidence will ever be found. Ellis also explained that some theorists do not believe the lack of empirical testability and falsifiability is a major concern, but he is opposed to that line of thinking: Many physicists who talk about the multiverse, especially advocates of the string landscape, do not care much about parallel universes per se. For them, objections to the multiverse as a concept are unimportant. Their theories live or die based on internal consistency and, one hopes, eventual laboratory testing. Ellis says that scientists have proposed the idea of the multiverse as a way of explaining the nature of existence. He points out that it ultimately leaves those questions unresolved because it is a metaphysical issue that cannot be resolved by empirical science. He argues that observational testing is at the core of science and should not be abandoned: As skeptical as I am, I think the contemplation of the multiverse is an excellent opportunity to reflect on the nature of science and on the ultimate nature of existence: why we are here. … In looking at this concept, we need an open mind, though not too open. It is a delicate path to tread. Parallel universes may or may not exist; the case is unproved. We are going to have to live with that uncertainty. Nothing is wrong with scientifically based philosophical speculation, which is what multiverse proposals are. But we should name it for what it is. 
Philosopher Philip Goff argues that the inference of a multiverse to explain the apparent fine-tuning of the universe is an example of the inverse gambler's fallacy. Stoeger, Ellis, and Kircher (sec. 7) note that in a true multiverse theory, "the universes are then completely disjoint and nothing that happens in any one of them is causally linked to what happens in any other one. This lack of any causal connection in such multiverses really places them beyond any scientific support". In May 2020, astrophysicist Ethan Siegel argued in a Forbes blog post that, based on the scientific evidence available to us, parallel universes would have to remain a science fiction dream for the time being. Scientific American contributor John Horgan also argues against the idea of a multiverse, claiming that it is "bad for science." == Types == Max Tegmark and Brian Greene have devised classification schemes for the various theoretical types of multiverses and universes that they might comprise. === Max Tegmark's four levels === Cosmologist Max Tegmark has provided a taxonomy of universes beyond the familiar observable universe. The four levels of Tegmark's classification are arranged such that subsequent levels can be understood to encompass and expand upon previous levels. They are briefly described below. ==== Level I: An extension of our universe ==== A prediction of cosmic inflation is the existence of an infinite ergodic universe, which, being infinite, must contain Hubble volumes realizing all initial conditions. Accordingly, an infinite universe will contain an infinite number of Hubble volumes, all having the same physical laws and physical constants. In regard to configurations such as the distribution of matter, almost all will differ from our Hubble volume. However, because there are infinitely many, far beyond the cosmological horizon, there will eventually be Hubble volumes with similar, and even identical, configurations. Tegmark estimates that an identical volume to ours should be about 10 10 115 {\displaystyle 10^{10^{115}}} meters away from us. Given infinite space, there would be an infinite number of Hubble volumes identical to ours in the universe. This follows directly from the cosmological principle, wherein it is assumed that our Hubble volume is not special or unique. ==== Level II: Universes with different physical constants ==== In the eternal inflation theory, which is a variant of the cosmic inflation theory, the multiverse or space as a whole is stretching and will continue doing so forever, but some regions of space stop stretching and form distinct bubbles (like gas pockets in a loaf of rising bread). Such bubbles are embryonic level I multiverses. Different bubbles may experience different spontaneous symmetry breaking, which results in different properties, such as different physical constants. Level II also includes John Archibald Wheeler's oscillatory universe theory and Lee Smolin's fecund universes theory. ==== Level III: Many-worlds interpretation of quantum mechanics ==== Hugh Everett III's many-worlds interpretation (MWI) is one of several mainstream interpretations of quantum mechanics. In brief, one aspect of quantum mechanics is that certain observations cannot be predicted absolutely. Instead, there is a range of possible observations, each with a different probability. According to the MWI, each of these possible observations corresponds to a different "world" within the Universal wavefunction, with each world as real as ours. 
Suppose a six-sided die is thrown and that the result of the throw corresponds to a quantum mechanical observable. All six possible ways the die can fall correspond to six different worlds. In the case of the Schrödinger's cat thought experiment, both outcomes would be "real" in at least one "world". Tegmark argues that a Level III multiverse does not contain more possibilities in the Hubble volume than a Level I or Level II multiverse. In effect, all the different worlds created by "splits" in a Level III multiverse with the same physical constants can be found in some Hubble volume in a Level I multiverse. Tegmark writes that "The only difference between Level I and Level III is where your doppelgängers reside. In Level I they live elsewhere in good old three-dimensional space. In Level III they live on another quantum branch in infinite-dimensional Hilbert space." Similarly, all Level II bubble universes with different physical constants can, in effect, be found as "worlds" created by "splits" at the moment of spontaneous symmetry breaking in a Level III multiverse. According to Yasunori Nomura, Raphael Bousso, and Leonard Susskind, this is because global spacetime appearing in the (eternally) inflating multiverse is a redundant concept. This implies that the multiverses of Levels I, II, and III are, in fact, the same thing. This hypothesis is referred to as "Multiverse = Quantum Many Worlds". According to Yasunori Nomura, this quantum multiverse is static, and time is a simple illusion. Another version of the many-worlds idea is H. Dieter Zeh's many-minds interpretation. ==== Level IV: Ultimate ensemble ==== The ultimate mathematical universe hypothesis is Tegmark's own hypothesis. This level considers all universes that can be described by different mathematical structures to be equally real. Tegmark writes: Abstract mathematics is so general that any Theory Of Everything (TOE) which is definable in purely formal terms (independent of vague human terminology) is also a mathematical structure. For instance, a TOE involving a set of different types of entities (denoted by words, say) and relations between them (denoted by additional words) is nothing but what mathematicians call a set-theoretical model, and one can generally find a formal system that it is a model of. He argues that this "implies that any conceivable parallel universe theory can be described at Level IV" and "subsumes all other ensembles, therefore brings closure to the hierarchy of multiverses, and there cannot be, say, a Level V." Jürgen Schmidhuber, however, says that the set of mathematical structures is not even well-defined and that it admits only universe representations describable by constructive mathematics—that is, computer programs. Schmidhuber explicitly includes universe representations describable by non-halting programs whose output bits converge after a finite time, although the convergence time itself may not be predictable by a halting program, due to the undecidability of the halting problem. He also explicitly discusses the more restricted ensemble of quickly computable universes. === Brian Greene's nine types === The American theoretical physicist and string theorist Brian Greene discussed nine types of multiverses: Quilted The quilted multiverse works only in an infinite universe. With an infinite amount of space, every possible event will occur an infinite number of times. However, the speed of light prevents us from being aware of these other identical areas. 
Inflationary The inflationary multiverse is composed of various pockets in which inflation fields collapse and form new universes. Brane The brane multiverse version postulates that our entire universe exists on a membrane (brane) which floats in a higher dimension or "bulk". In this bulk, there are other membranes with their own universes. These universes can interact with one another, and when they collide, the violence and energy produced is more than enough to give rise to a Big Bang. The branes float or drift near each other in the bulk, and every few trillion years, attracted by gravity or some other force we do not understand, collide and bang into each other. This repeated contact gives rise to multiple or "cyclic" Big Bangs. This particular hypothesis falls under the string theory umbrella as it requires extra spatial dimensions. Cyclic The cyclic multiverse has multiple branes that have collided, causing Big Bangs. The universes bounce back and pass through time until they are pulled back together and again collide, destroying the old contents and creating them anew. Landscape The landscape multiverse relies on string theory's Calabi–Yau spaces. Quantum fluctuations drop the shapes to a lower energy level, creating a pocket with a set of laws different from that of the surrounding space. Quantum The quantum multiverse creates a new universe when a diversion in events occurs, as in the real-worlds variant of the many-worlds interpretation of quantum mechanics. Holographic The holographic multiverse is derived from the theory that the surface area of a space can encode the contents of the volume of the region. Simulated The simulated multiverse exists on complex computer systems that simulate entire universes. A related hypothesis, as put forward as a possibility by astronomer Avi Loeb, is that universes may be creatable in laboratories of advanced technological civilizations who have a theory of everything. Other related hypotheses include brain in a vat-type scenarios where the perceived universe is either simulated in a low-resource way or not perceived directly by the virtual/simulated inhabitant species. Ultimate The ultimate multiverse contains every mathematically possible universe under different laws of physics. === Twin-world models === There are models of two related universes that e.g. attempt to explain the baryon asymmetry – why there was more matter than antimatter at the beginning – with a mirror anti-universe. One two-universe cosmological model could explain the Hubble constant (H0) tension via interactions between the two worlds. The "mirror world" would contain copies of all existing fundamental particles. Another twin/pair-world or "bi-world" cosmology is shown to theoretically be able to solve the cosmological constant (Λ) problem, closely related to dark energy: two interacting worlds with a large Λ each could result in a small shared effective Λ. === Cyclic theories === In several theories, there is a series of, in some cases infinite, self-sustaining cycles – typically a series of Big Crunches (or Big Bounces). However, the respective universes do not exist at once but are forming or following in a logical order or sequence, with key natural constituents potentially varying between universes (see § Anthropic principle). == M-theory == A multiverse of a somewhat different kind has been envisaged within string theory and its higher-dimensional extension, M-theory. These theories require the presence of 10 or 11 spacetime dimensions respectively. 
The extra six or seven dimensions may either be compactified on a very small scale, or our universe may simply be localized on a dynamical (3+1)-dimensional object, a D3-brane. This opens up the possibility that there are other branes which could support other universes. == Black-hole cosmology == Black-hole cosmology is a cosmological model in which the observable universe is the interior of a black hole existing as one of possibly many universes inside a larger universe. This includes the theory of white holes, which are on the opposite side of space-time. == Anthropic principle == The concept of other universes has been proposed to explain how our own universe appears to be fine-tuned for conscious life as we experience it. If there were a large (possibly infinite) number of universes, each with possibly different physical laws (or different fundamental physical constants), then some of these universes (even if very few) would have the combination of laws and fundamental parameters that are suitable for the development of matter, astronomical structures, elemental diversity, stars, and planets that can exist long enough for life to emerge and evolve. The weak anthropic principle could then be applied to conclude that we (as conscious beings) would only exist in one of those few universes that happened to be finely tuned, permitting the existence of life with developed consciousness. Thus, while the probability might be extremely small that any particular universe would have the requisite conditions for life (as we understand life), those conditions do not require intelligent design as an explanation for the conditions in the Universe that promote our existence in it. An early form of this reasoning is evident in Arthur Schopenhauer's 1844 work "Von der Nichtigkeit und dem Leiden des Lebens", where he argues that our world must be the worst of all possible worlds, because if it were significantly worse in any respect it could not continue to exist. == Occam's razor == Proponents and critics disagree about how to apply Occam's razor. Critics argue that to postulate an almost infinite number of unobservable universes, just to explain our own universe, is contrary to Occam's razor. However, proponents argue that in terms of Kolmogorov complexity the proposed multiverse is simpler than a single idiosyncratic universe. For example, multiverse proponent Max Tegmark argues: [A]n entire ensemble is often much simpler than one of its members. This principle can be stated more formally using the notion of algorithmic information content. The algorithmic information content in a number is, roughly speaking, the length of the shortest computer program that will produce that number as output. For example, consider the set of all integers. Which is simpler, the whole set or just one number? Naively, you might think that a single number is simpler, but the entire set can be generated by quite a trivial computer program, whereas a single number can be hugely long. Therefore, the whole set is actually simpler... (Similarly), the higher-level multiverses are simpler. Going from our universe to the Level I multiverse eliminates the need to specify initial conditions, upgrading to Level II eliminates the need to specify physical constants, and the Level IV multiverse eliminates the need to specify anything at all... A common feature of all four multiverse levels is that the simplest and arguably most elegant theory involves parallel universes by default. 
To deny the existence of those universes, one needs to complicate the theory by adding experimentally unsupported processes and ad hoc postulates: finite space, wave function collapse and ontological asymmetry. Our judgment therefore comes down to which we find more wasteful and inelegant: many worlds or many words. Perhaps we will gradually get used to the weird ways of our cosmos and find its strangeness to be part of its charm. == Possible worlds and real worlds == In any given set of possible universes – e.g. in terms of histories or variables of nature – not all may be ever realized, and some may be realized many times. For example, over infinite time there could, in some potential theories, be infinite universes, but only a small or relatively small real number of universes where humanity could exist and only one where it ever does exist (with a unique history). It has been suggested that a universe that "contains life, in the form it has on Earth, is in a certain sense radically non-ergodic, in that the vast majority of possible organisms will never be realized". On the other hand, some scientists, theories and popular works conceive of a multiverse in which the universes are so similar that humanity exists in many equally real separate universes but with varying histories. There is a debate about whether the other worlds are real in the many-worlds interpretation (MWI) of quantum mechanics. In Quantum Darwinism one does not need to adopt a MWI in which all of the branches are equally real. === Modal realism === Possible worlds are a way of explaining probability and hypothetical statements. Some philosophers, such as David Lewis, posit that all possible worlds exist and that they are just as real as the world we live in. This position is known as modal realism. == See also == Beyond black holes – Area of study Cosmogony – Theory or model concerning the origin of the universe Eternity – Endless time or timelessness Impossible world – Term used to model separate circumstances that cannot exist together Measure problem (cosmology) – Concept in cosmology Modal realism – Philosophical concept Parallel universes in fiction – Plot device in fiction Philosophy of physics – Truths and principles of the study of matter, space, time and energy Philosophy of space and time – Branch of philosophy relating to spatiality and temporality Simulated reality – Concept of a false version of reality Twin Earth thought experiment – Thought experiment proposed by Hilary Putnam Ultimate fate of the universe – Theories about the end of the universe == References == Footnotes Citations == Further reading == == External links == Interview with Tufts cosmologist Alex Vilenkin on his book, "Many Worlds in One: The Search for Other Universes" on the podcast and public radio interview program ThoughtCast. Archived 18 August 2020 at the Wayback Machine. Multiverse – an episode of the series In Our Time with Melvyn Bragg, on BBC Radio 4. Why There Might be Many More Universes Besides Our Own, by Phillip Ball, March 21, 2016, bbc.com.
Wikipedia/Multiverse_theory
A case theory (aka theory of case, theory of a case, or theory of the case) is “a detailed, coherent, accurate story of what occurred" involving both a legal theory (i.e., claims/causes of action or affirmative defenses) and a factual theory (i.e., an explanation of how a particular course of events could have happened). That is, a case theory is a logical description of events that the attorney wants the judge or jury to adopt as their own perception of the underlying situation. The theory is often expressed in a story that should be compellingly probable. Case theory is distinguished from jurisprudence (aka legal theory) as general theory of law not specific to a case. == Examples of usage == “Judge Taylor asked lawyers for … their theories of the case because of his unfamiliarity with it …. [The Judge] agreed to …seal the defense summary of its case theory.” “Working with attorneys, Capital Case Investigators will be responsible for … consulting with attorneys to develop case theories and strategies …" == References ==
Wikipedia/Case_theory_(in_law)
Ramsey theory, named after the British mathematician and philosopher Frank P. Ramsey, is a branch of the mathematical field of combinatorics that focuses on the appearance of order in a substructure given a structure of a known size. Problems in Ramsey theory typically ask a question of the form: "how big must some structure be to guarantee that a particular property holds?" == Examples == A typical result in Ramsey theory starts with some mathematical structure that is then cut into pieces. How big must the original structure be in order to ensure that at least one of the pieces has a given interesting property? This idea can be defined as partition regularity. For example, consider a complete graph of order n; that is, there are n vertices and each vertex is connected to every other vertex by an edge. A complete graph of order 3 is called a triangle. Now colour each edge either red or blue. How large must n be in order to ensure that there is either a blue triangle or a red triangle? It turns out that the answer is 6. See the article on Ramsey's theorem for a rigorous proof. Another way to express this result is as follows: at any party with at least six people, there are three people who are all either mutual acquaintances (each one knows the other two) or mutual strangers (none of them knows either of the other two). See theorem on friends and strangers. This also is a special case of Ramsey's theorem, which says that for any given integer c and any given integers n1,...,nc, there is a number R(n1,...,nc) such that if the edges of a complete graph of order R(n1,...,nc) are coloured with c different colours, then for some i between 1 and c, it must contain a complete subgraph of order ni whose edges are all colour i. The special case above has c = 2 and n1 = n2 = 3. == Results == Two key theorems of Ramsey theory are: Van der Waerden's theorem: For any given c and n, there is a number V such that if V consecutive numbers are coloured with c different colours, then they must contain an arithmetic progression of length n whose elements are all the same colour. Hales–Jewett theorem: For any given n and c, there is a number H such that if the cells of an H-dimensional n×n×n×...×n cube are coloured with c colours, there must be one row, column, etc. of length n all of whose cells are the same colour. That is: a multi-player n-in-a-row tic-tac-toe cannot end in a draw, no matter how large n is, and no matter how many people are playing, if you play on a board with sufficiently many dimensions. The Hales–Jewett theorem implies Van der Waerden's theorem. A theorem similar to van der Waerden's theorem is Schur's theorem: for any given c there is a number N such that if the numbers 1, 2, ..., N are coloured with c different colours, then there must be a pair of integers x, y such that x, y, and x+y are all the same colour. Many generalizations of this theorem exist, including Rado's theorem, the Rado–Folkman–Sanders theorem, Hindman's theorem, and the Milliken–Taylor theorem. A classic reference for these and many other results in Ramsey theory is Graham, Rothschild, Spencer and Solymosi, updated and expanded in 2015 to its first new edition in 25 years. Results in Ramsey theory typically have two primary characteristics. Firstly, they are non-constructive: they may show that some structure exists, but they give no process for finding this structure (other than brute-force search). For instance, the pigeonhole principle is of this form. 
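The value R(3, 3) = 6 quoted above is small enough to verify by exactly the kind of brute-force search mentioned here. The following sketch is only an illustration: it checks every red/blue colouring of the edges of K5 and of K6 for a monochromatic triangle.

```python
# Brute-force check of R(3,3) = 6: some 2-colouring of K5 has no
# monochromatic triangle, but every 2-colouring of K6 has one.
from itertools import combinations, product

def has_mono_triangle(n, colouring):
    """colouring maps each edge (i, j) with i < j to colour 0 or 1."""
    return any(
        colouring[(a, b)] == colouring[(a, c)] == colouring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def every_colouring_forced(n):
    edges = list(combinations(range(n), 2))
    return all(
        has_mono_triangle(n, dict(zip(edges, colours)))
        for colours in product((0, 1), repeat=len(edges))
    )

print(every_colouring_forced(5))  # False: of the 2^10 colourings, some are triangle-free
print(every_colouring_forced(6))  # True: all 2^15 colourings of K6 contain a triangle
```

The search over K6 already visits 32,768 colourings; for larger Ramsey numbers such exhaustive checks become infeasible, which is one reason the exact values are so hard to determine.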
Secondly, while Ramsey theory results do say that sufficiently large objects must necessarily contain a given structure, often the proof of these results requires these objects to be enormously large – bounds that grow exponentially, or even as fast as the Ackermann function are not uncommon. In some small niche cases, upper and lower bounds are improved, but not in general. In many cases these bounds are artifacts of the proof, and it is not known whether they can be substantially improved. In other cases it is known that any bound must be extraordinarily large, sometimes even greater than any primitive recursive function; see the Paris–Harrington theorem for an example. Graham's number, one of the largest numbers ever used in serious mathematical proof, is an upper bound for a problem related to Ramsey theory. Another large example is the Boolean Pythagorean triples problem. Theorems in Ramsey theory are generally one of the following two types. Many such theorems, which are modeled after Ramsey's theorem itself, assert that in every partition of a large structured object, one of the classes necessarily contains its own structured object, but gives no information about which class this is. In other cases, the reason behind a Ramsey-type result is that the largest partition class always contains the desired substructure. The results of this latter kind are called either density results or Turán-type result, after Turán's theorem. Notable examples include Szemerédi's theorem, which is such a strengthening of van der Waerden's theorem, and the density version of the Hales-Jewett theorem. == See also == Ergodic Ramsey theory Extremal graph theory Goodstein's theorem Bartel Leendert van der Waerden Discrepancy theory == References == == Further reading == Landman, B. M. & Robertson, A. (2004), Ramsey Theory on the Integers, Student Mathematical Library, vol. 24, Providence, RI: AMS, ISBN 0-8218-3199-2. Ramsey, F. P. (1930), "On a Problem of Formal Logic", Proceedings of the London Mathematical Society, s2-30 (1): 264–286, doi:10.1112/plms/s2-30.1.264 (behind a paywall). Erdős, Paul; Szekeres, George (2008) [1935], "A combinatorial problem in geometry", Compositio Mathematica, 2: 463–470, doi:10.1007/978-0-8176-4842-8_3, ISBN 978-0-8176-4841-1, Zbl 0012.27010. Boolos, G.; Burgess, J. P.; Jeffrey, R. (2007), Computability and Logic (5th ed.), Cambridge: Cambridge University Press, ISBN 978-0-521-87752-7. Matthew Katz and Jan Reimann An Introduction to Ramsey Theory: Fast Functions, Infinity, and Metamathematics Student Mathematical Library Volume: 87; 2018; 207 pp; ISBN 978-1-4704-4290-3
Wikipedia/Ramsey_theory
In linguistics, X-bar theory is a model of phrase structure and a theory of syntactic category formation that proposes a universal schema for how phrases are organized. It suggests that all phrases share a common underlying structure, regardless of their specific category (noun phrase, verb phrase, etc.). This structure, known as the X-bar schema, is based on the idea that every phrase (XP, X phrase) has a head, which determines the type (syntactic category) of the phrase (X). The theory was first proposed by Noam Chomsky in 1970 reformulating the ideas of Zellig Harris (1951), and further developed by Ray Jackendoff (1974, 1977a, 1977b), along the lines of the theory of generative grammar put forth in the 1950s by Chomsky. It aimed to simplify and generalize the rules of grammar, addressing limitations of earlier phrase structure models. X-bar theory was an important step forward because it simplified the description of sentence structure. Earlier approaches needed many phrase structure rules, which went against the idea of a simple, underlying system for language. X-bar theory offered a more elegant and economical solution, aligned with the thesis of generative grammar. X-bar theory was incorporated into both transformational and nontransformational theories of syntax, including government and binding theory (GB), generalized phrase structure grammar (GPSG), lexical-functional grammar (LFG), and head-driven phrase structure grammar (HPSG). Although recent work in the minimalist program has largely abandoned X-bar schema in favor of bare phrase structure approaches, the theory's central assumptions are still valid in different forms and terms in many theories of minimalist syntax. == Background == The X-bar theory was developed to resolve the issues that phrase structure rules (PSR) under the Standard Theory had. The PSR approach has the following four main issues. It assumes exocentric structures such as "S → NP Aux VP". This is contrary to the fact that phrases have heads in all circumstances. While the sentence John talked to the man, for example, involves the PSR of a verb phrase "VP → V (PP)", John talked to the man in person involves the PSR of "VP → V (PP) (PP)". This indicates that it is necessary to posit new PSRs every time when an undefined structure is observed in E-language, which amounts to adding an indiscriminate number of grammatical rules to Universal Grammar. This poses serious issues from the perspectives of the Plato's problem and the poverty of the stimulus. It wrongly rules in structures that are impossible in natural language such as "VP → NP A PP", because as in 1 and 2, the PSR countenances phrases that do not have endocentric structures. It fails to capture sentence ambiguities because it assumes flat, nonhierarchical structures. The X-bar theory is a theory that attempts to resolve these issues by assuming the mold or template phrasal structure of "XP". == X-bar schema == === Basic principles === The "X" in the X-bar theory is equivalent to a variable in mathematics: It can be substituted by syntactic categories such as N, V, A, and P. These categories are lexemes and not phrases: The "X-bar" is a grammatical unit larger than X, thus than a lexeme, and the X-double-bar (=XP) outsizes the X(-single)-bar. X-double-bar categories are equal to phrasal categories such as NP, VP, AP, and PP. The X-bar theory assumes that all phrasal categories have the structure in Figure 1. This structure is called the X-bar schema. 
As in Figure 1, the phrasal category XP is notated by an X with a double overbar. For typewriting reasons, the bar symbol is often substituted by the prime ('), as in X'. The X-bar theory embodies two central principles. Headedness principle: Every phrase has a head. Binarity principle: Every node branches into two different nodes. The headedness principle resolves the issues 1 and 3 above simultaneously. The binarity principle is important to projection and ambiguity, which will be explained below. The X-bar schema consists of a head and its circumstantial components, in accordance with the headedness principle. The relevant components are as follows: Specifier: [obligatory] The node that is in a sister relation with an X' node. This is a term that refers to the syntactic position itself. Head: [obligatory] The core of a phrase, into which a lexeme fits. The head determines the form and characteristics of the phrase as a whole. Complement: [obligatory] An argument required by the head. Adjunct: [optional] A modifier for the phrase constituted by the head. The specifier, head, and complement are obligatory; hence, a phrasal category XP must contain one specifier, one head, and one complement. On the other hand, the adjunct is optional; hence, a phrasal category contains zero or more adjuncts. Accordingly, when a phrasal category XP does not have an adjunct, it forms the structure in Figure 2. For example, the NP linguistics in the sentence John studies linguistics has the structure in Figure 3. It is important that even if there are no candidates that can fit into the specifier and complement positions, these positions are syntactically present, and thus they are merely empty and unoccupied. (This is a natural consequence of the binarity principle.) This means that all phrasal categories have fundamentally uniform structures under the X-bar schema, which makes it unnecessary to assume that different phrases have different structures, unlike when one adopts the PSR. (This resolves the second issue above.) In the meantime, one needs to be wary of when such empty positions are representationally omitted as in Figure 4. In illustrating syntactic structures this way, at least one X'-level node is present in any circumstance because the complement is obligatory. Next, the X'' and X' inherit the characteristics of the head X. This trait inheritance is referred to as projection. Figure 5 suggests that syntactic structures are derived in a bottom-up fashion under the X-bar theory. More specifically, the structures are derived via the following processes. A lexeme is fitted into the head. Heads are sometimes called zero-level projections because they are X-zero-bar-level categories, notated as X0. The head and the complement are combined to form an X-single-bar (X, X') node, which constitutes a semi-phrasal category (a syntactic category not as big as a phrase). This category is called intermediate projection. (An adjunct, if there is any, combines with an X' to form another X'. If there is more than one adjunct, this process is repeated.) An intermediate projection combines with the specifier, forming a complete phrasal category XP (X-double-bar). This category is called maximal projection. It is important that all the processes except for the third are obligatory. This means that one phrasal category necessarily includes X0, X, and XP (=X''). Moreover, nodes bigger than X0 (thus, X and XP nodes) are called constituents. 
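Because the schema is uniform across categories, it can be mirrored by a simple recursive data structure. The fragment below is a minimal, hypothetical sketch (the class name and printed bracketing are our own choices, not a standard linguistics tool) that builds the NP linguistics and the VP studies linguistics from the X-bar template, keeping the unfilled specifier and complement positions explicitly present but empty, as the schema requires (adjuncts are omitted for brevity).

from dataclasses import dataclass
from typing import Optional

@dataclass
class XP:
    """Maximal projection: [XP specifier [X' head complement]]."""
    category: str                        # N, V, A, P, ...
    head: str                            # the lexeme occupying X0
    specifier: Optional["XP"] = None     # empty but structurally present
    complement: Optional["XP"] = None    # empty but structurally present

    def bracketed(self) -> str:
        spec = self.specifier.bracketed() if self.specifier else "e"
        comp = self.complement.bracketed() if self.complement else "e"
        x = self.category
        return f"[{x}P {spec} [{x}' {x}0:{self.head} {comp}]]"

# "linguistics": an NP whose specifier and complement positions are empty.
np = XP(category="N", head="linguistics")

# "studies linguistics": a VP headed by "studies", taking the NP as its complement.
vp = XP(category="V", head="studies", complement=np)

print(vp.bracketed())
# [VP e [V' V0:studies [NP e [N' N0:linguistics e]]]]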
=== Directionality of branching === Figures 1–5 are based on the word order of English, but the X-bar schema does not specify the directionality of branching because the binarity principle does not have a rule on it. For example, John read a long book of linguistics with a red cover, which involves two adjuncts, may have either of the structures in Figure 6 or Figure 7. (The figures follow the convention of omitting the inner structures of certain phrasal categories with triangles.) The structure in Figure 6 yields the meaning the book of linguistics with a red cover is long, and the one in Figure 7 yields the meaning the long book of linguistics is with a red cover (see also #Hierarchical structure). What is important is the directionality of the nodes N'2 and N'3: One is left-branching, while the other is right-branching. Accordingly, the X-bar theory, more specifically the binarity principle, does not impose a restriction on how a node branches. When it comes to the head and the complement, their relative order is determined based on the principles-and-parameters model of language, more specifically by the head parameter (not by the X-bar schema itself). A principle is a shared, invariable rule of grammar across languages, whereas a parameter is a typologically variable aspect of the grammars. A parameter can be set to either of the values "+" or "-": in the case of the head parameter, one configures the setting [±head first], depending on the language one primarily speaks. If this parameter is configured as [+head first], the result is a head-initial language such as English, and if it is configured as [-head first], the result is a head-final language such as Japanese. For example, the English sentence John ate an apple and its corresponding Japanese sentence have the structures in Figure 8 and Figure 9, respectively. Finally, the directionality of the specifier node is in essence unspecified as well, although this is subject to debate: Some argue that the relevant node is necessarily left-branching across languages, the idea of which is (partially) motivated by the fact that both English and Japanese have subjects on the left of a VP, whereas others such as Saito and Fukui (1998) argue that the directionality of the node is not fixed and needs to be externally determined, for example by the head parameter. == Structure of sentence == === Structure of S === Under the PSR, the structure of S (sentence) is illustrated as follows: S → NP (Aux) VP. However, this structure violates the headedness principle because it has an exocentric, headless structure, and would also violate the binarity principle if an Aux (auxiliary) occurs, because the S node will then be ternary-branching. Given these, Chomsky (1981) proposed that S is an InflP headed by the functional category Infl(ection), and later in Chomsky (1986a), this category was relabelled as I (hence constitutes an IP), following the notational convention that phrasal categories are represented in the form of XP, with two letters. The category I includes auxiliary verbs such as will and can, and clitics such as -s of the third person singular present and -ed of the past tense. This is consistent with the headedness principle, which requires that a phrase have a head, because a sentence (or a clause) necessarily involves an element that determines the inflection of a verb. Assuming that S constitutes an IP, the structure of the sentence John studies linguistics at the university, for example, can be illustrated as in Figure 10.
As is obvious, the IP hypothesis makes it possible to regard the grammatical unit of sentence as a phrasal category. It is also important that the configuration in Figure 10 is fully compatible with the central assumptions of the X-bar theory, namely the headedness principle and the binarity principle. === Structure of S' === Words that introduce subordinate or complement clauses are called complementizers, and representative of them are that, if, and for. Under the PSR, complement clauses were assumed to constitute the category S'. S' → COMP S Chomsky (1986a) proposed that this category is in fact a CP headed by the functional category C. The sentence I think that John is honest, for example, then has the following structure. Moreover, Chomsky (1986a) assumes that the landing site of wh-movement is the specifier position of CP (Spec-CP). Accordingly, the wh-question What did John eat?, for example, is derived as in Figure 12. In this derivation, the I-to-C movement is an instance of subject-auxiliary inversion (SAI), or more generally, head movement. === Other phrasal structures === VP-internal subject hypothesis: A hypothesis on the inner structure of VP proposed by researchers such as Fukui and Speas (1986) and Kitagawa (1986). It assumes that the sentential subject is base-generated in Spec-VP, not in Spec-IP. DP Hypothesis: A hypothesis proposed by Abney (1987), according to whom noun phrases are not NPs but DPs headed by the functional category D. VP shell: An analysis put forth by Larson (1988), which assumes two-layered structures of VP. Later in Chomsky (1995a, 1995b), the higher VP was replaced by vP headed by the functional category v (little/small v, traditionally written in italics). PredP Hypothesis: A hypothesis proposed by Bowers (1993, 2001), according to whom small clauses are PredPs headed by the functional category Pred. Bare Phrase Structure (BPS): A replacement of the X-bar theory put forth by Chomsky (1995a, 1995b). It dispenses with a "template" structure like the X-bar schema, and yields syntactic structures by (iterative applications of) an operation called Merge, which serves to connect two syntactic objects such as words and phrases into one. Some radical versions of it even reject syntactic category labels such as V and A. See also Minimalist Program. == Hierarchical structure == The PSR has the shortcoming of being incapable of capturing sentence ambiguities. I saw a man with binoculars. This sentence is ambiguous between the reading I saw a man, using binoculars, in which with binoculars modifies the VP, and the reading I saw a man who had binoculars, in which the PP modifies the NP. Under the PSR model, the sentence above is subject to the following two parsing rules. S → NP VP VP → V NP PP The sentence's structure under these PSRs would be as in Figure 13. It is obvious that this structure fails to capture the NP modification reading because [PP with binoculars] modifies the VP no matter how one tries to illustrate the structure. The X-bar theory, however, successfully captures the ambiguity as demonstrated in the configurations in Figure 14 and 15 below, because it assumes hierarchical structures in accordance with the binarity principle. Thus, the X-bar theory resolves the fourth issue mentioned in § Background as well. 
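How binary, hierarchical structure captures such ambiguity can be shown with two nested bracketings of the same word string. The snippet below (an informal illustration in the spirit of the trees in Figures 14 and 15, not a parser or an established formalism) encodes the VP-attachment and NP-attachment analyses of saw a man with binoculars as nested tuples: both flatten to the same sequence of words, yet they are distinct hierarchical objects.

# Each node is (label, children...); leaves are plain strings.
PP = ("PP", "with", ("NP", "binoculars"))

# Reading 1: the PP modifies the VP ("saw him, using binoculars").
vp_attachment = ("VP", ("VP", ("V", "saw"), ("NP", "a", "man")), PP)

# Reading 2: the PP modifies the NP ("the man who had binoculars").
np_attachment = ("VP", ("V", "saw"), ("NP", ("NP", "a", "man"), PP))

def words(node):
    """Flatten a tree back to its word string."""
    if isinstance(node, str):
        return [node]
    return [w for child in node[1:] for w in words(child)]

print(words(vp_attachment) == words(np_attachment))  # True: the same string of words
print(vp_attachment != np_attachment)                # True: two different hierarchical structures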
There is always a unilateral relation from syntax to semantics (never from semantics to syntax) in any version of generative grammar because syntactic computation starts from the lexicon, then continues into the syntax, then into Logical Form (LF) at which meanings are computed. This is so under any of Standard Theory (Chomsky, 1965), Extended Standard Theory (Chomsky, 1972), and Revised Extended Standard Theory (Chomsky, 1981). == Footnotes == == References == == See also ==
Wikipedia/X-bar_theory
In inorganic chemistry, crystal field theory (CFT) describes the breaking of degeneracies of electron orbital states, usually d or f orbitals, due to a static electric field produced by a surrounding charge distribution (anion neighbors). This theory has been used to describe various spectroscopies of transition metal coordination complexes, in particular optical spectra (colors). CFT successfully accounts for some magnetic properties, colors, hydration enthalpies, and spinel structures of transition metal complexes, but it does not attempt to describe bonding. CFT was developed by physicists Hans Bethe and John Hasbrouck van Vleck in the 1930s. CFT was subsequently combined with molecular orbital theory to form the more realistic and complex ligand field theory (LFT), which delivers insight into the process of chemical bonding in transition metal complexes. CFT can be complicated further by breaking assumptions made of relative metal and ligand orbital energies, requiring the use of inverted ligand field theory (ILFT) to better describe bonding. == Overview == According to crystal field theory, the interaction between a transition metal and ligands arises from the attraction between the positively charged metal cation and the negative charge on the non-bonding electrons of the ligand. The theory is developed by considering energy changes of the five degenerate d-orbitals upon being surrounded by an array of point charges consisting of the ligands. As a ligand approaches the metal ion, the electrons from the ligand will be closer to some of the d-orbitals and farther away from others, causing a loss of degeneracy. The electrons in the d-orbitals and those in the ligand repel each other due to repulsion between like charges. Thus the d-electrons closer to the ligands will have a higher energy than those further away which results in the d-orbitals splitting in energy. This splitting is affected by the following factors: the nature of the metal ion. the metal's oxidation state. A higher oxidation state leads to a larger splitting relative to the spherical field. the arrangement of the ligands around the metal ion. the coordination number of the metal (i.e. tetrahedral, octahedral...) the nature of the ligands surrounding the metal ion. The stronger the effect of the ligands then the greater the difference between the high and low energy d groups. The most common type of complex is octahedral, in which six ligands form the vertices of an octahedron around the metal ion. In octahedral symmetry the d-orbitals split into two sets with an energy difference, Δoct (the crystal-field splitting parameter, also commonly denoted by 10Dq for ten times the "differential of quanta") where the dxy, dxz and dyz orbitals will be lower in energy than the dz2 and dx2-y2, which will have higher energy, because the former group is farther from the ligands than the latter and therefore experiences less repulsion. The three lower-energy orbitals are collectively referred to as t2g, and the two higher-energy orbitals as eg. These labels are based on the theory of molecular symmetry: they are the names of irreducible representations of the octahedral point group, Oh.(see the Oh character table) Typical orbital energy diagrams are given below in the section High-spin and low-spin. Tetrahedral complexes are the second most common type; here four ligands form a tetrahedron around the metal ion. In a tetrahedral crystal field splitting, the d-orbitals again split into two groups, with an energy difference of Δtet. 
The lower energy orbitals will be dz2 and dx2-y2, and the higher energy orbitals will be dxy, dxz and dyz - opposite to the octahedral case. Furthermore, since the ligand electrons in tetrahedral symmetry are not oriented directly towards the d-orbitals, the energy splitting will be lower than in the octahedral case. Square planar and other complex geometries can also be described by CFT. The size of the gap Δ between the two or more sets of orbitals depends on several factors, including the ligands and geometry of the complex. Some ligands always produce a small value of Δ, while others always give a large splitting. The reasons behind this can be explained by ligand field theory. The spectrochemical series is an empirically-derived list of ligands ordered by the size of the splitting Δ that they produce (small Δ to large Δ; see also this table): I− < Br− < S2− < SCN− (S–bonded) < Cl− < NO3− < N3− < F− < OH− < C2O42− < H2O < NCS− (N–bonded) < CH3CN < py < NH3 < en < 2,2'-bipyridine < phen < NO2− < PPh3 < CN− < CO. It is useful to note that the ligands producing the most splitting are those that can engage in metal to ligand back-bonding. The oxidation state of the metal also contributes to the size of Δ between the high and low energy levels. As the oxidation state increases for a given metal, the magnitude of Δ increases. A V3+ complex will have a larger Δ than a V2+ complex for a given set of ligands, as the difference in charge density allows the ligands to be closer to a V3+ ion than to a V2+ ion. The smaller distance between the ligand and the metal ion results in a larger Δ, because the ligand and metal electrons are closer together and therefore repel more. === High-spin and low-spin === Ligands which cause a large splitting Δ of the d-orbitals are referred to as strong-field ligands, such as CN− and CO from the spectrochemical series. In complexes with these ligands, it is unfavourable to put electrons into the high energy orbitals. Therefore, the lower energy orbitals are completely filled before population of the upper sets starts according to the Aufbau principle. Complexes such as this are called "low spin". For example, NO2− is a strong-field ligand and produces a large Δ. The octahedral ion [Fe(NO2)6]3−, which has 5 d-electrons, would have the octahedral splitting diagram shown at right with all five electrons in the t2g level. This low spin state therefore does not follow Hund's rule. Conversely, ligands (like I− and Br−) which cause a small splitting Δ of the d-orbitals are referred to as weak-field ligands. In this case, it is easier to put electrons into the higher energy set of orbitals than it is to put two into the same low-energy orbital, because two electrons in the same orbital repel each other. So, one electron is put into each of the five d-orbitals in accord with Hund's rule, and "high spin" complexes are formed before any pairing occurs. For example, Br− is a weak-field ligand and produces a small Δoct. So, the ion [FeBr6]3−, again with five d-electrons, would have an octahedral splitting diagram where all five orbitals are singly occupied. In order for low spin splitting to occur, the energy cost of placing an electron into an already singly occupied orbital must be less than the cost of placing the additional electron into an eg orbital at an energy cost of Δ. As noted above, eg refers to the dz2 and dx2-y2 which are higher in energy than the t2g in octahedral complexes. 
If the energy required to pair two electrons is greater than Δ, the energy cost of placing an electron in an eg orbital, then high-spin splitting occurs. The crystal field splitting energy for tetrahedral metal complexes (four ligands) is referred to as Δtet, and is roughly equal to 4/9Δoct (for the same metal and same ligands). Therefore, the energy required to pair two electrons is typically higher than the energy required for placing electrons in the higher energy orbitals. Thus, tetrahedral complexes are usually high-spin. The use of these splitting diagrams can aid in the prediction of magnetic properties of coordination compounds. A compound that has unpaired electrons in its splitting diagram will be paramagnetic and will be attracted by magnetic fields, while a compound that lacks unpaired electrons in its splitting diagram will be diamagnetic and will be weakly repelled by a magnetic field. == Stabilization energy == The crystal field stabilization energy (CFSE) is the stability that results from placing a transition metal ion in the crystal field generated by a set of ligands. It arises due to the fact that when the d-orbitals are split in a ligand field (as described above), some of them become lower in energy than before with respect to a spherical field known as the barycenter in which all five d-orbitals are degenerate. For example, in an octahedral case, the t2g set becomes lower in energy than the orbitals in the barycenter. As a result of this, if there are any electrons occupying these orbitals, the metal ion is more stable in the ligand field relative to the barycenter by an amount known as the CFSE. Conversely, the eg orbitals (in the octahedral case) are higher in energy than in the barycenter, so putting electrons in these reduces the amount of CFSE. If the splitting of the d-orbitals in an octahedral field is Δoct, the three t2g orbitals are stabilized relative to the barycenter by 2/5 Δoct, and the eg orbitals are destabilized by 3/5 Δoct. As examples, consider the two d5 configurations shown further up the page. The low-spin (top) example has five electrons in the t2g orbitals, so the total CFSE is 5 x 2/5 Δoct = 2Δoct. In the high-spin (lower) example, the CFSE is (3 x 2/5 Δoct) - (2 x 3/5 Δoct) = 0; in this case, the stabilization generated by the electrons in the lower orbitals is canceled out by the destabilizing effect of the electrons in the upper orbitals. === Optical properties === The optical properties (details of absorption and emission spectra) of many coordination complexes can be explained by Crystal Field Theory. Often, however, the deeper colors of metal complexes arise from more intense charge-transfer excitations. == Geometries and splitting diagrams == == See also == Schottky anomaly – low-temperature spike in heat capacity seen in materials containing high-spin magnetic impurities, often due to crystal field splitting Ligand field theory Molecular orbital theory == References == == Further reading == Housecroft, C. E.; Sharpe, A. G. (2004). Inorganic Chemistry (2nd ed.). Prentice Hall. ISBN 978-0-13-039913-7. Miessler, G. L.; Tarr, D. A. (2003). Inorganic Chemistry (3rd ed.). Pearson Prentice Hall. ISBN 978-0-13-035471-6. Orgel, Leslie E. (1960). An introduction to transition-metal chemistry: Ligand-Field theory. Methuen. ISBN 978-0416634402. Shriver, D. F.; Atkins, P. W. (2001). Inorganic Chemistry (4th ed.). Oxford University Press. pp. 227–236. ISBN 978-0-8412-3849-7. Silberberg, Martin S (2006).
Chemistry: The Molecular Nature of Matter and Change (4th ed.). New York: McGraw Hill Company. pp. 1028–1034. ISBN 978-0-8151-8505-5. Zumdahl, Steven S (2005). Chemical Principles (5th ed.). Houghton Mifflin Company. pp. 550–551, 957–964. ISBN 978-0-669-39321-7. == External links == Crystal-field Theory, Tight-binding Method, and Jahn-Teller Effect in E. Pavarini, E. Koch, F. Anders, and M. Jarrell (eds.): Correlated Electrons: From Models to Materials, Jülich 2012, ISBN 978-3-89336-796-2
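The high-spin/low-spin choice and the CFSE bookkeeping described above amount to simple arithmetic, which the following sketch illustrates (the numerical Δoct and pairing-energy values are made up for demonstration; this is not code from any of the works cited). It fills n d-electrons into the t2g and eg levels of an octahedral complex, using low-spin filling only when Δoct exceeds the pairing energy, and reports the net stabilization in units of Δoct (2/5 Δoct gained per t2g electron, 3/5 Δoct lost per eg electron).

def octahedral_filling(n_d, delta_oct, pairing_energy):
    """Distribute n_d d-electrons over t2g (3 orbitals) and eg (2 orbitals)."""
    assert 0 <= n_d <= 10
    low_spin = delta_oct > pairing_energy   # strong-field ligands favour pairing in t2g
    if low_spin:
        t2g = min(n_d, 6)                   # fill t2g completely before touching eg
        eg = n_d - t2g
    else:
        t2g = eg = 0
        for i in range(n_d):                # high spin: singly occupy all five orbitals first
            if i < 5:
                if i < 3:
                    t2g += 1
                else:
                    eg += 1
            elif t2g < 6:                    # then pair up, lower set first
                t2g += 1
            else:
                eg += 1
    cfse = (2 * t2g - 3 * eg) / 5            # net stabilization in units of delta_oct
    return t2g, eg, cfse

# d5 ion with strong-field ligands, e.g. [Fe(NO2)6]3-: low spin, CFSE = 2 Δoct
print(octahedral_filling(5, delta_oct=2.0, pairing_energy=1.0))   # (5, 0, 2.0)
# d5 ion with weak-field ligands, e.g. [FeBr6]3-: high spin, CFSE = 0
print(octahedral_filling(5, delta_oct=0.5, pairing_energy=1.0))   # (3, 2, 0.0)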
Wikipedia/Crystal_field_theory
Constructivism in education is a theory that suggests that learners do not passively acquire knowledge through direct instruction. Instead, they construct their understanding through experiences and social interaction, integrating new information with their existing knowledge. This theory originates from Swiss developmental psychologist Jean Piaget's theory of cognitive development. == Background == Constructivism in education is rooted in epistemology, a theory of knowledge concerned with the logical categories of knowledge and its justification. It acknowledges that learners bring prior knowledge and experiences shaped by their social and cultural environment and that learning is a process of students "constructing" knowledge based on their experiences. While behaviorism focuses on understanding what students are doing, constructivism emphasizes the importance of understanding what students are thinking and how to enrich their thinking. Constructivism in educational psychology can be attributed to the work of Jean Piaget (1896–1980) and his theory of cognitive development. Piaget's focus was on how humans make meaning by integrating experiences with ideas, emphasizing human development as distinct from external influences. Another influential figure, Lev Vygotsky (1896–1934), emphasized the importance of sociocultural learning in his theory of social constructivism, highlighting how interactions with adults, peers, and cognitive tools contribute to the formation of mental constructs. Building upon Vygotsky's work, Jerome Bruner and other educational psychologists introduced the concept of instructional scaffolding, where the learning environment provides support that is gradually removed as learners internalize the knowledge. Views more focused on human development within the social sphere include the sociocultural or socio-historical perspective of Lev Vygotsky and the situated cognition perspectives of Mikhail Bakhtin, Jean Lave, and Etienne Wenger. Related contributions also appear in the works of Brown, Collins, and Duguid, as well as those of Newman, Griffin, Cole, and Barbara Rogoff. The concept of constructivism has impacted a number of disciplines, including psychology, sociology, education, and the history of science. In its early stages, constructivism focused on the relationship between human experiences and their reflexes or behavior patterns. Piaget referred to these systems of knowledge as "schemes." Piaget's theory of constructivist learning has significantly influenced learning theories and teaching methods in education. It serves as a foundational concept in education reform movements within cognitive science and neuroscience. == Overview == The formalization of constructivism from a within-the-human perspective is commonly credited to Jean Piaget. Piaget described the mechanisms by which information from the environment and ideas from the individual interact to form internalized structures developed by learners. He identified processes of assimilation and accommodation as crucial in this interaction, as individuals construct new knowledge from their experiences. When individuals assimilate new information, they integrate it into their existing framework without altering that framework. This can happen when their experiences align with their internal view of the world, but it can also occur if they fail to update a flawed understanding. Accommodation is the process of adjusting one's mental representation of the external world to fit new experiences.
It can be understood as the mechanism by which failure leads to learning. It is important to note that constructivism is not a specific pedagogy, but rather a theory explaining how learning occurs, regardless of the learning environment. However, constructivism is often associated with pedagogic approaches that promote active learning, or learning by doing. While there is much enthusiasm for constructivism as a design strategy, some experts believe that it is more of a philosophical framework than a theory that can precisely describe instruction or prescribe design strategies.: 4  == Constructivist pedagogy == === Nature of the learner === Social constructivism recognizes and embraces the individuality and complexity of each learner, actively encouraging and rewarding it as a vital component of the learning process. ==== Background and culture ==== Social constructivism, also known as socioculturalism, emphasizes the role of an individual's background, culture, and worldview in shaping their understanding of truth. According to this theory, learners inherit historical developments and symbol systems from their culture and continue to learn and develop these throughout their lives. This approach highlights the significance of a learner's social interactions with knowledgeable members of society. It suggests that without such interactions, it is challenging to grasp the social meaning of important symbol systems and learn how to effectively use them. Social constructivism also points out that young children develop their thinking abilities through interactions with peers, adults, and the physical world. Therefore, it is essential to consider the learner's background and culture throughout the learning process, as these factors help shape the knowledge and truth that the learner acquires. ==== Motivation and responsibility for learning ==== Social constructivism emphasizes the importance of the student being actively involved in the learning process, unlike previous educational viewpoints where the responsibility rested with the instructor to teach and where the learner played a passive, receptive role. Von Glasersfeld (1989) emphasized that learners construct their own understanding and that they do not simply mirror and reflect what they read. Learners look for meaning and will try to find regularity and order in the events of the world even in the absence of full or complete information. When considering students' learning, it is essential to take into account their motivation and confidence. According to Von Glasersfeld, a student's motivation to learn is strongly influenced by their belief in their potential for learning. This belief is shaped by their past experiences of successfully mastering problems, which is more influential than external acknowledgment and motivation. This idea aligns with Vygotsky's concept of the "zone of proximal development," where students are challenged at a level slightly above their current development. By successfully completing challenging tasks, students build confidence and motivation to take on even more complex challenges. According to a study on the impact that COVID-19 had on the learning process in Australian university students, a student's motivation and confidence depend on the needs described by self-determination theory. This theory requires support from the educational environment to fulfill three basic needs for growth, namely autonomy, relatedness, and competence.
During the COVID-19 pandemic, these basic needs were hindered, as were the environments meant to foster education and growth: the shift from traditional in-person classes to online classes left students with significantly fewer opportunities for social interaction and active learning. === Role of the instructor === ==== Instructors as facilitators ==== According to the social constructivist approach, instructors are expected to adapt to the role of facilitators rather than traditional teachers. While a teacher gives a didactic lecture that covers the subject matter, a facilitator assists the student in developing their own understanding of the content. This shift in roles places the focus on the student's active involvement in the learning process, as opposed to the instructor and the content itself. As a result, a facilitator requires a different set of skills compared to a teacher. For instance, a teacher imparts information, whereas a facilitator encourages questions; a teacher leads from the front, while a facilitator provides support from the background; and a teacher delivers answers based on a set curriculum, whereas a facilitator offers guidance and creates an environment for the learner to form their own conclusions. Furthermore, a teacher typically engages in a monologue, whereas a facilitator maintains an ongoing dialogue with the learners. Additionally, a facilitator should be able to dynamically adapt the learning experience by taking the lead in guiding the experience to align with the learners' interests and needs in order to create value. The learning environment should be created in a way that both supports and challenges the student's thinking. While it is advocated to give the student ownership of the problem and solution process, it is not the case that any and all activities or solutions are adequate. The critical goal is to support the student in developing effective thinking skills. ==== Relationship between instructor and students ==== In the social constructivist viewpoint, the role of the facilitator involves both the instructor and the students being actively engaged in learning from each other. This dynamic interaction requires that the instructor's culture, values, and background play a significant part in shaping the learning experience. Students compare their own thoughts with those of the instructor and their peers, leading to the development of a new, socially validated understanding of the subject matter. The task or problem serves as the interface between the instructor and the student, creating a dynamic interaction. As a result, both students and instructors need to develop an awareness of each other's viewpoints and consider their own beliefs, standards, and values, making the learning experience both subjective and objective at the same time. Several studies highlight the significance of mentoring in the learning process. The social constructivist model underscores the importance of the relationship between the student and the instructor in facilitating learning. Interactive learning can be facilitated through various approaches such as reciprocal teaching, peer collaboration, cognitive apprenticeship, problem-based instruction, anchored instruction, and other methods that involve collaborative learning.
=== Learning is an active process === Social constructivism, which is strongly influenced by Vygotsky's work, proposes that knowledge is initially built within a social setting and is then taken in by individuals. According to social constructivists, the act of sharing individual viewpoints, known as collaborative elaboration, leads to learners jointly constructing understanding that would not be achievable on their own. Social constructivist scholars view learning as an active process in which students are encouraged to discover principles, concepts, and facts independently. Therefore, it is crucial to promote speculation and intuitive thinking in students. According to other constructivist scholars, individuals create meanings through their interactions with each other and the environment they inhabit. Knowledge is created by people and is shaped by social and cultural influences. McMahon (1997) also emphasizes the social nature of learning, stating that it is not solely a mental process or a result of external factors shaping behavior. Instead, meaningful learning occurs when individuals participate in social activities. According to Vygotsky (1978), an important aspect of intellectual development is the convergence of speech and practical activity. He emphasized that as children engage in practical activities, they construct meaning on an individual level, and through speech, they connect this meaning to their culture and the interpersonal world they share with others. ==== Collaboration among learners ==== Another tenet of social constructivism is that collaboration among individuals with diverse skills and backgrounds is essential for developing a comprehensive understanding of a particular subject or field. In some social constructivist models, there is an emphasis on the importance of collaboration among learners, which contrasts with traditional competitive approaches. One concept from Vygotsky that is particularly relevant to peer collaboration is the zone of proximal development. This is defined as the gap between a learner's actual developmental level, determined by independent problem-solving, and the level of potential development, determined through problem-solving under adult guidance or in collaboration with more capable peers. It differs from Piaget's fixed biological stages of development. Through a process called "scaffolding," a learner can be extended beyond the limitations of physical maturation, allowing the development process to catch up to the learning process. When students present and teach new material to their peers, it fosters a non-linear process of collective knowledge construction. ==== Importance of context ==== The social constructivist paradigm emphasizes that the environment in which learning takes place plays a crucial role in the learning process. The concept of the learner as an active processor is based on the idea that there are no universal learning laws that apply to all domains.: 208  When individuals possess decontextualized knowledge, they may struggle to apply their understanding to real-world tasks. This is due to the lack of engagement with the concept in its complex, real-world environment, as well as the absence of experience with the intricate interrelationships that influence the application of the concept. 
One concept within social constructivism is authentic or situated learning, which involves students participating in activities directly related to the practical application of their learning within a culture similar to the real-world setting. Cognitive apprenticeship has been suggested as an effective model of constructivist learning that aims to immerse students in authentic practices through activity and social interaction, similar to the successful methods used in craft apprenticeship.: 25  Holt and Willard-Holt (2000) highlight the concept of dynamic assessment, which offers a distinct approach to evaluating learners compared to traditional tests. Dynamic assessment extends the interactive nature of learning to the assessment process, emphasizing interaction between the assessor and the learner. It involves a dialogue between the assessor and the learner to understand the current performance level on a task and explore ways to improve future performance. This approach views assessment and learning as interconnected processes, rather than separate entities. According to this viewpoint, instructors should approach assessment as an ongoing and interactive process that evaluates the learner's achievements, the quality of the learning experience, and course materials. The feedback generated by the assessment process is crucial for driving further development. === Selection, scope, and sequencing of subject matter === The organization of knowledge should prioritize integration over division into separate subjects or compartments. This again emphasizes the significance of presenting learning within a specific context. The world in which learners operate is not divided into separate subjects but rather comprises a complex array of facts, problems, dimensions, and perceptions. ==== Engaging and challenging the student ==== Students benefit from being challenged with tasks that require them to apply skills and knowledge slightly beyond their current level of mastery. This approach can help to maintain their motivation and build on past achievements to boost their confidence. This is in line with Vygotsky's zone of proximal development, which refers to the gap between a person's current level of ability and their potential level of development under the guidance of adults or more capable peers. Vygotsky (1978) argued that effective instruction should be slightly ahead of a learner's current developmental stage. By doing so, instruction can stimulate the development of a range of functions that are in the learner's zone of proximal development. This highlights the crucial role of instruction in fostering development. In order to effectively engage and challenge students, it is important that the tasks and learning environment mirror the complexity of the real-world environment in which the students are expected to operate upon completing their education. Students should not only take ownership of the learning and problem-solving process but also take ownership of the problems themselves. When it comes to organizing subject matter, the constructivist perspective suggests that the fundamental principles of any subject can be taught to anyone at any point, in some capacity. This approach entails introducing the foundational concepts that make up topics or subject areas initially and then consistently revisiting and expanding on these ideas.
Instructors should recognize that while they are given a set curriculum to follow, they inevitably personalize it to reflect their own beliefs, thoughts, and emotions about the subject matter and their students. As a result, the learning experience becomes a collaborative effort, influenced by the emotions and life experiences of all involved. It is important to consider the student's motivation as central to the learning process. ==== Structuredness of the learning process ==== Incorporating an appropriate balance between structure and flexibility into the learning process is essential. According to Savery (1994), a highly structured learning environment may pose challenges for learners in constructing meaning based on their existing conceptual understandings. A facilitator should strive to provide adequate structure to offer clear guidance and parameters for achieving learning objectives, while also allowing for an open and flexible learning experience that enables learners to discover, interact, and arrive at their own understanding of truth. === Teaching techniques === A few strategies for cooperative learning include: Reciprocal questioning: students work together to ask and answer questions Jigsaw: students become "experts" on one part of a group project and teach it to the others in their group Structured controversies: Students work together to research a particular controversy. The "Harkness" discussion method is named after Edward Harkness, who funded its development at Phillips Exeter Academy in the 1930s. This method involves students sitting in a circle, guiding their own discussion. The teacher's role is minimized, with the students initiating, directing, and focusing the discussion. They work together as a team, sharing responsibility and goals. The ultimate aim is to illuminate the subject, interpret different viewpoints, and piece together a comprehensive understanding. Discussion skills are crucial, and every participant is expected to contribute to keeping the discussion engaging and productive. == Criticism == Many cognitive psychologists and educators have raised concerns about the core principles of constructivism, arguing that these theories may be misleading or inconsistent with well-established findings. In neo-Piagetian theories of cognitive development, it is proposed that learning is influenced by the processing and representational resources available at a particular age. This implies that if the demands of a concept to be learned exceed the available processing efficiency and working memory resources, then the concept is considered unlearnable. This approach to learning can impact the understanding of essential theoretical concepts and reasoning. Therefore, for effective learning to occur, a child must operate in an environment that aligns with their developmental and individual learning constraints, taking into account any deviations from the norm for their age. If this condition is not met, the learning process may not progress as intended. Many educators have raised concerns about the effectiveness of this approach to instructional design, particularly when it comes to creating instruction for beginners. While some proponents of constructivism claim that "learning by doing" improves learning, critics argue that there is insufficient empirical evidence to support this assertion, especially for novice learners. Sweller and his colleagues argue that novices do not possess the underlying mental models, or "schemas", necessary for "learning by doing".
Additionally, Mayer (2004) conducted a review of the literature and concluded that fifty years of empirical data do not support the use of pure discovery as a constructivist teaching technique. In situations requiring discovery, he recommends the use of guided discovery instead. Some researchers, such as Kirschner et al. (2006), have characterized the constructivist teaching methods as "unguided methods of instruction" and have suggested more structured learning activities for learners with little to no prior knowledge. Slezak has expressed skepticism about constructivism, describing it as "fashionable but thoroughly problematic doctrines that can have little benefit for practical pedagogy or teacher education." Similar views have been stated by Meyer, Boden, Quale and others. Kirschner et al. grouped several learning theories together, including discovery, problem-based, experiential, and inquiry-based learning, and suggested that highly scaffolded constructivist methods such as problem-based learning and inquiry learning may be ineffective. They described several research studies that were favorable to problem-based learning given learners were provided some level of guidance and support. === Confusion with maturationism === Many people confuse constructivism with maturationism. The constructivist (or cognitive-developmental) stream "is based on the idea that the dialectic or interactionist process of development and learning through the student's active construction should be facilitated and promoted by adults". The romantic maturationist stream emphasizes the natural development of students without adult interventions in a permissive environment. In contrast, constructivism involves adults actively guiding learning while allowing children to take charge of their own learning process. == Subtypes == === Contextual constructivism === According to William Cobern (1991) Contextual constructivism is "about understanding the fundamental, culturally based beliefs that both students and teachers bring to class, and how these beliefs are supported by culture. Contextual constructivists not only raise new research questions, they also call for a new research paradigm. The focus on contextualization means that qualitative, especially ethnographic, techniques are to be preferred" (p. 3). === Radical constructivism === Ernst von Glasersfeld developed radical constructivism by coupling Piaget's theory of learning and philosophical viewpoint about the nature of knowledge with Kant's rejection of an objective reality independent of human perception or reason. Radical constructivism does not view knowledge as an attempt to generate ideas that match an independent, objective reality. Instead, theories and knowledge about the world, as generated by our senses and reason, either fit within the constraints of whatever reality may exist and, thus, are viable or do not and are not viable. As a theory of education, radical constructivism emphasizes the experiences of the learner, differences between learners and the importance of uncertainty. === Relational constructivism === Björn Kraus' relational constructivism can be perceived as a relational consequence of radical constructivism. In contrast to social constructivism, it picks up the epistemological threads and maintains the radical constructivist idea that humans cannot overcome their limited conditions of reception. 
Despite the subjectivity of human constructions of reality, relational constructivism focuses on the relational conditions that apply to human perceptual processes. === Social constructivism === In recent decades, constructivist theorists have extended the traditional focus on individual learning to address collaborative and social dimensions of learning. It is possible to see social constructivism as a bringing together of aspects of the work of Piaget with that of Bruner and Vygotsky. === Communal constructivism === The concept of communal constructivism was developed by Leask and Younie in 1995 through their research on the European SchoolNet, which demonstrated the value of experts collaborating to push the boundaries of knowledge, including communal construction of new knowledge between experts, rather than the social construction of knowledge, as described by Vygotsky, where there is a learner-to-teacher scaffolding relationship. "Communal constructivism", as a concept, applies to those situations in which there is currently no expert knowledge or research to underpin knowledge in an area. "Communal constructivism" refers, specifically, to the process of experts working together to create, record, and publish new knowledge in emerging areas. In the seminal European SchoolNet research where, for the first time, academics were testing out how the internet could support classroom practice and pedagogy, experts from a number of countries set up test situations to generate and understand new possibilities for educational practice. Bryan Holmes, in 2001, applied this to student learning, as described in an early paper: "in this model, students will not simply pass through a course like water through a sieve but instead leave their own imprint in the learning process." === Critical constructivism === Critical constructivism is a theory of learning that combines elements of constructivism and critical theory. It emphasizes the role of social and cultural factors in shaping knowledge construction. Critical constructivists argue that learners actively construct knowledge through their interactions with the world, but also recognize the power imbalances and social structures that can influence this process. Key concepts in critical constructivism include: critical consciousness – ability to critically analyze social and political structures empowerment – process of gaining control over one's own life and the lives of others social justice – pursuit of fairness and equality for all Critical constructivism has implications for education, as it suggests that teachers should create learning environments that foster critical thinking, problem-solving, and social justice. == Influence on computer science and robotics == Constructivism has influenced the course of programming and computer science. Some famous programming languages have been created, either wholly or in part, for educational use, to support the constructionist theory of Seymour Papert. These languages have been dynamically typed and reflective. Logo and its successor, Scratch, are the best known of them. Constructivism has also informed the design of interactive machine learning systems, whereas radical constructivism has been explored as a paradigm to design experiments in rehabilitation robotics, and more specifically in prosthetics.
== List of notable constructivists == Writers who influenced constructivism include: John Dewey (1859–1952) Maria Montessori (1870–1952) Władysław Strzemiński (1893–1952) Jean Piaget (1896–1980) Lev Vygotsky (1896–1934) Heinz von Foerster (1911–2002) George Kelly (1905–1967) Jerome Bruner (1915–2016) Herbert Simon (1916–2001) Paul Watzlawick (1921–2007) Ernst von Glasersfeld (1917–2010) Edgar Morin (born 1921) Humberto Maturana (1928–2021) Paulo Freire (1921–1997) == See also == Autodidactism Connectivism Constructivist epistemology Constructivist teaching methods Critical pedagogy Cultural-historical activity theory (CHAT) Educational psychology Learning styles Philosophy of education Reform mathematics Situated cognition Socratic method Teaching for social justice Vocational education APOS Theory == References == == Further reading == == External links == A journey into Constructivism by Martin Dougiamas, 1998–11. Cognitively Guided Instruction reviewed on the Promising Practices Network Sample Online Activity Objects Designed with Constructivist Approach (2007) Liberal Exchange learning resources offering a constructivist approach to learning English as a second/foreign language (2009) Lutz, S., & Huitt, W. (2018). "Connecting cognitive development and constructivism." In W. Huitt (Ed.), Becoming a Brilliant Star: Twelve core ideas supporting holistic education (pp. 45–63). IngramSpark. Definition of Constructivism by Martin Ryder (a footnote to the book chapter The Cyborg and the Noble Savage where Ryder discusses One Laptop Per Child's XO laptop from a constructivist educator's point of view)
Wikipedia/Constructivist_theory
A peculiarity of thermal motion of very long linear macromolecules in entangled polymer melts or concentrated polymer solutions is reptation. Derived from the word reptile, reptation suggests the movement of entangled polymer chains as being analogous to snakes slithering through one another. Pierre-Gilles de Gennes introduced (and named) the concept of reptation into polymer physics in 1971 to explain the dependence of the mobility of a macromolecule on its length. Reptation is used as a mechanism to explain viscous flow in an amorphous polymer. Sir Sam Edwards and Masao Doi later refined reptation theory. Similar phenomena also occur in proteins. Two closely related concepts are reptons and entanglement. A repton is a mobile point residing in the cells of a lattice, connected by bonds. Entanglement means the topological restriction of molecular motion by other chains. == Theory and mechanism == Reptation theory describes the effect of polymer chain entanglements on the relationship between molecular mass and chain relaxation time. The theory predicts that, in entangled systems, the relaxation time τ is proportional to the cube of molecular mass, M: τ ∝ M^3. The prediction of the theory can be arrived at by a relatively simple argument. First, each polymer chain is envisioned as occupying a tube of length L, through which it may move with snake-like motion (creating new sections of tube as it moves). Furthermore, if we consider a time scale comparable to τ, we may focus on the overall, global motion of the chain. Thus, we define the tube mobility as μ_tube = v / f, where v is the velocity of the chain when it is pulled by a force, f. μ_tube will be inversely proportional to the degree of polymerization (and thus also inversely proportional to chain weight). The diffusivity of the chain through the tube may then be written as D_tube = k_B T μ_tube. By then recalling that in one dimension the mean squared displacement due to Brownian motion is given by s(t)^2 = 2 D_tube t, we obtain s(t)^2 = 2 k_B T μ_tube t. The time necessary for a polymer chain to displace the length of its original tube is then t = L^2 / (2 k_B T μ_tube). By noting that this time is comparable to the relaxation time, we establish that τ ∝ L^2 / μ_tube. Since the length of the tube is proportional to the degree of polymerization, and μ_tube is inversely proportional to the degree of polymerization, we observe that τ ∝ (DP_n)^3 (and so τ ∝ M^3). From the preceding analysis, we see that molecular mass has a very strong effect on relaxation time in entangled polymer systems. Indeed, this is significantly different from the unentangled case, where relaxation time is observed to be proportional to molecular mass. This strong effect can be understood by recognizing that, as chain length increases, the number of tangles present will dramatically increase. These tangles serve to reduce chain mobility. The corresponding increase in relaxation time can result in viscoelastic behavior, which is often observed in polymer melts. Note that the polymer's zero-shear viscosity gives an approximation of the actually observed dependency, τ ∝ M^3.4; this relaxation time has nothing to do with the reptation relaxation time. == Models == Entangled polymers are characterized by an effective internal scale, commonly known as the length of macromolecule between adjacent entanglements, M_e. Entanglements with other polymer chains restrict polymer chain motion to a thin virtual tube passing through the restrictions.
Without breaking polymer chains to allow the restricted chain to pass through it, the chain must be pulled or flow through the restrictions. The mechanism for movement of the chain through these restrictions is called reptation. In the blob model, the polymer chain is made up of n Kuhn lengths of individual length l. The chain is assumed to form blobs between each entanglement, containing n_e Kuhn length segments in each. The mathematics of random walks can show that the average end-to-end distance of a section of a polymer chain made up of n_e Kuhn lengths is d = l √n_e. Therefore, if there are n total Kuhn lengths and A blobs on a particular chain: A = n / n_e. The total end-to-end length of the restricted chain L is then: L = A d = (n l √n_e) / n_e = n l / √n_e. This is the average length a polymer molecule must diffuse to escape from its particular tube, and so the characteristic time for this to happen can be calculated using diffusive equations. A classical derivation gives the reptation time t: t = l^2 n^3 μ / (n_e k T), where μ is the coefficient of friction on a particular polymer chain, k is the Boltzmann constant, and T is the absolute temperature. Linear macromolecules reptate if the length of the macromolecule M is bigger than the critical entanglement molecular weight M_c. M_c is 1.4 to 3.5 times M_e. There is no reptation motion for polymers with M < M_c, so that the point M_c is a point of dynamic phase transition. Due to the reptation motion, the coefficient of self-diffusion and the conformational relaxation times of macromolecules depend on the length of the macromolecule as M^-2 and M^3, respectively. The conditions for the existence of reptation in the thermal motion of macromolecules of complex architecture (macromolecules in the form of branch, star, comb and others) have not been established yet. The dynamics of shorter chains, or of long chains at short times, is usually described by the Rouse model. == See also == Important publications in polymer physics Polymer characterization Polymer physics Protein dynamics Soft matter == References ==
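The scaling in these expressions is easy to evaluate numerically. The snippet below (a schematic sketch with made-up parameter values, intended only to display the scaling, not to model a real melt) computes the tube contour length L = n l / √n_e and the reptation time t = l^2 n^3 μ / (n_e k T) for illustrative inputs, and confirms that doubling the number of Kuhn segments multiplies the reptation time by 2^3 = 8.

from math import sqrt

k_B = 1.380649e-23  # Boltzmann constant, J/K

def tube_length(n, n_e, l):
    """Contour length of the confining tube for n Kuhn segments of length l."""
    return n * l / sqrt(n_e)

def reptation_time(n, n_e, l, mu, T):
    """Blob-model estimate t = l^2 n^3 mu / (n_e k_B T)."""
    return l**2 * n**3 * mu / (n_e * k_B * T)

# Made-up illustrative parameters: 1 nm Kuhn length, 100 Kuhn segments per blob,
# a chain friction coefficient of 1e-9 kg/s, room temperature.
l, n_e, mu, T = 1e-9, 100, 1e-9, 300.0

print(tube_length(1_000, n_e, l))   # about 1e-07 m, much shorter than the stretched length n*l = 1e-06 m
print(reptation_time(2_000, n_e, l, mu, T) / reptation_time(1_000, n_e, l, mu, T))  # 8.0: the n^3 scaling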
Wikipedia/Reptation_theory
Film theory is a set of scholarly approaches within the academic discipline of film or cinema studies that began in the 1920s by questioning the formal essential attributes of motion pictures, and that now provides conceptual frameworks for understanding film's relationship to reality, the other arts, individual viewers, and society at large. Film theory is not to be confused with general film criticism or film history, though these three disciplines interrelate. Although some branches of film theory are derived from linguistics and literary theory, it also originated in and overlaps with the philosophy of film. == History == === Early theory, before 1945 === French philosopher Henri Bergson's Matter and Memory (1896) anticipated the development of film theory during the birth of cinema in the early twentieth century. Bergson commented on the need for new ways of thinking about movement, and coined the terms "the movement-image" and "the time-image". However, in his 1906 essay L'illusion cinématographique (in L'évolution créatrice; English: The cinematic illusion) he rejects film as an example of what he had in mind. Nonetheless, decades later, in Cinéma I and Cinema II (1983–1985), the philosopher Gilles Deleuze took Matter and Memory as the basis of his philosophy of film and revisited Bergson's concepts, combining them with the semiotics of Charles Sanders Peirce. Early film theory arose in the silent era and was mostly concerned with defining the crucial elements of the medium. Ricciotto Canudo was an early Italian film theoretician who saw cinema as "plastic art in motion", and gave cinema the label "the Sixth Art", later changed to "the Seventh Art". In 1915, Vachel Lindsay wrote a book on film, followed a year later by Hugo Münsterberg. Lindsay argued that films could be classified into three categories: action films, intimate films, and films of splendour. According to him, the action film was sculpture-in-motion, the intimate film painting-in-motion, and the splendour film architecture-in-motion. He also argued against the contemporary practice of calling films photoplays and treating them as filmed theatre, seeing film instead as a medium with opportunities born of the camera. He also described cinema as hieroglyphic in the sense of containing symbols in its images. He believed this visuality gave film the potential for universal accessibility. Münsterberg in turn noted the analogies between cinematic techniques and certain mental processes. For example, he compared the close-up to the mind paying attention. The flashback, in turn, was similar to remembering. This was later followed by the formalism of Rudolf Arnheim, who studied how techniques influenced film as art. Among early French theorists, Germaine Dulac brought the concept of impressionism to film by describing cinema that explored the malleability of the border between internal experience and external reality, for example through superimposition. Surrealism also had an influence on early French film culture. The term photogénie was important to both, having been brought into use by Louis Delluc in 1919 and becoming widespread as a way to capture the unique power of cinema. Jean Epstein noted how filming gives a "personality" or a "spirit" to objects while also being able to reveal "the untrue, the unreal, the 'surreal'". This was similar to defamiliarization used by avant-garde artists to recreate the world. He saw the close-up as the essence of photogénie. Béla Balázs also praised the close-up for similar reasons.
Arnheim also believed defamiliarization to be a critical element of film. After the Russian Revolution, a chaotic situation in the country also created a sense of excitement at new possibilities. This gave rise to montage theory in the work of Dziga Vertov and Sergei Eisenstein. After the establishment of the Moscow Film School, Lev Kuleshov set up a workshop to study the formal structure of film, focusing on editing as "the essence of cinematography". This produced findings on the Kuleshov effect. Editing was also associated with the foundational Marxist concept of dialectical materialism. To this end, Eisenstein claimed that "montage is conflict". Eisenstein's theories focused on montage's ability to create meaning that transcends the sum of its parts, producing a thematic effect much as ideograms turn graphics into abstract symbols. Multiple scenes could work to produce themes (tonal montage), while multiple themes could create even higher levels of meaning (intellectual montage). Vertov in turn focused on developing Kino-Pravda, film truth, and the Kino-Eye, which he claimed showed a deeper truth than could be seen with the naked eye. === Later theory, after 1945 === In the years after World War II, the French film critic and theorist André Bazin argued that film's essence lay in its ability to mechanically reproduce reality, not in its difference from reality. This had followed the rise of poetic realism in French cinema in the 1930s. He believed that the purpose of art is to preserve reality, even famously claiming that "The photographic image is the object itself". Based on this, he advocated for the use of long takes and deep focus, to reveal the structural depth of reality and to find meaning objectively in images. This was soon followed by the rise of Italian neorealism. Siegfried Kracauer was also notable for arguing that realism is the most important function of cinema. The auteur theory derived from the approach of critic and filmmaker Alexandre Astruc, among others, and was originally developed in articles in Cahiers du Cinéma, a film journal that had been co-founded by Bazin. François Truffaut issued auteurism's manifestos in two Cahiers essays: "Une certaine tendance du cinéma français" (January 1954) and "Ali Baba et la 'Politique des auteurs'" (February 1955). His approach was brought to American criticism by Andrew Sarris in 1962. The auteur theory was based on films depicting the directors' own worldviews and impressions of the subject matter, by varying lighting, camerawork, staging, editing, and so on. Georges Sadoul held that a film's putative "author" could even be an actor, but that a film is in any case a collaborative work. Aljean Harmetz pointed to the major control exercised even by film executives. David Kipen's view that the screenwriter is a film's main author is termed Schreiber theory. In the 1960s and 1970s, film theory took up residence in academia, importing concepts from established disciplines like psychoanalysis, gender studies, anthropology, literary theory, semiotics and linguistics, as advanced by scholars such as Christian Metz. However, not until the late 1980s or early 1990s did film theory per se achieve much prominence in American universities by displacing the prevailing humanistic, auteur theory that had dominated cinema studies and which had been focused on the practical elements of film writing, production, editing and criticism. American scholar David Bordwell has spoken against many prominent developments in film theory since the 1970s.
He uses the derogatory term "SLAB theory" to refer to film studies based on the ideas of Ferdinand de Saussure, Jacques Lacan, Louis Althusser, and Roland Barthes. Instead, Bordwell promotes what he describes as "neoformalism" (a revival of formalist film theory). During the 1990s, the digital revolution in image technologies influenced film theory in various ways. Theorists such as Mary Ann Doane, Philip Rosen and Laura Mulvey (the last informed by psychoanalysis) refocused attention on celluloid film's ability to capture an "indexical" image of a moment in time. From a psychoanalytical perspective, after the Lacanian notion of "the Real", Slavoj Žižek offered new aspects of "the gaze" extensively used in contemporary film analysis. From the 1990s onward the Matrixial theory of artist and psychoanalyst Bracha L. Ettinger revolutionized feminist film theory. Her concept The Matrixial Gaze, which establishes a feminine gaze, articulates its differences from the phallic gaze and its relation to feminine as well as maternal specificities and potentialities of "coemergence", and offers a critique of Sigmund Freud's and Jacques Lacan's psychoanalysis, is extensively used in the analysis of films by female authors, such as Chantal Akerman, as well as by male authors, such as Pedro Almodóvar. The matrixial gaze offers the female the position of a subject, not of an object, of the gaze, while deconstructing the structure of the subject itself, and offers border-time, border-space and a possibility for compassion and witnessing. Ettinger's notions articulate the links between aesthetics, ethics and trauma. There has also been a historical revisiting of early cinema screenings, practices and spectatorship modes by writers Tom Gunning, Miriam Hansen and Yuri Tsivian. In Critical Cinema: Beyond the Theory of Practice (2011), Clive Meyer suggests that 'cinema is a different experience to watching a film at home or in an art gallery', and argues for film theorists to re-engage the specificity of philosophical concepts for cinema as a medium distinct from others. == Specific theories of film == == See also == Cinematography Digital cinema 3D film Film Film studies Glossary of motion picture terms Invisible auditor List of film periodicals Narrative film Philosophy of film Psychology of film == References == == Further reading == Dudley Andrew, Concepts in Film Theory, Oxford, New York: Oxford University Press, 1984. Dudley Andrew, The Major Film Theories: An Introduction, Oxford, New York: Oxford University Press, 1976. Francesco Casetti, Theories of Cinema, 1945–1990, Austin: University of Texas Press, 1999. Stanley Cavell, The World Viewed: Reflections on the Ontology of Film (1971); 2nd enlarged ed. (1979) Bill Nichols, Representing Reality: Issues and Concepts in Documentary, Bloomington: Indiana University Press, 1991. The Oxford Guide to Film Studies, edited by John Hill and Pamela Church Gibson, Oxford University Press, 1998. The Routledge Encyclopedia of Film Theory, edited by Edward Branigan, Warren Buckland, Routledge, 2015.
Wikipedia/Film_theory
A polymer field theory is a statistical field theory describing the statistical behavior of a neutral or charged polymer system. It can be derived by transforming the partition function from its standard many-dimensional integral representation over the particle degrees of freedom in a functional integral representation over an auxiliary field function, using either the Hubbard–Stratonovich transformation or the delta-functional transformation. Computer simulations based on polymer field theories have been shown to deliver useful results, for example to calculate the structures and properties of polymer solutions (Baeurle 2007, Schmid 1998), polymer melts (Schmid 1998, Matsen 2002, Fredrickson 2002) and thermoplastics (Baeurle 2006). == Canonical ensemble == === Particle representation of the canonical partition function === The standard continuum model of flexible polymers, introduced by Edwards (Edwards 1965), treats a solution composed of n {\displaystyle n} linear monodisperse homopolymers as a system of coarse-grained polymers, in which the statistical mechanics of the chains is described by the continuous Gaussian thread model (Baeurle 2007) and the solvent is taken into account implicitly. The Gaussian thread model can be viewed as the continuum limit of the discrete Gaussian chain model, in which the polymers are described as continuous, linearly elastic filaments. The canonical partition function of such a system, kept at an inverse temperature β = 1 / k B T {\displaystyle \beta =1/k_{B}T} and confined in a volume V {\displaystyle V} , can be expressed as Z ( n , V , β ) = 1 n ! ( λ T 3 ) n N ∏ j = 1 n ∫ D r j exp ⁡ ( − β Φ 0 [ r ] − β Φ ¯ [ r ] ) , ( 1 ) {\displaystyle Z(n,V,\beta )={\frac {1}{n!(\lambda _{T}^{3})^{nN}}}\prod _{j=1}^{n}\int D\mathbf {r} _{j}\exp \left(-\beta \Phi _{0}\left[\mathbf {r} \right]-\beta {\bar {\Phi }}\left[\mathbf {r} \right]\right),\qquad (1)} where Φ ¯ [ r ] {\displaystyle {\bar {\Phi }}\left[\mathbf {r} \right]} is the potential of mean force given by, Φ ¯ [ r ] = N 2 2 ∑ j = 1 n ∑ k = 1 n ∫ 0 1 d s ∫ 0 1 d s ′ Φ ¯ ( | r j ( s ) − r k ( s ′ ) | ) − 1 2 n N Φ ¯ ( 0 ) , ( 2 ) {\displaystyle {\bar {\Phi }}\left[\mathbf {r} \right]={\frac {N^{2}}{2}}\sum _{j=1}^{n}\sum _{k=1}^{n}\int _{0}^{1}ds\int _{0}^{1}ds'{\bar {\Phi }}\left(\left|\mathbf {r} _{j}(s)-\mathbf {r} _{k}(s')\right|\right)-{\frac {1}{2}}nN{\bar {\Phi }}(0),\qquad (2)} representing the solvent-mediated non-bonded interactions among the segments, while Φ 0 [ r ] {\displaystyle \Phi _{0}[\mathbf {r} ]} represents the harmonic binding energy of the chains. The latter energy contribution can be formulated as Φ 0 [ r ] = 3 k B T 2 N b 2 ∑ l = 1 n ∫ 0 1 d s | d r l ( s ) d s | 2 , {\displaystyle \Phi _{0}[\mathbf {r} ]={\frac {3k_{B}T}{2Nb^{2}}}\sum _{l=1}^{n}\int _{0}^{1}ds\left|{\frac {d\mathbf {r} _{l}(s)}{ds}}\right|^{2},} where b {\displaystyle b} is the statistical segment length and N {\displaystyle N} the polymerization index. === Field-theoretic transformation === To derive the basic field-theoretic representation of the canonical partition function, one introduces in the following the segment density operator of the polymer system ρ ^ ( r ) = N ∑ j = 1 n ∫ 0 1 d s δ ( r − r j ( s ) ) . {\displaystyle {\hat {\rho }}(\mathbf {r} )=N\sum _{j=1}^{n}\int _{0}^{1}ds\delta \left(\mathbf {r} -\mathbf {r} _{j}(s)\right).} Using this definition, one can rewrite Eq. (2) as Φ ¯ [ r ] = 1 2 ∫ d r ∫ d r ′ ρ ^ ( r ) Φ ¯ ( | r − r ′ | ) ρ ^ ( r ′ ) − 1 2 n N Φ ¯ ( 0 ) . 
( 3 ) {\displaystyle {\bar {\Phi }}\left[\mathbf {r} \right]={\frac {1}{2}}\int d\mathbf {r} \int d\mathbf {r} '{\hat {\rho }}(\mathbf {r} ){\bar {\Phi }}(\left|\mathbf {r} -\mathbf {r} '\right|){\hat {\rho }}(\mathbf {r} ')-{\frac {1}{2}}nN{\bar {\Phi }}(0).\qquad (3)} Next, one converts the model into a field theory by making use of the Hubbard-Stratonovich transformation or delta-functional transformation ∫ D ρ δ [ ρ − ρ ^ ] F [ ρ ] = F [ ρ ^ ] , ( 4 ) {\displaystyle \int D\rho \;\delta \left[\rho -{\hat {\rho }}\right]F\left[\rho \right]=F\left[{\hat {\rho }}\right],\qquad (4)} where F [ ρ ^ ] {\displaystyle F\left[{\hat {\rho }}\right]} is a functional and δ [ ρ − ρ ^ ] {\displaystyle \delta \left[\rho -{\hat {\rho }}\right]} is the delta functional given by δ [ ρ − ρ ^ ] = ∫ D w e i ∫ d r w ( r ) [ ρ ( r ) − ρ ^ ( r ) ] , ( 5 ) {\displaystyle \delta \left[\rho -{\hat {\rho }}\right]=\int Dwe^{i\int d\mathbf {r} w(\mathbf {r} )\left[\rho (\mathbf {r} )-{\hat {\rho }}(\mathbf {r} )\right]},\qquad (5)} with w ( r ) = ∑ G w ( G ) exp ⁡ [ i G r ] {\displaystyle w(\mathbf {r} )=\sum \nolimits _{\mathbf {G} }w(\mathbf {G} )\exp \left[i\mathbf {G} \mathbf {r} \right]} representing the auxiliary field function. Here we note that, expanding the field function in a Fourier series, implies that periodic boundary conditions are applied in all directions and that the G {\displaystyle \mathbf {G} } -vectors designate the reciprocal lattice vectors of the supercell. === Basic field-theoretic representation of canonical partition function === Using the Eqs. (3), (4) and (5), we can recast the canonical partition function in Eq. (1) in field-theoretic representation, which leads to Z ( n , V , β ) = Z 0 ∫ D w exp ⁡ [ − 1 2 β V 2 ∫ d r d r ′ w ( r ) Φ ¯ − 1 ( r − r ′ ) w ( r ′ ) ] Q n [ i w ] , ( 6 ) {\displaystyle Z(n,V,\beta )=Z_{0}\int Dw\exp \left[-{\frac {1}{2\beta V^{2}}}\int d\mathbf {r} d\mathbf {r} 'w(\mathbf {r} ){\bar {\Phi }}^{-1}(\mathbf {r} -\mathbf {r} ')w(\mathbf {r} ')\right]Q^{n}[iw],\qquad (6)} where Z 0 = 1 n ! ( exp ⁡ ( β / 2 N Φ ¯ ( 0 ) ) Z ′ λ 3 N ( T ) ) n {\displaystyle Z_{0}={\frac {1}{n!}}\left({\frac {\exp \left(\beta /2N{\bar {\Phi }}(0)\right)Z'}{\lambda ^{3N}(T)}}\right)^{n}} can be interpreted as the partition function for an ideal gas of non-interacting polymers and Z ′ = ∫ D R exp ⁡ [ − β U 0 ( R ) ] ( 7 ) {\displaystyle Z'=\int D\mathbf {R} \exp \left[-\beta U_{0}(\mathbf {R} )\right]\qquad (7)} is the path integral of a free polymer in a zero field with elastic energy U 0 [ R ] = k B T 4 R g 0 2 ∫ 0 1 d s | d R ( s ) d s | 2 . {\displaystyle U_{0}[\mathbf {R} ]={\frac {k_{B}T}{4R_{g0}^{2}}}\int _{0}^{1}ds\left|{\frac {d\mathbf {R} (s)}{ds}}\right|^{2}.} In the latter equation the unperturbed radius of gyration of a chain R g 0 = N b 2 / ( 6 ) {\displaystyle R_{g0}={\sqrt {Nb^{2}/(6)}}} . Moreover, in Eq. (6) the partition function of a single polymer, subjected to the field w ( R ) {\displaystyle w(\mathbf {R} )} , is given by Q [ i w ] = ∫ D R exp ⁡ [ − β U 0 [ R ] − i N ∫ 0 1 d s w ( R ( s ) ) ] ∫ D R exp ⁡ [ − β U 0 [ R ] ] . 
( 8 ) {\displaystyle Q[iw]={\frac {\int D\mathbf {R} \exp \left[-\beta U_{0}[\mathbf {R} ]-iN\int _{0}^{1}ds\;w(\mathbf {R} (s))\right]}{\int D\mathbf {R} \exp \left[-\beta U_{0}[\mathbf {R} ]\right]}}.\qquad (8)} == Grand canonical ensemble == === Basic field-theoretic representation of grand canonical partition function === To derive the grand canonical partition function, we use its standard thermodynamic relation to the canonical partition function, given by Ξ ( μ , V , β ) = ∑ n = 0 ∞ e β μ n Z ( n , V , β ) , {\displaystyle \Xi (\mu ,V,\beta )=\sum _{n=0}^{\infty }e^{\beta \mu n}Z(n,V,\beta ),} where μ {\displaystyle \mu } is the chemical potential and Z ( n , V , β ) {\displaystyle Z(n,V,\beta )} is given by Eq. (6). Performing the sum, this provides the field-theoretic representation of the grand canonical partition function, Ξ ( ξ , V , β ) = γ Φ ¯ ∫ D w exp ⁡ [ − S [ w ] ] , {\displaystyle \Xi (\xi ,V,\beta )=\gamma _{\bar {\Phi }}\int Dw\exp \left[-S[w]\right],} where S [ w ] = 1 2 β V 2 ∫ d r d r ′ w ( r ) Φ ¯ − 1 ( r − r ′ ) w ( r ′ ) − ξ Q [ i w ] {\displaystyle S[w]={\frac {1}{2\beta V^{2}}}\int d\mathbf {r} d\mathbf {r} 'w(\mathbf {r} ){\bar {\Phi }}^{-1}(\mathbf {r} -\mathbf {r} ')w(\mathbf {r} ')-\xi Q[iw]} is the grand canonical action with Q [ i w ] {\displaystyle Q[iw]} defined by Eq. (8) and the constant γ Φ ¯ = 1 2 ∏ G ( 1 π β Φ ¯ ( G ) ) 1 / 2 . {\displaystyle \gamma _{\bar {\Phi }}={\frac {1}{\sqrt {2}}}\prod _{\mathbf {G} }\left({\frac {1}{\pi \beta {\bar {\Phi }}(\mathbf {G} )}}\right)^{1/2}.} Moreover, the parameter related to the chemical potential is given by ξ = exp ⁡ ( β μ + β / 2 N Φ ¯ ( 0 ) ) Z ′ λ 3 N ( T ) , {\displaystyle \xi ={\frac {\exp \left(\beta \mu +\beta /2N{\bar {\Phi }}(0)\right)Z'}{\lambda ^{3N}(T)}},} where Z ′ {\displaystyle Z'} is provided by Eq. (7). == Mean field approximation == A standard approximation strategy for polymer field theories is the mean field (MF) approximation, which consists in replacing the many-body interaction term in the action by a term where all bodies of the system interact with an average effective field. This approach reduces any multi-body problem into an effective one-body problem by assuming that the partition function integral of the model is dominated by a single field configuration. A major benefit of solving problems with the MF approximation, or its numerical implementation commonly referred to as the self-consistent field theory (SCFT), is that it often provides some useful insights into the properties and behavior of complex many-body systems at relatively low computational cost. Successful applications of this approximation strategy can be found for various systems of polymers and complex fluids, like e.g. strongly segregated block copolymers of high molecular weight, highly concentrated neutral polymer solutions or highly concentrated block polyelectrolyte (PE) solutions (Schmid 1998, Matsen 2002, Fredrickson 2002). There are, however, a multitude of cases for which SCFT provides inaccurate or even qualitatively incorrect results (Baeurle 2006a). These comprise neutral polymer or polyelectrolyte solutions in dilute and semidilute concentration regimes, block copolymers near their order-disorder transition, polymer blends near their phase transitions, etc. 
In such situations the partition function integral defining the field-theoretic model is not entirely dominated by a single MF configuration, and field configurations far from it can make important contributions, which require the use of more sophisticated calculation techniques beyond the MF level of approximation. == Higher-order corrections == One possibility to face the problem is to calculate higher-order corrections to the MF approximation. Tsonchev et al. developed such a strategy including leading (one-loop) order fluctuation corrections, which allowed them to gain new insights into the physics of confined PE solutions (Tsonchev 1999). However, in situations where the MF approximation is bad, many computationally demanding higher-order corrections to the integral are necessary to get the desired accuracy. == Renormalization techniques == An alternative theoretical tool to cope with strong-fluctuation problems occurring in field theories was provided in the late 1940s by the concept of renormalization, which was originally devised to calculate functional integrals arising in quantum field theories (QFTs). In QFTs, a standard approximation strategy is to expand the functional integrals in a power series in the coupling constant using perturbation theory. Unfortunately, most of the expansion terms generally turn out to be infinite, rendering such calculations impracticable (Shirkov 2001). A way to remove the infinities from QFTs is to make use of the concept of renormalization (Baeurle 2007). It mainly consists in replacing the bare values of the coupling parameters, like e.g. electric charges or masses, by renormalized coupling parameters and requiring that the physical quantities do not change under this transformation, thereby leading to finite terms in the perturbation expansion. A simple physical picture of the procedure of renormalization can be drawn from the example of a classical electrical charge, Q {\displaystyle Q} , inserted into a polarizable medium, such as in an electrolyte solution. At a distance r {\displaystyle r} from the charge, due to polarization of the medium, its Coulomb field will effectively depend on a function Q ( r ) {\displaystyle Q(r)} , i.e. the effective (renormalized) charge, instead of the bare electrical charge, Q {\displaystyle Q} . At the beginning of the 1970s, K.G. Wilson further pioneered the power of renormalization concepts by developing the formalism of renormalization group (RG) theory, to investigate critical phenomena of statistical systems (Wilson 1971). === Renormalization group theory === The RG theory makes use of a series of RG transformations, each of which consists of a coarse-graining step followed by a change of scale (Wilson 1974). In the case of statistical-mechanical problems, the steps are implemented by successively eliminating and rescaling the degrees of freedom in the partition sum or integral that defines the model under consideration. De Gennes used this strategy to establish an analogy between the behavior of the zero-component classical vector model of ferromagnetism near the phase transition and a self-avoiding random walk of a polymer chain of infinite length on a lattice, to calculate the polymer excluded volume exponents (de Gennes 1972). Adapting this concept to field-theoretic functional integrals implies studying in a systematic way how a field theory model changes while eliminating and rescaling a certain number of degrees of freedom from the partition function integral (Wilson 1974).
=== Hartree renormalization === An alternative approach is known as the Hartree approximation or self-consistent one-loop approximation (Amit 1984). It takes advantage of Gaussian fluctuation corrections to the 0 t h {\displaystyle 0^{th}} -order MF contribution to renormalize the model parameters and extract in a self-consistent way the dominant length scale of the concentration fluctuations in critical concentration regimes. === Tadpole renormalization === In a more recent work Efimov and Nogovitsin showed that an alternative renormalization technique originating from QFT, based on the concept of tadpole renormalization, can be a very effective approach for computing functional integrals arising in statistical mechanics of classical many-particle systems (Efimov 1996). They demonstrated that the main contributions to classical partition function integrals are provided by low-order tadpole-type Feynman diagrams, which account for divergent contributions due to particle self-interaction. The renormalization procedure performed in this approach acts on the self-interaction contribution of a charge (like e.g. an electron or an ion), resulting from the static polarization induced in the vacuum due to the presence of that charge (Baeurle 2007). As evidenced by Efimov and Ganbold in an earlier work (Efimov 1991), the procedure of tadpole renormalization can be employed very effectively to remove the divergences from the action of the basic field-theoretic representation of the partition function and leads to an alternative functional integral representation, called the Gaussian equivalent representation (GER). They showed that the procedure provides functional integrals with significantly ameliorated convergence properties for analytical perturbation calculations. In subsequent works Baeurle et al. developed effective low-cost approximation methods based on the tadpole renormalization procedure, which have been shown to deliver useful results for prototypical polymer and PE solutions (Baeurle 2006a, Baeurle 2006b, Baeurle 2007a). == Numerical simulation == Another possibility is to use Monte Carlo (MC) algorithms and to sample the full partition function integral in field-theoretic formulation. The resulting procedure is then called a polymer field-theoretic simulation. In a recent work, however, Baeurle demonstrated that MC sampling in conjunction with the basic field-theoretic representation is impracticable due to the so-called numerical sign problem (Baeurle 2002). The difficulty is related to the complex and oscillatory nature of the resulting distribution function, which causes poor statistical convergence of the ensemble averages of the desired thermodynamic and structural quantities. In such cases special analytical and numerical techniques are necessary to accelerate the statistical convergence (Baeurle 2003, Baeurle 2003a, Baeurle 2004). === Mean field representation === To make the methodology amenable to computation, Baeurle proposed to shift the contour of integration of the partition function integral through the homogeneous MF solution using Cauchy's integral theorem, providing its so-called mean-field representation. This strategy was previously successfully employed by Baer et al. in field-theoretic electronic structure calculations (Baer 1998). Baeurle could demonstrate that this technique provides a significant acceleration of the statistical convergence of the ensemble averages in the MC sampling procedure (Baeurle 2002, Baeurle 2002a).
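The numerical sign problem mentioned in the Numerical simulation section can be illustrated with a toy average that is unrelated to the polymer field theory itself. The sketch below is a hypothetical example: it estimates an oscillatory expectation value by naive Monte Carlo sampling, and as the oscillation frequency grows the exact answer becomes exponentially small while the sample fluctuations stay of order one, so the relative error explodes.

```python
import math
import random

# Toy sign problem: estimate I(k) = E[cos(k*x)] for x ~ N(0, 1).
# The exact value is exp(-k^2/2); for large k the signal is exponentially small
# while the sample variance remains of order one.
random.seed(0)
n_samples = 100_000

for k in (1.0, 5.0, 10.0):
    samples = [math.cos(k * random.gauss(0.0, 1.0)) for _ in range(n_samples)]
    estimate = sum(samples) / n_samples
    exact = math.exp(-k * k / 2.0)
    print(f"k={k:4.1f}  MC estimate={estimate:+.2e}  exact={exact:.2e}")
# For k = 10 the naive estimate is pure statistical noise, many orders of
# magnitude larger than the exact answer.
```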
=== Gaussian equivalent representation === In subsequent works Baeurle et al. (Baeurle 2002, Baeurle 2002a, Baeurle 2003, Baeurle 2003a, Baeurle 2004) applied the concept of tadpole renormalization, leading to the Gaussian equivalent representationof the partition function integral, in conjunction with advanced MC techniques in the grand canonical ensemble. They could convincingly demonstrate that this strategy provides a further boost in the statistical convergence of the desired ensemble averages (Baeurle 2002). == References == Baeurle, S.A.; Nogovitsin, E.A. (2007). "Challenging scaling laws of flexible polyelectrolyte solutions with effective renormalization concepts". Polymer. 48 (16): 4883. doi:10.1016/j.polymer.2007.05.080. Schmid, F. (1998). "Self-consistent-field theories for complex fluids". J. Phys.: Condens. Matter. 10 (37): 8105–8138. arXiv:cond-mat/9806277. Bibcode:1998JPCM...10.8105S. doi:10.1088/0953-8984/10/37/002. S2CID 250772406. Matsen, M.W. (2002). "The standard Gaussian model for block copolymer melts". J. Phys.: Condens. Matter. 14 (2): R21 – R47. Bibcode:2002JPCM...14R..21M. doi:10.1088/0953-8984/14/2/201. S2CID 250888356. Fredrickson, G.H.; Ganesan, V.; Drolet, F. (2002). "Field-Theoretic Computer Simulation Methods for Polymers and Complex Fluids". Macromolecules. 35 (1): 16–39. Bibcode:2002MaMol..35...16F. doi:10.1021/ma011515t. Baeurle, S.A.; Usami, T.; Gusev, A.A. (2006). "A new multiscale modeling approach for the prediction of mechanical properties of polymer-based nanomaterials". Polymer. 47 (26): 8604. doi:10.1016/j.polymer.2006.10.017. Edwards, S.F. (1965). "The statistical mechanics of polymers with excluded volume". Proc. Phys. Soc. 85 (4): 613–624. Bibcode:1965PPS....85..613E. doi:10.1088/0370-1328/85/4/301. Baeurle, S.A.; Efimov, G.V.; Nogovitsin, E.A. (2006a). "Calculating field theories beyond the mean-field level". Europhys. Lett. 75 (3): 378. Bibcode:2006EL.....75..378B. doi:10.1209/epl/i2006-10133-6. S2CID 250825211. Tsonchev, S.; Coalson, R.D.; Duncan, A. (1999). "Statistical mechanics of charged polymers in electrolyte solutions: A lattice field theory approach". Phys. Rev. E. 60 (4): 4257–4267. arXiv:cond-mat/9902325. Bibcode:1999PhRvE..60.4257T. doi:10.1103/PhysRevE.60.4257. PMID 11970278. S2CID 8754634. Shirkov, D.V. (2001). "Fifty years of the renormalization group". CERN Courier. 41: 14. Wilson, K.G. (1971). "Renormalization Group and Critical Phenomena. II. Phase-Space Cell Analysis of Critical Behavior". Phys. Rev. B. 4 (9): 3184. Bibcode:1971PhRvB...4.3184W. doi:10.1103/PhysRevB.4.3184. Wilson, K.G.; Kogut J. (1974). "The renormalization group and the ε expansion". Phys. Rep. 12 (2): 75. Bibcode:1974PhR....12...75W. doi:10.1016/0370-1573(74)90023-4. de Gennes, P.G. (1972). "Exponents for the excluded volume problem as derived by the Wilson method". Phys. Lett. 38 A: 339. Amit, D.J. (1984). "Field theory, the renormalization group, and critical phenomena". Singapore, World Scientific. ISBN 9812561196. Efimov, G.V.; Nogovitsin, E.A. (1996). "The partition functions of classical systems in the Gaussian equivalent representation of functional integrals". Physica A. 234 (1–2): 506–522. Bibcode:1996PhyA..234..506V. doi:10.1016/S0378-4371(96)00279-8. Efimov, G.V.; Ganbold, G. (1991). "Functional Integrals in the Strong Coupling Regime and the Polaron Self-Energy". Physica Status Solidi. 168 (1): 165–178. Bibcode:1991PSSBR.168..165E. doi:10.1002/pssb.2221680116. hdl:10068/325205. Baeurle, S.A.; Efimov, G.V.; Nogovitsin, E.A. (2006b). 
"On a new self-consistent-field theory for the canonical ensemble". J. Chem. Phys. 124 (22): 224110. Bibcode:2006JChPh.124v4110B. doi:10.1063/1.2204913. PMID 16784266. Baeurle, S.A.; Charlot, M.; Nogovitsin E.A. (2007a). "Grand canonical investigations of prototypical polyelectrolyte models beyond the mean field level of approximation". Phys. Rev. E. 75 (1): 011804. Bibcode:2007PhRvE..75a1804B. doi:10.1103/PhysRevE.75.011804. Baeurle, S.A. (2002). "Method of Gaussian Equivalent Representation: A New Technique for Reducing the Sign Problem of Functional Integral Methods". Phys. Rev. Lett. 89 (8): 080602. Bibcode:2002PhRvL..89h0602B. doi:10.1103/PhysRevLett.89.080602. PMID 12190451. Baeurle, S.A. (2003). "Computation within the auxiliary field approach". J. Comput. Phys. 184 (2): 540–558. Bibcode:2003JCoPh.184..540B. doi:10.1016/S0021-9991(02)00036-0. Baeurle, S.A. (2003a). "The stationary phase auxiliary field Monte Carlo method: a new strategy for reducing the sign problem of auxiliary field methodologies". Comput. Phys. Commun. 154 (2): 111–120. Bibcode:2003CoPhC.154..111B. doi:10.1016/S0010-4655(03)00284-4. Baeurle, S.A. (2004). "Grand canonical auxiliary field Monte Carlo: a new technique for simulating open systems at high density". Comput. Phys. Commun. 157 (3): 201–206. Bibcode:2004CoPhC.157..201B. doi:10.1016/j.comphy.2003.11.001. Baer, R.; Head-Gordon, M.; Neuhauser, D. (1998). "Shifted-contour auxiliary field Monte Carlo for ab initio electronic structure: Straddling the sign problem". J. Chem. Phys. 109 (15): 6219. Bibcode:1998JChPh.109.6219B. doi:10.1063/1.477300. Baeurle, S.A.; Martonak, R.; Parrinello, M. (2002a). "A field-theoretical approach to simulation in the classical canonical and grand canonical ensemble". J. Chem. Phys. 117 (7): 3027. Bibcode:2002JChPh.117.3027B. doi:10.1063/1.1488587. == External links == University of Regensburg Research Group on Theory and Computation of Advanced Materials
Wikipedia/Polymer_field_theory
In physics, the dynamo theory proposes a mechanism by which a celestial body such as Earth or a star generates a magnetic field. The dynamo theory describes the process through which a rotating, convecting, and electrically conducting fluid can maintain a magnetic field over astronomical time scales. A dynamo is thought to be the source of the Earth's magnetic field and the magnetic fields of Mercury and the Jovian planets. == History of theory == When William Gilbert published De Magnete in 1600, he concluded that the Earth is magnetic and proposed the first hypothesis for the origin of this magnetism: permanent magnetism such as that found in lodestone. In 1822, André-Marie Ampère proposed that internal currents are responsible for Earth's magnetism. In 1919, Joseph Larmor proposed that a dynamo might be generating the field. However, even after he advanced his hypothesis, some prominent scientists advanced alternative explanations. The Nobel Prize winner Patrick Blackett did a series of experiments looking for a fundamental relation between angular momentum and magnetic moment, but found none. Walter M. Elsasser, considered a "father" of the presently accepted dynamo theory as an explanation of the Earth's magnetism, proposed that this magnetic field resulted from electric currents induced in the fluid outer core of the Earth. He revealed the history of the Earth's magnetic field through pioneering the study of the magnetic orientation of minerals in rocks. In order to maintain the magnetic field against ohmic decay (which would occur for the dipole field in 20,000 years), the outer core must be convecting. The convection is likely some combination of thermal and compositional convection. The mantle controls the rate at which heat is extracted from the core. Heat sources include gravitational energy released by the compression of the core, gravitational energy released by the rejection of light elements (probably sulfur, oxygen, or silicon) at the inner core boundary as it grows, latent heat of crystallization at the inner core boundary, and radioactivity of potassium, uranium and thorium. At the dawn of the 21st century, numerical modeling of the Earth's magnetic field is far from precise. Initial models are focused on field generation by convection in the planet's fluid outer core. It was possible to show the generation of a strong, Earth-like field when the model assumed a uniform core-surface temperature and exceptionally high viscosities for the core fluid. Computations which incorporated more realistic parameter values yielded magnetic fields that were less Earth-like, but indicated that model refinements may ultimately lead to an accurate analytic model. Slight variations in the core-surface temperature, in the range of a few millikelvins, result in significant increases in convective flow and produce more realistic magnetic fields. == Formal definition == Dynamo theory describes the process through which a rotating, convecting, and electrically conducting fluid acts to maintain a magnetic field. This theory is used to explain the presence of anomalously long-lived magnetic fields in astrophysical bodies. The conductive fluid in the geodynamo is liquid iron in the outer core, and in the solar dynamo is ionized gas at the tachocline. Dynamo theory of astrophysical bodies uses magnetohydrodynamic equations to investigate how the fluid can continuously regenerate the magnetic field. 
It was once believed that the dipole, which comprises much of the Earth's magnetic field and is misaligned along the rotation axis by 11.3 degrees, was caused by permanent magnetization of the materials in the earth. This means that dynamo theory was originally used to explain the Sun's magnetic field in its relationship with that of the Earth. However, this hypothesis, which was initially proposed by Joseph Larmor in 1919, has been modified due to extensive studies of magnetic secular variation, paleomagnetism (including polarity reversals), seismology, and the solar system's abundance of elements. Also, the application of the theories of Carl Friedrich Gauss to magnetic observations showed that Earth's magnetic field had an internal, rather than external, origin. There are three requisites for a dynamo to operate: An electrically conductive fluid medium Kinetic energy provided by planetary rotation An internal energy source to drive convective motions within the fluid. In the case of the Earth, the magnetic field is induced and constantly maintained by the convection of liquid iron in the outer core. A requirement for the induction of field is a rotating fluid. Rotation in the outer core is supplied by the Coriolis effect caused by the rotation of the Earth. The Coriolis force tends to organize fluid motions and electric currents into columns (also see Taylor columns) aligned with the rotation axis. Induction or generation of magnetic field is described by the induction equation: ∂ B ∂ t = η ∇ 2 B + ∇ × ( u × B ) {\displaystyle {\frac {\partial \mathbf {B} }{\partial t}}=\eta \nabla ^{2}\mathbf {B} +\nabla \times (\mathbf {u} \times \mathbf {B} )} where u is velocity, B is magnetic field, t is time, and η = 1 / ( σ μ ) {\displaystyle \eta =1/(\sigma \mu )} is the magnetic diffusivity with σ {\displaystyle \sigma } electrical conductivity and μ {\displaystyle \mu } permeability. The ratio of the second term on the right hand side to the first term gives the magnetic Reynolds number, a dimensionless ratio of advection of magnetic field to diffusion. === Tidal heating supporting a dynamo === Tidal forces between celestial orbiting bodies cause friction that heats up their interiors. This is known as tidal heating, and it helps keep the interior in a liquid state. A liquid interior that can conduct electricity is required to produce a dynamo. Saturn's Enceladus and Jupiter's Io have enough tidal heating to liquify their inner cores, but they may not create a dynamo because they cannot conduct electricity. Mercury, despite its small size, has a magnetic field, because it has a conductive liquid core created by its iron composition and friction resulting from its highly elliptical orbit. It is theorized that the Moon once had a magnetic field, based on evidence from magnetized lunar rocks, due to its short-lived closer distance to Earth creating tidal heating. An orbit and rotation of a planet helps provide a liquid core, and supplements kinetic energy that supports a dynamo action. == Kinematic dynamo theory == In kinematic dynamo theory the velocity field is prescribed, instead of being a dynamic variable: The model makes no provision for the flow distorting in response to the magnetic field. This method cannot provide the time variable behaviour of a fully nonlinear chaotic dynamo, but can be used to study how magnetic field strength varies with the flow structure and speed. 
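The magnetic Reynolds number defined above gives a quick way of judging whether a prescribed flow could plausibly sustain a dynamo, which is the question the kinematic analysis below formalizes. The following sketch uses purely illustrative values, assumed here rather than taken from the article, for a conducting planetary core.

```python
import math

# Rough estimate of the magnetic Reynolds number Rm = u*L/eta, with eta = 1/(sigma*mu).
# All numbers below are illustrative assumptions, not measured core parameters.
mu0 = 4.0e-7 * math.pi       # vacuum permeability, H/m
sigma = 1.0e6                # electrical conductivity, S/m (assumed)
u = 1.0e-3                   # typical flow speed, m/s (assumed)
L = 1.0e6                    # characteristic length scale, m (assumed)

eta = 1.0 / (sigma * mu0)    # magnetic diffusivity, m^2/s
Rm = u * L / eta
print(f"magnetic diffusivity eta = {eta:.2f} m^2/s")
print(f"magnetic Reynolds number Rm = {Rm:.0f}")
# Rm well above roughly 10-100 suggests that advection of the field dominates
# diffusion, a necessary (though not sufficient) condition for dynamo action.
```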
Using Maxwell's equations simultaneously with the curl of Ohm's law, one can derive what is basically a linear eigenvalue equation for magnetic fields (B), which can be done when assuming that the magnetic field is independent from the velocity field. One arrives at a critical magnetic Reynolds number, above which the flow strength is sufficient to amplify the imposed magnetic field, and below which the magnetic field dissipates. === Practical measure of possible dynamos === The most functional feature of kinematic dynamo theory is that it can be used to test whether a velocity field is or is not capable of dynamo action. By experimentally applying a certain velocity field to a small magnetic field, one can observe whether the magnetic field tends to grow (or not) in response to the applied flow. If the magnetic field does grow, then the system is either capable of dynamo action or is a dynamo, but if the magnetic field does not grow, then it is simply referred to as “not a dynamo”. An analogous method called the membrane paradigm is a way of looking at black holes that allows for the material near their surfaces to be expressed in the language of dynamo theory. === Spontaneous breakdown of a topological supersymmetry === Kinematic dynamo can be also viewed as the phenomenon of the spontaneous breakdown of the topological supersymmetry of the associated stochastic differential equation related to the flow of the background matter. Within stochastic supersymmetric theory, this supersymmetry is an intrinsic property of all stochastic differential equations, its interpretation is that the model's phase space preserves continuity via continuous time flows. When the continuity of that flow spontaneously breaks down, the system is in the stochastic state of deterministic chaos. In other words, kinematic dynamo arises because of chaotic flow in the underlying background matter. == Nonlinear dynamo theory == The kinematic approximation becomes invalid when the magnetic field becomes strong enough to affect the fluid motions. In that case the velocity field becomes affected by the Lorentz force, and so the induction equation is no longer linear in the magnetic field. In most cases this leads to a quenching of the amplitude of the dynamo. Such dynamos are sometimes also referred to as hydromagnetic dynamos. Virtually all dynamos in astrophysics and geophysics are hydromagnetic dynamos. The main idea of the theory is that any small magnetic field existing in the outer core creates currents in the moving fluid there due to Lorentz force. These currents create further magnetic field due to Ampere's law. With the fluid motion, the currents are carried in a way that the magnetic field gets stronger (as long as u ⋅ ( J × B ) {\displaystyle \;\mathbf {u} \cdot (\mathbf {J} \times \mathbf {B} )\;} is negative). Thus a "seed" magnetic field can get stronger and stronger until it reaches some value that is related to existing non-magnetic forces. Numerical models are used to simulate fully nonlinear dynamos. The following equations are used: The induction equation, presented above. 
Maxwell's equations for negligible electric field: ∇ ⋅ B = 0 ∇ × B = μ 0 J {\displaystyle {\begin{aligned}&\nabla \cdot \mathbf {B} =0\\[1ex]&\nabla \times \mathbf {B} =\mu _{0}\mathbf {J} \end{aligned}}} The continuity equation for conservation of mass, for which the Boussinesq approximation is often used: ∇ ⋅ u = 0 , {\displaystyle \nabla \cdot \mathbf {u} =0,} The Navier-Stokes equation for conservation of momentum, again in the same approximation, with the magnetic force and gravitation force as the external forces: D u D t = − 1 ρ 0 ∇ p + ν ∇ 2 u + ρ ′ g + 2 Ω × u + Ω × Ω × R + 1 ρ 0 J × B , {\displaystyle {\frac {D\mathbf {u} }{Dt}}=-{\frac {1}{\rho _{0}}}\nabla p+\nu \nabla ^{2}\mathbf {u} +\rho '\mathbf {g} +2{\boldsymbol {\Omega }}\times \mathbf {u} +{\boldsymbol {\Omega }}\times {\boldsymbol {\Omega }}\times \mathbf {R} +{\frac {1}{\rho _{0}}}\mathbf {J} \times \mathbf {B} ~,} where ν {\displaystyle \,\nu \,} is the kinematic viscosity, ρ 0 {\displaystyle \,\rho _{0}\,} is the mean density and ρ ′ {\displaystyle \rho '} is the relative density perturbation that provides buoyancy (for thermal convection ρ ′ = α Δ T {\displaystyle \;\rho '=\alpha \Delta T\;} where α {\displaystyle \,\alpha \,} is coefficient of thermal expansion), Ω {\displaystyle \,\Omega \,} is the rotation rate of the Earth, and J {\displaystyle \,\mathbf {J} \,} is the electric current density. A transport equation, usually of heat (sometimes of light element concentration): ∂ T ∂ t = κ ∇ 2 T + ε {\displaystyle {\frac {\,\partial T\,}{\partial t}}=\kappa \nabla ^{2}T+\varepsilon } where T is temperature, κ = k / ρ c p {\displaystyle \;\kappa =k/\rho c_{p}\;} is the thermal diffusivity with k thermal conductivity, c p {\displaystyle \,c_{p}\,} heat capacity, and ρ {\displaystyle \rho } density, and ε {\displaystyle \varepsilon } is an optional heat source. Often the pressure is the dynamic pressure, with the hydrostatic pressure and centripetal potential removed. These equations are then non-dimensionalized, introducing the non-dimensional parameters, R a = g α T D 3 ν κ , E = ν Ω D 2 , P r = ν κ , P m = ν η {\displaystyle R_{\mathsf {a}}={\frac {\,g\alpha TD^{3}\,}{\nu \kappa }}\;,\quad E={\frac {\nu }{\,\Omega D^{2}\,}}\;,\quad P_{\mathsf {r}}={\frac {\,\nu \,}{\kappa }}\;,\quad P_{\mathsf {m}}={\frac {\,\nu \,}{\eta }}} where Ra is the Rayleigh number, E the Ekman number, Pr and Pm the Prandtl and magnetic Prandtl number. Magnetic field scaling is often in Elsasser number units B = ( ρ Ω / σ ) 1 / 2 . {\displaystyle B=(\rho \Omega /\sigma )^{1/2}.} === Energy conversion between magnetic and kinematic energy === The scalar product of the above form of Navier-Stokes equation with ρ 0 u {\displaystyle \;\rho _{0}\mathbf {u} \;} gives the rate of increase of kinetic energy density, 1 2 ρ 0 u 2 c {\displaystyle \;{\tfrac {1}{2}}\rho _{0}u^{2}c\;} , on the left-hand side. The last term on the right-hand side is then u ⋅ ( J × B ) {\displaystyle \;\mathbf {u} \cdot (\mathbf {J} \times \mathbf {B} )\;} , the local contribution to the kinetic energy due to Lorentz force. The scalar product of the induction equation with 1 μ 0 B {\textstyle {\tfrac {1}{\mu _{0}}}\mathbf {B} } gives the rate of increase of the magnetic energy density, 1 2 μ 0 B 2 {\displaystyle \;{\tfrac {1}{2}}\mu _{0}B^{2}\;} , on the left-hand side. The last term on the right-hand side is then 1 μ 0 B ⋅ ( ∇ × ( u × B ) ) . 
{\textstyle {\tfrac {1}{\mu _{0}}}\mathbf {B} \cdot \left(\nabla \times \left(\mathbf {u} \times \mathbf {B} \right)\right)\;.} Since the equation is volume-integrated, this term is equivalent up to a boundary term (and with the double use of the scalar triple product identity) to − u ⋅ ( 1 μ 0 ( ∇ × B ) × B ) = − u ⋅ ( J × B ) {\textstyle \;-\mathbf {u} \cdot \left({\tfrac {1}{\mu _{0}}}\left(\nabla \times \mathbf {B} \right)\times \mathbf {B} \right)=-\mathbf {u} \cdot \left(\mathbf {J} \times \mathbf {B} \right)~} (where one of Maxwell's equations was used). This is the local contribution to the magnetic energy due to fluid motion. Thus the term − u ⋅ ( J × B ) {\displaystyle \;-\mathbf {u} \cdot (\mathbf {J} \times \mathbf {B} )\;} is the rate of transformation of kinetic energy to magnetic energy. This has to be non-negative at least in part of the volume, for the dynamo to produce magnetic field. From the diagram above, it is not clear why this term should be positive. A simple argument can be based on consideration of net effects. To create the magnetic field, the net electric current must wrap around the axis of rotation of the planet. In that case, for the term to be positive, the net flow of conducting matter must be towards the axis of rotation. The diagram only shows a net flow from the poles to the equator. However mass conservation requires an additional flow from the equator toward the poles. If that flow was along the axis of rotation, that implies the circulation would be completed by a flow from the ones shown towards the axis of rotation, producing the desired effect. === Order of magnitude of the magnetic field created by Earth's dynamo === The above formula for the rate of conversion of kinetic energy to magnetic energy, is equivalent to a rate of work done by a force of J × B {\displaystyle \;\mathbf {J} \times \mathbf {B} \;} on the outer core matter, whose velocity is u {\displaystyle \mathbf {u} } . This work is the result of non-magnetic forces acting on the fluid. Of those, the gravitational force and the centrifugal force are conservative and therefore have no overall contribution to fluid moving in closed loops. Ekman number (defined above), which is the ratio between the two remaining forces, namely the viscosity and Coriolis force, is very low inside Earth's outer core, because its viscosity is low (1.2–1.5 ×10−2 pascal-second) due to its liquidity. Thus the main time-averaged contribution to the work is from Coriolis force, whose size is − 2 ρ Ω × u , {\displaystyle \;-2\rho \,\mathbf {\Omega } \times \mathbf {u} \;,} though this quantity and J × B {\displaystyle \mathbf {J} \times \mathbf {B} } are related only indirectly and are not in general equal locally (thus they affect each other but not in the same place and time). The current density J is itself the result of the magnetic field according to Ohm's law. Again, due to matter motion and current flow, this is not necessarily the field at the same place and time. However these relations can still be used to deduce orders of magnitude of the quantities in question. In terms of order of magnitude, J B ∼ ρ Ω u {\displaystyle \;J\,B\sim \rho \,\Omega \,u\;} and J ∼ σ u B {\displaystyle \;J\sim \sigma uB\;} , giving σ u B 2 ∼ ρ Ω u , {\displaystyle \;\sigma \,u\,B^{2}\sim \rho \,\Omega \,u\;,} or: B ∼ ρ Ω σ {\displaystyle B\sim {\sqrt {{\frac {\,\rho \,\Omega \,}{\sigma }}\;}}} The exact ratio between both sides is the square root of Elsasser number. 
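A short numerical check of this order-of-magnitude estimate is sketched below, using the representative outer-core values quoted in the following paragraph; prefactors of order one are ignored, exactly as in the estimate itself.

```python
import math

# Order-of-magnitude check of B ~ sqrt(rho*Omega/sigma) for Earth's outer core,
# using the representative values quoted in the text.
rho = 1.0e4                      # outer-core density, kg/m^3
omega = 2.0 * math.pi / 86400.0  # Earth's rotation rate, rad/s (~7.3e-5)
sigma = 1.0e7                    # electrical conductivity, S/m

B_core = math.sqrt(rho * omega / sigma)          # field strength in the core, tesla
B_surface = B_core * (2890.0 / 6370.0) ** 3      # dipole falloff to Earth's surface
print(f"B in the core    ~ {B_core:.1e} T")      # ~2.7e-4 T
print(f"B at the surface ~ {B_surface:.1e} T")   # ~2.5e-5 T
```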
Note that the magnetic field direction cannot be inferred from this approximation (at least not its sign) as it appears squared, and is, indeed, sometimes reversed, though in general it lies on a similar axis to that of Ω {\displaystyle \mathbf {\Omega } } . For earth outer core, ρ is approximately 104 kg/m3, Ω = 2π/day = 7.3×10−5/second and σ is approximately 107Ω−1m−1 . This gives 2.7×10−4 Tesla. The magnetic field of a magnetic dipole has an inverse cubic dependence in distance, so its order of magnitude at the earth surface can be approximated by multiplying the above result with (Router core⁄REarth )3 = (2890⁄6370)3 = 0.093 , giving 2.5×10−5 Tesla, not far from the measured value of 3×10−5 Tesla at the equator. == Numerical models == Broadly, models of the geodynamo attempt to produce magnetic fields consistent with observed data given certain conditions and equations as mentioned in the sections above. Implementing the magnetohydrodynamic equations successfully was of particular significance because they pushed dynamo models to self-consistency. Though geodynamo models are especially prevalent, dynamo models are not necessarily restricted to the geodynamo; solar and general dynamo models are also of interest. Studying dynamo models has utility in the field of geophysics as doing so can identify how various mechanisms form magnetic fields like those produced by astrophysical bodies like Earth and how they cause magnetic fields to exhibit certain features, such as pole reversals. The equations used in numerical models of dynamo are highly complex. For decades, theorists were confined to two dimensional kinematic dynamo models described above, in which the fluid motion is chosen in advance and the effect on the magnetic field calculated. The progression from linear to nonlinear, three dimensional models of dynamo was largely hindered by the search for solutions to magnetohydrodynamic equations, which eliminate the need for many of the assumptions made in kinematic models and allow self-consistency. The first self-consistent dynamo models, ones that determine both the fluid motions and the magnetic field, were developed by two groups in 1995, one in Japan and one in the United States. The latter was made as a model with regards to the geodynamo and received significant attention because it successfully reproduced some of the characteristics of the Earth's field. Following this breakthrough, there was a large swell in development of reasonable, three dimensional dynamo models. Though many self-consistent models now exist, there are significant differences among the models, both in the results they produce and the way they were developed. Given the complexity of developing a geodynamo model, there are many places where discrepancies can occur such as when making assumptions involving the mechanisms that provide energy for the dynamo, when choosing values for parameters used in equations, or when normalizing equations. In spite of the many differences that may occur, most models have shared features like clear axial dipoles. In many of these models, phenomena like secular variation and geomagnetic polarity reversals have also been successfully recreated. === Observations === Many observations can be made from dynamo models. Models can be used to estimate how magnetic fields vary with time and can be compared to observed paleomagnetic data to find similarities between the model and the Earth. 
Due to the uncertainty of paleomagnetic observations, however, comparisons may not be entirely valid or useful. Simplified geodynamo models have shown relationships between the dynamo number (determined by variance in rotational rates in the outer core and mirror-asymmetric convection (e.g. when convection favors one direction in the north and the other in the south)) and magnetic pole reversals as well as found similarities between the geodynamo and the Sun's dynamo. In many models, it appears that magnetic fields have somewhat random magnitudes that follow a normal trend that average to zero. In addition to these observations, general observations about the mechanisms powering the geodynamo can be made based on how accurately the model reflects actual data collected from Earth. === Modern modelling === The complexity of dynamo modelling is so great that models of the geodynamo are limited by the current power of supercomputers, particularly because calculating the Ekman and Rayleigh number of the outer core is extremely difficult and requires a vast number of computations. Many improvements have been proposed in dynamo modelling since the self-consistent breakthrough in 1995. One suggestion in studying the complex magnetic field changes is applying spectral methods to simplify computations. Ultimately, until considerable improvements in computer power are made, the methods for computing realistic dynamo models will have to be made more efficient, so making improvements in methods for computing the model is of high importance for the advancement of numerical dynamo modelling. == Notable people == Stanislav I. Braginsky, research geophysicist == See also == Antidynamo theorem Rotating magnetic field Secular variation == References == Demorest, Paul (21 May 2001). "Dynamo Theory and Earth's magnetic Field (term paper)" (PDF). Archived from the original (PDF) on 21 February 2007. Retrieved 14 October 2011. Fitzpatrick, Richard (18 May 2002). "MHD Dynamo Theory". Plasma Physics. University of Texas at Austin. Retrieved 14 October 2011. Merrill, Ronald T.; McElhinny, Michael W.; McFadden, Phillip L. (1996). The magnetic field of the earth: Paleomagnetism, the core, and the deep mantle. Academic Press. ISBN 978-0-12-491246-5. Stern, David P. "Chapter 12: The dynamo process". The Great Magnet, the Earth. Retrieved 14 October 2011. Stern, David P. "Chapter 13: Dynamo in the Earth's Core". The Great Magnet, the Earth. Retrieved 14 October 2011.
Wikipedia/Dynamo_theory
Collision theory is a principle of chemistry used to predict the rates of chemical reactions. It states that when suitable particles of the reactant hit each other with the correct orientation, only a certain fraction of the collisions results in a perceptible or notable change; these successful changes are called successful collisions. The successful collisions must have enough energy, also known as activation energy, at the moment of impact to break the pre-existing bonds and form all new bonds. This results in the products of the reaction. The activation energy is often predicted using the transition state theory. Increasing the concentration of the reactant brings about more collisions and hence more successful collisions. Increasing the temperature increases the average kinetic energy of the molecules in a solution, increasing the number of collisions that have enough energy. Collision theory was proposed independently by Max Trautz in 1916 and William Lewis in 1918. When a catalyst is involved in the collision between the reactant molecules, less energy is required for the chemical change to take place, and hence more collisions have sufficient energy for the reaction to occur. The reaction rate therefore increases. Collision theory is closely related to chemical kinetics. Collision theory was initially developed for gas-phase reaction systems with no dilution. However, most reactions involve solutions, for example gas reactions in an inert carrier gas and almost all reactions in solution. The collision frequency of the solute molecules in these solutions is then controlled by diffusion or Brownian motion of individual molecules. The flux of the diffusive molecules follows Fick's laws of diffusion. For particles in a solution, an example model to calculate the collision frequency and associated coagulation rate is the Smoluchowski coagulation equation proposed by Marian Smoluchowski in a seminal 1916 publication. In this model, Fick's flux at the infinite time limit is used to mimic the particle speed of the collision theory. == Rate equations == The rate for a bimolecular gas-phase reaction, A + B → product, predicted by collision theory is r ( T ) = k n A n B = Z ρ exp ( − E a R T ) {\displaystyle r(T)=kn_{\text{A}}n_{\text{B}}=Z\rho \exp \left({\frac {-E_{\text{a}}}{RT}}\right)} where: k is the rate constant in units of (number of molecules)−1⋅s−1⋅m3. nA is the number density of A in the gas in units of m−3. nB is the number density of B in the gas in units of m−3. E.g. for a gas mixture with gas A concentration 0.1 mol⋅L−1 and B concentration 0.2 mol⋅L−1, the number density of A is 0.1×6.02×1023÷10−3 = 6.02×1025 m−3 and the number density of B is 0.2×6.02×1023÷10−3 = 1.2×1026 m−3. Z is the collision frequency in units of m−3⋅s−1. ρ {\displaystyle \rho } is the steric factor. Ea is the activation energy of the reaction, in units of J⋅mol−1. T is the temperature in units of K. R is the gas constant in units of J mol−1K−1. The unit of r(T) can be converted to mol⋅L−1⋅s−1 after dividing by (1000×NA), where NA is the Avogadro constant.
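The conversion between molar concentration and number density used in the example above is a one-line calculation; the sketch below simply reproduces those numbers.

```python
# Convert a molar concentration c (mol/L) to a number density n (m^-3):
# n = c * N_A / 1e-3, since 1 L = 1e-3 m^3.
N_A = 6.022e23  # Avogadro constant, 1/mol

def number_density(c_mol_per_litre):
    return c_mol_per_litre * N_A / 1.0e-3

print(f"{number_density(0.1):.2e} m^-3")  # ~6.02e25 for A (0.1 mol/L)
print(f"{number_density(0.2):.2e} m^-3")  # ~1.20e26 for B (0.2 mol/L)
```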
For a reaction between A and B, the collision frequency calculated with the hard-sphere model, in units of collisions per m3 per second, is: Z = n A n B σ AB 8 k B T π μ AB = 10 6 N A 2 [A][B] σ AB 8 k B T π μ AB {\displaystyle Z=n_{\text{A}}n_{\text{B}}\sigma _{\text{AB}}{\sqrt {\frac {8k_{\text{B}}T}{\pi \mu _{\text{AB}}}}}=10^{6}N_{A}^{2}{\text{[A][B]}}\sigma _{\text{AB}}{\sqrt {\frac {8k_{\text{B}}T}{\pi \mu _{\text{AB}}}}}} where: nA is the number density of A in the gas in units of m−3. nB is the number density of B in the gas in units of m−3. E.g. for a gas mixture with gas A concentration 0.1 mol⋅L−1 and B concentration 0.2 mol⋅L−1, the number density of A is 0.1×6.02×1023÷10−3 = 6.02×1025 m−3 and the number density of B is 0.2×6.02×1023÷10−3 = 1.2×1026 m−3. σAB is the reaction cross section (unit m2), the area within which the two molecules collide with each other, simplified to σ AB = π ( r A + r B ) 2 {\displaystyle \sigma _{\text{AB}}=\pi (r_{\text{A}}+r_{\text{B}})^{2}} , where rA is the radius of A and rB is the radius of B, in unit m. kB is the Boltzmann constant, unit J⋅K−1. T is the absolute temperature (unit K). μAB is the reduced mass of the reactants A and B, μ AB = m A m B m A + m B {\displaystyle \mu _{\text{AB}}={\frac {{m_{\text{A}}}{m_{\text{B}}}}{{m_{\text{A}}}+{m_{\text{B}}}}}} (unit kg). NA is the Avogadro constant. [A] is the molar concentration of A in unit mol⋅L−1. [B] is the molar concentration of B in unit mol⋅L−1. Z can be converted to mole collisions per liter per second by dividing by 1000NA. If all the length-related units are converted to dm, i.e. mol⋅dm−3 for [A] and [B], dm2 for σAB, and dm2⋅kg⋅s−2⋅K−1 for the Boltzmann constant, then Z = N A 2 σ AB 8 k B T π μ AB [ A ] [ B ] = k [ A ] [ B ] {\displaystyle Z=N_{\text{A}}^{2}\sigma _{\text{AB}}{\sqrt {\frac {8k_{\text{B}}T}{\pi \mu _{\text{AB}}}}}[{\text{A}}][{\text{B}}]=k[A][B]} in unit mol⋅dm−3⋅s−1. == Quantitative insights == === Derivation === Consider the bimolecular elementary reaction: A + B → C. In collision theory it is considered that two particles A and B will collide if their nuclei get closer than a certain distance. The area around a molecule A in which it can collide with an approaching B molecule is called the cross section (σAB) of the reaction and is, in simplified terms, the area corresponding to a circle whose radius ( r A B {\displaystyle r_{AB}} ) is the sum of the radii of both reacting molecules, which are supposed to be spherical. A moving molecule will therefore sweep a volume π r A B 2 c A {\displaystyle \pi r_{AB}^{2}c_{A}} per second as it moves, where c A {\displaystyle c_{A}} is the average velocity of the particle. (This solely represents the classical notion of a collision of solid balls. As molecules are quantum-mechanical many-particle systems of electrons and nuclei based upon the Coulomb and exchange interactions, generally they neither obey rotational symmetry nor do they have a box potential. Therefore, more generally the cross section is defined as the reaction probability of a ray of A particles per areal density of B targets, which makes the definition independent from the nature of the interaction between A and B. Consequently, the radius r A B {\displaystyle r_{AB}} is related to the length scale of their interaction potential.)
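A minimal numerical sketch of the cross section and hard-sphere collision frequency defined above; the radii and molar masses here are assumed, illustrative values rather than data from the article.

import math

k_B = 1.380649e-23   # Boltzmann constant, J K^-1
N_A = 6.022e23       # Avogadro constant, mol^-1

def hard_sphere_Z(n_A, n_B, r_A, r_B, m_A, m_B, T):
    sigma_AB = math.pi * (r_A + r_B) ** 2                        # cross section, m^2
    mu_AB = m_A * m_B / (m_A + m_B)                              # reduced mass, kg
    mean_rel_speed = math.sqrt(8 * k_B * T / (math.pi * mu_AB))  # m s^-1
    return n_A * n_B * sigma_AB * mean_rel_speed                 # collisions m^-3 s^-1

# Hypothetical small molecules (28 and 32 g/mol, radii of 0.15 and 0.2 nm):
Z = hard_sphere_Z(
    n_A=6.02e25, n_B=1.2e26,            # number densities from the example above
    r_A=150e-12, r_B=200e-12,           # radii, m
    m_A=28e-3 / N_A, m_B=32e-3 / N_A,   # masses per molecule, kg
    T=298.0,
)
print(f"Z = {Z:.2e} collisions m^-3 s^-1 = {Z / (1000 * N_A):.2e} mol L^-1 s^-1")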
From kinetic theory it is known that a molecule of A has an average velocity (different from root mean square velocity) of c A = 8 k B T π m A {\displaystyle c_{A}={\sqrt {\frac {8k_{\text{B}}T}{\pi m_{A}}}}} , where k B {\displaystyle k_{\text{B}}} is the Boltzmann constant, and m A {\displaystyle m_{A}} is the mass of the molecule. The solution of the two-body problem states that two different moving bodies can be treated as one body which has the reduced mass of both and moves with the velocity of the center of mass, so, in this system μ A B {\displaystyle \mu _{AB}} must be used instead of m A {\displaystyle m_{A}} . Thus, for a given molecule A, it travels t = l / c A = 1 / ( n B σ A B c A ) {\displaystyle t=l/c_{A}=1/(n_{B}\sigma _{AB}c_{A})} before hitting a molecule B if all B is fixed with no movement, where l {\displaystyle l} is the average traveling distance. Since B also moves, the relative velocity can be calculated using the reduced mass of A and B. Therefore, the total collision frequency, of all A molecules, with all B molecules, is Z = n A n B σ A B 8 k B T π μ A B = 10 6 N A 2 [ A ] [ B ] σ A B 8 k B T π μ A B = z [ A ] [ B ] , {\displaystyle Z=n_{\text{A}}n_{\text{B}}\sigma _{AB}{\sqrt {\frac {8k_{\text{B}}T}{\pi \mu _{AB}}}}=10^{6}N_{A}^{2}[A][B]\sigma _{AB}{\sqrt {\frac {8k_{\text{B}}T}{\pi \mu _{AB}}}}=z[A][B],} From Maxwell–Boltzmann distribution it can be deduced that the fraction of collisions with more energy than the activation energy is e − E a R T {\displaystyle e^{\frac {-E_{\text{a}}}{RT}}} . Therefore, the rate of a bimolecular reaction for ideal gases will be r = z ρ [ A ] [ B ] exp ⁡ ( − E a R T ) , {\displaystyle r=z\rho [A][B]\exp \left({\frac {-E_{\text{a}}}{RT}}\right),} in unit number of molecular reactions s−1⋅m−3, where: Z is the collision frequency with unit s−1⋅m−3. The z is Z without [A][B]. ρ {\displaystyle \rho } is the steric factor, which will be discussed in detail in the next section, Ea is the activation energy (per mole) of the reaction in unit J/mol, T is the absolute temperature in unit K, R is the gas constant in unit J/mol/K. [A] is molar concentration of A in unit mol/L, [B] is molar concentration of B in unit mol/L. The product zρ is equivalent to the preexponential factor of the Arrhenius equation. === Validity of the theory and steric factor === Once a theory is formulated, its validity must be tested, that is, compare its predictions with the results of the experiments. When the expression form of the rate constant is compared with the rate equation for an elementary bimolecular reaction, r = k ( T ) [ A ] [ B ] {\displaystyle r=k(T)[A][B]} , it is noticed that k ( T ) = N A σ A B ρ 8 k B T π μ A B exp ⁡ ( − E a R T ) {\displaystyle k(T)=N_{A}\sigma _{AB}\rho {\sqrt {\frac {8k_{\text{B}}T}{\pi \mu _{AB}}}}\exp \left({\frac {-E_{\text{a}}}{RT}}\right)} unit M−1⋅s−1 (= dm3⋅mol−1⋅s−1), with all dimension unit dm including kB. This expression is similar to the Arrhenius equation and gives the first theoretical explanation for the Arrhenius equation on a molecular basis. The weak temperature dependence of the preexponential factor is so small compared to the exponential factor that it cannot be measured experimentally, that is, "it is not feasible to establish, on the basis of temperature studies of the rate constant, whether the predicted T⁠1/2⁠ dependence of the preexponential factor is observed experimentally". 
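To make the last point concrete, a small sketch (with assumed, illustrative parameters) evaluates the collision-theory rate constant k(T) at two temperatures: the T1/2 prefactor changes by only a factor of about 1.4 when the temperature doubles, while the exponential factor changes by orders of magnitude.

import math

k_B, N_A, R = 1.380649e-23, 6.022e23, 8.314

def collision_theory_k(T, sigma_AB, mu_AB, rho, E_a):
    prefactor = sigma_AB * math.sqrt(8 * k_B * T / (math.pi * mu_AB))  # m^3 s^-1 per pair
    k_SI = rho * prefactor * math.exp(-E_a / (R * T))                  # m^3 molecule^-1 s^-1
    return prefactor, 1000 * N_A * k_SI                                # k in L mol^-1 s^-1

# Assumed values for illustration only:
sigma_AB = math.pi * (350e-12) ** 2   # cross section, m^2
mu_AB = 15e-3 / N_A                   # reduced mass, kg (15 g/mol)
pre_300, k_300 = collision_theory_k(300.0, sigma_AB, mu_AB, rho=0.05, E_a=60e3)
pre_600, k_600 = collision_theory_k(600.0, sigma_AB, mu_AB, rho=0.05, E_a=60e3)

print(f"prefactor ratio (600 K / 300 K): {pre_600 / pre_300:.2f}")   # ~1.41, i.e. sqrt(2)
print(f"rate constant ratio (600 K / 300 K): {k_600 / k_300:.2e}")   # dominated by exp(-Ea/RT)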
==== Steric factor ==== If the values of the predicted rate constants are compared with the values of known rate constants, it is noticed that collision theory fails to estimate the constants correctly, and the more complex the molecules are, the more it fails. The reason for this is that particles have been supposed to be spherical and able to react in all directions, which is not true, as the orientation of the collisions is not always proper for the reaction. For example, in the hydrogenation reaction of ethylene the H2 molecule must approach the bonding zone between the atoms, and only a few of all the possible collisions fulfill this requirement. To alleviate this problem, a new concept must be introduced: the steric factor ρ. It is defined as the ratio between the experimental value and the predicted one (or the ratio between the frequency factor and the collision frequency): ρ = A observed Z calculated , {\displaystyle \rho ={\frac {A_{\text{observed}}}{Z_{\text{calculated}}}},} and it is most often less than unity. Usually, the more complex the reactant molecules, the lower the steric factor. Nevertheless, some reactions exhibit steric factors greater than unity: the harpoon reactions, which involve atoms that exchange electrons, producing ions. The deviation from unity can have different causes: the molecules are not spherical, so different geometries are possible; not all the kinetic energy is delivered into the right spot; the presence of a solvent (when applied to solutions), etc. Collision theory can be applied to reactions in solution; in that case, the solvent cage has an effect on the reactant molecules, and several collisions can take place in a single encounter, which leads to predicted preexponential factors being too large. ρ values greater than unity can be attributed to favorable entropic contributions. == Alternative collision models for diluted solutions == Collision in a diluted gas or liquid solution is regulated by diffusion instead of direct collisions, and the collision frequency can be calculated from Fick's laws of diffusion. Theoretical models to calculate the collision frequency in solutions at the infinite time limit were proposed by Marian Smoluchowski in a seminal 1916 publication. For a diluted solution in the gas or the liquid phase, the collision equation developed for a neat gas is not suitable when diffusion takes control of the collision frequency, i.e., the direct collision between the two molecules no longer dominates. For any given molecule A, it has to collide with many solvent molecules, say molecules of species C, before finding the B molecule to react with. Thus the probability of collision should be calculated using the Brownian motion model, which can be approximated as a diffusive flux using various boundary conditions that yield different equations in the Smoluchowski model. For the diffusive collision, at the infinite time limit when the molecular flux can be calculated from Fick's laws of diffusion, Smoluchowski derived in 1916 a collision frequency between molecules A and B in a diluted solution: Z A B = 4 π R D r C A C B {\displaystyle Z_{AB}=4\pi RD_{r}C_{A}C_{B}} where: Z A B {\displaystyle Z_{AB}} is the collision frequency, unit #collisions/s in 1 m3 of solution. R {\displaystyle R} is the radius of the collision cross-section, unit m. D r {\displaystyle D_{r}} is the relative diffusion constant between A and B, unit m2/s, and D r = D A + D B {\displaystyle D_{r}=D_{A}+D_{B}} .
C A {\displaystyle C_{A}} and C B {\displaystyle C_{B}} are the number concentrations of molecules A and B in the solution respectively, unit #molecule/m3. or Z A B = 1000 N A ∗ 4 π R D r [ A ] [ B ] = k [ A ] [ B ] {\displaystyle Z_{AB}=1000N_{A}*4\pi RD_{r}[A][B]=k[A][B]} where: Z A B {\displaystyle Z_{AB}} is in unit mole collisions/s in 1 L of solution. N A {\displaystyle N_{\text{A}}} is the Avogadro constant. R {\displaystyle R} is the radius of the collision cross-section, unit m. D r {\displaystyle D_{r}} is the relative diffusion constant between A and B, unit m2/s. [ A ] {\displaystyle [A]} and [ B ] {\displaystyle [B]} are the molar concentrations of A and B respectively, unit mol/L. k {\displaystyle k} is the diffusive collision rate constant, unit L mol−1 s−1. == See also == Two-dimensional gas Rate equation == References == == External links == Introduction to Collision Theory
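As a closing numerical illustration of the diffusion-limited expressions in the section above, the sketch below evaluates the Smoluchowski rate constant k = 1000·NA·4πRDr. The diffusion constants and collision radius are assumed values typical of small molecules in water, not data from the article.

import math

N_A = 6.022e23   # Avogadro constant, mol^-1

def smoluchowski_k(R_collision, D_A, D_B):
    D_r = D_A + D_B                                # relative diffusion constant, m^2 s^-1
    k_per_pair = 4 * math.pi * R_collision * D_r   # m^3 molecule^-1 s^-1
    return 1000 * N_A * k_per_pair                 # diffusive rate constant, L mol^-1 s^-1

# Assumed values: D ~ 1e-9 m^2/s for each species and a ~0.5 nm collision radius
k_diff = smoluchowski_k(R_collision=0.5e-9, D_A=1e-9, D_B=1e-9)
print(f"diffusion-limited k ~ {k_diff:.1e} L mol^-1 s^-1")   # on the order of 1e10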
Wikipedia/Collision_theory
Literary theory is the systematic study of the nature of literature and of the methods for literary analysis. Since the 19th century, literary scholarship includes literary theory and considerations of intellectual history, moral philosophy, social philosophy, and interdisciplinary themes relevant to how people interpret meaning. In the humanities in modern academia, the latter style of literary scholarship is an offshoot of post-structuralism. Consequently, the word theory became an umbrella term for scholarly approaches to reading texts, some of which are informed by strands of semiotics, cultural studies, philosophy of language, and continental philosophy, often witnessed within Western canon along with some postmodernist theory. == History == The practice of literary theory became a profession in the 20th century, but it has historical roots that run as far back as ancient Greece (Aristotle's Poetics is an often cited early example), ancient India (Bharata Muni's Natya Shastra), and ancient Rome (Longinus's On the Sublime). In medieval times, scholars in the Middle East (Al-Jahiz's al-Bayan wa-'l-tabyin and al-Hayawan, and ibn al-Mu'tazz's Kitab al-Badi) and Europe continued to produce works based on literary studies. The aesthetic theories of philosophers from ancient philosophy through the 18th and 19th centuries are important influences on current literary study. The theory and criticism of literature are tied to the history of literature. Some scholars, both theoretical and anti-theoretical, refer to the 1980s and 1990s debates on the academic merits of theory as "the theory wars". Proponents and critics of the turn to theory take different (and often conflicting) positions about what counts as a theory or what it means to theorize within/about/alongside literature or other cultural creations. == Overview == One of the fundamental questions of literary theory is "what is literature?" and "how should or do we read?". Some contemporary theorists and literary scholars believe either that "literature" cannot be defined or that it can refer to any use of language. Specific theories are distinguished not only by their methods and conclusions, but even by how they create meaning in a "text". However, some theorists acknowledge that these texts do not have a singular, fixed meaning which is deemed "correct". Since theorists of literature often draw on very heterogeneous traditions of Continental philosophy and the philosophy of language, any classification of their approaches is only an approximation. There are many types of literary theory, which take different approaches to texts. Broad schools of theory that have historically been important include historical and biographical criticism, New Criticism, formalism, Russian formalism, and structuralism, post-structuralism, Marxism or historical materialism, feminism and French feminism, post-colonialism, new historicism, deconstruction, reader-response criticism, narratology and psychoanalytic criticism. == Differences among schools == The different interpretive and epistemological perspectives of different schools of theory often arise from, and so give support to, different moral and political commitments. For instance, the work of the New Critics often contained an implicit moral dimension, and sometimes even a religious one: a New Critic might read a poem by T. S. Eliot or Gerard Manley Hopkins for its degree of honesty in expressing the torment and contradiction of a serious search for belief in the modern world. 
Meanwhile, a Marxist critic might find such judgments merely ideological rather than critical; the Marxist would say that the New Critical reading did not keep enough critical distance from the poem's religious stance to be able to understand it. Or a post-structuralist critic might simply avoid the issue by understanding the religious meaning of a poem as an allegory of meaning, treating the poem's references to "God" by discussing their referential nature rather than what they refer to. Such a disagreement cannot be easily resolved, because it is inherent in the radically different terms and goals (that is, the theories) of the critics. Their theories of reading derive from vastly different intellectual traditions: the New Critic bases his work on an East-Coast American scholarly and religious tradition, the Marxist derives his thought from a body of critical social and economic thought, and the post-structuralist's work emerges from twentieth-century Continental philosophy of language. In the late 1950s, the Canadian literary critic Northrop Frye attempted to establish an approach for reconciling historical criticism and New Criticism while addressing concerns of early reader-response and numerous psychological and social approaches. His approach, laid out in his Anatomy of Criticism, was explicitly structuralist, relying on the assumption of an intertextual "order of words" and universality of certain structural types. His approach held sway in English literature programs for several decades but lost favor during the ascendance of post-structuralism. For some theories of literature (especially certain kinds of formalism), the distinction between "literary" and other sorts of texts is of paramount importance. Other schools (particularly post-structuralism in its various forms: new historicism, deconstruction, some strains of Marxism and feminism) have sought to break down distinctions between the two and have applied the tools of textual interpretation to a wide range of "texts", including film, non-fiction, historical writing, and even cultural events. Mikhail Bakhtin argued that the "utter inadequacy" of literary theory is evident when it is forced to deal with the novel; while other genres are fairly stabilized, the novel is still developing. Another crucial distinction among the various theories of literary interpretation is intentionality, the amount of weight given to the author's own opinions about and intentions for a work. For most pre-20th century approaches, the author's intentions are a guiding factor and an important determiner of the "correct" interpretation of texts. The New Criticism was the first school to disavow the role of the author in interpreting texts, preferring to focus on "the text itself" in a close reading. In fact, as much contention as there is between formalism and later schools, they share the tenet that the author's interpretation of a work is no more inherently meaningful than any other.
Oscar Wilde, Walter Pater, Harold Bloom African-American literary theory American pragmatism and other American approaches Harold Bloom, Stanley Fish, Richard Rorty Cognitive literary theory – applies research in cognitive science and philosophy of mind to the study of literature and culture. Frederick Luis Aldama, Mary Thomas Crane, Nancy Easterlin, William Flesch, David Herman, Suzanne Keen, Patrick Colm Hogan, Alan Richardson, Ellen Spolsky, Blakey Vermeule, Lisa Zunshine Cambridge criticism – close examination of the literary text and the relation of literature to social issues I.A. Richards, F.R. Leavis, Q.D. Leavis, William Empson. Critical race theory Cultural studies – emphasizes the role of literature in everyday life Raymond Williams, Dick Hebdige, and Stuart Hall (British Cultural Studies); Max Horkheimer and Theodor Adorno; Michel de Certeau; also Paul Gilroy, John Guillory Darwinian literary studies – situates literature in the context of evolution and natural selection Deconstruction – a strategy of "close" reading that elicits the ways that key terms and concepts may be paradoxical or self-undermining, rendering their meaning undecidable Jacques Derrida, Paul de Man, J. Hillis Miller, Philippe Lacoue-Labarthe, Gayatri Spivak, Avital Ronell Descriptive poetics Brian McHale Feminist literary criticism Eco-criticism – explores cultural connections and human relationships to the natural world Gender (see feminist literary criticism) – which emphasizes themes of gender relations Luce Irigaray, Judith Butler, Hélène Cixous, Julia Kristeva, Elaine Showalter Formalism – a school of literary criticism and literary theory having mainly to do with structural purposes of a particular text German hermeneutics and philology Friedrich Schleiermacher, Wilhelm Dilthey, Hans-Georg Gadamer, Erich Auerbach, René Wellek Marxism (see Marxist literary criticism) – which emphasizes themes of class conflict Georg Lukács, Valentin Voloshinov, Raymond Williams, Terry Eagleton, Fredric Jameson, Theodor Adorno, Walter Benjamin Narratology New Criticism – looks at literary works on the basis of what is written, and not at the goals of the author or biographical issues W. K. Wimsatt, F. R. Leavis, John Crowe Ransom, Cleanth Brooks, Robert Penn Warren New historicism – which examines the work through its historical context and seeks to understand cultural and intellectual history through literature Stephen Greenblatt, Louis Montrose, Jonathan Goldberg, H. 
Aram Veeser Postcolonialism – focuses on the influences of colonialism in literature, especially regarding the historical conflict resulting from the exploitation of less developed countries and indigenous peoples by Western nations Edward Said, Gayatri Chakravorty Spivak, Homi Bhabha and Declan Kiberd Postmodernism – criticism of the conditions present in the twentieth century, often with concern for those viewed as social deviants or the Other Michel Foucault, Roland Barthes, Gilles Deleuze, Félix Guattari and Maurice Blanchot Post-structuralism – a catch-all term for various theoretical approaches (such as deconstruction) that criticize or go beyond Structuralism's aspirations to create a rational science of culture by extrapolating the model of linguistics to other discursive and aesthetic formations Roland Barthes, Michel Foucault, Julia Kristeva Psychoanalysis (see psychoanalytic literary criticism) – explores the role of consciousnesses and the unconscious in literature including that of the author, reader, and characters in the text Sigmund Freud, Jacques Lacan, Harold Bloom, Slavoj Žižek, Viktor Tausk Queer theory – examines, questions, and criticizes the role of gender identity and sexuality in literature Judith Butler, Eve Kosofsky Sedgwick, Michel Foucault Reader-response criticism – focuses upon the active response of the reader to a text Louise Rosenblatt, Wolfgang Iser, Norman Holland, Hans-Robert Jauss, Stuart Hall Realist James Wood Russian formalism Victor Shklovsky, Vladimir Propp Structuralism and semiotics (see semiotic literary criticism) – examines the universal underlying structures in a text, the linguistic units in a text and how the author conveys meaning through any structures Ferdinand de Saussure, Roman Jakobson, Claude Lévi-Strauss, Roland Barthes, Mikhail Bakhtin, Juri Lotman, Umberto Eco, Jacques Ehrmann, Northrop Frye and morphology of folklore Other theorists: Robert Graves, Alamgir Hashmi, John Sutherland, Leslie Fiedler, Kenneth Burke, Paul Bénichou, Barbara Johnson == See also == == Notes == == References == Peter Barry. Beginning Theory: An Introduction to Literary and Cultural Theory. ISBN 0-7190-6268-3. Jonathan Culler. (1997) Literary Theory: A Very Short Introduction. Oxford: Oxford University Press. ISBN 0-19-285383-X. Terry Eagleton. Literary Theory: An Introduction. ISBN 0-8166-1251-X. Terry Eagleton. After Theory. ISBN 0-465-01773-8. Jean-Michel Rabaté. The Future of Theory. ISBN 0-631-23013-0. The Johns Hopkins Guide to Literary Theory and Criticism. ISBN 0-8018-4560-2. Modern Criticism and Theory: A Reader. Ed. David Lodge and Nigel Wood. 2nd Ed. ISBN 0-582-31287-6 Theory's Empire: An Anthology of Dissent. Ed. Daphne Patai and Will H. Corral. ISBN 0-231-13417-7. Bakhtin, M. M. (1981) The Dialogic Imagination: Four Essays. Ed. Michael Holquist. Trans. Caryl Emerson and Michael Holquist. Austin and London: University of Texas Press. René Wellek. A History of Modern Criticism: 1750–1950. Yale University Press, 1955–1992, 8 volumes. == Further reading == Carroll, Joseph (2012) [2007]. "Evolutionary approaches to literature & drama". In Dunbar, Robin; Barrett, Louise (eds.). Oxford Handbook of Evolutionary Psychology. Oxford University Press. pp. 637–648. doi:10.1093/oxfordhb/9780198568308.013.0044. ISBN 978-0-19-856830-8. Retrieved 2020-05-13. Castle, Gregory. Blackwell Guide to Literary Theory. Malden, MA: Blackwell Publishing, 2007. Culler, Jonathan. The Literary in Theory. Stanford: Stanford University Press, 2007. Terry Eagleton. 
Literary Theory. Minneapolis: University of Minnesota Press, 2008. (http://www.upress.umn.edu/) Literary Theory: An Anthology. Edited by Julie Rivkin and Michael Ryan. Malden, MA: Blackwell Publishing, 2004. Lisa Zunshine, ed. Introduction to Cognitive Cultural Studies. Baltimore: The Johns Hopkins University Press, 2010 Writing: what for and for whom. The joys and travails of the artist, edited by Ralf van Bühren. Rome: EDUSC, 2024. == External links == Aristotle's Poetics (350 BCE) A translation By S. H. Butcher Longinus's On the Sublime (1st century CE) A translation By H. L. Havell Sir Philip Sidney's Defence of Poesie (1595) Internet Encyclopedia of Philosophy: "Literary Theory", by Vince Brewton Introduction to Modern Literary Theory "A Bibliography of Literary Theory, Criticism and Philology", by José Ángel García Landa Annotated bibliography on literary theory The Litcrit Toolkit Critical Literary Theory Purdue OWL Johns Hopkins Guide to Literary Theory & Criticism
Wikipedia/Literary_theory
The Rice–Ramsperger–Kassel–Marcus (RRKM) theory is a theory of chemical reactivity. It was developed by Rice and Ramsperger in 1927 and Kassel in 1928 (RRK theory) and generalized (into the RRKM theory) in 1952 by Marcus who took the transition state theory developed by Eyring in 1935 into account. These methods enable the computation of simple estimates of the unimolecular reaction rates from a few characteristics of the potential energy surface. == Assumption == Assume that the molecule consists of harmonic oscillators, which are connected and can exchange energy with each other. Assume the possible excitation energy of the molecule to be E, which enables the reaction to occur. The rate of intra-molecular energy distribution is much faster than that of reaction itself. As a corollary to the above, the potential energy surface does not have any "bottlenecks" for which certain vibrational modes may be trapped for longer than the average time of the reaction == Derivation == Assume that A* is an excited molecule: A ∗ → k ( E ) A ‡ → P {\displaystyle A^{*}{\xrightarrow {k(E)}}A^{\ddagger }\rightarrow P} where P stands for product, and A‡ for the critical atomic configuration with the maximum energy E0 along the reaction coordinate. The unimolecular rate constant k u n i {\displaystyle k_{\mathrm {uni} }} is obtained as follows: k u n i = 1 h Q r Q v ∫ E 0 ∞ d E ∑ J = 0 ∞ ( 2 J + 1 ) G ‡ ( E , J ) exp ( − E k b T ) 1 + k ( E , J ) ω , {\displaystyle k_{\mathrm {uni} }={\frac {1}{hQ_{r}Q_{v}}}\int \limits _{E_{0}}^{\infty }\mathrm {d} E\sum _{J=0}^{\infty }{\frac {(2J+1)G^{\ddagger }(E,J)\exp \!\left({\frac {-E}{k_{b}T}}\right)}{1+{\frac {k(E,J)}{\omega }}}},} where k ( E , J ) {\displaystyle k(E,J)} is the microcanonical transition state theory rate constant, G ‡ {\displaystyle G^{\ddagger }} is the sum of states for the active degrees of freedom in the transition state, J {\displaystyle J} is the quantum number of angular momentum, ω {\displaystyle \omega } is the collision frequency between A ∗ {\displaystyle A^{*}} molecule and bath molecules, Q r {\displaystyle Q_{r}} and Q v {\displaystyle Q_{v}} are the molecular vibrational and external rotational partition functions. == See also == Transition state theory == References == == External links == An RRKM online calculator
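The full RRKM expression above requires sums and densities of states and partition functions; as a rough illustration of how a unimolecular rate grows with excitation energy, a minimal sketch of the simpler classical RRK form k(E) = ν(1 − E0/E)^(s−1) is given below. Note this is the RRK precursor, not the full RRKM rate, and ν, s, and the energies are assumed, illustrative values.

def rrk_rate(E, E0, nu=1e13, s=10):
    # Classical RRK estimate: k(E) = nu * (1 - E0/E)**(s-1)
    # E, E0: excitation and threshold energies (same units); nu: frequency factor, s^-1;
    # s: number of coupled harmonic oscillators (all values hypothetical).
    if E <= E0:
        return 0.0
    return nu * (1.0 - E0 / E) ** (s - 1)

for E in (110.0, 150.0, 200.0, 400.0):   # hypothetical excitation energies, kJ/mol
    print(f"E = {E:5.0f} kJ/mol  ->  k(E) = {rrk_rate(E, E0=100.0):.3e} s^-1")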
Wikipedia/RRKM_theory
Music theory is the study of theoretical frameworks for understanding the practices and possibilities of music. The Oxford Companion to Music describes three interrelated uses of the term "music theory": The first is the "rudiments", that are needed to understand music notation (key signatures, time signatures, and rhythmic notation); the second is learning scholars' views on music from antiquity to the present; the third is a sub-topic of musicology that "seeks to define processes and general principles in music". The musicological approach to theory differs from music analysis "in that it takes as its starting-point not the individual work or performance but the fundamental materials from which it is built." Music theory is frequently concerned with describing how musicians and composers make music, including tuning systems and composition methods among other topics. Because of the ever-expanding conception of what constitutes music, a more inclusive definition could be the consideration of any sonic phenomena, including silence. This is not an absolute guideline, however; for example, the study of "music" in the Quadrivium liberal arts university curriculum, that was common in medieval Europe, was an abstract system of proportions that was carefully studied at a distance from actual musical practice. But this medieval discipline became the basis for tuning systems in later centuries and is generally included in modern scholarship on the history of music theory. Music theory as a practical discipline encompasses the methods and concepts that composers and other musicians use in creating and performing music. The development, preservation, and transmission of music theory in this sense may be found in oral and written music-making traditions, musical instruments, and other artifacts. For example, ancient instruments from prehistoric sites around the world reveal details about the music they produced and potentially something of the musical theory that might have been used by their makers. In ancient and living cultures around the world, the deep and long roots of music theory are visible in instruments, oral traditions, and current music-making. Many cultures have also considered music theory in more formal ways such as written treatises and music notation. Practical and scholarly traditions overlap, as many practical treatises about music place themselves within a tradition of other treatises, which are cited regularly just as scholarly writing cites earlier research. In modern academia, music theory is a subfield of musicology, the wider study of musical cultures and history. Guido Adler, however, in one of the texts that founded musicology in the late 19th century, wrote that "the science of music originated at the same time as the art of sounds", where "the science of music" (Musikwissenschaft) obviously meant "music theory". Adler added that music only could exist when one began measuring pitches and comparing them to each other. He concluded that "all people for which one can speak of an art of sounds also have a science of sounds". One must deduce that music theory exists in all musical cultures of the world. Music theory is often concerned with abstract musical aspects such as tuning and tonal systems, scales, consonance and dissonance, and rhythmic relationships. There is also a body of theory concerning practical aspects, such as the creation or the performance of music, orchestration, ornamentation, improvisation, and electronic sound production. 
A person who researches or teaches music theory is a music theorist. University study, typically to the MA or PhD level, is required to teach as a tenure-track music theorist in a US or Canadian university. Methods of analysis include mathematics, graphic analysis, and especially analysis enabled by western music notation. Comparative, descriptive, statistical, and other methods are also used. Music theory textbooks, especially in the United States of America, often include elements of musical acoustics, considerations of musical notation, and techniques of tonal composition (harmony and counterpoint), among other topics. == History == === Antiquity === ==== Mesopotamia ==== Several surviving Sumerian and Akkadian clay tablets include musical information of a theoretical nature, mainly lists of intervals and tunings. The scholar Sam Mirelman reports that the earliest of these texts dates from before 1500 BCE, a millennium earlier than surviving evidence from any other culture of comparable musical thought. Further, "All the Mesopotamian texts [about music] are united by the use of a terminology for music that, according to the approximate dating of the texts, was in use for over 1,000 years." ==== China ==== Much of Chinese music history and theory remains unclear. Chinese theory starts from numbers, the main musical numbers being twelve, five and eight. Twelve refers to the number of pitches on which the scales can be constructed, five refers to the pentatonic scale (a scale built primarily on five notes), and eight refers to the eight categories of Chinese musical instruments, classified by the material they are made from: metal, stone, silk, bamboo, gourd, clay, leather, and wood. The Lüshi chunqiu from about 238 BCE recalls the legend of Ling Lun. On the order of the Yellow Emperor, Ling Lun collected twelve bamboo lengths with thick and even nodes. Blowing on one of these like a pipe, he found its sound agreeable and named it huangzhong, the "Yellow Bell." He then heard phoenixes singing. The male and female phoenix each sang six tones. Ling Lun cut his bamboo pipes to match the pitches of the phoenixes, producing twelve pitch pipes in two sets: six from the male phoenix and six from the female: these were called the lülü or later the shierlü. Apart from technical and structural aspects, ancient Chinese music theory also discusses topics such as the nature and functions of music. The Yueji ("Record of music", c. 1st and 2nd centuries BCE), for example, manifests Confucian moral theories of understanding music in its social context. Studied and implemented by Confucian scholar-officials [...], these theories helped form a musical Confucianism that overshadowed but did not erase rival approaches. These include the assertion of Mozi (c. 468 – c. 376 BCE) that music wasted human and material resources, and Laozi's claim that the greatest music had no sounds. [...] Even the music of the qin zither, a genre closely affiliated with Confucian scholar-officials, includes many works with Daoist references, such as Tianfeng huanpei ("Heavenly Breeze and Sounds of Jade Pendants"). ==== India ==== The Samaveda and Yajurveda (c. 1200 – 1000 BCE) are among the earliest testimonies of Indian music, but properly speaking, they contain no theory. The Natya Shastra, written between 200 BCE and 200 CE, discusses intervals (Śrutis), scales (Grāmas), consonances and dissonances, classes of melodic structure (Mūrchanās, modes?), melodic types (Jātis), instruments, etc.
==== Greece ==== Early preserved Greek writings on music theory include two types of works: technical manuals describing the Greek musical system including notation, scales, consonance and dissonance, rhythm, and types of musical compositions; treatises on the way in which music reveals universal patterns of order leading to the highest levels of knowledge and understanding. Several names of theorists are known before these works, including Pythagoras (c. 570 ~ c. 495 BCE), Philolaus (c. 470 ~ (c. 385 BCE), Archytas (428–347 BCE), and others. Works of the first type (technical manuals) include Anonymous (erroneously attributed to Euclid) (1989) [4th–3rd century BCE]. Barker, Andrew (ed.). Κατατομή κανόνος [Division of the Canon]. Greek Musical Writings. Vol. 2: Harmonic and Acoustic Theory. Cambridge, UK: Cambridge University Press. pp. 191–208. English trans. Theon of Smyrna. Τωv κατά τό μαθηματικόν χρησίμων είς τήν Πλάτωνος άνάγνωσις [On the Mathematics Useful for Understanding Plato] (in Greek). 115–140 CE. Nicomachus of Gerasa. Άρμονικόν έγχειρίδιον [Manual of Harmonics]. 100–150 CE. Cleonides. Είσαγωγή άρμονική [Introduction to Harmonics] (in Greek). 2nd century CE. Gaudentius. Άρμονική είσαγωγή [Harmonic Introduction] (in Greek). 3rd or 4th century CE. Bacchius Geron. Είσαγωγή τέχνης μουσικής [Introduction to the Art of Music]. 4th century CE or later. Alypius of Alexandria. Είσαγωγή μουσική [Introduction to Music] (in Greek). 4th–5th century CE. More philosophical treatises of the second type include Aristoxenus. Άρμονικά στοιχεία [Harmonic Elements] (in Greek). 375~360 BCE, before 320 BCE. Aristoxenus. Ρυθμικά στοιχεία [Rhythmic Elements] (in Greek). Ptolemaios (Πτολεμαίος), Claudius. Άρμονικά [Harmonics] (in Greek). 127–148 CE. Porphyrius. Είς τά άρμονικά Πτολεμαίον ύπόμνημα [On Ptolemy's Harmonics] (in Greek). c. 232~233 – c. 305 CE. === Post-classical or Medieval Period === ==== China ==== The pipa instrument carried with it a theory of musical modes that subsequently led to the Sui and Tang theory of 84 musical modes. ==== Arabic countries / Persian countries ==== Medieval Arabic music theorists include: Abū Yūsuf Ya'qūb al-Kindi (Bagdad, 873 CE), who uses the first twelve letters of the alphabet to describe the twelve frets on five strings of the oud, producing a chromatic scale of 25 degrees. [Yaḥyā ibn] al-Munajjim (Baghdad, 856–912), author of Risāla fī al-mūsīqī ("Treatise on music", MS GB-Lbl Oriental 2361) which describes a Pythagorean tuning of the oud and a system of eight modes perhaps inspired by Ishaq al-Mawsili (767–850). Abū n-Nașr Muḥammad al-Fārābi (Persia, 872? – Damas, 950 or 951 CE), author of Kitab al-Musiqa al-Kabir ("The Great Book of Music"). 'Ali ibn al-Husayn ul-Isfahānī (897–967), known as Abu al-Faraj al-Isfahani, author of Kitāb al-Aghānī ("The Book of Songs"). Abū 'Alī al-Ḥusayn ibn ʿAbd-Allāh ibn Sīnā, known as Avicenna (c. 980 – 1037), whose contribution to music theory consists mainly in Chapter 12 of the section on mathematics of his Kitab Al-Shifa ("The Book of Healing"). al-Ḥasan ibn Aḥmad ibn 'Ali al-Kātib, author of Kamāl adab al Ghinā' ("The Perfection of Musical Knowledge"), copied in 1225 (Istanbul, Topkapi Museum, Ms 1727). Safi al-Din al-Urmawi (1216–1294 CE), author of the Kitabu al-Adwār ("Treatise of musical cycles") and ar-Risālah aš-Šarafiyyah ("Epistle to Šaraf"). Mubārak Šāh, commentator of Safi al-Din's Kitāb al-Adwār (British Museum, Ms 823). Anon. LXI, Anonymous commentary on Safi al-Din's Kitāb al-Adwār. 
Shams al-dῑn al-Saydᾱwῑ Al-Dhahabῑ (14th century CE (?)), music theorist. Author of Urjῡza fi'l-mῡsῑqᾱ ("A Didactic Poem on Music"). ==== Europe ==== The Latin treatise De institutione musica by the Roman philosopher Boethius (written c. 500, translated as Fundamentals of Music) was a touchstone for other writings on music in medieval Europe. Boethius represented Classical authority on music during the Middle Ages, as the Greek writings on which he based his work were not read or translated by later Europeans until the 15th century. This treatise carefully maintains distance from the actual practice of music, focusing mostly on the mathematical proportions involved in tuning systems and on the moral character of particular modes. Several centuries later, treatises began to appear which dealt with the actual composition of pieces of music in the plainchant tradition. At the end of the ninth century, Hucbald worked towards more precise pitch notation for the neumes used to record plainchant. Guido d'Arezzo wrote a letter to Michael of Pomposa in 1028, entitled Epistola de ignoto cantu, in which he introduced the practice of using syllables to describe notes and intervals. This was the source of the hexachordal solmization that was to be used until the end of the Middle Ages. Guido also wrote about emotional qualities of the modes, the phrase structure of plainchant, the temporal meaning of the neumes, etc.; his chapters on polyphony "come closer to describing and illustrating real music than any previous account" in the Western tradition. During the thirteenth century, a new rhythm system called mensural notation grew out of an earlier, more limited method of notating rhythms in terms of fixed repetitive patterns, the so-called rhythmic modes, which were developed in France around 1200. An early form of mensural notation was first described and codified in the treatise Ars cantus mensurabilis ("The art of measured chant") by Franco of Cologne (c. 1280). Mensural notation used different note shapes to specify different durations, allowing scribes to capture rhythms which varied instead of repeating the same fixed pattern; it is a proportional notation, in the sense that each note value is equal to two or three times the shorter value, or half or a third of the longer value. This same notation, transformed through various extensions and improvements during the Renaissance, forms the basis for rhythmic notation in European classical music today. === Modern === ==== Middle Eastern and Central Asian countries ==== Bāqiyā Nāyinῑ (Uzbekistan, 17th century CE), Uzbek author and music theorist. Author of Zamzama e wahdat-i-mῡsῑqῑ ["The Chanting of Unity in Music"]. Baron Francois Rodolphe d'Erlanger (Tunis, Tunisia, 1910–1932 CE), French musicologist. Author of La musique arabe and Ta'rῑkh al-mῡsῑqᾱ al-arabiyya wa-usῡluha wa-tatawwurᾱtuha ["A History of Arabian Music, its principles and its Development"] D'Erlanger divulges that the Arabic music scale is derived from the Greek music scale, and that Arabic music is connected to certain features of Arabic culture, such as astrology. ==== Europe ==== Renaissance Baroque 1750–1900 As Western musical influence spread throughout the world in the 1800s, musicians adopted Western theory as an international standard—but other theoretical traditions in both textual and oral traditions remain in use. 
For example, the long and rich musical traditions unique to ancient and current cultures of Africa are primarily oral, but describe specific forms, genres, performance practices, tunings, and other aspects of music theory. Sacred harp music uses a different kind of scale and theory in practice. The music focuses on the solfege "fa, sol, la" on the music scale. Sacred Harp also employs a different notation involving "shape notes", or notes that are shaped to correspond to a certain solfege syllable on the music scale. Sacred Harp music and its music theory originated with Reverend Thomas Symmes in 1720, where he developed a system for "singing by note" to help his church members with note accuracy. === Contemporary === == Fundamentals of music == Music is composed of aural phenomena; "music theory" considers how those phenomena apply in music. Music theory considers melody, rhythm, counterpoint, harmony, form, tonal systems, scales, tuning, intervals, consonance, dissonance, durational proportions, the acoustics of pitch systems, composition, performance, orchestration, ornamentation, improvisation, electronic sound production, etc. === Pitch === Pitch is the lowness or highness of a tone, for example the difference between middle C and a higher C. The frequency of the sound waves producing a pitch can be measured precisely, but the perception of pitch is more complex because single notes from natural sources are usually a complex mix of many frequencies. Accordingly, theorists often describe pitch as a subjective sensation rather than an objective measurement of sound. Specific frequencies are often assigned letter names. Today most orchestras assign concert A (the A above middle C on the piano) to the frequency of 440 Hz. This assignment is somewhat arbitrary; for example, in 1859 France, the same A was tuned to 435 Hz. Such differences can have a noticeable effect on the timbre of instruments and other phenomena. Thus, in historically informed performance of older music, tuning is often set to match the tuning used in the period when it was written. Additionally, many cultures do not attempt to standardize pitch, often considering that it should be allowed to vary depending on genre, style, mood, etc. The difference in pitch between two notes is called an interval. The most basic interval is the unison, which is simply two notes of the same pitch. The octave interval is two pitches that are either double or half the frequency of one another. The unique characteristics of octaves gave rise to the concept of pitch class: pitches of the same letter name that occur in different octaves may be grouped into a single "class" by ignoring the difference in octave. For example, a high C and a low C are members of the same pitch class—the class that contains all C's. Musical tuning systems, or temperaments, determine the precise size of intervals. Tuning systems vary widely within and between world cultures. In Western culture, there have long been several competing tuning systems, all with different qualities. Internationally, the system known as equal temperament is most commonly used today because it is considered the most satisfactory compromise that allows instruments of fixed tuning (e.g. the piano) to sound acceptably in tune in all keys. === Scales and modes === Notes can be arranged in a variety of scales and modes. 
Western music theory generally divides the octave into a series of twelve pitches, called a chromatic scale, within which the interval between adjacent tones is called a semitone, or half step. Selecting tones from this set of 12 and arranging them in patterns of semitones and whole tones creates other scales. The most commonly encountered scales are the seven-toned major, the harmonic minor, the melodic minor, and the natural minor. Other examples of scales are the octatonic scale and the pentatonic or five-tone scale, which is common in folk music and blues. Non-Western cultures often use scales that do not correspond with an equally divided twelve-tone division of the octave. For example, classical Ottoman, Persian, Indian and Arabic musical systems often make use of multiples of quarter tones (half the size of a semitone, as the name indicates), for instance in 'neutral' seconds (three quarter tones) or 'neutral' thirds (seven quarter tones)—they do not normally use the quarter tone itself as a direct interval. In traditional Western notation, the scale used for a composition is usually indicated by a key signature at the beginning to designate the pitches that make up that scale. As the music progresses, the pitches used may change and introduce a different scale. Music can be transposed from one scale to another for various purposes, often to accommodate the range of a vocalist. Such transposition raises or lowers the overall pitch range, but preserves the intervallic relationships of the original scale. For example, transposition from the key of C major to D major raises all pitches of the scale of C major equally by a whole tone. Since the interval relationships remain unchanged, transposition may be unnoticed by a listener, however other qualities may change noticeably because transposition changes the relationship of the overall pitch range compared to the range of the instruments or voices that perform the music. This often affects the music's overall sound, as well as having technical implications for the performers. The interrelationship of the keys most commonly used in Western tonal music is conveniently shown by the circle of fifths. Unique key signatures are also sometimes devised for a particular composition. During the Baroque period, emotional associations with specific keys, known as the doctrine of the affections, were an important topic in music theory, but the unique tonal colorings of keys that gave rise to that doctrine were largely erased with the adoption of equal temperament. However, many musicians continue to feel that certain keys are more appropriate to certain emotions than others. Indian classical music theory continues to strongly associate keys with emotional states, times of day, and other extra-musical concepts and notably, does not employ equal temperament. === Consonance and dissonance === Consonance and dissonance are subjective qualities of the sonority of intervals that vary widely in different cultures and over the ages. Consonance (or concord) is the quality of an interval or chord that seems stable and complete in itself. Dissonance (or discord) is the opposite in that it feels incomplete and "wants to" resolve to a consonant interval. Dissonant intervals seem to clash. Consonant intervals seem to sound comfortable together. Commonly, perfect fourths, fifths, and octaves and all major and minor thirds and sixths are considered consonant. All others are dissonant to a greater or lesser degree. 
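The intervals just listed can also be expressed as frequency ratios. A short sketch, using the 12-tone equal temperament and A4 = 440 Hz convention discussed above (an illustration, not part of the article; the MIDI-style note numbering is an assumed convenience), computes pitch frequencies, shows transposition as a uniform semitone shift, and compares a few equal-tempered intervals with the simple ratios traditionally associated with consonance.

A4_MIDI, A4_FREQ = 69, 440.0   # MIDI-style numbering; A above middle C = 440 Hz

def frequency(midi_note):
    # In 12-tone equal temperament each semitone is a factor of 2**(1/12).
    return A4_FREQ * 2 ** ((midi_note - A4_MIDI) / 12)

def transpose(notes, semitones):
    # Transposition shifts every pitch by the same interval, preserving relationships.
    return [n + semitones for n in notes]

print(f"middle C ~ {frequency(60):.2f} Hz, octave ratio = {frequency(72) / frequency(60):.3f}")

# Equal-tempered intervals vs. the simple ratios often linked with consonance:
for name, semitones, simple in [("perfect fifth", 7, 3 / 2),
                                ("perfect fourth", 5, 4 / 3),
                                ("major third", 4, 5 / 4)]:
    et_ratio = 2 ** (semitones / 12)
    print(f"{name}: equal temperament {et_ratio:.4f} vs simple ratio {simple:.4f}")

c_major = [60, 62, 64, 65, 67, 69, 71, 72]   # C4..C5
d_major = transpose(c_major, 2)              # up a whole tone, as in the example above
print(f"C major starts at {frequency(c_major[0]):.1f} Hz; "
      f"transposed up a whole tone it starts at {frequency(d_major[0]):.1f} Hz")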
Context and many other aspects can affect apparent dissonance and consonance. For example, in a Debussy prelude, a major second may sound stable and consonant, while the same interval may sound dissonant in a Bach fugue. In the Common practice era, the perfect fourth is considered dissonant when not supported by a lower third or fifth. Since the early 20th century, Arnold Schoenberg's concept of "emancipated" dissonance, in which traditionally dissonant intervals can be treated as "higher," more remote consonances, has become more widely accepted. === Rhythm === Rhythm is produced by the sequential arrangement of sounds and silences in time. Meter measures music in regular pulse groupings, called measures or bars. The time signature or meter signature specifies how many beats are in a measure, and which value of written note is counted or felt as a single beat. Through increased stress, or variations in duration or articulation, particular tones may be accented. There are conventions in most musical traditions for regular and hierarchical accentuation of beats to reinforce a given meter. Syncopated rhythms contradict those conventions by accenting unexpected parts of the beat. Playing simultaneous rhythms in more than one time signature is called polyrhythm. In recent years, rhythm and meter have become an important area of research among music scholars. The most highly cited of these recent scholars are Maury Yeston, Fred Lerdahl and Ray Jackendoff, Jonathan Kramer, and Justin London. === Melody === A melody is a group of musical sounds in agreeable succession or arrangement. Because melody is such a prominent aspect in so much music, its construction and other qualities are a primary interest of music theory. The basic elements of melody are pitch, duration, rhythm, and tempo. The tones of a melody are usually drawn from pitch systems such as scales or modes. Melody may consist, to increasing degree, of the figure, motive, semi-phrase, antecedent and consequent phrase, and period or sentence. The period may be considered the complete melody, however some examples combine two periods, or use other combinations of constituents to create larger form melodies. === Chord === A chord, in music, is any harmonic set of three or more notes that is heard as if sounding simultaneously.: pp. 67, 359 : p. 63  These need not actually be played together: arpeggios and broken chords may, for many practical and theoretical purposes, constitute chords. Chords and sequences of chords are frequently used in modern Western, West African, and Oceanian music, whereas they are absent from the music of many other parts of the world.: p. 15  The most frequently encountered chords are triads, so called because they consist of three distinct notes: further notes may be added to give seventh chords, extended chords, or added tone chords. The most common chords are the major and minor triads and then the augmented and diminished triads. The descriptions major, minor, augmented, and diminished are sometimes referred to collectively as chordal quality. Chords are also commonly classed by their root note—so, for instance, the chord C major may be described as a triad of major quality built on the note C. Chords may also be classified by inversion, the order in which the notes are stacked. A series of chords is called a chord progression. Although any chord may in principle be followed by any other chord, certain patterns of chords have been accepted as establishing key in common-practice harmony. 
To describe this, chords are numbered, using Roman numerals (upward from the key-note), per their diatonic function. Common ways of notating or representing chords in western music other than conventional staff notation include Roman numerals, figured bass (much used in the Baroque era), chord letters (sometimes used in modern musicology), and various systems of chord charts typically found in the lead sheets used in popular music to lay out the sequence of chords so that the musician may play accompaniment chords or improvise a solo. === Harmony === In music, harmony is the use of simultaneous pitches (tones, notes), or chords.: p. 15  The study of harmony involves chords and their construction and chord progressions and the principles of connection that govern them. Harmony is often said to refer to the "vertical" aspect of music, as distinguished from melodic line, or the "horizontal" aspect. Counterpoint, which refers to the interweaving of melodic lines, and polyphony, which refers to the relationship of separate independent voices, is thus sometimes distinguished from harmony. In popular and jazz harmony, chords are named by their root plus various terms and characters indicating their qualities. For example, a lead sheet may indicate chords such as C major, D minor, and G dominant seventh. In many types of music, notably Baroque, Romantic, modern, and jazz, chords are often augmented with "tensions". A tension is an additional chord member that creates a relatively dissonant interval in relation to the bass. It is part of a chord, but is not one of the chord tones (1 3 5 7). Typically, in the classical common practice period a dissonant chord (chord with tension) "resolves" to a consonant chord. Harmonization usually sounds pleasant to the ear when there is a balance between the consonant and dissonant sounds. In simple words, that occurs when there is a balance between "tense" and "relaxed" moments. === Timbre === Timbre, sometimes called "color", or "tone color," is the principal phenomenon that allows us to distinguish one instrument from another when both play at the same pitch and volume, a quality of a voice or instrument often described in terms like bright, dull, shrill, etc. It is of considerable interest in music theory, especially because it is one component of music that has as yet, no standardized nomenclature. It has been called "... the psychoacoustician's multidimensional waste-basket category for everything that cannot be labeled pitch or loudness," but can be accurately described and analyzed by Fourier analysis and other methods because it results from the combination of all sound frequencies, attack and release envelopes, and other qualities that a tone comprises. Timbre is principally determined by two things: (1) the relative balance of overtones produced by a given instrument due its construction (e.g. shape, material), and (2) the envelope of the sound (including changes in the overtone structure over time). Timbre varies widely between different instruments, voices, and to lesser degree, between instruments of the same type due to variations in their construction, and significantly, the performer's technique. The timbre of most instruments can be changed by employing different techniques while playing. For example, the timbre of a trumpet changes when a mute is inserted into the bell, the player changes their embouchure, or volume. A voice can change its timbre by the way the performer manipulates their vocal apparatus, (e.g. 
the shape of the vocal cavity or mouth). Musical notation frequently specifies alteration in timbre by changes in sounding technique, volume, accent, and other means. These are indicated variously by symbolic and verbal instruction. For example, the word dolce (sweetly) indicates a non-specific, but commonly understood soft and "sweet" timbre. Sul tasto instructs a string player to bow near or over the fingerboard to produce a less brilliant sound. Cuivre instructs a brass player to produce a forced and stridently brassy sound. Accent symbols like marcato (^) and dynamic indications (pp) can also indicate changes in timbre. ==== Dynamics ==== In music, "dynamics" normally refers to variations of intensity or volume, as may be measured by physicists and audio engineers in decibels or phons. In music notation, however, dynamics are not treated as absolute values, but as relative ones. Because they are usually measured subjectively, there are factors besides amplitude that affect the performance or perception of intensity, such as timbre, vibrato, and articulation. The conventional indications of dynamics are abbreviations for Italian words like forte (f) for loud and piano (p) for soft. These two basic notations are modified by indications including mezzo piano (mp) for moderately soft (literally "half soft") and mezzo forte (mf) for moderately loud, sforzando or sforzato (sfz) for a surging or "pushed" attack, or fortepiano (fp) for a loud attack with a sudden decrease to a soft level. The full span of these markings usually range from a nearly inaudible pianissississimo (pppp) to a loud-as-possible fortissississimo (ffff). Greater extremes of pppppp and fffff and nuances such as p+ or più piano are sometimes found. Other systems of indicating volume are also used in both notation and analysis: dB (decibels), numerical scales, colored or different sized notes, words in languages other than Italian, and symbols such as those for progressively increasing volume (crescendo) or decreasing volume (diminuendo or decrescendo), often called "hairpins" when indicated with diverging or converging lines as shown in the graphic above. ==== Articulation ==== Articulation is the way the performer sounds notes. For example, staccato is the shortening of duration compared to the written note value, legato performs the notes in a smoothly joined sequence with no separation. Articulation is often described rather than quantified, therefore there is room to interpret how to execute precisely each articulation. For example, staccato is often referred to as "separated" or "detached" rather than having a defined or numbered amount by which to reduce the notated duration. Violin players use a variety of techniques to perform different qualities of staccato. The manner in which a performer decides to execute a given articulation is usually based on the context of the piece or phrase, but many articulation symbols and verbal instructions depend on the instrument and musical period (e.g. viol, wind; classical, baroque; etc.). There is a set of articulations that most instruments and voices perform in common. They are—from long to short: legato (smooth, connected); tenuto (pressed or played to full notated duration); marcato (accented and detached); staccato ("separated", "detached"); martelé (heavily accented or "hammered"). Many of these can be combined to create certain "in-between" articulations. For example, portato is the combination of tenuto and staccato. 
Some instruments have unique methods by which to produce sounds, such as spiccato for bowed strings, where the bow bounces off the string. === Texture === In music, texture is how the melodic, rhythmic, and harmonic materials are combined in a composition, thus determining the overall quality of the sound in a piece. Texture is often described in regard to the density, or thickness, and range, or width, between lowest and highest pitches, in relative terms as well as more specifically distinguished according to the number of voices, or parts, and the relationship between these voices. For example, a thick texture contains many "layers" of instruments. One of these layers could be a string section and another a brass section. The thickness is also affected by the number and the richness of the instruments playing the piece. The thickness varies from light to thick. A lightly textured piece will have light, sparse scoring. A thickly or heavily textured piece will be scored for many instruments. A piece's texture may be affected by the number and character of parts playing at once, the timbre of the instruments or voices playing these parts and the harmony, tempo, and rhythms used. The types categorized by number and relationship of parts are analyzed and determined through the labeling of primary textural elements: primary melody, secondary melody, parallel supporting melody, static support, harmonic support, rhythmic support, and harmonic and rhythmic support. Common types include monophonic texture (a single melodic voice, such as a piece for solo soprano or solo flute), biphonic texture (two melodic voices, such as a duo for bassoon and flute in which the bassoon plays a drone note and the flute plays the melody), polyphonic texture and homophonic texture (chords accompanying a melody). === Form or structure === The term musical form (or musical architecture) refers to the overall structure or plan of a piece of music, and it describes the layout of a composition as divided into sections. In the tenth edition of The Oxford Companion to Music, Percy Scholes defines musical form as "a series of strategies designed to find a successful mean between the opposite extremes of unrelieved repetition and unrelieved alteration." According to Richard Middleton, musical form is "the shape or structure of the work." He describes it through difference: the distance moved from a repeat, a repeat being the smallest difference. Difference is quantitative and qualitative: how far different, and of what type. In many cases, form depends on statement and restatement, unity and variety, and contrast and connection. === Expression === Musical expression is the art of playing or singing music with emotional communication. The elements of music that comprise expression include dynamic indications, such as forte or piano, phrasing, differing qualities of timbre and articulation, color, intensity, energy and excitement. All of these devices can be incorporated by the performer. A performer aims to elicit responses of sympathetic feeling in the audience, and to excite, calm or otherwise sway the audience's physical and emotional responses. Musical expression is sometimes thought to be produced by a combination of other parameters, and sometimes described as a transcendent quality that is more than the sum of measurable quantities such as pitch or duration.
Expression on instruments can be closely related to the role of the breath in singing, and the voice's natural ability to express feelings, sentiment and deep emotions. Whether these can somehow be categorized is perhaps the realm of academics, who view expression as an element of musical performance that embodies a consistently recognizable emotion, ideally causing a sympathetic emotional response in its listeners. The emotional content of musical expression is distinct from the emotional content of specific sounds (e.g., a startlingly loud 'bang') and of learned associations (e.g., a national anthem), but can rarely be completely separated from its context. The components of musical expression continue to be the subject of extensive and unresolved dispute. === Notation === Musical notation is the written or symbolized representation of music. This is most often achieved by the use of commonly understood graphic symbols and written verbal instructions and their abbreviations. There are many systems of music notation from different cultures and different ages. Traditional Western notation evolved during the Middle Ages and remains an area of experimentation and innovation. In the 2000s, computer file formats have become important as well. Spoken language and hand signs are also used to symbolically represent music, primarily in teaching. In standard Western music notation, tones are represented graphically by symbols (notes) placed on a staff or staves, the vertical axis corresponding to pitch and the horizontal axis corresponding to time. Note head shapes, stems, flags, ties and dots are used to indicate duration. Additional symbols indicate keys, dynamics, accents, rests, etc. Verbal instructions from the conductor are often used to indicate tempo, technique, and other aspects. In Western music, a range of different music notation systems are used. In Western Classical music, conductors use printed scores that show all of the instruments' parts and orchestra members read parts with their musical lines written out. In popular styles of music, much less of the music may be notated. A rock band may go into a recording session with just a handwritten chord chart indicating the song's chord progression using chord names (e.g., C major, D minor, G7, etc.). All of the chord voicings, rhythms and accompaniment figures are improvised by the band members. == As academic discipline == The scholarly study of music theory in the twentieth century has a number of different subfields, each of which takes a different perspective on what the primary phenomena of interest are and on the most useful methods for investigation. === Analysis === Musical analysis is the attempt to answer the question: how does this music work? The method employed to answer this question, and indeed exactly what is meant by the question, differs from analyst to analyst, and according to the purpose of the analysis. According to Ian Bent, "analysis, as a pursuit in its own right, came to be established only in the late 19th century; its emergence as an approach and method can be traced back to the 1750s. However, it existed as a scholarly tool, albeit an auxiliary one, from the Middle Ages onwards." Adolf Bernhard Marx was influential in formalising concepts about composition and music understanding towards the second half of the 19th century.
The principle of analysis has been variously criticized, especially by composers, such as Edgard Varèse's claim that, "to explain by means of [analysis] is to decompose, to mutilate the spirit of a work". Schenkerian analysis is a method of musical analysis of tonal music based on the theories of Heinrich Schenker (1868–1935). The goal of a Schenkerian analysis is to interpret the underlying structure of a tonal work and to help reading the score according to that structure. The theory's basic tenets can be viewed as a way of defining tonality in music. A Schenkerian analysis of a passage of music shows hierarchical relationships among its pitches, and draws conclusions about the structure of the passage from this hierarchy. The analysis makes use of a specialized symbolic form of musical notation that Schenker devised to demonstrate various techniques of elaboration. The most fundamental concept of Schenker's theory of tonality may be that of tonal space. The intervals between the notes of the tonic triad form a tonal space that is filled with passing and neighbour notes, producing new triads and new tonal spaces, open for further elaborations until the surface of the work (the score) is reached. Although Schenker himself usually presents his analyses in the generative direction, starting from the fundamental structure (Ursatz) to reach the score, the practice of Schenkerian analysis more often is reductive, starting from the score and showing how it can be reduced to its fundamental structure. The graph of the Ursatz is arrhythmic, as is a strict-counterpoint cantus firmus exercise. Even at intermediate levels of the reduction, rhythmic notation (open and closed noteheads, beams and flags) shows not rhythm but the hierarchical relationships between the pitch-events. Schenkerian analysis is subjective. There is no mechanical procedure involved and the analysis reflects the musical intuitions of the analyst. The analysis represents a way of hearing (and reading) a piece of music. Transformational theory is a branch of music theory developed by David Lewin in the 1980s, and formally introduced in his 1987 work, Generalized Musical Intervals and Transformations. The theory, which models musical transformations as elements of a mathematical group, can be used to analyze both tonal and atonal music. The goal of transformational theory is to change the focus from musical objects—such as the "C major chord" or "G major chord"—to relations between objects. Thus, instead of saying that a C major chord is followed by G major, a transformational theorist might say that the first chord has been "transformed" into the second by the "Dominant operation." (Symbolically, one might write "Dominant(C major) = G major.") While traditional musical set theory focuses on the makeup of musical objects, transformational theory focuses on the intervals or types of musical motion that can occur. According to Lewin's description of this change in emphasis, "[The transformational] attitude does not ask for some observed measure of extension between reified 'points'; rather it asks: 'If I am at s and wish to get to t, what characteristic gesture should I perform in order to arrive there?'" === Music perception and cognition === Music psychology or the psychology of music may be regarded as a branch of both psychology and musicology. It aims to explain and understand musical behavior and experience, including the processes through which music is perceived, created, responded to, and incorporated into everyday life. 
Modern music psychology is primarily empirical; its knowledge tends to advance on the basis of interpretations of data collected by systematic observation of and interaction with human participants. Music psychology is a field of research with practical relevance for many areas, including music performance, composition, education, criticism, and therapy, as well as investigations of human aptitude, skill, intelligence, creativity, and social behavior. Music psychology can shed light on non-psychological aspects of musicology and musical practice. For example, it contributes to music theory through investigations of the perception and computational modelling of musical structures such as melody, harmony, tonality, rhythm, meter, and form. Research in music history can benefit from systematic study of the history of musical syntax, or from psychological analyses of composers and compositions in relation to perceptual, affective, and social responses to their music. === Genre and technique === A music genre is a conventional category that identifies some pieces of music as belonging to a shared tradition or set of conventions. It is to be distinguished from musical form and musical style, although in practice these terms are sometimes used interchangeably. Music can be divided into different genres in many different ways. The artistic nature of music means that these classifications are often subjective and controversial, and some genres may overlap. There are even varying academic definitions of the term genre itself. In his book Form in Tonal Music, Douglass M. Green distinguishes between genre and form. He lists madrigal, motet, canzona, ricercar, and dance as examples of genres from the Renaissance period. To further clarify the meaning of genre, Green writes, "Beethoven's Op. 61 and Mendelssohn's Op. 64 are identical in genre—both are violin concertos—but different in form. However, Mozart's Rondo for Piano, K. 511, and the Agnus Dei from his Mass, K. 317 are quite different in genre but happen to be similar in form." Some, like Peter van der Merwe, treat the terms genre and style as the same, saying that genre should be defined as pieces of music that came from the same style or "basic musical language." Others, such as Allan F. Moore, state that genre and style are two separate terms, and that secondary characteristics such as subject matter can also differentiate between genres. A music genre or subgenre may also be defined by the musical techniques, the style, the cultural context, and the content and spirit of the themes. Geographical origin is sometimes used to identify a music genre, though a single geographical category will often include a wide variety of subgenres. Timothy Laurie argues that "since the early 1980s, genre has graduated from being a subset of popular music studies to being an almost ubiquitous framework for constituting and evaluating musical research objects". Musical technique is the ability of instrumental and vocal musicians to exert optimal control of their instruments or vocal cords to produce precise musical effects. Improving technique generally entails practicing exercises that improve muscular sensitivity and agility. To improve technique, musicians often practice fundamental patterns of notes such as the natural, minor, major, and chromatic scales, minor and major triads, dominant and diminished sevenths, formula patterns and arpeggios. For example, triads and sevenths teach how to play chords with accuracy and speed. 
Scales teach how to move quickly and gracefully from one note to another (usually by step). Arpeggios teach how to play broken chords over larger intervals. Many of these components of music are found in compositions; for example, a scale is a very common element of classical and romantic era compositions. Heinrich Schenker argued that musical technique's "most striking and distinctive characteristic" is repetition. Works known as études (meaning "study") are also frequently used for the improvement of technique. === Mathematics === Music theorists sometimes use mathematics to understand music, and although music has no axiomatic foundation in modern mathematics, mathematics is "the basis of sound" and sound itself "in its musical aspects... exhibits a remarkable array of number properties", simply because nature itself "is amazingly mathematical". The attempt to structure and communicate new ways of composing and hearing music has led to musical applications of set theory, abstract algebra and number theory. Some composers have incorporated the golden ratio and Fibonacci numbers into their work. There is a long history of examining the relationships between music and mathematics. Though ancient Chinese, Egyptians and Mesopotamians are known to have studied the mathematical principles of sound, the Pythagoreans (in particular Philolaus and Archytas) of ancient Greece were the first researchers known to have investigated the expression of musical scales in terms of numerical ratios. In the modern era, musical set theory uses the language of mathematical set theory in an elementary way to organize musical objects and describe their relationships. To analyze the structure of a piece of (typically atonal) music using musical set theory, one usually starts with a set of tones, which could form motives or chords. By applying simple operations such as transposition and inversion, one can discover deep structures in the music. Operations such as transposition and inversion are called isometries because they preserve the intervals between tones in a set. Expanding on the methods of musical set theory, some theorists have used abstract algebra to analyze music. For example, the pitch classes in an equally tempered octave form an abelian group with 12 elements. It is possible to describe just intonation in terms of a free abelian group. === Serial composition and set theory === In music theory, serialism is a method or technique of composition that uses a series of values to manipulate different musical elements. Serialism began primarily with Arnold Schoenberg's twelve-tone technique, though his contemporaries were also working to establish serialism as one example of post-tonal thinking. Twelve-tone technique orders the twelve notes of the chromatic scale, forming a row or series and providing a unifying basis for a composition's melody, harmony, structural progressions, and variations. Other types of serialism also work with sets, collections of objects, but not necessarily with fixed-order series, and extend the technique to other musical dimensions (often called "parameters"), such as duration, dynamics, and timbre. The idea of serialism is also applied in various ways in the visual arts, design, and architecture. "Integral serialism" or "total serialism" is the use of series for aspects such as duration, dynamics, and register as well as pitch.
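Because both the set-theoretic and serial techniques described above reduce to arithmetic on pitch classes modulo 12, they are easy to sketch in code. The row below is an arbitrary example, not taken from any particular composition; the sketch shows only the basic transposition, inversion, and retrograde operations.

```python
# Minimal sketch of the modular arithmetic behind pitch-class set theory and
# twelve-tone serialism: pitch classes are integers mod 12 (C = 0, C# = 1, ...).

def transpose(pcs, n):
    """T_n: raise every pitch class by n semitones, mod 12."""
    return [(p + n) % 12 for p in pcs]

def invert(pcs):
    """I: reflect every pitch class around 0 (C), mod 12."""
    return [(-p) % 12 for p in pcs]

def retrograde(pcs):
    """R: reverse the order of the series."""
    return list(reversed(pcs))

# An arbitrary twelve-tone row (all twelve pitch classes, each used once).
row = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9]

print(transpose(row, 5))               # P5: the row transposed up five semitones
print(invert(row))                     # I0: the inversion of the row
print(retrograde(transpose(row, 5)))   # R5: retrograde of the transposed row
```

Transposition and inversion preserve the intervals between pitch classes, which is why they are called isometries above; the twelve transpositions by themselves form a cyclic group, a small instance of the abelian-group structure mentioned in the mathematics section.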
Other terms, used especially in Europe to distinguish post-World War II serial music from twelve-tone music and its American extensions, are "general serialism" and "multiple serialism". Musical set theory provides concepts for categorizing musical objects and describing their relationships. Many of the notions were first elaborated by Howard Hanson (1960) in connection with tonal music, and then mostly developed in connection with atonal music by theorists such as Allen Forte (1973), drawing on the work in twelve-tone theory of Milton Babbitt. The concepts of set theory are very general and can be applied to tonal and atonal styles in any equally tempered tuning system, and to some extent more generally than that. One branch of musical set theory deals with collections (sets and permutations) of pitches and pitch classes (pitch-class set theory), which may be ordered or unordered, and can be related by musical operations such as transposition, inversion, and complementation. The methods of musical set theory are sometimes applied to the analysis of rhythm as well. === Musical semiotics === Music semiology (semiotics) is the study of signs as they pertain to music on a variety of levels. Following Roman Jakobson, Kofi Agawu adopts the idea of musical semiosis being introversive or extroversive—that is, musical signs within a text and without. "Topics", or various musical conventions (such as horn calls, dance forms, and styles), have been treated suggestively by Agawu, among others. The notion of gesture is beginning to play a large role in musico-semiotic enquiry. "There are strong arguments that music inhabits a semiological realm which, on both ontogenetic and phylogenetic levels, has developmental priority over verbal language." Writers on music semiology include Kofi Agawu (on topical theory and Heinrich Schenker), Robert Hatten (on topic, gesture), Raymond Monelle (on topic, musical meaning), Jean-Jacques Nattiez (on introversive taxonomic analysis and ethnomusicological applications), Anthony Newcomb (on narrativity), and Eero Tarasti. Roland Barthes, himself a semiotician and skilled amateur pianist, wrote about music in Image-Music-Text, The Responsibilities of Form, and Eiffel Tower, though he did not consider music to be a semiotic system. Signs and meanings in music arise essentially through the connotations of sounds, and through the social construction, appropriation and amplification of certain meanings associated with these connotations. The work of Philip Tagg (Ten Little Tunes, Fernando the Flute, Music's Meanings) provides one of the most complete and systematic analyses of the relation between musical structures and connotations in western and especially popular, television and film music. The work of Leonard B. Meyer in Style and Music theorizes the relationship between ideologies and musical structures and the phenomena of style change, and focuses on romanticism as a case study. === Education and careers === Music theory in the practical sense has been a part of education at conservatories and music schools for centuries, but the status music theory currently has within academic institutions is relatively recent. In the 1970s, few universities had dedicated music theory programs, many music theorists had been trained as composers or historians, and there was a belief among theorists that the teaching of music theory was inadequate and that the subject was not properly recognised as a scholarly discipline in its own right.
A growing number of scholars began promoting the idea that music theory should be taught by theorists, rather than composers, performers or music historians. This led to the founding of the Society for Music Theory in the United States in 1977. In Europe, the French Société d'Analyse musicale was founded in 1985. It called the First European Conference of Music Analysis for 1989, which resulted in the foundation of the Société belge d'Analyse musicale in Belgium and the Gruppo analisi e teoria musicale in Italy the same year, the Society for Music Analysis in the UK in 1991, the Vereniging voor Muziektheorie in the Netherlands in 1999 and the Gesellschaft für Musiktheorie in Germany in 2000. They were later followed by the Russian Society for Music Theory in 2013, the Polish Society for Music Analysis in 2015 and the Sociedad de Análisis y Teoría Musical in Spain in 2020, and others are in formation. These societies coordinate the publication of music theory scholarship and support the professional development of music theory researchers. In 2018 they formed a network of European societies for Theory and/or Analysis of Music, the EuroT&AM. As part of their initial training, music theorists will typically complete a B.Mus or a B.A. in music (or a related field) and in many cases an M.A. in music theory. Some individuals apply directly from a bachelor's degree to a PhD, and in these cases, they may not receive an M.A. In the 2010s, given the increasingly interdisciplinary nature of university graduate programs, some applicants for music theory PhD programs may have academic training both in music and outside of music (e.g., a student may apply with a B.Mus. and a Masters in Music Composition or Philosophy of Music). Most music theorists work as instructors, lecturers or professors in colleges, universities or conservatories. The job market for tenure-track professor positions is very competitive: with an average of around 25 tenure-track positions advertised per year in the past decade, 80–100 PhD graduates are produced each year (according to the Survey of Earned Doctorates) who compete not only with each other for those positions but also with job seekers who received PhDs in previous years and are still searching for a tenure-track job. Applicants must hold a completed PhD or the equivalent degree (or expect to receive one within a year of being hired—called an "ABD", for "All But Dissertation" stage) and (for more senior positions) have a strong record of publishing in peer-reviewed journals. Some PhD-holding music theorists are only able to find insecure positions as sessional lecturers. The job tasks of a music theorist are the same as those of a professor in any other humanities discipline: teaching undergraduate and/or graduate classes in this area of specialization and, in many cases, some general courses (such as Music appreciation or Introduction to Music Theory), conducting research in this area of expertise, publishing research articles in peer-reviewed journals, authoring book chapters, books or textbooks, traveling to conferences to present papers and learn about research in the field, and, if the program includes a graduate school, supervising M.A. and PhD students and giving them guidance on the preparation of their theses and dissertations. Some music theory professors may take on senior administrative positions in their institution, such as Dean or Chair of the School of Music.
== See also == List of music theorists Music psychology Musicology Theory of painting == Notes == == References == === Sources === == Further reading == == External links == Dillen, Oscar van, Outline of basic music theory (2011)
Wikipedia/Music_theory
In theoretical chemistry, Specific ion Interaction Theory (SIT theory) is a theory used to estimate single-ion activity coefficients in electrolyte solutions at relatively high concentrations. It does so by taking into consideration interaction coefficients between the various ions present in solution. Interaction coefficients are determined from equilibrium constant values obtained with solutions at various ionic strengths. The determination of SIT interaction coefficients also yields the value of the equilibrium constant at infinite dilution. == Background == This theory arises from the need to derive activity coefficients of solutes when their concentrations are too high to be predicted accurately by the Debye–Hückel theory. Activity coefficients are needed because an equilibrium constant is defined in chemical thermodynamics as the ratio of activities but is usually measured using concentrations. The protonation of a monobasic acid will be used to simplify the presentation. The equilibrium for protonation of the conjugate base, A− of the acid HA, may be written as: H + + A − ↽ − − ⇀ HA {\displaystyle {\ce {H+ + A- <=> HA}}} for which the association constant K is defined as: K = { HA } { H + } { A − } {\displaystyle K={\frac {{\ce {\{HA\}}}}{{\ce {\{H^+\}\{A^{-}\}}}}}} where {HA}, {H+}, and {A–} represent the activity of the corresponding chemical species. The role of water in the association equilibrium is ignored as in all but the most concentrated solutions the activity of water is constant. K is defined here as an association constant, the reciprocal of an acid dissociation constant. Each activity term { } can be expressed as the product of a concentration [ ] and an activity coefficient γ. For example, { H A } = [ H A ] × γ H A {\displaystyle \{HA\}=[HA]\times \gamma _{HA}} where the square brackets represent a concentration and γ is an activity coefficient. Thus the equilibrium constant can be expressed as a product of a concentration ratio and an activity coefficient ratio. K = [ HA ] [ H + ] [ A − ] × γ HA γ H + γ A − {\displaystyle K={\frac {{\ce {[HA]}}}{{\ce {[H^+][A^{-}]}}}}\times {\frac {\gamma _{{\ce {HA}}}}{\gamma _{{\ce {H^+}}}\gamma _{{\ce {A^-}}}}}} Taking the logarithms: log ⁡ K = log ⁡ K 0 + log ⁡ γ HA − log ⁡ γ H + − log ⁡ γ A − {\displaystyle \log K=\log K^{0}+\log \gamma _{{\ce {HA}}}-\log \gamma _{{\ce {H^+}}}-\log \gamma _{{\ce {A^{-}}}}} where: K 0 = [ HA ] [ H + ] [ A − ] {\displaystyle K^{0}={\frac {{\ce {[HA]}}}{{\ce {[H^+][A^{-}]}}}}} at infinite dilution of the solution K0 is the hypothetical value that the equilibrium constant K would have if the solution of the acid HA was infinitely diluted and that the activity coefficients of all the species in solution were equal to one. It is a common practice to determine equilibrium constants in solutions containing an electrolyte at high ionic strength such that the activity coefficients are effectively constant. However, when the ionic strength is changed the measured equilibrium constant will also change, so there is a need to estimate individual (single ion) activity coefficients. Debye–Hückel theory provides a means to do this, but it is accurate only at very low concentrations. Hence the need for an extension to Debye–Hückel theory. Two main approaches have been used. SIT theory, discussed here and Pitzer equations. == Development == SIT theory was first proposed by Brønsted in 1922 and was further developed by Guggenheim in 1955. 
Scatchard extended the theory in 1936 to allow the interaction coefficients to vary with ionic strength. The theory was mainly of theoretical interest until 1945 because of the difficulty of determining equilibrium constants before the glass electrode was invented. Subsequently, Ciavatta developed the theory further in 1980. The activity coefficient of the jth ion in solution is written as γj when concentrations are on the molal concentration scale and as yj when concentrations are on the molar concentration scale. (The molality scale is preferred in thermodynamics because molal concentrations are independent of temperature). The basic idea of SIT theory is that the activity coefficient can be expressed as log ⁡ γ j = − z j 2 0.51 I 1 + 1.5 I + ∑ k ϵ j k m k {\displaystyle \log \gamma _{j}=-z_{j}^{2}{\frac {0.51{\sqrt {I}}}{1+1.5{\sqrt {I}}}}+\sum _{k}\epsilon _{jk}m_{k}} (molalities) or log ⁡ y j = − z j 2 0.51 I 1 + 1.5 I + ∑ k b j k c k {\displaystyle \log y_{j}=-z_{j}^{2}{\frac {0.51{\sqrt {I}}}{1+1.5{\sqrt {I}}}}+\sum _{k}b_{jk}c_{k}} (molar concentrations) where z is the electrical charge on the ion, I is the ionic strength, ε and b are interaction coefficients and m and c are concentrations. The summation extends over the other ions present in solution, which includes the ions produced by the background electrolyte. The first term in these expressions comes from Debye–Hückel theory. The second term shows how the contributions from "interaction" are dependent on concentration. Thus, the interaction coefficients are used as corrections to Debye–Hückel theory when concentrations are higher than the region of validity of that theory. The activity coefficient of a neutral species can be assumed to depend linearly on ionic strength, as in log ⁡ γ = k m I {\displaystyle \log \gamma =k_{m}I\,} where km is a Sechenov coefficient. In the example of a monobasic acid HA, assuming that the background electrolyte is the salt NaNO3, the interaction coefficients will be for interaction between H+ and NO3−, and between A− and Na+. == Determination and application == Firstly, equilibrium constants are determined at a number of different ionic strengths, at a chosen temperature and particular background electrolyte. The interaction coefficients are then determined by fitting to the observed equilibrium constant values. The procedure also provides the value of K at infinite dilution. It is not limited to monobasic acids and can also be applied to metal complexes. The SIT and Pitzer approaches have been compared recently. The Bromley equation has also been compared to both SIT and Pitzer equations. It has been shown that the SIT equation is a practical simplification of a more complicated hypothesis that is rigorously applicable only at trace concentrations of reactant and product species immersed in a surrounding electrolyte medium. == References == == External links == SIT program A PC program to correct stability constants for changes in ionic strength using SIT theory and to estimate SIT parameters with full statistics. Contains an editable database of published SIT parameters. It also provides routines to inter-convert MolaRities (c) and MolaLities (m), and lg K(c) and lg K(m).
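As a concrete illustration of the SIT expression given above, the following sketch evaluates log10 γ for a single ion at a given ionic strength on the molality scale, using the 0.51 Debye–Hückel constant from the formula. The interaction coefficient and concentrations are placeholder values chosen for the example, not measured data.

```python
import math

def log10_gamma_sit(z, ionic_strength, interactions):
    """SIT estimate of log10 of a single-ion activity coefficient:
    log10(gamma_j) = -z_j^2 * 0.51*sqrt(I)/(1 + 1.5*sqrt(I)) + sum_k eps_jk * m_k
    `interactions` is a list of (epsilon_jk, m_k) pairs for the other ions present."""
    sqrt_i = math.sqrt(ionic_strength)
    debye_huckel = -z ** 2 * 0.51 * sqrt_i / (1 + 1.5 * sqrt_i)
    return debye_huckel + sum(eps * m for eps, m in interactions)

# Placeholder example: a singly charged anion in a 0.5 mol/kg background
# electrolyte, with an assumed interaction coefficient of 0.1 kg/mol toward
# the background cation.
print(log10_gamma_sit(z=-1, ionic_strength=0.5, interactions=[(0.1, 0.5)]))
```

In practice the interaction coefficient would not be assumed but fitted to equilibrium constants measured at several ionic strengths, as described in the Determination and application section.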
Wikipedia/Specific_ion_interaction_theory
The history of science covers the development of science from ancient times to the present. It encompasses all three major branches of science: natural, social, and formal. Protoscience, early sciences, and natural philosophies such as alchemy and astrology that existed during the Bronze Age, Iron Age, classical antiquity and the Middle Ages, declined during the early modern period after the establishment of formal disciplines of science in the Age of Enlightenment. The earliest roots of scientific thinking and practice can be traced to Ancient Egypt and Mesopotamia during the 3rd and 2nd millennia BCE. These civilizations' contributions to mathematics, astronomy, and medicine influenced later Greek natural philosophy of classical antiquity, wherein formal attempts were made to provide explanations of events in the physical world based on natural causes. After the fall of the Western Roman Empire, knowledge of Greek conceptions of the world deteriorated in Latin-speaking Western Europe during the early centuries (400 to 1000 CE) of the Middle Ages, but continued to thrive in the Greek-speaking Byzantine Empire. Aided by translations of Greek texts, the Hellenistic worldview was preserved and absorbed into the Arabic-speaking Muslim world during the Islamic Golden Age. The recovery and assimilation of Greek works and Islamic inquiries into Western Europe from the 10th to 13th century revived the learning of natural philosophy in the West. Traditions of early science were also developed in ancient India and separately in ancient China, the Chinese model having influenced Vietnam, Korea and Japan before Western exploration. Among the Pre-Columbian peoples of Mesoamerica, the Zapotec civilization established their first known traditions of astronomy and mathematics for producing calendars, followed by other civilizations such as the Maya. Natural philosophy was transformed by the Scientific Revolution that transpired during the 16th and 17th centuries in Europe, as new ideas and discoveries departed from previous Greek conceptions and traditions. The New Science that emerged was more mechanistic in its worldview, more integrated with mathematics, and more reliable and open as its knowledge was based on a newly defined scientific method. More "revolutions" in subsequent centuries soon followed. The chemical revolution of the 18th century, for instance, introduced new quantitative methods and measurements for chemistry. In the 19th century, new perspectives regarding the conservation of energy, age of Earth, and evolution came into focus. And in the 20th century, new discoveries in genetics and physics laid the foundations for new sub disciplines such as molecular biology and particle physics. Moreover, industrial and military concerns as well as the increasing complexity of new research endeavors ushered in the era of "big science," particularly after World War II. == Approaches to history of science == The nature of the history of science is a topic of debate (as is, by implication, the definition of science itself). The history of science is often seen as a linear story of progress, but historians have come to see the story as more complex. Alfred Edward Taylor has characterised lean periods in the advance of scientific discovery as "periodical bankruptcies of science". Science is a human activity, and scientific contributions have come from people from a wide range of different backgrounds and cultures. 
Historians of science increasingly see their field as part of a global history of exchange, conflict and collaboration. The relationship between science and religion has been variously characterized in terms of "conflict", "harmony", "complexity", and "mutual independence", among others. Events in Europe such as the Galileo affair of the early 17th century – associated with the scientific revolution and the Age of Enlightenment – led scholars such as John William Draper to postulate (c. 1874) a conflict thesis, suggesting that religion and science have been in conflict methodologically, factually and politically throughout history. The "conflict thesis" has since lost favor among the majority of contemporary scientists and historians of science. However, some contemporary philosophers and scientists, such as Richard Dawkins, still subscribe to this thesis. Historians have emphasized that trust is necessary for agreement on claims about nature. In this light, the 1660 establishment of the Royal Society and its code of experiment – trustworthy because witnessed by its members – has become an important chapter in the historiography of science. Many people in modern history (typically women and persons of color) were excluded from elite scientific communities and characterized by the science establishment as inferior. Historians in the 1980s and 1990s described the structural barriers to participation and began to recover the contributions of overlooked individuals. Historians have also investigated the mundane practices of science such as fieldwork and specimen collection, correspondence, drawing, record-keeping, and the use of laboratory and field equipment. == Prehistory == In prehistoric times, knowledge and technique were passed from generation to generation in an oral tradition. For instance, the domestication of maize for agriculture has been dated to about 9,000 years ago in southern Mexico, before the development of writing systems. Similarly, archaeological evidence indicates the development of astronomical knowledge in preliterate societies. The oral tradition of preliterate societies had several features, the first of which was its fluidity. New information was constantly absorbed and adjusted to new circumstances or community needs. There were no archives or reports. This fluidity was closely related to the practical need to explain and justify a present state of affairs. Another feature was the tendency to describe the universe as just sky and earth, with a potential underworld. They were also prone to identify causes with beginnings, thereby providing a historical origin with an explanation. There was also a reliance on a "medicine man" or "wise woman" for healing, knowledge of divine or demonic causes of diseases, and in more extreme cases, for rituals such as exorcism, divination, songs, and incantations. Finally, there was an inclination to unquestioningly accept explanations that might be deemed implausible in more modern times while at the same time not being aware that such credulous behaviors could have posed problems. The development of writing enabled humans to store and communicate knowledge across generations with much greater accuracy. Its invention was a prerequisite for the development of philosophy and later science in ancient times. Moreover, the extent to which philosophy and science would flourish in ancient times depended on the efficiency of a writing system (e.g., use of alphabets). 
== Ancient Near East == The earliest roots of science can be traced to the Ancient Near East c. 3000–1200 BCE – in particular to Ancient Egypt and Mesopotamia. === Ancient Egypt === ==== Number system and geometry ==== Starting c. 3000 BCE, the ancient Egyptians developed a numbering system that was decimal in character and had oriented their knowledge of geometry to solving practical problems such as those of surveyors and builders. Their development of geometry was itself a necessary development of surveying to preserve the layout and ownership of farmland, which was flooded annually by the Nile. The 3-4-5 right triangle and other rules of geometry were used to build rectilinear structures, and the post and lintel architecture of Egypt. ==== Disease and healing ==== Egypt was also a center of alchemy research for much of the Mediterranean. According to the medical papyri (written c. 2500–1200 BCE), the ancient Egyptians believed that disease was mainly caused by the invasion of bodies by evil forces or spirits. Thus, in addition to medicine, therapies included prayer, incantation, and ritual. The Ebers Papyrus, written c. 1600 BCE, contains medical recipes for treating diseases related to the eyes, mouth, skin, internal organs, and extremities, as well as abscesses, wounds, burns, ulcers, swollen glands, tumors, headaches, and bad breath. The Edwin Smith Papyrus, written at about the same time, contains a surgical manual for treating wounds, fractures, and dislocations. The Egyptians believed that the effectiveness of their medicines depended on the preparation and administration under appropriate rituals. Medical historians believe that ancient Egyptian pharmacology, for example, was largely ineffective. Both the Ebers and Edwin Smith papyri applied the following components to the treatment of disease: examination, diagnosis, treatment, and prognosis, which display strong parallels to the basic empirical method of science and, according to G. E. R. Lloyd, played a significant role in the development of this methodology. ==== Calendar ==== The ancient Egyptians even developed an official calendar that contained twelve months, thirty days each, and five days at the end of the year. Unlike the Babylonian calendar or the ones used in Greek city-states at the time, the official Egyptian calendar was much simpler as it was fixed and did not take lunar and solar cycles into consideration. === Mesopotamia === The ancient Mesopotamians had extensive knowledge about the chemical properties of clay, sand, metal ore, bitumen, stone, and other natural materials, and applied this knowledge to practical use in manufacturing pottery, faience, glass, soap, metals, lime plaster, and waterproofing. Metallurgy required knowledge about the properties of metals. Nonetheless, the Mesopotamians seem to have had little interest in gathering information about the natural world for the mere sake of gathering information and were far more interested in studying the manner in which the gods had ordered the universe. Biology of non-human organisms was generally only written about in the context of mainstream academic disciplines. Animal physiology was studied extensively for the purpose of divination; the anatomy of the liver, which was seen as an important organ in haruspicy, was studied in particularly intensive detail. Animal behavior was also studied for divinatory purposes. 
Most information about the training and domestication of animals was probably transmitted orally without being written down, but one text dealing with the training of horses has survived. ==== Mesopotamian medicine ==== The ancient Mesopotamians had no distinction between "rational science" and magic. When a person became ill, doctors prescribed magical formulas to be recited as well as medicinal treatments. The earliest medical prescriptions appear in Sumerian during the Third Dynasty of Ur (c. 2112 BCE – c. 2004 BCE). The most extensive Babylonian medical text, however, is the Diagnostic Handbook written by the ummânū, or chief scholar, Esagil-kin-apli of Borsippa, during the reign of the Babylonian king Adad-apla-iddina (1069–1046 BCE). In East Semitic cultures, the main medicinal authority was a kind of exorcist-healer known as an āšipu. The profession was generally passed down from father to son and was held in extremely high regard. Of less frequent recourse was another kind of healer known as an asu, who corresponds more closely to a modern physician and treated physical symptoms using primarily folk remedies composed of various herbs, animal products, and minerals, as well as potions, enemas, and ointments or poultices. These physicians, who could be either male or female, also dressed wounds, set limbs, and performed simple surgeries. The ancient Mesopotamians also practiced prophylaxis and took measures to prevent the spread of disease. ==== Astronomy and celestial divination ==== In Babylonian astronomy, records of the motions of the stars, planets, and the moon are left on thousands of clay tablets created by scribes. Even today, astronomical periods identified by Mesopotamian proto-scientists are still widely used in Western calendars such as the solar year and the lunar month. Using this data, they developed mathematical methods to compute the changing length of daylight in the course of the year, predict the appearances and disappearances of the Moon and planets, and eclipses of the Sun and Moon. Only a few astronomers' names are known, such as that of Kidinnu, a Chaldean astronomer and mathematician. Kiddinu's value for the solar year is in use for today's calendars. Babylonian astronomy was "the first and highly successful attempt at giving a refined mathematical description of astronomical phenomena." According to the historian A. Aaboe, "all subsequent varieties of scientific astronomy, in the Hellenistic world, in India, in Islam, and in the West—if not indeed all subsequent endeavour in the exact sciences—depend upon Babylonian astronomy in decisive and fundamental ways." To the Babylonians and other Near Eastern cultures, messages from the gods or omens were concealed in all natural phenomena that could be deciphered and interpreted by those who are adept. Hence, it was believed that the gods could speak through all terrestrial objects (e.g., animal entrails, dreams, malformed births, or even the color of a dog urinating on a person) and celestial phenomena. Moreover, Babylonian astrology was inseparable from Babylonian astronomy. ==== Mathematics ==== The Mesopotamian cuneiform tablet Plimpton 322, dating to the 18th century BCE, records a number of Pythagorean triplets (3, 4, 5) and (5, 12, 13) ..., hinting that the ancient Mesopotamians might have been aware of the Pythagorean theorem over a millennium before Pythagoras. 
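The arithmetic behind this claim is easy to check: a triple (a, b, c) is Pythagorean when a² + b² = c², and both triples cited satisfy it:

```latex
3^{2}+4^{2}=9+16=25=5^{2}, \qquad 5^{2}+12^{2}=25+144=169=13^{2}.
```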
== Ancient and medieval South Asia and East Asia == Mathematical achievements from Mesopotamia had some influence on the development of mathematics in India, and there were confirmed transmissions of mathematical ideas between India and China, which were bidirectional. Nevertheless, the mathematical and scientific achievements in India and particularly in China occurred largely independently from those of Europe and the confirmed early influences that these two civilizations had on the development of science in Europe in the pre-modern era were indirect, with Mesopotamia and later the Islamic World acting as intermediaries. The arrival of modern science, which grew out of the Scientific Revolution, in India and China and the greater Asian region in general can be traced to the scientific activities of Jesuit missionaries who were interested in studying the region's flora and fauna during the 16th to 17th century. === India === ==== Mathematics ==== The earliest traces of mathematical knowledge in the Indian subcontinent appear with the Indus Valley Civilisation (c. 3300 – c. 1300 BCE). The people of this civilization made bricks whose dimensions were in the proportion 4:2:1, which is favorable for the stability of a brick structure. They also tried to standardize measurement of length to a high degree of accuracy. They designed a ruler—the Mohenjo-daro ruler—whose length of approximately 1.32 in (34 mm) was divided into ten equal parts. Bricks manufactured in ancient Mohenjo-daro often had dimensions that were integral multiples of this unit of length. The Bakhshali manuscript contains problems involving arithmetic, algebra and geometry, including mensuration. The topics covered include fractions, square roots, arithmetic and geometric progressions, solutions of simple equations, simultaneous linear equations, quadratic equations and indeterminate equations of the second degree. In the 3rd century BCE, Pingala presents the Pingala-sutras, the earliest known treatise on Sanskrit prosody. He also presents a numerical system by adding one to the sum of place values. Pingala's work also includes material related to the Fibonacci numbers, called mātrāmeru. Indian astronomer and mathematician Aryabhata (476–550), in his Aryabhatiya (499), introduced the sine function in trigonometry and the number 0. In 628, Brahmagupta suggested that gravity was a force of attraction. He also lucidly explained the use of zero as both a placeholder and a decimal digit, along with the Hindu–Arabic numeral system now used universally throughout the world. Arabic translations of the two astronomers' texts were soon available in the Islamic world, introducing what would become Arabic numerals to the Islamic world by the 9th century. Narayana Pandita (1340–1400) was an Indian mathematician. Plofker writes that his texts were the most significant Sanskrit mathematics treatises after those of Bhaskara II, other than the Kerala school. He wrote the Ganita Kaumudi (lit. "Moonlight of mathematics") in 1356 about mathematical operations. The work anticipated many developments in combinatorics. Between the 14th and 16th centuries, the Kerala school of astronomy and mathematics made significant advances in astronomy and especially mathematics, including fields such as trigonometry and analysis. In particular, Madhava of Sangamagrama advanced analysis by providing infinite series and Taylor series expansions of some trigonometric functions and an approximation of pi.
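A minimal sketch of the idea behind that approximation of pi, using the simple arctangent-based series attributed to Madhava (the Madhava–Leibniz series), is shown below; Madhava's own work also included faster-converging correction terms, which are not reproduced here.

```python
# Partial sums of the Madhava-Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
# This bare form converges very slowly; it is shown only to illustrate the
# idea of approximating pi by an infinite series.

def pi_partial_sum(terms):
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

for n in (10, 1_000, 100_000):
    print(n, pi_partial_sum(n))   # approaches 3.14159... as n grows
```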
Parameshvara (1380–1460) presents a case of the mean value theorem in his commentaries on Govindasvāmi and Bhāskara II. The Yuktibhāṣā was written by Jyeshtadeva in 1530. ==== Astronomy ==== The first textual mention of astronomical concepts comes from the Vedas, religious literature of India. According to Sarma (2008): "One finds in the Rigveda intelligent speculations about the genesis of the universe from nonexistence, the configuration of the universe, the spherical self-supporting earth, and the year of 360 days divided into 12 equal parts of 30 days each with a periodical intercalary month." The first 12 chapters of the Siddhanta Shiromani, written by Bhāskara in the 12th century, cover topics such as: mean longitudes of the planets; true longitudes of the planets; the three problems of diurnal rotation; syzygies; lunar eclipses; solar eclipses; latitudes of the planets; risings and settings; the moon's crescent; conjunctions of the planets with each other; conjunctions of the planets with the fixed stars; and the patas of the sun and moon. The 13 chapters of the second part cover the nature of the sphere, as well as significant astronomical and trigonometric calculations based on it. In the Tantrasangraha treatise, Nilakantha Somayaji updated the Aryabhatan model for the interior planets, Mercury and Venus, and the equation that he specified for the center of these planets was more accurate than the ones in European or Islamic astronomy until the time of Johannes Kepler in the 17th century. Jai Singh II of Jaipur constructed five observatories called Jantar Mantars in total, in New Delhi, Jaipur, Ujjain, Mathura and Varanasi; they were completed between 1724 and 1735. ==== Grammar ==== Some of the earliest linguistic activities can be found in Iron Age India (1st millennium BCE) with the analysis of Sanskrit for the purpose of the correct recitation and interpretation of Vedic texts. The most notable grammarian of Sanskrit was Pāṇini (c. 520–460 BCE), whose grammar formulates close to 4,000 rules for Sanskrit. Inherent in his analytic approach are the concepts of the phoneme, the morpheme and the root. The Tolkāppiyam text, composed in the early centuries of the common era, is a comprehensive text on Tamil grammar, which includes sutras on orthography, phonology, etymology, morphology, semantics, prosody, sentence structure and the significance of context in language. ==== Medicine ==== Findings from Neolithic graveyards in what is now Pakistan show evidence of proto-dentistry among an early farming culture. The ancient text Suśrutasamhitā of Suśruta describes procedures on various forms of surgery, including rhinoplasty, the repair of torn ear lobes, perineal lithotomy, cataract surgery, and several other excisions and other surgical procedures. The Charaka Samhita of Charaka describes ancient theories on the human body, etiology, symptomology and therapeutics for a wide range of diseases. It also includes sections on the importance of diet, hygiene, prevention, medical education, and the teamwork of a physician, nurse and patient necessary for recovery to health. ==== Politics and state ==== The Arthaśāstra is an ancient Indian treatise on statecraft, economic policy and military strategy attributed to Kautilya and Viṣhṇugupta, who are traditionally identified with Chāṇakya (c. 350–283 BCE). In this treatise, the behaviors and relationships of the people, the King, the State, the Government Superintendents, Courtiers, Enemies, Invaders, and Corporations are analyzed and documented.
Roger Boesche describes the Arthaśāstra as "a book of political realism, a book analyzing how the political world does work and not very often stating how it ought to work, a book that frequently discloses to a king what calculating and sometimes brutal measures he must carry out to preserve the state and the common good." ==== Logic ==== The development of Indian logic dates back to the Chandahsutra of Pingala and anviksiki of Medhatithi Gautama (c. 6th century BCE); the Sanskrit grammar rules of Pāṇini (c. 5th century BCE); the Vaisheshika school's analysis of atomism (c. 6th century BCE to 2nd century BCE); the analysis of inference by Gotama (c. 6th century BCE to 2nd century CE), founder of the Nyaya school of Hindu philosophy; and the tetralemma of Nagarjuna (c. 2nd century CE). Indian logic stands as one of the three original traditions of logic, alongside the Greek and the Chinese logic. The Indian tradition continued to develop through early to modern times, in the form of the Navya-Nyāya school of logic. In the 2nd century, the Buddhist philosopher Nagarjuna refined the Catuskoti form of logic. The Catuskoti is also often glossed Tetralemma (Greek) which is the name for a largely comparable, but not equatable, 'four corner argument' within the tradition of Classical logic. Navya-Nyāya developed a sophisticated language and conceptual scheme that allowed it to raise, analyse, and solve problems in logic and epistemology. It systematised all the Nyāya concepts into four main categories: sense or perception (pratyakşa), inference (anumāna), comparison or similarity (upamāna), and testimony (sound or word; śabda). === China === ==== Chinese mathematics ==== From the earliest the Chinese used a positional decimal system on counting boards in order to calculate. To express 10, a single rod is placed in the second box from the right. The spoken language uses a similar system to English: e.g. four thousand two hundred and seven. No symbol was used for zero. By the 1st century BCE, negative numbers and decimal fractions were in use and The Nine Chapters on the Mathematical Art included methods for extracting higher order roots by Horner's method and solving linear equations and by Pythagoras' theorem. Cubic equations were solved in the Tang dynasty and solutions of equations of order higher than 3 appeared in print in 1245 CE by Ch'in Chiu-shao. Pascal's triangle for binomial coefficients was described around 1100 by Jia Xian. Although the first attempts at an axiomatization of geometry appear in the Mohist canon in 330 BCE, Liu Hui developed algebraic methods in geometry in the 3rd century CE and also calculated pi to 5 significant figures. In 480, Zu Chongzhi improved this by discovering the ratio 355 113 {\displaystyle {\tfrac {355}{113}}} which remained the most accurate value for 1200 years. ==== Astronomical observations ==== Astronomical observations from China constitute the longest continuous sequence from any civilization and include records of sunspots (112 records from 364 BCE), supernovas (1054), lunar and solar eclipses. By the 12th century, they could reasonably accurately make predictions of eclipses, but the knowledge of this was lost during the Ming dynasty, so that the Jesuit Matteo Ricci gained much favor in 1601 by his predictions. By 635 Chinese astronomers had observed that the tails of comets always point away from the sun. 
From antiquity, the Chinese used an equatorial system for describing the skies and a star map from 940 was drawn using a cylindrical (Mercator) projection. The use of an armillary sphere is recorded from the 4th century BCE and a sphere permanently mounted in equatorial axis from 52 BCE. In 125 CE Zhang Heng used water power to rotate the sphere in real time. This included rings for the meridian and ecliptic. By 1270 they had incorporated the principles of the Arab torquetum. In the Song Empire (960–1279) of Imperial China, Chinese scholar-officials unearthed, studied, and cataloged ancient artifacts. ==== Inventions ==== To better prepare for calamities, Zhang Heng invented a seismometer in 132 CE which provided instant alert to authorities in the capital Luoyang that an earthquake had occurred in a location indicated by a specific cardinal or ordinal direction. Although no tremors could be felt in the capital when Zhang told the court that an earthquake had just occurred in the northwest, a message came soon afterwards that an earthquake had indeed struck 400 to 500 km (250 to 310 mi) northwest of Luoyang (in what is now modern Gansu). Zhang called his device the 'instrument for measuring the seasonal winds and the movements of the Earth' (Houfeng didong yi 候风地动仪), so-named because he and others thought that earthquakes were most likely caused by the enormous compression of trapped air. There are many notable contributors to early Chinese disciplines, inventions, and practices throughout the ages. One of the best examples would be the medieval Song Chinese Shen Kuo (1031–1095), a polymath and statesman who was the first to describe the magnetic-needle compass used for navigation, discovered the concept of true north, improved the design of the astronomical gnomon, armillary sphere, sight tube, and clepsydra, and described the use of drydocks to repair boats. After observing the natural process of the inundation of silt and the find of marine fossils in the Taihang Mountains (hundreds of miles from the Pacific Ocean), Shen Kuo devised a theory of land formation, or geomorphology. He also adopted a theory of gradual climate change in regions over time, after observing petrified bamboo found underground at Yan'an, Shaanxi. If not for Shen Kuo's writing, the architectural works of Yu Hao would be little known, along with the inventor of movable type printing, Bi Sheng (990–1051). Shen's contemporary Su Song (1020–1101) was also a brilliant polymath, an astronomer who created a celestial atlas of star maps, wrote a treatise related to botany, zoology, mineralogy, and metallurgy, and had erected a large astronomical clocktower in Kaifeng city in 1088. To operate the crowning armillary sphere, his clocktower featured an escapement mechanism and the world's oldest known use of an endless power-transmitting chain drive. The Jesuit China missions of the 16th and 17th centuries "learned to appreciate the scientific achievements of this ancient culture and made them known in Europe. Through their correspondence European scientists first learned about the Chinese science and culture." Western academic thought on the history of Chinese technology and science was galvanized by the work of Joseph Needham and the Needham Research Institute. 
Among the technological accomplishments of China were, according to the British scholar Needham, the water-powered celestial globe (Zhang Heng), dry docks, sliding calipers, the double-action piston pump, the blast furnace, the multi-tube seed drill, the wheelbarrow, the suspension bridge, the winnowing machine, gunpowder, the raised-relief map, toilet paper, the efficient harness, along with contributions in logic, astronomy, medicine, and other fields. However, cultural factors prevented these Chinese achievements from developing into "modern science". According to Needham, it may have been the religious and philosophical framework of Chinese intellectuals which made them unable to accept the ideas of laws of nature: It was not that there was no order in nature for the Chinese, but rather that it was not an order ordained by a rational personal being, and hence there was no conviction that rational personal beings would be able to spell out in their lesser earthly languages the divine code of laws which he had decreed aforetime. The Taoists, indeed, would have scorned such an idea as being too naïve for the subtlety and complexity of the universe as they intuited it. == Pre-Columbian Mesoamerica == During the Middle Formative Period (c. 900 BCE – c. 300 BCE) of Pre-Columbian Mesoamerica, the Zapotec civilization, heavily influenced by the Olmec civilization, established the first known full writing system of the region (possibly predated by the Olmec Cascajal Block), as well as the first known astronomical calendar in Mesoamerica. Following a period of initial urban development in the Preclassical period, the Classic Maya civilization (c. 250 CE – c. 900 CE) built on the shared heritage of the Olmecs by developing the most sophisticated systems of writing, astronomy, calendrical science, and mathematics among Mesoamerican peoples. The Maya developed a positional numeral system with a base of 20 that included the use of zero for constructing their calendars. Maya writing, which was developed by 200 BCE, widespread by 100 BCE, and rooted in Olmec and Zapotec scripts, contains easily discernible calendar dates in the form of logographs representing numbers, coefficients, and calendar periods amounting to 20 days and even 20 years for tracking social, religious, political, and economic events in 360-day years. == Classical antiquity and Greco-Roman science == The contributions of the Ancient Egyptians and Mesopotamians in the areas of astronomy, mathematics, and medicine had entered and shaped Greek natural philosophy of classical antiquity, whereby formal attempts were made to provide explanations of events in the physical world based on natural causes. Inquiries were also aimed at such practical goals such as establishing a reliable calendar or determining how to cure a variety of illnesses. The ancient people who were considered the first scientists may have thought of themselves as natural philosophers, as practitioners of a skilled profession (for example, physicians), or as followers of a religious tradition (for example, temple healers). === Pre-socratics === The earliest Greek philosophers, known as the pre-Socratics, provided competing answers to the question found in the myths of their neighbors: "How did the ordered cosmos in which we live come to be?" The pre-Socratic philosopher Thales (640–546 BCE) of Miletus, identified by later authors such as Aristotle as the first of the Ionian philosophers, postulated non-supernatural explanations for natural phenomena. 
He proposed, for example, that land floats on water and that earthquakes are caused by the agitation of the water upon which the land floats, rather than by the god Poseidon. Thales' student Pythagoras of Samos founded the Pythagorean school, which investigated mathematics for its own sake, and was the first to postulate that the Earth is spherical in shape. Leucippus (5th century BCE) introduced atomism, the theory that all matter is made of indivisible, imperishable units called atoms. This was greatly expanded on by his pupil Democritus and later Epicurus. === Natural philosophy === Plato and Aristotle produced the first systematic discussions of natural philosophy, which did much to shape later investigations of nature. Their development of deductive reasoning was of particular importance and usefulness to later scientific inquiry. Plato founded the Platonic Academy in 387 BCE, whose motto was "Let none unversed in geometry enter here," and also turned out many notable philosophers. Plato's student Aristotle introduced empiricism and the notion that universal truths can be arrived at via observation and induction, thereby laying the foundations of the scientific method. Aristotle also produced many biological writings that were empirical in nature, focusing on biological causation and the diversity of life. He made countless observations of nature, especially the habits and attributes of plants and animals on Lesbos, classified more than 540 animal species, and dissected at least 50. Aristotle's writings profoundly influenced subsequent Islamic and European scholarship, though they were eventually superseded in the Scientific Revolution. Aristotle also contributed to theories of the elements and the cosmos. He believed that the celestial bodies (such as the planets and the Sun) were set in motion by something called an unmoved mover. Aristotle tried to explain everything through mathematics and physics, but sometimes explained things such as the motion of celestial bodies through a higher power such as God, as he did not have access to the technological means that would later account for their motion. In addition, Aristotle had many views on the elements. He believed that everything was derived from the elements earth, water, air, fire, and lastly the Aether. The Aether was a celestial element, and therefore made up the matter of the celestial bodies. The elements of earth, water, air and fire were derived from a combination of two of the characteristics of hot, wet, cold, and dry, and all had their inevitable place and motion. In this natural ordering, earth lies closest to "the Earth," followed by water, air, fire, and finally Aether. In addition to the makeup of all things, Aristotle proposed theories as to why things do not simply return to their natural motion. He understood that water sits above earth, air above water, and fire above air in their natural state. He explained that although all elements must return to their natural state, the human body and other living things constrain the elements of which they are made, preventing those elements from returning to their natural state.
The important legacy of this period included substantial advances in factual knowledge, especially in anatomy, zoology, botany, mineralogy, geography, mathematics and astronomy; an awareness of the importance of certain scientific problems, especially those related to the problem of change and its causes; and a recognition of the methodological importance of applying mathematics to natural phenomena and of undertaking empirical research. In the Hellenistic age, scholars frequently employed the principles developed in earlier Greek thought, namely the application of mathematics and deliberate empirical research, in their scientific investigations. Thus, clear unbroken lines of influence lead from ancient Greek and Hellenistic philosophers, to medieval Muslim philosophers and scientists, to the European Renaissance and Enlightenment, to the secular sciences of the modern day. Neither reason nor inquiry began with the Ancient Greeks, but the Socratic method, along with the idea of Forms, did give rise to great advances in geometry, logic, and the natural sciences. According to Benjamin Farrington, former professor of Classics at Swansea University: "Men were weighing for thousands of years before Archimedes worked out the laws of equilibrium; they must have had practical and intuitional knowledge of the principles involved. What Archimedes did was to sort out the theoretical implications of this practical knowledge and present the resulting body of knowledge as a logically coherent system." and again: "With astonishment we find ourselves on the threshold of modern science. Nor should it be supposed that by some trick of translation the extracts have been given an air of modernity. Far from it. The vocabulary of these writings and their style are the source from which our own vocabulary and style have been derived." === Greek astronomy === The astronomer Aristarchus of Samos was the first known person to propose a heliocentric model of the Solar System, while the geographer Eratosthenes accurately calculated the circumference of the Earth. Hipparchus (c. 190 – c. 120 BCE) produced the first systematic star catalog. The level of achievement in Hellenistic astronomy and engineering is impressively shown by the Antikythera mechanism (150–100 BCE), an analog computer for calculating the position of planets. Technological artifacts of similar complexity did not reappear until the 14th century, when mechanical astronomical clocks appeared in Europe. === Hellenistic medicine === There was no defined societal structure for healthcare during the age of Hippocrates. At that time medical knowledge was not systematically organized, and people still relied on purely religious reasoning to explain illnesses. Hippocrates introduced the first healthcare system based on science and clinical protocols. Hippocrates' theories about physics and medicine helped pave the way toward an organized medical structure for society. In medicine, Hippocrates (c. 460–370 BCE) and his followers were the first to describe many diseases and medical conditions and developed the Hippocratic Oath for physicians, still relevant and in use today. Hippocrates' ideas are expressed in the Hippocratic Corpus. The collection contains descriptions of medical philosophies and of how disease and lifestyle choices affect the physical body. Hippocrates promoted a Westernized, professional relationship between physician and patient. Hippocrates is also known as "the Father of Medicine".
Herophilos (335–280 BCE) was the first to base his conclusions on dissection of the human body and to describe the nervous system. Galen (129 – c. 200 CE) performed many audacious operations—including brain and eye surgeries— that were not tried again for almost two millennia. === Greek mathematics === In Hellenistic Egypt, the mathematician Euclid laid down the foundations of mathematical rigor and introduced the concepts of definition, axiom, theorem and proof still in use today in his Elements, considered the most influential textbook ever written. Archimedes, considered one of the greatest mathematicians of all time, is credited with using the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave a remarkably accurate approximation of pi. He is also known in physics for laying the foundations of hydrostatics, statics, and the explanation of the principle of the lever. === Other developments === Theophrastus wrote some of the earliest descriptions of plants and animals, establishing the first taxonomy and looking at minerals in terms of their properties, such as hardness. Pliny the Elder produced one of the largest encyclopedias of the natural world in 77 CE, and was a successor to Theophrastus. For example, he accurately describes the octahedral shape of the diamond and noted that diamond dust is used by engravers to cut and polish other gems owing to its great hardness. His recognition of the importance of crystal shape is a precursor to modern crystallography, while notes on other minerals presages mineralogy. He recognizes other minerals have characteristic crystal shapes, but in one example, confuses the crystal habit with the work of lapidaries. Pliny was the first to show amber was a resin from pine trees, because of trapped insects within them. The development of archaeology has its roots in history and with those who were interested in the past, such as kings and queens who wanted to show past glories of their respective nations. The 5th-century-BCE Greek historian Herodotus was the first scholar to systematically study the past and perhaps the first to examine artifacts. === Greek scholarship under Roman rule === During the rule of Rome, famous historians such as Polybius, Livy and Plutarch documented the rise of the Roman Republic, and the organization and histories of other nations, while statesmen like Julius Caesar, Cicero, and others provided examples of the politics of the republic and Rome's empire and wars. The study of politics during this age was oriented toward understanding history, understanding methods of governing, and describing the operation of governments. The Roman conquest of Greece did not diminish learning and culture in the Greek provinces. On the contrary, the appreciation of Greek achievements in literature, philosophy, politics, and the arts by Rome's upper class coincided with the increased prosperity of the Roman Empire. Greek settlements had existed in Italy for centuries and the ability to read and speak Greek was not uncommon in Italian cities such as Rome. Moreover, the settlement of Greek scholars in Rome, whether voluntarily or as slaves, gave Romans access to teachers of Greek literature and philosophy. Conversely, young Roman scholars also studied abroad in Greece and upon their return to Rome, were able to convey Greek achievements to their Latin leadership. 
And despite the translation of a few Greek texts into Latin, Roman scholars who aspired to the highest level did so using the Greek language. The Roman statesman and philosopher Cicero (106 – 43 BCE) was a prime example. He had studied under Greek teachers in Rome and then in Athens and Rhodes. He mastered considerable portions of Greek philosophy, wrote Latin treatises on several topics, and even wrote Greek commentaries of Plato's Timaeus as well as a Latin translation of it, which has not survived. In the beginning, support for scholarship in Greek knowledge was almost entirely funded by the Roman upper class. There were all sorts of arrangements, ranging from a talented scholar being attached to a wealthy household to owning educated Greek-speaking slaves. In exchange, scholars who succeeded at the highest level had an obligation to provide advice or intellectual companionship to their Roman benefactors, or to even take care of their libraries. The less fortunate or accomplished ones would teach their children or perform menial tasks. The level of detail and sophistication of Greek knowledge was adjusted to suit the interests of their Roman patrons. That meant popularizing Greek knowledge by presenting information that were of practical value such as medicine or logic (for courts and politics) but excluding subtle details of Greek metaphysics and epistemology. Beyond the basics, the Romans did not value natural philosophy and considered it an amusement for leisure time. Commentaries and encyclopedias were the means by which Greek knowledge was popularized for Roman audiences. The Greek scholar Posidonius (c. 135-c. 51 BCE), a native of Syria, wrote prolifically on history, geography, moral philosophy, and natural philosophy. He greatly influenced Latin writers such as Marcus Terentius Varro (116-27 BCE), who wrote the encyclopedia Nine Books of Disciplines, which covered nine arts: grammar, rhetoric, logic, arithmetic, geometry, astronomy, musical theory, medicine, and architecture. The Disciplines became a model for subsequent Roman encyclopedias and Varro's nine liberal arts were considered suitable education for a Roman gentleman. The first seven of Varro's nine arts would later define the seven liberal arts of medieval schools. The pinnacle of the popularization movement was the Roman scholar Pliny the Elder (23/24–79 CE), a native of northern Italy, who wrote several books on the history of Rome and grammar. His most famous work was his voluminous Natural History. After the death of the Roman Emperor Marcus Aurelius in 180 CE, the favorable conditions for scholarship and learning in the Roman Empire were upended by political unrest, civil war, urban decay, and looming economic crisis. In around 250 CE, barbarians began attacking and invading the Roman frontiers. These combined events led to a general decline in political and economic conditions. The living standards of the Roman upper class was severely impacted, and their loss of leisure diminished scholarly pursuits. Moreover, during the 3rd and 4th centuries CE, the Roman Empire was administratively divided into two halves: Greek East and Latin West. These administrative divisions weakened the intellectual contact between the two regions. Eventually, both halves went their separate ways, with the Greek East becoming the Byzantine Empire. Christianity was also steadily expanding during this time and soon became a major patron of education in the Latin West. 
Initially, the Christian church adopted some of the reasoning tools of Greek philosophy in the 2nd and 3rd centuries CE to defend its faith against sophisticated opponents. Nevertheless, Greek philosophy received a mixed reception from leaders and adherents of the Christian faith. Some such as Tertullian (c. 155-c. 230 CE) were vehemently opposed to philosophy, denouncing it as heretic. Others such as Augustine of Hippo (354-430 CE) were ambivalent and defended Greek philosophy and science as the best ways to understand the natural world and therefore treated it as a handmaiden (or servant) of religion. Education in the West began its gradual decline, along with the rest of Western Roman Empire, due to invasions by Germanic tribes, civil unrest, and economic collapse. Contact with the classical tradition was lost in specific regions such as Roman Britain and northern Gaul but continued to exist in Rome, northern Italy, southern Gaul, Spain, and North Africa. == Middle Ages == In the Middle Ages, the classical learning continued in three major linguistic cultures and civilizations: Greek (the Byzantine Empire), Arabic (the Islamic world), and Latin (Western Europe). === Byzantine Empire === ==== Preservation of Greek heritage ==== The fall of the Western Roman Empire led to a deterioration of the classical tradition in the western part (or Latin West) of Europe during the 5th century. In contrast, the Byzantine Empire resisted the barbarian attacks and preserved and improved the learning. While the Byzantine Empire still held learning centers such as Constantinople, Alexandria and Antioch, Western Europe's knowledge was concentrated in monasteries until the development of medieval universities in the 12th centuries. The curriculum of monastic schools included the study of the few available ancient texts and of new works on practical subjects like medicine and timekeeping. In the sixth century in the Byzantine Empire, Isidore of Miletus compiled Archimedes' mathematical works in the Archimedes Palimpsest, where all Archimedes' mathematical contributions were collected and studied. John Philoponus, another Byzantine scholar, was the first to question Aristotle's teaching of physics, introducing the theory of impetus. The theory of impetus was an auxiliary or secondary theory of Aristotelian dynamics, put forth initially to explain projectile motion against gravity. It is the intellectual precursor to the concepts of inertia, momentum and acceleration in classical mechanics. The works of John Philoponus inspired Galileo Galilei ten centuries later. ==== Collapse ==== During the Fall of Constantinople in 1453, a number of Greek scholars fled to North Italy in which they fueled the era later commonly known as the "Renaissance" as they brought with them a great deal of classical learning including an understanding of botany, medicine, and zoology. Byzantium also gave the West important inputs: John Philoponus' criticism of Aristotelian physics, and the works of Dioscorides. === Islamic world === This was the period (8th–14th century CE) of the Islamic Golden Age where commerce thrived, and new ideas and technologies emerged such as the importation of papermaking from China, which made the copying of manuscripts inexpensive. 
==== Translations and Hellenization ==== The eastward transmission of Greek heritage to Western Asia was a slow and gradual process that spanned over a thousand years, from the Asian conquests of Alexander the Great in 335 BCE to the founding of Islam in the 7th century CE. The birth and expansion of Islam during the 7th century was quickly followed by its Hellenization. Knowledge of Greek conceptions of the world was preserved and absorbed into Islamic theology, law, culture, and commerce, a process aided by the translation of traditional Greek texts and some Syriac intermediary sources into Arabic during the 8th–9th centuries. ==== Education and scholarly pursuits ==== Madrasas were centers for many different religious and scientific studies and were the culmination of different institutions such as mosques based around religious studies, housing for out-of-town visitors, and finally educational institutions focused on the natural sciences. Unlike at Western universities, students at a madrasa would learn from one specific teacher, who would issue a certificate at the completion of their studies called an Ijazah. An Ijazah differs from a Western university degree in many ways: one being that it is issued by a single person rather than an institution, and another being that it is not an individual degree declaring adequate knowledge over broad subjects, but rather a license to teach and pass on a very specific set of texts. Women were also allowed to attend madrasas, as both students and teachers, something not seen in Western higher education until the 1800s. Madrasas were more than just academic centers. The Suleymaniye Mosque, for example, built by Suleiman the Magnificent in the 16th century, was one of the most well-known madrasa complexes. It was home to a hospital and medical college, a kitchen, and a children's school, as well as serving as a temporary home for travelers. Higher education at a madrasa (or college) was focused on Islamic law and religious science, and students had to engage in self-study for everything else. Despite the occasional theological backlash, many Islamic scholars of science were able to conduct their work in relatively tolerant urban centers (e.g., Baghdad and Cairo) and were protected by powerful patrons. They could also travel freely and exchange ideas, as there were no political barriers within the unified Islamic state. Islamic science during this time was primarily focused on the correction, extension, articulation, and application of Greek ideas to new problems. ==== Advancements in mathematics ==== Most of the achievements by Islamic scholars during this period were in mathematics. Arabic mathematics was a direct descendant of Greek and Indian mathematics. For instance, what are now known as Arabic numerals originally came from India, but Muslim mathematicians made several key refinements to the number system, such as the introduction of decimal point notation. The mathematician Muhammad ibn Musa al-Khwarizmi (c. 780–850) gave his name to the concept of the algorithm, while the term algebra is derived from al-jabr, the beginning of the title of one of his publications. Islamic trigonometry built on the works of Ptolemy's Almagest and the Indian Siddhanta; to these, Islamic mathematicians added trigonometric functions, drew up tables, and applied trigonometry to spheres and planes. Many of their engineers, instrument makers, and surveyors contributed books in applied mathematics.
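As a brief illustration of the kind of problem treated in the al-jabr tradition just mentioned, a quadratic of the form "squares and roots equal to a number" is solved by completing the square. The specific numbers below are the textbook case traditionally associated with al-Khwarizmi's treatise, given here in modern symbolic notation rather than his original rhetorical (entirely verbal) presentation: {\displaystyle x^{2}+10x=39\;\Rightarrow \;(x+5)^{2}=39+25=64\;\Rightarrow \;x+5=8\;\Rightarrow \;x=3.} Working before negative numbers were accepted as solutions, al-Khwarizmi would give only the positive root.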
It was in astronomy that Islamic mathematicians made their greatest contributions. Al-Battani (c. 858–929) improved the measurements of Hipparchus, which had been preserved in Ptolemy's Hè Megalè Syntaxis (The Great Treatise), known in its Arabic translation as the Almagest. Al-Battani also improved the precision of the measurement of the precession of the Earth's axis. Corrections were made to Ptolemy's geocentric model by al-Battani, Ibn al-Haytham, Averroes and the Maragha astronomers such as Nasir al-Din al-Tusi, Mu'ayyad al-Din al-Urdi and Ibn al-Shatir. Scholars with geometric skills made significant improvements to the earlier classical texts on light and sight by Euclid, Aristotle, and Ptolemy. The earliest surviving Arabic treatises were written in the 9th century by Abū Ishāq al-Kindī, Qustā ibn Lūqā, and (in fragmentary form) Ahmad ibn Isā. Later, in the 11th century, Ibn al-Haytham (known as Alhazen in the West), a mathematician and astronomer, synthesized a new theory of vision based on the works of his predecessors. His new theory included a complete system of geometrical optics, which was set out in great detail in his Book of Optics. His book was translated into Latin and was relied upon as a principal source on the science of optics in Europe until the 17th century. ==== Institutionalization of medicine ==== The medical sciences were prominently cultivated in the Islamic world. Works of Greek medical theory, especially those of Galen, were translated into Arabic, and there was an outpouring of medical texts by Islamic physicians aimed at organizing, elaborating, and disseminating classical medical knowledge. Medical specialties started to emerge, such as the treatment of eye diseases like cataracts. Ibn Sina (known as Avicenna in the West, c. 980–1037) was a prolific Persian medical encyclopedist who wrote extensively on medicine; his two most notable works, the Kitāb al-shifāʾ ("Book of Healing") and The Canon of Medicine, were used as standard medical texts in both the Muslim world and in Europe well into the 17th century. Amongst his many contributions are the discovery of the contagious nature of infectious diseases and the introduction of clinical pharmacology. The institutionalization of medicine was another important achievement in the Islamic world. Although hospitals as an institution for the sick emerged in the Byzantine Empire, the model of institutionalized medicine for all social classes was widespread throughout the Islamic empire. In addition to treating patients, physicians could teach apprentice physicians, as well as write and do research. The discovery of the pulmonary transit of blood in the human body by Ibn al-Nafis occurred in a hospital setting. ==== Decline ==== Islamic science began its decline in the 12th–13th century, before the Renaissance in Europe, due in part to the Christian reconquest of Spain and the Mongol conquests in the East in the 11th–13th century. The Mongols sacked Baghdad, capital of the Abbasid Caliphate, in 1258, which ended the Abbasid empire. Nevertheless, many of the conquerors became patrons of the sciences. Hulagu Khan, for example, who led the siege of Baghdad, became a patron of the Maragheh observatory. Islamic astronomy continued to flourish into the 16th century.
=== Western Europe === By the eleventh century, most of Europe had become Christian; stronger monarchies emerged; borders were restored; technological developments and agricultural innovations were made, increasing the food supply and population. Classical Greek texts were translated from Arabic and Greek into Latin, stimulating scientific discussion in Western Europe. In classical antiquity, Greek and Roman taboos had meant that dissection was usually banned, but in the Middle Ages medical teachers and students at Bologna began to open human bodies, and Mondino de Luzzi (c. 1275–1326) produced the first known anatomy textbook based on human dissection. As a result of the Pax Mongolica, Europeans, such as Marco Polo, began to venture further and further east. The written accounts of Polo and his fellow travelers inspired other Western European maritime explorers to search for a direct sea route to Asia, ultimately leading to the Age of Discovery. Technological advances were also made, such as the early flight of Eilmer of Malmesbury (who had studied mathematics in 11th-century England), and the metallurgical achievements of the Cistercian blast furnace at Laskill. ==== Medieval universities ==== An intellectual revitalization of Western Europe started with the birth of medieval universities in the 12th century. These urban institutions grew from the informal scholarly activities of learned friars who visited monasteries, consulted libraries, and conversed with other fellow scholars. A friar who became well-known would attract a following of disciples, giving rise to a brotherhood of scholars (or collegium in Latin). A collegium might travel to a town or request a monastery to host them. However, if the number of scholars within a collegium grew too large, they would opt to settle in a town instead. As the number of collegia within a town grew, the collegia might request that their king grant them a charter that would convert them into a universitas. Many universities were chartered during this period, with the first in Bologna in 1088, followed by Paris in 1150, Oxford in 1167, and Cambridge in 1231. The granting of a charter meant that the medieval universities were partially sovereign and independent from local authorities. Their independence allowed them to conduct themselves and judge their own members based on their own rules. Furthermore, as initially religious institutions, their faculties and students were protected from capital punishment (e.g., gallows). Such independence was a matter of custom, which could, in principle, be revoked by their respective rulers if they felt threatened. Discussions of various subjects or claims at these medieval institutions, no matter how controversial, were done in a formalized way so as to declare such discussions as being within the bounds of a university and therefore protected by the privileges of that institution's sovereignty. A claim could be described as ex cathedra (literally "from the chair", used within the context of teaching) or ex hypothesi (by hypothesis). This meant that the discussions were presented as purely an intellectual exercise that did not require those involved to commit themselves to the truth of a claim or to proselytize. Modern academic concepts and practices such as academic freedom or freedom of inquiry are remnants of these medieval privileges that were tolerated in the past. 
The curriculum of these medieval institutions centered on the seven liberal arts, which were aimed at providing beginning students with the skills for reasoning and scholarly language. Students would begin their studies with the first three liberal arts, or Trivium (grammar, rhetoric, and logic), followed by the next four liberal arts, or Quadrivium (arithmetic, geometry, astronomy, and music). Those who completed these requirements and received their baccalaureate (or Bachelor of Arts) had the option to join the higher faculty (law, medicine, or theology), which would confer an LLD for a lawyer, an MD for a physician, or a ThD for a theologian. Students who chose to remain in the lower faculty (arts) could work towards a Magister (or Master's) degree and would study three philosophies: metaphysics, ethics, and natural philosophy. Latin translations of Aristotle's works such as De Anima (On the Soul) and the commentaries on them were required readings. As time passed, the lower faculty was allowed to confer its own doctoral degree, called the PhD. Many of the Masters were drawn to encyclopedias and used them as textbooks. But these scholars yearned for the complete original texts of the Ancient Greek philosophers, mathematicians, and physicians such as Aristotle, Euclid, and Galen, which were not available to them at the time. These Ancient Greek texts were to be found in the Byzantine Empire and the Islamic World. ==== Translations of Greek and Arabic sources ==== Contact with the Byzantine Empire, and with the Islamic world during the Reconquista and the Crusades, allowed Latin Europe access to scientific Greek and Arabic texts, including the works of Aristotle, Ptolemy, Isidore of Miletus, John Philoponus, Jābir ibn Hayyān, al-Khwarizmi, Alhazen, Avicenna, and Averroes. European scholars had access to the translation programs of Raymond of Toledo, who sponsored the 12th-century Toledo School of Translators from Arabic to Latin. Later translators like Michael Scotus would learn Arabic in order to study these texts directly. The European universities aided materially in the translation and propagation of these texts and started a new infrastructure which was needed for scientific communities. In fact, the European universities put many works about the natural world and the study of nature at the center of their curriculum, with the result that the "medieval university laid far greater emphasis on science than does its modern counterpart and descendant." At the beginning of the 13th century, there were reasonably accurate Latin translations of the main works of almost all the intellectually crucial ancient authors, allowing a sound transfer of scientific ideas via both the universities and the monasteries. By then, the natural philosophy in these texts began to be extended by scholastics such as Robert Grosseteste, Roger Bacon, Albertus Magnus and Duns Scotus. Precursors of the modern scientific method, influenced by earlier contributions of the Islamic world, can be seen already in Grosseteste's emphasis on mathematics as a way to understand nature, and in the empirical approach admired by Bacon, particularly in his Opus Majus. Pierre Duhem's thesis regarding the Condemnation of 1277 issued by Stephen Tempier, the Bishop of Paris, led to the study of medieval science as a serious discipline, "but no one in the field any longer endorses his view that modern science started in 1277". However, many scholars agree with Duhem's view that the mid-late Middle Ages saw important scientific developments.
==== Medieval science ==== The first half of the 14th century saw much important scientific work, largely within the framework of scholastic commentaries on Aristotle's scientific writings. William of Ockham emphasized the principle of parsimony: natural philosophers should not postulate unnecessary entities, so that motion is not a distinct thing but is only the moving object, and an intermediary "sensible species" is not needed to transmit an image of an object to the eye. Scholars such as Jean Buridan and Nicole Oresme started to reinterpret elements of Aristotle's mechanics. In particular, Buridan developed the theory that impetus was the cause of the motion of projectiles, which was a first step towards the modern concept of inertia. The Oxford Calculators began to mathematically analyze the kinematics of motion, making this analysis without considering the causes of motion. In 1348, the Black Death and other disasters brought a sudden end to this philosophic and scientific development. Yet the rediscovery of ancient texts was stimulated by the Fall of Constantinople in 1453, when many Byzantine scholars sought refuge in the West. Meanwhile, the introduction of printing was to have great effect on European society. The easier dissemination of the printed word democratized learning and allowed ideas such as algebra to propagate more rapidly. These developments paved the way for the Scientific Revolution, in which scientific inquiry, halted at the start of the Black Death, resumed. == Renaissance == === Revival of learning === The renewal of learning in Europe began with 12th-century Scholasticism. The Northern Renaissance showed a decisive shift in focus from Aristotelian natural philosophy to chemistry and the biological sciences (botany, anatomy, and medicine). Thus modern science in Europe resumed in a period of great upheaval: the Protestant Reformation and Catholic Counter-Reformation, the discovery of the Americas by Christopher Columbus, the Fall of Constantinople, and the re-discovery of Aristotle during the Scholastic period all presaged large social and political changes. Thus, a suitable environment was created in which it became possible to question scientific doctrine, in much the same way that Martin Luther and John Calvin questioned religious doctrine. The works of Ptolemy (astronomy) and Galen (medicine) were found not always to match everyday observations. Work by Vesalius on human cadavers found problems with the Galenic view of anatomy. The development of cristallo glass, which appeared in Venice around 1450, also contributed to the advancement of science in the period. The new glass allowed for better spectacles and eventually led to the inventions of the telescope and microscope. Theophrastus' work on rocks, Peri lithōn, remained authoritative for millennia: its interpretation of fossils was not overturned until after the Scientific Revolution. During the Italian Renaissance, Niccolò Machiavelli established the emphasis of modern political science on direct empirical observation of political institutions and actors. Later, the expansion of the scientific paradigm during the Enlightenment further pushed the study of politics beyond normative determinations. In particular, statistics, developed to study the subjects of the state, came to be applied to polling and voting. In archaeology, the 15th and 16th centuries saw the rise of antiquarians in Renaissance Europe who were interested in the collection of artifacts.
=== Scientific Revolution and birth of New Science === The early modern period is seen as a flowering of the European Renaissance. There was a willingness to question previously held truths and search for new answers. This resulted in a period of major scientific advancements, now known as the Scientific Revolution, which led to the emergence of a New Science that was more mechanistic in its worldview, more integrated with mathematics, and more reliable and open, as its knowledge was based on a newly defined scientific method. The Scientific Revolution is a convenient boundary between ancient thought and classical physics, and is traditionally held to have begun in 1543, when the books De humani corporis fabrica (On the Workings of the Human Body) by Andreas Vesalius, and also De Revolutionibus, by the astronomer Nicolaus Copernicus, were first printed. The period culminated with the publication of the Philosophiæ Naturalis Principia Mathematica in 1687 by Isaac Newton, representative of the unprecedented growth of scientific publications throughout Europe. Other significant scientific advances were made during this time by Galileo Galilei, Johannes Kepler, Edmond Halley, William Harvey, Pierre de Fermat, Robert Hooke, Christiaan Huygens, Tycho Brahe, Marin Mersenne, Gottfried Leibniz, Isaac Newton, and Blaise Pascal. In philosophy, major contributions were made by Francis Bacon, Sir Thomas Browne, René Descartes, Baruch Spinoza, Pierre Gassendi, Robert Boyle, and Thomas Hobbes. Christiaan Huygens derived expressions for the centripetal and centrifugal forces and was the first to transfer mathematical inquiry to the description of unobservable physical phenomena. William Gilbert did some of the earliest experiments with electricity and magnetism, establishing that the Earth itself is magnetic. ==== Heliocentrism ==== The heliocentric astronomical model of the universe was refined by Nicolaus Copernicus. Copernicus proposed the idea that the Earth and all the heavenly spheres containing the planets and other objects in the cosmos revolved around the Sun. His heliocentric model also proposed that the stars were fixed, neither rotating on an axis nor in any motion at all. His theory proposed the yearly revolution of the Earth and the other heavenly spheres around the Sun and made it possible to calculate the distances of the planets using deferents and epicycles. Although these calculations were not completely accurate, Copernicus was able to establish the order of the heavenly spheres by distance. The Copernican heliocentric system was a revival of the hypotheses of Aristarchus of Samos and Seleucus of Seleucia. Aristarchus of Samos did propose that the Earth revolved around the Sun but did not say anything about the other heavenly spheres' order, motion, or rotation. Seleucus of Seleucia also proposed the revolution of the Earth around the Sun but did not mention anything about the other heavenly spheres. In addition, Seleucus of Seleucia understood that the Moon revolved around the Earth and that its motion could be used to explain the tides of the oceans, further demonstrating his grasp of the heliocentric idea. == Age of Enlightenment == === Continuation of Scientific Revolution === The Scientific Revolution continued into the Age of Enlightenment, which accelerated the development of modern science.
==== Planets and orbits ==== The heliocentric model revived by Nicolaus Copernicus was followed by the model of planetary motion given by Johannes Kepler in the early 17th century, which proposed that the planets follow elliptical orbits, with the Sun at one focus of the ellipse. In Astronomia Nova (A New Astronomy), the first two of the laws of planetary motion were shown by the analysis of the orbit of Mars. Kepler introduced the revolutionary concept of the planetary orbit, and because of his work astronomical phenomena came to be seen as governed by physical laws. ==== Emergence of chemistry ==== A decisive moment came when "chemistry" was distinguished from alchemy by Robert Boyle in his work The Sceptical Chymist, in 1661, although the alchemical tradition continued for some time after his work. Other important steps included the gravimetric experimental practices of medical chemists like William Cullen, Joseph Black, Torbern Bergman and Pierre Macquer, and the work of Antoine Lavoisier ("father of modern chemistry") on oxygen and the law of conservation of mass, which refuted phlogiston theory. Modern chemistry emerged from the sixteenth through the eighteenth centuries through the material practices and theories promoted by alchemy, medicine, manufacturing and mining. ==== Calculus and Newtonian mechanics ==== In 1687, Isaac Newton published the Principia Mathematica, detailing two comprehensive and successful physical theories: Newton's laws of motion, which led to classical mechanics; and Newton's law of universal gravitation, which describes the fundamental force of gravity. ==== Circulatory system ==== William Harvey published De Motu Cordis in 1628, which revealed his conclusions based on his extensive studies of vertebrate circulatory systems. He identified the central role of the heart, arteries, and veins in producing blood movement in a circuit, and failed to find any confirmation of Galen's pre-existing notions of heating and cooling functions. The history of early modern biology and medicine is often told through the search for the seat of the soul. Galen, in the descriptions of his foundational work in medicine, presents the distinctions between arteries, veins, and nerves using the vocabulary of the soul. ==== Scientific societies and journals ==== A critical innovation was the creation of permanent scientific societies and their scholarly journals, which dramatically sped the diffusion of new ideas. Typical was the founding of the Royal Society in London in 1660 and, in 1665, of its journal, the Philosophical Transactions of the Royal Society, the first scientific journal in English. 1665 also saw the first journal in French, the Journal des sçavans. Drawing on the works of Newton, Descartes, Pascal and Leibniz, science was on a path to modern mathematics, physics and technology by the time of the generation of Benjamin Franklin (1706–1790), Leonhard Euler (1707–1783), Mikhail Lomonosov (1711–1765) and Jean le Rond d'Alembert (1717–1783). Denis Diderot's Encyclopédie, published between 1751 and 1772, brought this new understanding to a wider audience. The impact of this process was not limited to science and technology, but affected philosophy (Immanuel Kant, David Hume), religion (the increasingly significant impact of science upon religion), and society and politics in general (Adam Smith, Voltaire).
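For reference, the two results described above under "Planets and orbits" and "Calculus and Newtonian mechanics" are commonly summarized in modern notation (not in Kepler's or Newton's original formulations) as the focal equation of an elliptical orbit and the inverse-square law of gravitational attraction: {\displaystyle r(\theta )={\frac {a(1-e^{2})}{1+e\cos \theta }},\qquad F=G{\frac {m_{1}m_{2}}{r^{2}}},} where a is the semi-major axis of the ellipse, e its eccentricity, θ the angle measured from the point of closest approach to the Sun, m1 and m2 the two masses separated by a distance r, and G the gravitational constant (a name and symbol introduced well after Newton).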
==== Developments in geology ==== Geology did not undergo systematic restructuring during the Scientific Revolution but instead existed as a cloud of isolated, disconnected ideas about rocks, minerals, and landforms long before it became a coherent science. Robert Hooke formulated a theory of earthquakes, and Nicholas Steno developed the theory of superposition and argued that fossils were the remains of once-living creatures. Beginning with Thomas Burnet's Sacred Theory of the Earth in 1681, natural philosophers began to explore the idea that the Earth had changed over time. Burnet and his contemporaries interpreted Earth's past in terms of events described in the Bible, but their work laid the intellectual foundations for secular interpretations of Earth history. === Post-Scientific Revolution === ==== Bioelectricity ==== During the late 18th century, researchers such as Hugh Williamson and John Walsh experimented on the effects of electricity on the human body. Further studies by Luigi Galvani and Alessandro Volta established the electrical nature of what Volta called galvanism. ==== Developments in geology ==== Modern geology, like modern chemistry, gradually evolved during the 18th and early 19th centuries. Benoît de Maillet and the Comte de Buffon saw the Earth as much older than the 6,000 years envisioned by biblical scholars. Jean-Étienne Guettard and Nicolas Desmarest hiked central France and recorded their observations on some of the first geological maps. Aided by chemical experimentation, naturalists such as Scotland's John Walker, Sweden's Torbern Bergman, and Germany's Abraham Werner created comprehensive classification systems for rocks and minerals—a collective achievement that transformed geology into a cutting-edge field by the end of the eighteenth century. These early geologists also proposed generalized interpretations of Earth history that led James Hutton, Georges Cuvier and Alexandre Brongniart, following in the steps of Steno, to argue that layers of rock could be dated by the fossils they contained: a principle first applied to the geology of the Paris Basin. The use of index fossils became a powerful tool for making geological maps, because it allowed geologists to correlate the rocks in one locality with those of similar age in other, distant localities. ==== Birth of modern economics ==== Adam Smith's An Inquiry into the Nature and Causes of the Wealth of Nations, published in 1776, forms the basis for classical economics. Smith criticized mercantilism, advocating a system of free trade with division of labour. He postulated an "invisible hand" that regulated economic systems made up of actors guided only by self-interest. The "invisible hand" appears only once, lost in the middle of a chapter in the middle of the Wealth of Nations, yet it is often advanced as Smith's central message. ==== Social science ==== Anthropology can best be understood as an outgrowth of the Age of Enlightenment. It was during this period that Europeans attempted systematically to study human behavior. Traditions of jurisprudence, history, philology and sociology developed during this time and informed the development of the social sciences of which anthropology was a part. == 19th century == The 19th century saw the birth of science as a profession. William Whewell coined the term scientist in 1833, and it soon replaced the older term natural philosopher.
=== Developments in physics === In physics, the behavior of electricity and magnetism was studied by Giovanni Aldini, Alessandro Volta, Michael Faraday, Georg Ohm, and others. The experiments, theories and discoveries of Michael Faraday, Andre-Marie Ampere, James Clerk Maxwell, and their contemporaries led to the unification of the two phenomena into a single theory of electromagnetism, as described by Maxwell's equations. Thermodynamics led to an understanding of heat, and the notion of energy came to be defined. === Discovery of Neptune === In astronomy, the planet Neptune was discovered. Advances in astronomy and in optical systems in the 19th century resulted in the first observation of an asteroid (1 Ceres) in 1801, and the discovery of Neptune in 1846. === Developments in mathematics === In mathematics, the notion of complex numbers finally matured and led to a subsequent analytical theory; mathematicians also began to use hypercomplex numbers. Karl Weierstrass and others carried out the arithmetization of analysis for functions of real and complex variables. The century also saw new progress in geometry beyond the classical theories of Euclid, after a period of nearly two thousand years. The mathematical science of logic likewise had revolutionary breakthroughs after a similarly long period of stagnation. But the most important steps in science at this time were the ideas formulated by the creators of electrical science. Their work changed the face of physics and made possible new technologies such as electric power, electrical telegraphy, the telephone, and radio. === Developments in chemistry === In chemistry, Dmitri Mendeleev, following the atomic theory of John Dalton, created the first periodic table of elements. Other highlights include discoveries unveiling the nature of atomic structure and matter, made simultaneously with advances in chemistry, and the discovery of new kinds of radiation. The theory that all matter is made of atoms, which are the smallest constituents of matter that cannot be broken down without losing the basic chemical and physical properties of that matter, was provided by John Dalton in 1803, although the question of whether atoms are real took a hundred years to settle. Dalton also formulated the law of mass relationships. In 1869, Dmitri Mendeleev composed his periodic table of elements on the basis of Dalton's discoveries. The synthesis of urea by Friedrich Wöhler opened a new research field, organic chemistry, and by the end of the 19th century, scientists were able to synthesize hundreds of organic compounds. The later part of the 19th century saw the exploitation of the Earth's petrochemicals, after the exhaustion of the oil supply from whaling. By the 20th century, systematic production of refined materials provided a ready supply of products which provided not only energy, but also synthetic materials for clothing, medicine, and everyday disposable resources. Application of the techniques of organic chemistry to living organisms resulted in physiological chemistry, the precursor to biochemistry. === Age of the Earth === Over the first half of the 19th century, geologists such as Charles Lyell, Adam Sedgwick, and Roderick Murchison applied the new technique to rocks throughout Europe and eastern North America, setting the stage for more detailed, government-funded mapping projects in later decades. Midway through the 19th century, the focus of geology shifted from description and classification to attempts to understand how the surface of the Earth had changed.
The first comprehensive theories of mountain building were proposed during this period, as were the first modern theories of earthquakes and volcanoes. Louis Agassiz and others established the reality of continent-covering ice ages, and "fluvialists" like Andrew Crombie Ramsay argued that river valleys were formed, over millions of years, by the rivers that flow through them. After the discovery of radioactivity, radiometric dating methods were developed, starting in the 20th century. Alfred Wegener's theory of "continental drift" was widely dismissed when he proposed it in the 1910s, but new data gathered in the 1950s and 1960s led to the theory of plate tectonics, which provided a plausible mechanism for it. Plate tectonics also provided a unified explanation for a wide range of seemingly unrelated geological phenomena. Since the 1960s it has served as the unifying principle in geology. === Evolution and inheritance === Perhaps the most prominent, controversial, and far-reaching theory in all of science has been the theory of evolution by natural selection, which was independently formulated by Charles Darwin and Alfred Wallace. It was described in detail in Darwin's book The Origin of Species, which was published in 1859. In it, Darwin proposed that the features of all living things, including humans, were shaped by natural processes over long periods of time. The theory of evolution in its current form affects almost all areas of biology. Implications of evolution for fields outside of pure science have led to both opposition and support from different parts of society, and profoundly influenced the popular understanding of "man's place in the universe". Separately, Gregor Mendel formulated the principles of inheritance in 1866, which became the basis of modern genetics. === Germ theory === Another important landmark in medicine and biology was the successful effort to prove the germ theory of disease. Following this, Louis Pasteur made the first vaccine against rabies, and also made many discoveries in the field of chemistry, including the asymmetry of crystals. In 1847, Hungarian physician Ignác Fülöp Semmelweis dramatically reduced the occurrence of puerperal fever by simply requiring physicians to wash their hands before attending to women in childbirth. This discovery predated the germ theory of disease. However, Semmelweis' findings were not appreciated by his contemporaries and handwashing came into use only with discoveries by British surgeon Joseph Lister, who in 1865 proved the principles of antisepsis. Lister's work was based on the important findings by French biologist Louis Pasteur. Pasteur was able to link microorganisms with disease, revolutionizing medicine. He also devised one of the most important methods in preventive medicine, when in 1880 he produced a vaccine against rabies. Pasteur invented the process of pasteurization, to help prevent the spread of disease through milk and other foods. === Schools of economics === Karl Marx developed an alternative economic theory, called Marxian economics. Marxian economics is based on the labor theory of value and assumes the value of a good to be based on the amount of labor required to produce it. Under this axiom, capitalism was based on employers not paying the full value of workers' labor in order to create profit. The Austrian School responded to Marxian economics by viewing entrepreneurship as the driving force of economic development. This replaced the labor theory of value with a system of supply and demand.
=== Founding of psychology === Psychology as a scientific enterprise independent from philosophy began in 1879, when Wilhelm Wundt founded the first laboratory dedicated exclusively to psychological research (in Leipzig). Other important early contributors to the field include Hermann Ebbinghaus (a pioneer in memory studies), Ivan Pavlov (who discovered classical conditioning), William James, and Sigmund Freud. Freud's influence has been enormous, though more as a cultural icon than as a force in scientific psychology. === Modern sociology === Modern sociology emerged in the early 19th century as the academic response to the modernization of the world. For many early sociologists (e.g., Émile Durkheim), the aim of sociology was structuralism: understanding the cohesion of social groups and developing an "antidote" to social disintegration. Max Weber was concerned with the modernization of society through the concept of rationalization, which he believed would trap individuals in an "iron cage" of rational thought. Some sociologists, including Georg Simmel and W. E. B. Du Bois, used more microsociological, qualitative analyses. This microlevel approach played an important role in American sociology, with the theories of George Herbert Mead and his student Herbert Blumer resulting in the creation of the symbolic interactionism approach to sociology. Auguste Comte, in particular, illustrated with his work the transition from a theological stage to a metaphysical one and, from this, to a positive stage. Comte also addressed the classification of the sciences, as well as the transit of humanity towards a state of progress attributable to a re-examination of nature and the affirmation of 'sociality' as the basis of a scientifically interpreted society. === Romanticism === The Romantic Movement of the early 19th century reshaped science by opening up new pursuits unexpected in the classical approaches of the Enlightenment. The decline of Romanticism occurred because a new movement, Positivism, began to take hold of the ideals of the intellectuals after 1840 and lasted until about 1880. At the same time, the romantic reaction to the Enlightenment produced thinkers such as Johann Gottfried Herder and later Wilhelm Dilthey, whose work formed the basis for the culture concept that is central to the discipline of anthropology. Traditionally, much of the history of the subject was based on colonial encounters between Western Europe and the rest of the world, and much of 18th- and 19th-century anthropology is now classed as scientific racism. During the late 19th century, battles over the "study of man" took place between those of an "anthropological" persuasion (relying on anthropometrical techniques) and those of an "ethnological" persuasion (looking at cultures and traditions), and these distinctions became part of the later divide between physical anthropology and cultural anthropology, the latter ushered in by the students of Franz Boas.
Beginning in 1900, Max Planck, Albert Einstein, Niels Bohr and others developed quantum theories to explain various anomalous experimental results by introducing discrete energy levels. Not only did quantum mechanics show that the laws of motion did not hold on small scales, but the theory of general relativity, proposed by Einstein in 1915, showed that the fixed background of spacetime, on which both Newtonian mechanics and special relativity depended, could not exist. In 1925–26, Werner Heisenberg and Erwin Schrödinger formulated quantum mechanics, which explained the preceding quantum theories. Currently, general relativity and quantum mechanics are inconsistent with each other, and efforts are underway to unify the two. === Big Bang === The observation by Edwin Hubble in 1929 that the speed at which galaxies recede positively correlates with their distance led to the understanding that the universe is expanding, and to the formulation of the Big Bang theory by Georges Lemaître. George Gamow, Ralph Alpher, and Robert Herman had calculated that there should be evidence for a Big Bang in the background temperature of the universe. In 1964, Arno Penzias and Robert Wilson discovered a 3-kelvin background hiss in their Bell Labs radiotelescope (the Holmdel Horn Antenna), which was evidence for this hypothesis and formed the basis for a number of results that helped determine the age of the universe. === Big science === In 1938, Otto Hahn and Fritz Strassmann discovered nuclear fission with radiochemical methods, and in 1939 Lise Meitner and Otto Robert Frisch wrote the first theoretical interpretation of the fission process, which was later improved by Niels Bohr and John A. Wheeler. Further developments took place during World War II, which led to the practical application of radar and the development and use of the atomic bomb. Around this time, Chien-Shiung Wu was recruited by the Manhattan Project to help develop a process for separating uranium metal into U-235 and U-238 isotopes by gaseous diffusion. She was an expert experimentalist in beta decay and weak interaction physics. Wu designed an experiment (see Wu experiment) that enabled theoretical physicists Tsung-Dao Lee and Chen-Ning Yang to disprove the law of parity experimentally, winning them a Nobel Prize in 1957. Though the process had begun with the invention of the cyclotron by Ernest O. Lawrence in the 1930s, physics in the postwar period entered into a phase of what historians have called "Big Science", requiring massive machines, budgets, and laboratories for physicists to test their theories and move into new frontiers. The primary patrons of physics became state governments, which recognized that the support of "basic" research could often lead to technologies useful to both military and industrial applications. === Advances in genetics === In the early 20th century, the study of heredity became a major investigation after the rediscovery in 1900 of the laws of inheritance developed by Mendel. The 20th century also saw the integration of physics and chemistry, with chemical properties explained as the result of the electronic structure of the atom. Linus Pauling's book The Nature of the Chemical Bond used the principles of quantum mechanics to deduce bond angles in ever-more complicated molecules. Pauling's work culminated in the physical modelling of DNA, "the secret of life" (in the words of Francis Crick, 1953).
In the same year, the Miller–Urey experiment demonstrated, in a simulation of primordial processes, that basic constituents of proteins, simple amino acids, could themselves be built up from simpler molecules, kickstarting decades of research into the chemical origins of life. In 1953, James D. Watson and Francis Crick, building on the work of Maurice Wilkins and Rosalind Franklin, clarified the basic structure of DNA, the genetic material for expressing life in all its forms; in their famous paper "Molecular Structure of Nucleic Acids", they suggested that the structure of DNA was a double helix. In the late 20th century, the possibilities of genetic engineering became practical for the first time, and a massive international effort began in 1990 to map out an entire human genome (the Human Genome Project). The discipline of ecology typically traces its origin to the synthesis of Darwinian evolution and Humboldtian biogeography in the late 19th and early 20th centuries. Equally important in the rise of ecology, however, were microbiology and soil science, particularly the cycle-of-life concept, prominent in the work of Louis Pasteur and Ferdinand Cohn. The word ecology was coined by Ernst Haeckel, whose particularly holistic view of nature in general (and of Darwin's theory in particular) was important in the spread of ecological thinking. The field of ecosystem ecology emerged in the Atomic Age with the use of radioisotopes to visualize food webs, and by the 1970s ecosystem ecology deeply influenced global environmental management. === Space exploration === In 1925, Cecilia Payne-Gaposchkin determined that stars were composed mostly of hydrogen and helium. She was dissuaded by astronomer Henry Norris Russell from publishing this finding in her PhD thesis because of the widely held belief that stars had the same composition as the Earth. However, four years later, in 1929, Henry Norris Russell came to the same conclusion through different reasoning, and the discovery was eventually accepted. In 1987, supernova SN 1987A was observed by astronomers on Earth both visually and, in a triumph for neutrino astronomy, by the neutrino detectors at Kamiokande. The measured solar neutrino flux, however, was only a fraction of its theoretically expected value; this discrepancy forced a change in some values in the standard model of particle physics. === Neuroscience as a distinct discipline === The understanding of neurons and the nervous system became increasingly precise and molecular during the 20th century. For example, in 1952, Alan Lloyd Hodgkin and Andrew Huxley presented a mathematical model, known as the Hodgkin–Huxley model, for the transmission of electrical signals, called "action potentials", in the giant axon of a squid, and for how they are initiated and propagated. In 1961–1962, Richard FitzHugh and J. Nagumo simplified Hodgkin–Huxley in what is called the FitzHugh–Nagumo model. In 1962, Bernard Katz modeled neurotransmission across the space between neurons known as synapses. Beginning in 1966, Eric Kandel and collaborators examined biochemical changes in neurons associated with learning and memory storage in Aplysia. In 1981, Catherine Morris and Harold Lecar combined these models in the Morris–Lecar model. Such increasingly quantitative work gave rise to numerous biological neuron models and models of neural computation. Neuroscience began to be recognized as a distinct academic discipline in its own right. Eric Kandel and collaborators have cited David Rioch, Francis O.
Schmitt, and Stephen Kuffler as having played critical roles in establishing the field. === Plate tectonics === Geologists' embrace of plate tectonics became part of a broadening of the field from a study of rocks into a study of the Earth as a planet. Other elements of this transformation include: geophysical studies of the interior of the Earth, the grouping of geology with meteorology and oceanography as one of the "earth sciences", and comparisons of Earth and the solar system's other rocky planets. === Applications === In terms of applications, a massive number of new technologies were developed in the 20th century. Technologies such as electricity, the incandescent light bulb, the automobile and the phonograph, first developed at the end of the 19th century, were perfected and universally deployed. The first car was introduced by Karl Benz in 1885. The first airplane flight occurred in 1903, and by the end of the century airliners flew thousands of miles in a matter of hours. The development of the radio, television and computers caused massive changes in the dissemination of information. Advances in biology also led to large increases in food production, as well as the near-elimination of diseases such as polio, against which Jonas Salk developed a vaccine. Gene mapping and gene sequencing, invented by Drs. Mark Skolnik and Walter Gilbert, respectively, are the two technologies that made the Human Genome Project feasible. Computer science, built upon a foundation of theoretical linguistics, discrete mathematics, and electrical engineering, studies the nature and limits of computation. Subfields include computability, computational complexity, database design, computer networking, artificial intelligence, and the design of computer hardware. One area in which advances in computing have contributed to more general scientific development is by facilitating large-scale archiving of scientific data. Contemporary computer science typically distinguishes itself by emphasizing mathematical 'theory' in contrast to the practical emphasis of software engineering. Einstein's 1917 paper "On the Quantum Theory of Radiation" outlined the principles of the stimulated emission of photons. This led to the invention of the laser (light amplification by stimulated emission of radiation) and the optical amplifier, which ushered in the Information Age. It is optical amplification that allows fiber-optic networks to transmit the massive capacity of the Internet. Based on wireless transmission of electromagnetic radiation and global networks of cellular operation, the mobile phone became a primary means to access the Internet. === Developments in political science and economics === In political science during the 20th century, the study of ideology, behaviouralism and international relations led to a multitude of 'pol-sci' subdisciplines including rational choice theory, voting theory, game theory (also used in economics), psephology, political geography/geopolitics, political anthropology/political psychology/political sociology, political economy, policy analysis, public administration, comparative political analysis and peace studies/conflict analysis. In economics, John Maynard Keynes prompted a division between microeconomics and macroeconomics in the 1920s. Under Keynesian economics, macroeconomic trends can overwhelm economic choices made by individuals, so governments should promote aggregate demand for goods as a means to encourage economic expansion. Following World War II, Milton Friedman created the concept of monetarism.
Monetarism focuses on using the supply and demand of money as a method for controlling economic activity. In the 1970s, monetarism was adapted into supply-side economics, which advocates reducing taxes as a means to increase the amount of money available for economic expansion. Other modern schools of economic thought are New Classical economics and New Keynesian economics. New Classical economics was developed in the 1970s, emphasizing solid microeconomic foundations as the basis for macroeconomic analysis. New Keynesian economics was created partially in response to New Classical economics; it shows how imperfect competition and market rigidities mean that monetary policy has real effects, and it enables the analysis of different policies. === Developments in psychology, sociology, and anthropology === Psychology in the 20th century saw a rejection of Freud's theories as being too unscientific, and a reaction against Edward Titchener's atomistic approach to the mind. This led to the formulation of behaviorism by John B. Watson, which was popularized by B. F. Skinner. Behaviorism proposed epistemologically limiting psychological study to overt behavior, since that could be reliably measured. Scientific knowledge of the "mind" was considered too metaphysical, hence impossible to achieve. The final decades of the 20th century saw the rise of cognitive science, which considers the mind as once again a subject for investigation, using the tools of psychology, linguistics, computer science, philosophy, and neurobiology. New methods of visualizing the activity of the brain, such as PET scans and CAT scans, began to exert their influence as well, leading some researchers to investigate the mind by investigating the brain, rather than cognition. These new forms of investigation assume that a wide understanding of the human mind is possible, and that such an understanding may be applied to other research domains, such as artificial intelligence. Evolutionary theory was applied to behavior and introduced to anthropology and psychology through the works of cultural anthropologist Napoleon Chagnon. Physical anthropology would become biological anthropology, incorporating elements of evolutionary biology. American sociology in the 1940s and 1950s was dominated largely by Talcott Parsons, who argued that aspects of society that promoted structural integration were therefore "functional". This structural-functionalism approach was questioned in the 1960s, when sociologists came to see it as merely a justification for inequalities present in the status quo. In reaction, conflict theory was developed, based in part on the philosophies of Karl Marx. Conflict theorists saw society as an arena in which different groups compete for control over resources. Symbolic interactionism also came to be regarded as central to sociological thinking. Erving Goffman saw social interactions as a stage performance, with individuals preparing "backstage" and attempting to control their audience through impression management. While these theories are currently prominent in sociological thought, other approaches exist, including feminist theory, post-structuralism, rational choice theory, and postmodernism. In the mid-20th century, many of the methodologies of earlier anthropological and ethnographical study were reevaluated with an eye towards research ethics, while at the same time the scope of investigation broadened far beyond the traditional study of "primitive cultures".
== 21st century == In the early 21st century, several predictions that originated in 20th-century physics were confirmed. On 4 July 2012, physicists working at CERN's Large Hadron Collider announced that they had discovered a new subatomic particle greatly resembling the Higgs boson, which was confirmed as such by the following March. Gravitational waves were first detected on 14 September 2015. The Human Genome Project was declared complete in 2003. The CRISPR gene-editing technique, developed in 2012, allowed scientists to precisely and easily modify DNA and led to the development of new medicines. In 2020, xenobots, a new class of living robots, were created; a form of reproduction was demonstrated the following year. Positive psychology is a branch of psychology founded in 1998 by Martin Seligman that is concerned with the study of happiness, mental well-being, and positive human functioning; it is a reaction to 20th-century psychology's emphasis on mental illness and dysfunction. == External links == 'What is the History of Science', British Academy British Society for the History of Science "Scientific Change", Internet Encyclopedia of Philosophy The CNRS History of Science and Technology Research Center in Paris (France) (in French) Henry Smith Williams, History of Science, Vols 1–4, online text Digital Archives of the National Institute of Standards and Technology (NIST) Digital facsimiles of books from the History of Science Collection, Linda Hall Library Digital Collections Division of History of Science and Technology of the International Union of History and Philosophy of Science Giants of Science (website of the Institute of National Remembrance) History of Science Digital Collection: Utah State University – contains primary sources by such major figures in the history of scientific inquiry as Otto Brunfels, Charles Darwin, Erasmus Darwin, Carolus Linnaeus, Antony van Leeuwenhoek, Jan Swammerdam, James Sowerby, Andreas Vesalius, and others.
History of Science Society ("HSS") Inter-Divisional Teaching Commission (IDTC) of the International Union for the History and Philosophy of Science (IUHPS) International Academy of the History of Science International History, Philosophy and Science Teaching Group IsisCB Explore: History of Science Index, an open-access discovery tool Museo Galileo – Institute and Museum of the History of Science in Florence, Italy National Center for Atmospheric Research (NCAR) Archives The official site of the Nobel Foundation, featuring biographies and information on Nobel laureates The Royal Society, trailblazing science from 1650 to date The Vega Science Trust, free-to-view videos of scientists including Feynman, Perutz, Rotblat, Born and many Nobel laureates A Century of Science in America: with special reference to the American Journal of Science, 1818–1918
Wikipedia/Modern_science
A Lewis acid (named for the American physical chemist Gilbert N. Lewis) is a chemical species that contains an empty orbital which is capable of accepting an electron pair from a Lewis base to form a Lewis adduct. A Lewis base, then, is any species that has a filled orbital containing an electron pair which is not involved in bonding but may form a dative bond with a Lewis acid to form a Lewis adduct. For example, NH3 is a Lewis base, because it can donate its lone pair of electrons. Trimethylborane [(CH3)3B] is a Lewis acid as it is capable of accepting a lone pair. In a Lewis adduct, the Lewis acid and base share an electron pair furnished by the Lewis base, forming a dative bond. In the context of a specific chemical reaction between NH3 and Me3B, a lone pair from NH3 will form a dative bond with the empty orbital of Me3B to form the adduct NH3·BMe3. The terms nucleophile and electrophile are sometimes interchangeable with Lewis base and Lewis acid, respectively. These terms, especially their abstract noun forms nucleophilicity and electrophilicity, emphasize the kinetic aspect of reactivity, while Lewis basicity and Lewis acidity emphasize the thermodynamic aspect of Lewis adduct formation. == Depicting adducts == In many cases, the interaction between the Lewis base and Lewis acid in a complex is shown by an arrow indicating the Lewis base donating electrons toward the Lewis acid, using the notation of a dative bond: for example, Me3B←NH3. Some sources indicate the Lewis base with a pair of dots (the explicit electrons being donated), which allows consistent representation of the transition from the base itself to the complex with the acid: Me3B + :NH3 → Me3B:NH3 A center dot may also be used to represent a Lewis adduct, such as Me3B·NH3. Another example is boron trifluoride diethyl etherate, BF3·Et2O. In a slightly different usage, the center dot is also used to represent hydrate coordination in various crystals, as in MgSO4·7H2O for hydrated magnesium sulfate, irrespective of whether the water forms a dative bond with the metal. Although there have been attempts to use computational and experimental energetic criteria to distinguish dative bonding from non-dative covalent bonds, for the most part the distinction merely makes note of the source of the electron pair, and dative bonds, once formed, behave simply as other covalent bonds do, though they typically have considerable polar character. Moreover, in some cases (e.g., sulfoxides and amine oxides as R2S → O and R3N → O), the use of the dative-bond arrow is just a notational convenience for avoiding the drawing of formal charges. In general, however, the donor–acceptor bond is viewed as simply somewhere along a continuum between idealized covalent bonding and ionic bonding. == Lewis acids == Lewis acids are diverse and the term is used loosely. The simplest are those that react directly with the Lewis base, such as boron trihalides and the pentahalides of phosphorus, arsenic, and antimony. In the same vein, CH3+ can be considered to be the Lewis acid in methylation reactions. However, the methyl cation never occurs as a free species in the condensed phase, and methylation reactions by reagents like CH3I take place through the simultaneous formation of a bond from the nucleophile to the carbon and cleavage of the bond between carbon and iodine (SN2 reaction).
Textbooks disagree on this point: some assert that alkyl halides are electrophiles but not Lewis acids, while others describe alkyl halides (e.g. CH3Br) as a type of Lewis acid. The IUPAC states that Lewis acids and Lewis bases react to form Lewis adducts, and defines electrophiles as Lewis acids. === Simple Lewis acids === Some of the most studied examples of such Lewis acids are the boron trihalides and organoboranes: BF3 + F− → BF4− In this adduct, all four fluoride centres (or more accurately, ligands) are equivalent. BF3 + OMe2 → BF3·OMe2 Both BF4− and BF3·OMe2 are Lewis base adducts of boron trifluoride. Many adducts violate the octet rule, such as the triiodide anion: I2 + I− → I3− The variability of the colors of iodine solutions reflects the variable abilities of the solvent to form adducts with the Lewis acid I2. Some Lewis acids bind with two Lewis bases, a famous example being the formation of hexafluorosilicate: SiF4 + 2 F− → SiF62− === Complex Lewis acids === Most compounds considered to be Lewis acids require an activation step prior to formation of the adduct with the Lewis base. Complex compounds such as Et3Al2Cl3 and AlCl3 are treated as trigonal planar Lewis acids but exist as aggregates and polymers that must be degraded by the Lewis base. A simpler case is the formation of adducts of borane. Monomeric BH3 does not exist appreciably, so the adducts of borane are generated by degradation of diborane: B2H6 + 2 H− → 2 BH4− In this case, an intermediate B2H7− can be isolated. Many metal complexes serve as Lewis acids, but usually only after dissociating a more weakly bound Lewis base, often water: [Mg(H2O)6]2+ + 6 NH3 → [Mg(NH3)6]2+ + 6 H2O === H+ as Lewis acid === The proton (H+) is one of the strongest Lewis acids, but it is also one of the most complicated. It is conventional to ignore the fact that a proton is heavily solvated (bound to solvent). With this simplification in mind, acid–base reactions can be viewed as the formation of adducts: H+ + NH3 → NH4+ H+ + OH− → H2O === Applications of Lewis acids === A typical example of a Lewis acid in action is in the Friedel–Crafts alkylation reaction. The key step is the acceptance by AlCl3 of a chloride-ion lone pair, forming AlCl4− and creating the strongly acidic, that is, electrophilic, carbonium ion: RCl + AlCl3 → R+ + AlCl4− == Lewis bases == A Lewis base is an atomic or molecular species where the highest occupied molecular orbital (HOMO) is highly localized. Typical Lewis bases are conventional amines such as ammonia and alkyl amines. Other common Lewis bases include pyridine and its derivatives. They are nucleophilic in nature. Some of the main classes of Lewis bases are: amines of the formula NH3−xRx where R = alkyl or aryl (related to these are pyridine and its derivatives); phosphines of the formula PR3−xArx; and compounds of O, S, Se and Te in oxidation state −2, including water, ethers and ketones. The most common Lewis bases are anions. The strength of Lewis basicity correlates with the pKa of the parent acid: acids with high pKa's give good Lewis bases. As usual, a weaker acid has a stronger conjugate base. Examples of Lewis bases based on the general definition of electron-pair donor include: simple anions, such as H− and F−; other lone-pair-containing species, such as H2O, NH3, HO−, and CH3−; complex anions, such as sulfate; and electron-rich π-system Lewis bases, such as ethyne, ethene, and benzene. The strength of Lewis bases has been evaluated for various Lewis acids, such as I2, SbCl5, and BF3.
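To illustrate the pKa correlation just described, here is a minimal Python sketch; the pKa values are approximate aqueous-scale textbook figures for the parent acids, and the snippet is only an illustration of the heuristic, not part of Lewis theory itself:

```python
# Rough illustration: ranking Lewis bases by the pKa of their parent
# (conjugate) acids. Higher parent-acid pKa -> stronger base.
# pKa values are approximate textbook numbers.
parent_acid_pka = {
    "CH3-": ("CH4", 48),     # methyl anion; parent acid methane
    "H-":   ("H2", 36),      # hydride; parent acid dihydrogen
    "HO-":  ("H2O", 15.7),   # hydroxide; parent acid water
    "NH3":  ("NH4+", 9.25),  # ammonia; parent acid ammonium
    "F-":   ("HF", 3.2),     # fluoride; parent acid hydrogen fluoride
}

# Sort from strongest to weakest base by parent-acid pKa.
for base, (acid, pka) in sorted(parent_acid_pka.items(),
                                key=lambda kv: kv[1][1], reverse=True):
    print(f"{base:5s} (parent acid {acid:5s}, pKa ~ {pka})")
```

Running the sketch lists CH3− as the strongest of these bases and F− as the weakest, consistent with the rule that a weaker parent acid gives a stronger conjugate base.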
=== Applications of Lewis bases === Nearly all electron-pair donors that form compounds by binding transition elements can be viewed as ligands. Thus, a major application of Lewis bases is to modify the activity and selectivity of metal catalysts. Chiral Lewis bases, generally multidentate, confer chirality on a catalyst, enabling asymmetric catalysis, which is useful for the production of pharmaceuticals. The industrial synthesis of the anti-hypertension drug mibefradil uses a chiral Lewis base (R-MeOBIPHEP), for example. == Hard and soft classification == Lewis acids and bases are commonly classified according to their hardness or softness. In this context, hard implies small and nonpolarizable and soft indicates larger atoms that are more polarizable. Typical hard acids: H+, alkali/alkaline earth metal cations, boranes, Zn2+. Typical soft acids: Ag+, Mo(0), Ni(0), Pt2+. Typical hard bases: ammonia and amines, water, carboxylates, fluoride and chloride. Typical soft bases: organophosphines, thioethers, carbon monoxide, iodide. For example, an amine will displace phosphine from the adduct with the acid BF3. Bases can be classified in the same way: for example, bases donating a lone pair from an oxygen atom are harder than bases donating through a nitrogen atom. Although the classification was never quantified, it proved to be very useful in predicting the strength of adduct formation, using the key concept that hard acid–hard base and soft acid–soft base interactions are stronger than hard acid–soft base or soft acid–hard base interactions. Later investigation of the thermodynamics of the interaction suggested that hard–hard interactions are enthalpy-favored, whereas soft–soft interactions are entropy-favored. == Quantifying Lewis acidity == Many methods have been devised to evaluate and predict Lewis acidity. Many are based on spectroscopic signatures, such as shifts of NMR signals or IR bands, e.g. the Gutmann–Beckett method and the Childs method. The ECW model is a quantitative model that describes and predicts the strength of Lewis acid–base interactions, −ΔH. The model assigns E and C parameters to many Lewis acids and bases. Each acid is characterized by an EA and a CA. Each base is likewise characterized by its own EB and CB. The E and C parameters refer, respectively, to the electrostatic and covalent contributions to the strength of the bonds that the acid and base will form. The equation is −ΔH = EAEB + CACB + W The W term represents a constant energy contribution for the acid–base reaction, such as the cleavage of a dimeric acid or base. The equation predicts reversals of acid and base strengths. Graphical presentations of the equation show that there is no single order of Lewis base strengths or Lewis acid strengths, and that single-property scales are limited to a smaller range of acids or bases. == History == The concept originated with Gilbert N. Lewis, who studied chemical bonding. In 1923, Lewis wrote: "An acid substance is one which can employ an electron lone pair from another molecule in completing the stable group of one of its own atoms." The Brønsted–Lowry acid–base theory was published in the same year. The two theories are distinct but complementary. A Lewis base is also a Brønsted–Lowry base, but a Lewis acid does not need to be a Brønsted–Lowry acid. The classification into hard and soft acids and bases (HSAB theory) followed in 1963.
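As a rough illustration of how the ECW equation −ΔH = EAEB + CACB + W is applied, the following Python sketch computes predicted enthalpies of adduct formation for hypothetical parameter sets; the numbers are invented placeholders for the example, not tabulated ECW parameters:

```python
# Minimal sketch of the ECW model: -dH = EA*EB + CA*CB + W.
# EA/CA characterize the acid (electrostatic/covalent), EB/CB the base,
# and W is a constant energy term (e.g., cleavage of a dimeric acid).
# All numbers below are illustrative placeholders, NOT tabulated values.

def ecw_enthalpy(ea: float, ca: float, eb: float, cb: float,
                 w: float = 0.0) -> float:
    """Return the predicted -dH of adduct formation."""
    return ea * eb + ca * cb + w

# Hypothetical acid parameter sets: (EA, CA, W).
acids = {"acid_1": (1.00, 1.00, 0.0),
         "acid_2": (2.50, 0.40, -1.0)}  # W < 0: cost such as dimer cleavage
# Hypothetical base parameter sets: (EB, CB).
bases = {"base_1": (1.80, 0.95),        # more "electrostatic" base
         "base_2": (0.70, 3.20)}        # more "covalent" base

for aname, (ea, ca, w) in acids.items():
    for bname, (eb, cb) in bases.items():
        print(f"{aname} + {bname}: -dH = {ecw_enthalpy(ea, ca, eb, cb, w):.2f}")
```

With these invented numbers, base_2 forms the stronger adduct with acid_1 while base_1 forms the stronger adduct with acid_2, which is the kind of reversal of acid and base strengths the model predicts.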
The strength of Lewis acid–base interactions, as measured by the standard enthalpy of formation of an adduct, can be predicted by the Drago–Wayland two-parameter equation. === Reformulation of Lewis theory === Lewis had suggested in 1916 that two atoms are held together in a chemical bond by sharing a pair of electrons. When each atom contributed one electron to the bond, it was called a covalent bond. When both electrons come from one of the atoms, it was called a dative covalent bond or coordinate bond. The distinction is not very clear-cut. For example, in the formation of an ammonium ion from ammonia and a hydrogen ion, the ammonia molecule donates a pair of electrons to the proton; the identity of the electrons is lost in the ammonium ion that is formed. Nevertheless, Lewis suggested that an electron-pair donor be classified as a base and an electron-pair acceptor be classified as an acid. A more modern definition of a Lewis acid is an atomic or molecular species with a localized empty atomic or molecular orbital of low energy. This lowest-energy unoccupied molecular orbital (LUMO) can accommodate a pair of electrons. === Comparison with Brønsted–Lowry theory === A Lewis base is often a Brønsted–Lowry base, as it can donate a pair of electrons to H+; the proton is a Lewis acid, as it can accept a pair of electrons. The conjugate base of a Brønsted–Lowry acid is also a Lewis base, as loss of H+ from the acid leaves those electrons which were used for the A–H bond as a lone pair on the conjugate base. However, a Lewis base can be very difficult to protonate, yet still react with a Lewis acid. For example, carbon monoxide is a very weak Brønsted–Lowry base but it forms a strong adduct with BF3. In another comparison of Lewis and Brønsted–Lowry acidity by Brown and Kanner, 2,6-di-tert-butylpyridine reacts with HCl to form the hydrochloride salt but does not react with BF3. This example demonstrates that steric factors, in addition to electron-configuration factors, play a role in determining the strength of the interaction between the bulky di-tert-butylpyridine and the tiny proton. == See also == Acid Base (chemistry) Acid–base reaction Brønsted–Lowry acid–base theory Chiral Lewis acid Frustrated Lewis pair Gutmann–Beckett method ECW model Philosophy of chemistry == References == == Further reading == Jensen, W.B. (1980). The Lewis Acid-Base Concepts: An Overview. New York: Wiley. ISBN 0-471-03902-0. Yamamoto, Hisashi (1999). Lewis Acid Reagents: A Practical Approach. New York: Oxford University Press. ISBN 0-19-850099-8.
Wikipedia/Lewis_theory
Organizational theory refers to a series of interrelated concepts that involve the sociological study of the structures and operations of formal social organizations. Organizational theory also seeks to explain how interrelated units of organization either connect or do not connect with each other. Organizational theory also concerns understanding how groups of individuals behave, which may differ from the behavior of an individual. The behavior that organizational theory often focuses on is goal-directed. Organizational theory covers both intra-organizational and inter-organizational fields of study. In the early 20th century, theories of organizations initially took a rational perspective but have since become more diverse. In a rational organization system, there are two significant parts: specificity of goals and formalization. The division of labor is the specialization of individual labor roles, associated with increasing output and trade. Modernization theorist Frank Dobbin wrote that "modern institutions are transparently purposive and that we are in the midst of an extraordinary progression towards more efficiency." Max Weber's conception of bureaucracy is characterized by the presence of impersonal positions that are earned and not inherited, rule-governed decision-making, professionalism, chain of command, defined responsibility, and bounded authority. Contingency theory holds that an organization must try to maximize performance by minimizing the effects of various environmental and internal constraints, and that the ability to navigate this requisite variety may depend upon the development of a range of response mechanisms. Dwight Waldo in 1978 wrote that "[o]rganization theory is characterized by vogues, heterogeneity, claims and counterclaims." Organization theory cannot be described as an orderly progression of ideas or a unified body of knowledge in which each development builds carefully on and extends the one before it. Rather, developments in theory and descriptions for practice show disagreement about the purposes and uses of a theory of organization, the issues to which it should address itself (such as supervisory style and organizational culture), and the concepts and variables that should enter into such a theory. Suggestions to view organizations as a series of logical relationships between their participants have found their way into the theoretical relationships between diverging organizational theories as well, which explains the interdisciplinary nature of the field. == Background == === Rise of organizations === In 1820, about 20% of the United States population depended on a wage income. That percentage increased to 90% by 1950. Generally, by 1950, farmers and craftsmen were the only people not dependent on working for someone else. Prior to that time, most people were able to survive by hunting and farming their own food, making their own supplies, and remaining almost fully self-sufficient. As transportation became more efficient and technologies developed, self-sufficiency became an economically poor choice. As in the Lowell textile mills, various machines and processes were developed for each step of the production process, thus making mass production a cheaper and faster alternative to individual production. In addition, as the population grew and transportation improved, the pre-organizational system struggled to support the needs of the market.
These conditions made for a wage-dependent population that sought out jobs in growing organizations, leading to a shift away from individual and family production. In addition to a shift to wage dependence, externalities from industrialization also created a perfect opportunity for the rise of organizations. Various negative effects such as pollution, workplace accidents, crowded cities, and unemployment became rising concerns. Rather than small groups such as families and churches being able to control these problems as they had in the past, new organizations and systems were required. These organizations were less personal, more distant, and more centralized, but what they lacked in locality they made up for in efficiency. Along with wage dependency and externalities, the growth of industry also played a large role in the development of organizations. Markets that were quickly growing needed workers urgently, so a need developed for organizational structures to guide and support those new workers. Some of the first New England factories initially relied on the daughters of farmers; later, as the economy changed, they began to gain workers from the former farming classes, and finally, from European immigrants. Many Europeans left their homes for the promises of US industry, and about 60% of those immigrants stayed in the country. They became a permanent class of workers in the economy, which allowed factories to increase production and produce more than they had before. With this large growth came the need for organizations and for leadership that was not previously needed in small businesses and firms. Overall, the historical and social context in which organizations arose in the United States allowed not only for the development of organizations, but also for their spread and growth. Wage dependency, externalities, and growth of industries all played into the change from individual, family, and small-group production and regulation to large organizations and structure. === Developments in theory === As organizations were implemented over time, many researchers have experimented to determine which organizational theory fits them best. The theories of organizations include bureaucracy, rationalization (scientific management), and the division of labor. Each theory provides distinct advantages and disadvantages when applied. The classical perspective emerged from the Industrial Revolution in the private sector and the need for improved public administration in the public sector. Both efforts center on theories of efficiency. Classical works have matured and have been elaborated upon in depth. There are at least two subtopics under the classical perspective: scientific management and bureaucracy theory. A number of sociologists and psychologists made major contributions to the study of the neoclassical perspective, which is also known as the human relations school of thought. The human relations movement was a movement whose primary concerns were topics such as morale and leadership. This perspective began in the 1920s with the Hawthorne studies, which gave emphasis to "affective and socio-psychological aspects of human behavior in organizations." The studies, taking place at the Hawthorne plant of the Western Electric Company between 1927 and 1932, would make Elton Mayo and his colleagues the most important contributors to the neoclassical perspective.
There was a wave of scholarly attention to organizational theory in the 1950s, when, from some viewpoints, the field was still in its infancy. A 1959 symposium held by the Foundation for Research on Human Behavior in Ann Arbor, Michigan, was published as Modern Organization Theory. Among the eminent organizational theorists active during this decade were E. Wight Bakke, Chris Argyris, James G. March, Rensis Likert, Jacob Marschak, Anatol Rapoport, and William Foote Whyte. == Weberian bureaucracy == The scholar most closely associated with a theory of bureaucracy is Max Weber. In Economy and Society, his seminal book published in 1922, Weber describes the features of bureaucracy. Bureaucracy, as characterized in Weber's terminology of ideal types, is marked by the presence of positions that are earned and not inherited. Rules govern decision-making. Those in positions of authority demonstrate professionalism. There is a chain of command and position-defined responsibility. Authority is bounded. Weber begins his discussion of bureaucracy by introducing the concept of jurisdictional areas: institutions governed by a specific set of rules or laws. In a jurisdictional area, regular activities are assigned as official duties. The authority to assign duties is governed by a set of rules. Duties are fulfilled continuously by qualified individuals. These elements make up a bureaucratic agency in the case of the state and a bureaucratic enterprise in the private sector. There are several additional features that make up a Weberian bureaucracy: Hierarchical subordination is found in all bureaucratic structures, meaning that higher-level offices supervise lower-level offices. In bureaucracies, personal possessions are kept separate from the monies of the agency or the enterprise. People who work within a bureaucracy are usually trained in the appropriate field of specialization. Bureaucratic officials are expected to contribute their full working capacity to the organization. Positions within a bureaucratic organization must follow a specific set of general rules. Weber argued that in a bureaucracy, taking on a position or office signifies an assumption of specific duties necessary for the smooth running of the organization. This conception is distinct from historical working relationships in which a worker served a specific ruler, not an institution. The hierarchical nature of bureaucracies allows employees to demonstrate achieved social status. When an officeholder is elected instead of appointed, that person is no longer a purely bureaucratic figure. He derives his power "from below" instead of "from above." When a high-ranking officer selects officials, they are more likely to be chosen for reasons related to the benefit of the superior than the competency of the new hire. When high-skilled employees are necessary for the bureaucracy and public opinion shapes decision-making, competent officers are more likely to be selected. According to Weber, if 'tenure for life' is legally guaranteed, an office becomes perceived as less prestigious than a position that can be replaced at any time. If 'tenure for life' or a 'right to the office' develops, there is a decrease in career opportunities for ambitious new hires and overall technical efficiency becomes less guaranteed. In a bureaucracy, salaries are provided to officials. The amount is determined on the basis of rank and helps to signify the desirability of a position.
Bureaucratic positions also exist as part of stable career tracks that reward office-holders for seniority. Weber argues that the development of a money economy is the "normal precondition for the unchanged survival, if not the establishment, of pure bureaucratic administrations." Since bureaucracy requires sustained revenues from taxation or private profits in order to be maintained, a money economy is the most rational way to ensure its continued existence. Weber posits that officials in a bureaucracy have a property right to their office, and that an attempt at exploitation by a superior means the abandonment of bureaucratic principles. He articulates that providing a status incentive to inferior officers helps them to maintain self-respect and fully participate in hierarchical frameworks. Michel Crozier reexamined Weber's theory in 1964 and determined that bureaucracy is flawed because hierarchy causes officers to engage in selfish power struggles that damage the efficiency of the organization. === Summary of characteristics of Weberian bureaucracy === Weber identified the following components of bureaucracy as essential: Official jurisdiction in all areas is ordered by rules or laws already implemented. There is an office hierarchy, a system of super- and subordination in which higher offices supervise lower ones. The management of the modern office is based upon written rules, which are preserved in their original form. Office management requires training and specialization. Once developed and established, the office requires the full working capacity of individuals. Rules are stable and can be learned, and knowledge of these rules can be viewed as expertise within the bureaucracy (these allow for the management of society). When a bureaucracy is implemented, it can provide accountability, responsibility, control, and consistency. The hiring of employees will be an impersonal and equal system. Although the classical perspective encourages efficiency, it is often criticized as ignoring human needs. Also, it rarely takes into consideration human error or the variability of work performances (since each worker is different). In the case of the Space Shuttle Challenger disaster, NASA managers overlooked the possibility of human error. (See also: Three Mile Island accident.) === Efficiency and teleological arguments === Weber believed that a bureaucracy consists of six specific characteristics: hierarchy of command, impersonality, written rules of conduct, advancement based on achievement, specialized division of labor, and efficiency. This last characteristic of Weberian bureaucracy, which states that bureaucracies are very efficient, is controversial and by no means accepted by all sociologists. There are certainly both positive and negative consequences to bureaucracy and strong arguments for both the efficiency and inefficiency of bureaucracies. While Max Weber's work was published in the late 1800s and early 1900s, before his death in 1920, his work is still referenced today in the field of sociology. Weber's theory of bureaucracy claims that it is extremely efficient, and even goes as far as to claim that bureaucracy is the most efficient form of organization. Weber claimed that bureaucracies are necessary to ensure the continued functioning of society, which has become drastically more modern and complex in the past century.
Furthermore, he claimed that without the structured organization of bureaucracy, our complex society would be much worse off, because society would act in an inefficient and wasteful way. He saw bureaucracies as organizations driven towards certain goals, which they could carry out efficiently. In addition, within an organization that operates under bureaucratic standards, the members will be better off due to the heavy regulation and detailed structure. Not only does bureaucracy make it much more difficult for arbitrary and unfair personal favors to be carried out, it also means that promotions and hiring will generally be done completely by merit. Weber regarded bureaucracies as goal-driven, efficient organizations, but he also acknowledged their limitations. Weber recognized that there are constraints within the bureaucratic system. First of all, he noted that bureaucracies are ruled by very few people with considerable unregulated power. A consequence is oligarchy, whereby a limited number of officials gain political and economic power. Furthermore, Weber considered further bureaucratization to be an "inescapable fate" because it is thought to be superior to and more efficient than other forms of organization. Weber's analysis led him to believe that bureaucracies are too inherently limiting of individual human freedom. He feared that people would begin to be too controlled by bureaucracies. In his view, the strict methods of administration and legitimate forms of authority associated with bureaucracy act to eliminate human freedom. Weber tended to offer a teleological argument with regard to bureaucracy. Weber's idea of bureaucracy is considered teleological to the extent that he posits that bureaucracies aim to achieve specific goals. Weber claimed that bureaucracies are goal-oriented organizations that use their efficiency and rational principles to reach their goals. A teleological analysis of businesses leads to the inclusion of all involved stakeholders in decision-making. The teleological view of Weberian bureaucracy postulates that all actors in an organization have various ends or goals, and attempt to find the most efficient way to achieve these goals. === Criticism === "There is dangerous risk of oversimplification in making Weber seem cold and heartless to such a degree that an efficiently-run Nazi death camp might appear admirable." In reality, Weber believed that by using human logic in his system, organizations could achieve improvement of the human condition in various workplaces. Another critique of Weber's theory is the argument of efficiency. Highest efficiency, in theory, can be attained through pure work with no regard for the workers (for example, long hours with little pay), which is why oversimplification can be dangerous. If we were to take one characteristic focusing on efficiency, it would seem like Weber is promoting unhealthy work conditions, when in fact he wanted the complete opposite. Taking together all of the characteristics that Weber regarded as hallmarks of bureaucracy, he recognized that a pure bureaucracy is nearly impossible to attain. Though his theory includes characteristics of a highly efficient organization, these characteristics are only meant to serve as a model of how a bureaucratic organization works, recognizing that the manifestation of that model in life differs from the pure model. With this said, the characteristics of Weber's theory all have to be perfect for a bureaucracy to function at its highest potential.
"Think of the concept as a bureau or desk with drawers in it, which seems to call out to you, demanding that everything must fit in its place." If one object in the drawer does not fit properly, the entire drawer becomes untidy, which is exactly the case in Weber's theory; if one characteristic is not fulfilled the rest of them are unable to work in unison, leaving the organization performing below its full potential. One characteristic that was meant to improve working conditions was his rule that "Organization follows hierarchical principle – subordinates follow orders or superiors, but have right of appeal (in contrast to more diffuse structure in traditional authority)." In other words, everyone in a company or any sort of work environment has the opportunity and right to disagree or to speak up if they are unhappy with something rather than not voice their opinion in fear of losing their job. Open communication is a very important part of Weber's bureaucracy, and is practiced today. Because of the communication it may not be the most efficient, but Weber would argue that improved human conditions are more important than efficiency. Weber's theory is not perfectly instantiated in real life. The elements of his theory are understood as "ideal types" and are not perfect reflections of individuals in their organizational roles and their interactions within organizations. Some individuals may regard Weber's model as good way to run an organization. == Rational system perspective == A rational organization system has two significant parts: (1) specificity of goals and (2) formalization. Goal specification provides guidelines for specific tasks to be completed along with a regulated way for resources to be allocated. Formalization is a way to standardize organizational behavior. As a result, there will be stable expectations, which create the rational organizational system. Scientific management: Frederick Winslow Taylor analyzed how to maximize the amount of output with the least amount of input. This was Taylor's attempt to rationalize the individual worker by: dividing work between managers and workers providing an incentive system (based on performance) scientifically trained workers developing a science for each individual's responsibilities making sure work gets done on time/efficiently Problems arose out of scientific management. One is that the standardization leads workers to rebel against mundanes. Another may see workers rejecting the incentive system because they are required to constantly work at their optimum level, an expectation that may be unrealistic. === Formal Organization === The concept of formal organization has been touched upon by a number of authors in the subject of organizational theory, such as Max Weber, whose bureaucratic models could be said to be an extension of the concept. In Chester Barnard's book The Functions of the Executive, formal organization is defined as "a system of contributors' activities that are consciously coordinated by the organization's purpose." This differs from informal organization, such as a human group, that consists of individuals and their interactions, but do not require these to be coordinated toward some common purpose, although formal organizations also consist of informal organizations, as sub-parts of their system. === Scientific management === The scientific management theory was introduced by Frederick Winslow Taylor to encourage production efficiency and productivity. 
Taylor argues that inefficiencies could be controlled through managing production as a science. Taylor defines scientific management as "knowing exactly what you want men to do, and then seeing that they do it in the best and cheapest way." According to Taylor, scientific management affects both workers and employers, and stresses control of the labor force by management. Taylor identifies four inherent principles of the scientific management theory: the creation of a scientific method of measurement that replaces the "rule-of-thumb" method; emphasis placed on the training of workers by management; cooperation between manager and workers to ensure the aforementioned principles are being met; and equal division of labor between managers and workers. == Division of labor == Division of labor is the separation of tasks so that individuals may specialize, leading to cost efficiency. Adam Smith linked the division of labor to increased efficiency and output. According to Smith, the division of labor is efficient for three reasons: (a) occupational specialization, (b) savings from not changing tasks, and (c) machines augmenting human labor. Occupational specialization leads to increased productivity and distinct skill. Furthermore, Smith argued that the skill of workers should be matched with the technology they employ. Although the division of labor is often viewed as inevitable in capitalism, several problems emerge, including alienation, lack of creativity, monotony, and lack of mobility. Adam Smith himself foresaw these problems and described the mental torpor the division of labor could create in workers. Creativity will naturally suffer due to the monotonous atmosphere that division of labor creates; repeatedly performing routines may not suit everyone. Furthermore, division of labor gives rise to employees who are not familiar with other parts of the job and cannot assist employees in different parts of the system. == Modernization theory == Modernization "began when a nation's rural population started moving from the countryside to cities." It deals with the cessation of traditional methods in order to pursue more contemporary, effective methods of organization. Urbanization is an inevitable characteristic of society because the formation of industries and factories induces profit maximization. It is fair to assume that, along with the increase in population resulting from subsequent urbanization, comes a demand for an intelligent and educated labor force. After the 1950s, Western culture utilized mass media to communicate its good fortune, attributed to modernization. The coverage promoted "economic mobility" among the social classes and increased the aspirations of many hopefuls in developing countries. Under this theory, any country could modernize by using Western civilization as a template. Although this theory of modernization seemed to pride itself only on the benefits, countries in the Middle East saw this movement in a different light. Middle Eastern countries believed that the media coverage of modernization implied that the more "traditional" societies have not "risen to a higher level of technological development." Consequently, they believed a movement that benefits those who have the monetary resources to modernize technological development would discriminate against minorities and the poor masses. Thus, they were reluctant to modernize because of the economic gap it would create between the rich and the poor.
The growth of modernization took place beginning in the 1950s. For the ensuing decade, people analyzed the diffusion of technological innovations within Western society and the communication that helped it disperse globally. This first "wave," as it became known, had some significant ramifications. First, economic development was enhanced by the spread of new technological techniques. Second, modernization supported a more educated society (as mentioned above), and thus a more qualified labor force. The second wave, taking place during the 1970s and 1980s, was a critical reframing of modernization, viewing the push of the innovations of Western society onto developing countries as an exertion of dominance. It refuted the concept of relying heavily on mass media for the betterment of society. The last wave of modernization theory, which took place in the 1990s, depicts impersonality. As the use of newspapers, television, and radio becomes more prevalent, the need for direct contact, a concept traditional organizations took pride in, diminishes. Thus, organizational interactions become more distant. According to Frank Dobbin, the modern worldview is the idea that "modern institutions are transparently purposive and that we are in the midst of an extraordinary progression towards more efficiency." This concept epitomizes the goal of modern firms, bureaucracies, and organizations to maximize efficiency. The key to achieving this goal is through scientific discoveries and innovations. Dobbin discusses the outdated role of culture in organizations. "New Institutionalists" explored the significance of culture in the modern organization. However, the rationalist worldview counters the use of cultural values in organizations, stating that "transcendental economic laws exist, that existing organizational structures must be functional under the parameters of those laws, [and] that the environment will eliminate organizations that adopt non-efficient solutions." These laws govern modern organizations and lead them in the direction that will maximize profits efficiently. Thus, the modernity of organizations is to generate maximum profit through the use of mass media, technological innovations, and social innovations in order to effectively allocate resources for the betterment of the global economy. == Hawthorne study == The neoclassical perspective began with the Hawthorne studies in the 1920s. This approach gave emphasis to "affective and socio-psychological aspects of human behavior in organizations." The Hawthorne study suggested that employees have social and psychological needs along with economic needs in order to be motivated to complete their assigned tasks. This theory of management was a product of the strong opposition to "the scientific and universal management process theory of Taylor and Fayol." It was a response to the way employees were treated in companies and how they were deprived of their needs and ambitions. In November 1924, a team of researchers, professors from the renowned Harvard Business School, began investigating the human aspects of work and working conditions at the Hawthorne plant of the Western Electric Company, Chicago. The company was producing bells and other electrical equipment for the telephone industry. Prominent professors in the research team included psychologist Elton Mayo, sociologists Roethlisberger and Whitehead, and company representative William Dickson.
The team conducted four separate experimental and behavioral studies over a seven-year period. These were: "Illumination Experiments (1924–27) to find out the effect of illumination on workers' productivity." "Relay Assembly Test Room experiment (1927–28) to find out the effect of changes in the number of work hours and related working conditions on worker productivity." "Experiments in interviewing workers: In 1928, a number of researchers went directly to workers, kept the variables of the previous experiment aside, and talked about what was, in their opinion, important to them. Around 20,000 workers were interviewed over a period of two years. The interviews enabled the researchers to discover a rich and intriguing world that was previously undiscovered and unexamined within the previously undertaken Hawthorne studies. The discovery of the informal organization and its relationship to the formal organization was the landmark of experiments in interviewing workers. This experiment led to a richer understanding of the social and interpersonal dynamics of people at work." "Bank Wiring Room Experiments (1931–32) to find out the social system of an organization." === Results === The Hawthorne studies helped conclude that "a human/social element operated in the workplace and that productivity increases were as much an outgrowth of group dynamics as of managerial demands and physical factors." The Hawthorne studies also concluded that although financial motives were important, social factors are just as important in defining worker productivity. The Hawthorne Effect was the improvement of productivity among the employees, characterized by: The satisfactory interrelationships between the coworkers Classification of personnel as social beings, with the proposal that a sense of belonging in the workplace is important to increase productivity levels in the workforce. An effective management that understood the way people interacted and behaved within the group. The management attempts to improve interpersonal skills through motivation, leading, communication and counseling. Encouragement of managers to acquire minimal knowledge of behavioral sciences to be able to understand and improve the interactions between employees === Criticism === Critics believed that Mayo gave too much importance to the social side of the study rather than addressing the needs of an organization. Also, they believed that the study takes advantage of employees because it influences their emotions by making it seem as if they are satisfied and content; however, it is merely a tool that is being used to further advance the productivity of the organization. == Polyphonic organizations == Niels Åkerstrøm Andersen's research about polyphonic organizations arises out of his understanding of society as functionally differentiated. Society is divided into countless social systems: communication systems with their own values and communicative codes. Niels Andersen is inspired by the German sociologist Niklas Luhmann and his theory about social systems. The core element of Luhmann's theory pivots around the problem of the contingency of meaning. In other words, systems theory becomes a theory of communication and how meaning is created within different social systems. Niels Andersen uses the elements of Luhmann's system theory to describe the differentiation of society and connect that to the evolution of the modern organization.
According to Andersen, society is functionally differentiated into a wide range of systems with their own binary code. The binary codes set some distinctions between a positive and negative value and divide the world into two halves. Understandings of the world are made through one side of the binary code. Andersen says that an organizational system always communicates and creates meaning through a function system (binary code). In other words, an organization can only communicate through one side of one binary code at once. Throughout history, organizations have always used several codes in their communication, but they have always had a primary codification. Andersen calls this type of organization a homophonic organization. The homophonic organization is no longer exercised in today's society. According to Andersen, today we have polyphonic organizations. Polyphonic organizations have emerged as a result of the way that the function systems have exploded beyond their organizational forms. A polyphonic organization is an organization that is connected to several function systems without a predefined primary function system (multiple binary codifications). In other words, the polyphonic organization is an organization that describes itself through many codes. Andersen addresses how it can be difficult for companies to plan their communication and action because they have to mediate between many codes at the same time. There is no longer a predicted hierarchy of codes and therefore no connection between organizations and specific communication. This can also create management challenges for companies because they have to take more factors into account than before. Andersen's view on polyphonic organizations provides a newer way to critically examine modern organizations and their communication decisions. A scholar closely associated with the research about polyphonic organizations is Niels Åkerstrøm Andersen. Niels Andersen believes that modern organizations have exploded beyond their original organizational boundaries. For many years, private companies have automatically been understood as part of the economy in the same way that political parties are considered a part of politics and museums are considered a part of art. Today, these concepts are increasingly linked together; according to Niels Å. Andersen, this is called the polyphonic organizational movement. This claim was first made back in 1963 by Richard M. Cyert and James G. March in the book "A Behavioral Theory of the Firm". They said that organizations rarely operate with only one value. According to Cyert and March, organizations actually often operate with multiple values in their everyday behavior. Niels Å. Andersen elaborates on this assertion in many of his publications. == Contingency theory == The contingency theory views organization design as "a constrained optimization problem," meaning that an organization must try to maximize performance by minimizing the effects of varying environmental and internal constraints. Contingency theory claims there is no best way to organize a corporation, to lead a company, or to make decisions. An organizational, leadership, or decision making style that is effective in some situations may not be successful in others. The optimal organization, leadership, or decision making style depends upon various internal and external constraints (factors).
=== Factors === Some examples of such constraints (factors) include: The size of the organization How the firm adapts itself to its environment Differences among resources and operations activities 1. Contingency theory of the organization The contingency theory of the organization states that there is no universal or one best way to manage an organization. Secondly, the organizational design and its subsystems must "fit" with the environment and lastly, effective organizations must not only have a proper "fit" with the environment, but also between their subsystems. 2. Contingency theory of leadership In the contingency theory of leadership, the success of the leader is a function of various factors in the form of subordinate, task, and/or group variables. The following theories stress using different styles of leadership appropriate to the needs created by different organizational situations. Some of these theories are: The contingency theory: The contingency model, developed by Fred Fiedler, explains that group performance is a result of interaction between the style of the leader and the characteristics of the environment in which the leader works. The Hersey–Blanchard situational theory: This theory is an extension of Blake and Mouton's Managerial Grid and Reddin's 3-D Management style theory. This model expanded the notion of relationship and task dimensions of leadership and added a readiness dimension. 3. Contingency theory of decision-making The effectiveness of a decision procedure depends upon a number of aspects of the situation: The importance of the decision quality and acceptance. The amount of relevant information possessed by the leader and subordinates. The amount of disagreement among subordinates with respect to their alternatives. === Criticism === It has been argued that the contingency theory implies that a leader switch is the only method to correct any problems facing leadership styles in certain organizational structures. In addition, the credibility of the contingency model itself has been questioned. == See also == Organizational culture – Customary behaviours in an organization Organizational studies – Academic field Outline of organizational theory – Overview of concepts related to organizational theory == References == == External links == Media related to Organizational theory at Wikimedia Commons Quotations related to Organizational theory at Wikiquote
Wikipedia/Organizational_theory
In the theory of finite population sampling, a sampling design specifies for every possible sample its probability of being drawn. == Mathematical formulation == Mathematically, a sampling design is denoted by the function P ( S ) {\displaystyle P(S)} which gives the probability of drawing a sample S . {\displaystyle S.} == An example of a sampling design == During Bernoulli sampling, P ( S ) {\displaystyle P(S)} is given by P ( S ) = q N sample ( S ) × ( 1 − q ) ( N pop − N sample ( S ) ) {\displaystyle P(S)=q^{N_{\text{sample}}(S)}\times (1-q)^{(N_{\text{pop}}-N_{\text{sample}}(S))}} where for each element q {\displaystyle q} is the probability of being included in the sample and N sample ( S ) {\displaystyle N_{\text{sample}}(S)} is the total number of elements in the sample S {\displaystyle S} and N pop {\displaystyle N_{\text{pop}}} is the total number of elements in the population (before sampling commenced). == Sample design for managerial research == In business research, companies must often generate samples of customers, clients, employees, and so forth to gather their opinions. Sample design is also a critical component of marketing research and employee research for many organizations. During sample design, firms must answer questions such as: What is the relevant population, sampling frame, and sampling unit? What is the appropriate margin of error that should be achieved? How should sampling error and non-sampling error be assessed and balanced? These issues require very careful consideration, and good commentaries are provided in several sources. == See also == Bernoulli sampling Sampling probability Sampling (statistics) == References == == Further reading == Sarndal, Swenson, and Wretman (1992), Model Assisted Survey Sampling, Springer-Verlag, ISBN 0-387-40620-4
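To make the Bernoulli design concrete, the short Python sketch below evaluates P(S) for a hypothetical population; the function name and the numerical values are illustrative assumptions rather than part of any standard library.

```python
from math import comb

def bernoulli_design_probability(n_sample: int, n_pop: int, q: float) -> float:
    """Probability P(S) of drawing one particular sample S of size n_sample
    from a population of n_pop elements under Bernoulli sampling, where each
    element is included independently with probability q."""
    return q ** n_sample * (1 - q) ** (n_pop - n_sample)

# Example: population of 10 elements, inclusion probability q = 0.3.
# Every specific sample containing exactly 4 elements has the same probability:
p_single = bernoulli_design_probability(4, 10, 0.3)

# Summing over all comb(10, 4) samples of size 4 gives the probability that
# the realised sample size equals 4 (a binomial probability).
p_size_4 = comb(10, 4) * p_single
print(p_single, p_size_4)
```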
Wikipedia/Sampling_design
In survey methodology, Poisson sampling (sometimes denoted as PO sampling: 61 ) is a sampling process where each element of the population is subjected to an independent Bernoulli trial which determines whether the element becomes part of the sample.: 85  Each element of the population may have a different probability of being included in the sample ( π i {\displaystyle \pi _{i}} ). The probability of being included in a sample during the drawing of a single sample is denoted as the first-order inclusion probability of that element ( p i {\displaystyle p_{i}} ). If all first-order inclusion probabilities are equal, Poisson sampling becomes equivalent to Bernoulli sampling, which can therefore be considered to be a special case of Poisson sampling. == A mathematical consequence of Poisson sampling == Mathematically, the first-order inclusion probability of the ith element of the population is denoted by the symbol π i {\displaystyle \pi _{i}} and the second-order inclusion probability that a pair consisting of the ith and jth element of the population that is sampled is included in a sample during the drawing of a single sample is denoted by π i j {\displaystyle \pi _{ij}} . The following relation is valid during Poisson sampling when i ≠ j {\displaystyle i\neq j} : π i j = π i × π j . {\displaystyle \pi _{ij}=\pi _{i}\times \pi _{j}.} π i i {\displaystyle \pi _{ii}} is defined to be π i {\displaystyle \pi _{i}} . == See also == Bernoulli sampling Poisson distribution Poisson process Sampling design == References ==
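As a rough illustration of the definitions above, the following Python sketch draws Poisson samples with unequal first-order inclusion probabilities and empirically checks the relation π_ij = π_i × π_j for one pair of elements; the probability values and the chosen pair are arbitrary assumptions made for the example.

```python
import random

def poisson_sample(inclusion_probs):
    """Draw one Poisson sample: each element i enters the sample
    independently with its own first-order inclusion probability pi_i."""
    return [i for i, p in enumerate(inclusion_probs) if random.random() < p]

# Unequal inclusion probabilities for a population of 5 elements (assumed values).
pi = [0.1, 0.5, 0.9, 0.3, 0.7]

# Empirically check the second-order inclusion probability pi_ij = pi_i * pi_j
# for i != j, here for the pair (1, 2).
trials = 100_000
both = sum(1 for _ in range(trials) if {1, 2} <= set(poisson_sample(pi)))
print(both / trials, pi[1] * pi[2])   # the two numbers should be close
```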
Wikipedia/Poisson_trial
Some interpretations of quantum mechanics posit a central role for an observer of a quantum phenomenon. The quantum mechanical observer is tied to the issue of the observer effect, where a measurement necessarily requires interacting with the physical object being measured, affecting its properties through the interaction. The term "observable" has gained a technical meaning, denoting a Hermitian operator that represents a measurement.: 55  == Foundation == The theoretical foundation of the concept of measurement in quantum mechanics is a contentious issue deeply connected to the many interpretations of quantum mechanics. A key focus point is that of wave function collapse, for which several popular interpretations assert that measurement causes a discontinuous change into an eigenstate of the operator associated with the quantity that was measured, a change which is not time-reversible. More explicitly, the superposition principle (ψ = Σn anψn) of quantum physics dictates that for a wave function ψ, a measurement will result in a state of the quantum system corresponding to one of the m possible eigenvalues fn, n = 1, 2, ..., m, of the operator F̂, which acts in the space spanned by the eigenfunctions ψn, n = 1, 2, ..., m. Once one has measured the system, one knows its current state; and this prevents it from being in one of its other states — it has apparently decohered from them without prospects of future strong quantum interference. This means that the type of measurement one performs on the system affects the end-state of the system. An experimentally studied situation related to this is the quantum Zeno effect, in which a quantum state would decay if left alone, but does not decay because of its continuous observation. The dynamics of a quantum system under continuous observation are described by a quantum stochastic master equation known as the Belavkin equation. Further studies have shown that even observing the results after the photon is produced leads to collapsing the wave function and loading a back-history as shown by the delayed-choice quantum eraser. When discussing the wave function ψ which describes the state of a system in quantum mechanics, one should be cautious of a common misconception that assumes that the wave function ψ amounts to the same thing as the physical object it describes. This flawed concept must then require the existence of an external mechanism, such as a measuring instrument, that lies outside the principles governing the time evolution of the wave function ψ, in order to account for the so-called "collapse of the wave function" after a measurement has been performed. But the wave function ψ is not a physical object like, for example, an atom, which has an observable mass, charge and spin, as well as internal degrees of freedom. Instead, ψ is an abstract mathematical function that contains all the statistical information that an observer can obtain from measurements of a given system. In this case, there is no real mystery in that this mathematical form of the wave function ψ must change abruptly after a measurement has been performed. A consequence of Bell's theorem is that measurement on one of two entangled particles can appear to have a nonlocal effect on the other particle. Additional problems related to decoherence arise when the observer is modeled as a quantum system.
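The role of measurement outlined above can be illustrated with a small numerical sketch. The Python code below, a toy example using NumPy, takes a Hermitian "observable", computes the Born-rule probabilities |⟨ψn|ψ⟩|² for a superposition state, and simulates one measurement by selecting an eigenvalue and keeping the corresponding eigenstate; the particular matrix and state are arbitrary choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy Hermitian observable F and a normalised superposition state psi.
F = np.array([[1.0, 1.0],
              [1.0, -1.0]])                 # illustrative 2x2 Hermitian matrix
eigvals, eigvecs = np.linalg.eigh(F)        # eigenvalues f_n and eigenstates psi_n

psi = np.array([0.6, 0.8], dtype=complex)   # |psi> = 0.6|0> + 0.8|1>

# Born rule: probability of obtaining eigenvalue f_n is |<psi_n|psi>|^2.
amplitudes = eigvecs.conj().T @ psi
probs = np.abs(amplitudes) ** 2

# Simulate one measurement: pick an outcome and "collapse" onto that eigenstate.
n = rng.choice(len(eigvals), p=probs)
post_state = eigvecs[:, n]                  # defined up to a global phase
print("outcome:", eigvals[n], "post-measurement state:", post_state)
```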
== Description == The Copenhagen interpretation, which is the most widely accepted interpretation of quantum mechanics among physicists,: 248  posits that an "observer" or a "measurement" is merely a physical process. One of the founders of the Copenhagen interpretation, Werner Heisenberg, wrote: Of course the introduction of the observer must not be misunderstood to imply that some kind of subjective features are to be brought into the description of nature. The observer has, rather, only the function of registering decisions, i.e., processes in space and time, and it does not matter whether the observer is an apparatus or a human being; but the registration, i.e., the transition from the "possible" to the "actual," is absolutely necessary here and cannot be omitted from the interpretation of quantum theory. Niels Bohr, also a founder of the Copenhagen interpretation, wrote: all unambiguous information concerning atomic objects is derived from the permanent marks such as a spot on a photographic plate, caused by the impact of an electron left on the bodies which define the experimental conditions. Far from involving any special intricacy, the irreversible amplification effects on which the recording of the presence of atomic objects rests rather remind us of the essential irreversibility inherent in the very concept of observation. The description of atomic phenomena has in these respects a perfectly objective character, in the sense that no explicit reference is made to any individual observer and that therefore, with proper regard to relativistic exigencies, no ambiguity is involved in the communication of information. Likewise, Asher Peres stated that "observers" in quantum physics are similar to the ubiquitous "observers" who send and receive light signals in special relativity. Obviously, this terminology does not imply the actual presence of human beings. These fictitious physicists may as well be inanimate automata that can perform all the required tasks, if suitably programmed.: 12  Critics of the special role of the observer also point out that observers can themselves be observed, leading to paradoxes such as that of Wigner's friend; and that it is not clear how much consciousness is required. As John Bell inquired, "Was the wave function waiting to jump for thousands of millions of years until a single-celled living creature appeared? Or did it have to wait a little longer for some highly qualified measurer—with a PhD?" == Anthropocentric interpretation == The prominence of seemingly subjective or anthropocentric ideas like "observer" in the early development of the theory has been a continuing source of disquiet and philosophical dispute. A number of new-age religious or philosophical views give the observer a more special role, or place constraints on who or what can be an observer. As an example of such claims, Fritjof Capra declared, "The crucial feature of atomic physics is that the human observer is not only necessary to observe the properties of an object, but is necessary even to define these properties." There is no credible peer-reviewed research that backs such claims. == Confusion with uncertainty principle == The uncertainty principle has been frequently confused with the observer effect, evidently even by its originator, Werner Heisenberg. The uncertainty principle in its standard form describes how precisely it is possible to measure the position and momentum of a particle at the same time. 
If the precision in measuring one quantity is increased, the precision in measuring the other decreases. An alternative version of the uncertainty principle, more in the spirit of an observer effect, fully accounts for the disturbance the observer has on a system and the error incurred, although this is not how the term "uncertainty principle" is most commonly used in practice. == See also == Observer effect (physics) Quantum foundations == References ==
Wikipedia/Observation_(physics)
In mathematics, a unitary transformation is a linear isomorphism that preserves the inner product: the inner product of two vectors before the transformation is equal to their inner product after the transformation. == Formal definition == More precisely, a unitary transformation is an isometric isomorphism between two inner product spaces (such as Hilbert spaces). In other words, a unitary transformation is a bijective function U : H 1 → H 2 {\displaystyle U:H_{1}\to H_{2}} between two inner product spaces, H 1 {\displaystyle H_{1}} and H 2 , {\displaystyle H_{2},} such that ⟨ U x , U y ⟩ H 2 = ⟨ x , y ⟩ H 1 for all x , y ∈ H 1 . {\displaystyle \langle Ux,Uy\rangle _{H_{2}}=\langle x,y\rangle _{H_{1}}\quad {\text{ for all }}x,y\in H_{1}.} It is a linear isometry, as one can see by setting x = y . {\displaystyle x=y.} == Unitary operator == In the case when H 1 {\displaystyle H_{1}} and H 2 {\displaystyle H_{2}} are the same space, a unitary transformation is an automorphism of that Hilbert space, and then it is also called a unitary operator. == Antiunitary transformation == A closely related notion is that of antiunitary transformation, which is a bijective function U : H 1 → H 2 {\displaystyle U:H_{1}\to H_{2}\,} between two complex Hilbert spaces such that ⟨ U x , U y ⟩ = ⟨ x , y ⟩ ¯ = ⟨ y , x ⟩ {\displaystyle \langle Ux,Uy\rangle ={\overline {\langle x,y\rangle }}=\langle y,x\rangle } for all x {\displaystyle x} and y {\displaystyle y} in H 1 {\displaystyle H_{1}} , where the horizontal bar represents the complex conjugate. == See also == Antiunitary Orthogonal transformation Time reversal Unitary group Unitary operator Unitary matrix Wigner's theorem Unitary transformations in quantum mechanics
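A minimal numerical check of the defining property can be written as follows, assuming a randomly generated unitary matrix as the transformation; constructing the unitary via a QR decomposition is just one convenient, illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a random unitary U via the QR decomposition of a complex Gaussian matrix.
n = 4
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Q, R = np.linalg.qr(A)
U = Q * (np.diagonal(R) / np.abs(np.diagonal(R)))  # fix column phases

x = rng.normal(size=n) + 1j * rng.normal(size=n)
y = rng.normal(size=n) + 1j * rng.normal(size=n)

# <Ux, Uy> should equal <x, y> (np.vdot conjugates its first argument).
lhs = np.vdot(U @ x, U @ y)
rhs = np.vdot(x, y)
print(np.allclose(lhs, rhs))                     # True: inner product preserved
print(np.allclose(U.conj().T @ U, np.eye(n)))    # True: U is unitary
```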
Wikipedia/Unitary_transformation
Model synthesis (also wave function collapse or 'wfc') is a family of constraint-solving algorithms commonly used in procedural generation, especially in the video game industry. Some video games known to have utilized variants of the algorithm include Bad North, Townscaper, and Caves of Qud. The first example of this type of algorithm was described by Paul Merrell, who termed it 'model synthesis' first in his 2007 i3D paper and also presented at the 2008 SIGGRAPH conference and his 2009 PhD thesis. The name 'wave function collapse' later became the popular name for a variant of that algorithm, after an implementation by Maxim Gumin was published in 2016 on a GitHub repository with that name. Gumin's implementation significantly popularised this style of algorithm, with it becoming widely adopted and adapted by technical artists and game developers over the following years. There were a number of inspirations to Gumin's implementation, including Merrell's PhD dissertation, and convolutional neural network style transfer. The popular name for the algorithm, 'wave function collapse', is from an analogy drawn between the algorithm's method and the concept of superposition and observation in quantum mechanics. Some innovations present in Gumin's implementation included the usage of overlapping patterns, allowing a single image to be used as an input to the algorithm. Some have speculated that the reason Gumin's implementation proved more popular than Merrell's, may have been due to the 'model synthesis' implementation's lower accessibility, its 3D focus, or perhaps the general public's computing constraints at the time. One of the differences between Merrell & Gumin's implementation and 'wave function collapse' lies in the decision of which cell to 'collapse' next. Merrell's implementation uses a scanline approach, whereas Gumin's always selects as next cell the one with the lowest number of possible outcomes. == Description == The WFC or 'model synthesis' algorithm has some variants. Gumin and Merrell's implementations are described below, and other variants are noted: === Gumin's implementation === The input bitmap is read, and the patterns present within the bitmap are counted. An array is created with the dimensions of the output desired. Each cell of the array is initialized in an 'unobserved' state The following steps are repeated: The cell with the lowest number of possible output states is located 'Collapse' this cell into one of its possible states according to the rules Check that all cells are still valid and follow the rules Once all cells are 'collapsed' into a definite state, return the output. If the output is illegal, discard it, and repeat the process until legal. === Merrell's implementation === Merrell's earlier implementation is substantially the same as Gumin's with some minor differences. (1) In Merrell's version, there is no requirement to select the cell with the lowest number of possible output states for collapse. Instead, a scanline approach is adopted. According to Merrell, this results in a lower failure rate of the model without any negative effect on quality. Some commentators have noted however that the scanline approach to 'collapse' tends to result in directional artifacts. (2) Merrell's approach performs the algorithm in chunks, rather than all-at-once. This approach greatly reduces the failure rate for many large complex models; especially in a 3D space. 
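The loop described above can be sketched in a few dozen lines. The following Python code is a deliberately simplified illustration of the Gumin-style variant (collapse the cell with the fewest remaining options, then propagate constraints); the tile names, adjacency rules, and grid size are invented for the example and do not correspond to either published implementation.

```python
import random

# Toy tile set and adjacency rules (invented for illustration): which tiles may
# sit next to each other, in any direction.
TILES = ["sea", "coast", "land"]
ALLOWED = {
    ("sea", "sea"), ("sea", "coast"), ("coast", "sea"),
    ("coast", "coast"), ("coast", "land"), ("land", "coast"),
    ("land", "land"),
}

W, H = 8, 6

def neighbours(x, y):
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < W and 0 <= ny < H:
            yield nx, ny

def generate(seed=0):
    random.seed(seed)
    # Every cell starts "unobserved": all tiles are still possible.
    grid = {(x, y): set(TILES) for x in range(W) for y in range(H)}
    while True:
        # Pick the unresolved cell with the fewest remaining options.
        open_cells = [c for c, opts in grid.items() if len(opts) > 1]
        if not open_cells:
            return grid  # every cell is collapsed
        cell = min(open_cells, key=lambda c: len(grid[c]))
        grid[cell] = {random.choice(sorted(grid[cell]))}   # collapse
        # Propagate: remove options with no compatible option in a changed neighbour.
        stack = [cell]
        while stack:
            cx, cy = stack.pop()
            for n in neighbours(cx, cy):
                keep = {t for t in grid[n]
                        if any((t, u) in ALLOWED for u in grid[(cx, cy)])}
                if not keep:
                    raise RuntimeError("contradiction - discard and retry")
                if keep != grid[n]:
                    grid[n] = keep
                    stack.append(n)

result = generate()
for y in range(H):
    print(" ".join(next(iter(result[(x, y)]))[0] for x in range(W)))
```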
== Developments == In April 2023 Shaad Alaka and Rafael Bidarra of Delft University proposed 'Hierarchical Semantic wave function collapse'. Essentially, the algorithm is modified to work beyond simple, unstructured sets of tiles. Prior to their work, all WFC algorithm variants operated on a flat set of tile choices per cell. Their generalised approach organizes tile-sets into a hierarchy, consisting of abstract nodes called 'meta-tiles', and terminating nodes called 'leaf tiles'. For example, on the first pass, WFC might make a certain tile a meta-tile of 'castle' type; which on a second pass will be collapsed into other tiles based on a rule, e.g. a 'wall' or 'grass' tile. == References == == External links == https://github.com/mxgmn/WaveFunctionCollapse
Wikipedia/Wave_function_collapse_(algorithm)
Quantum energy teleportation (QET) is an application of quantum information science. It is a variation of the quantum teleportation protocol. Quantum energy teleportation allows energy to be teleported from a sender to a receiver, regardless of location. This protocol works by having the sender inject energy into the quantum vacuum state, from which the receiver can then extract positive energy. QET differs from quantum teleportation in that energy, rather than information about an unknown state, is transferred from the sender to the receiver. This procedure does not allow faster-than-light transfer of energy and does not allow the spontaneous creation of energy. The sender and receiver share a pair of entangled spins in a spin chain. Energy can be teleported from the sender, Alice, to the receiver, Bob, instantly by using the effects of local operators. However, in order for Bob to extract this energy from his spin, he requires a classically communicated signal from Alice. Since this classical signal cannot be transmitted faster than the speed of light, the speed at which energy can be transferred from Alice to Bob is also limited by the speed of light. Quantum energy teleportation was first proposed conceptually by Masahiro Hotta in 2008. The protocol was first experimentally demonstrated in 2023 by Kazuki Ikeda, who used superconducting quantum computers to show the energy teleportation effect. == QET mechanisms == There are two main factors involved in how QET works: how energy is transferred from Alice to Bob, and how Bob can extract energy from his spin. === Spin chains === QET is studied by analyzing spin chain models. A spin chain is a type of model in which each site of a one-dimensional chain is assigned a certain spin value, typically +1/2 or -1/2 when considering spin-1/2. The spin of one individual site can interact with the spin of its adjacent neighbours, causing the entire system to be coupled together. Spin chains are useful for QET because they can be entangled even in the ground state. This means that even without external energy being added to the system, the ground state exhibits quantum correlations across the chain. Alice and Bob are both in possession of an entangled state from a spin chain system. This can provide a rudimentary explanation of how energy can be transferred from Alice's spin to Bob's spin, since any action on Alice's spin can have an effect on Bob's spin. === Vacuum fluctuations === The other key component to understanding the QET mechanism is vacuum fluctuations and the presence of negative energy density regions within the energy distribution of a quantum mechanical system. Vacuum fluctuations arise from the Heisenberg uncertainty principle, specifically the uncertainty between the field amplitude and its conjugate momentum, which is analogous to the position-momentum uncertainty principle. The commutation relation, [ φ ( x , y , z ) , Π ( x ′ , y ′ , z ′ ) ] = i ℏ δ ( x − x ′ ) δ ( y − y ′ ) δ ( z − z ′ ) {\textstyle [\varphi (x,y,z),\Pi (x',y',z')]=i\hbar \delta (x-x')\delta (y-y')\delta (z-z')} , gives rise to uncertainty in energy densities at different spatial points. Consequently, the energy fluctuates around the zero-point energy density of the state. The vacuum fluctuations in certain regions can have lower amplitude fluctuations due to the effect of local operations.
These regions possess a negative energy density since the vacuum fluctuations already represent the zero-energy state. Therefore, fluctuations of lower amplitude relative to the vacuum fluctuations represent a negative energy density region. Since the entire vacuum state still has zero energy, there exist other regions with higher vacuum fluctuations with a positive energy density. Negative energy density in the vacuum fluctuations plays an important role in QET since it allows for the extraction of energy from the vacuum state. Positive energy can be extracted from regions of positive energy density, which are created alongside regions of negative energy density elsewhere in the vacuum state. == QET in a spin chain system == === Framework of the quantum energy teleportation protocol === The QET process is considered over short time scales, such that the Hamiltonian of the spin chain system is approximately invariant with time. It is also assumed that local operations and classical communications (LOCC) for the spins can be repeated several times within a short time span. Alice and Bob share entangled spin states in the ground state | g ⟩ {\textstyle |g\rangle } with correlation length ℓ {\textstyle \ell } . Alice is located at site n A {\textstyle n_{A}} of the spin chain system and Bob is located at site n B {\textstyle n_{B}} of the spin chain system such that Alice and Bob are far away from each other, | n A − n B | ≫ 1 {\textstyle |n_{A}-n_{B}|\gg 1} . === The QET protocol === Conceptually, the QET protocol can be described by three steps: Alice performs a local measurement on her spin at site n A {\textstyle n_{A}} , measuring eigenvalue μ {\textstyle \mu } . When Alice acts on her spin with the local operator, energy E A {\textstyle E_{A}} is injected into the state. Alice then communicates to Bob over a classical channel what her measurement result μ {\textstyle \mu } was. It is assumed that Alice and Bob's state does not evolve during the time the classical message is travelling. Based on the measurement result μ {\textstyle \mu } that Alice obtained on her spin, Bob applies a specific local operator to his spin located at site n B {\textstyle n_{B}} . After the application of the local operator, the expectation value of the Hamiltonian at this site H ^ n B {\textstyle {\hat {H}}_{n_{B}}} is negative. Since the expectation of H ^ n B {\textstyle {\hat {H}}_{n_{B}}} is zero before Bob's operation, the negative expectation value of H ^ n B {\textstyle {\hat {H}}_{n_{B}}} after the local operation implies energy was extracted at site n B {\textstyle n_{B}} while the operation was being applied. Intuitively, one would not expect to be able to extract energy from the ground state in such a manner. However, this protocol allows energy to be teleported from Alice to Bob, despite Alice and Bob sharing entangled spin states in the ground state. == Mathematical description == === Local measurement by Alice === The QET protocol can be worked out mathematically. The derivation in this section follows the work done by Masahiro Hotta in "Quantum Energy Teleportation in Spin Chain Systems". Consider Alice's spin at site n A {\textstyle n_{A}} in a spin chain where each spin is entangled in ground state | g ⟩ {\textstyle |g\rangle } .
For a Hermitian unitary local operator σ ^ A = u → A ⋅ σ → n A {\displaystyle {\hat {\sigma }}_{A}={\vec {u}}_{A}\cdot {\vec {\sigma }}_{n_{A}}} , where u → A {\textstyle {\vec {u}}_{A}} represents a 3D unit vector and σ → n A {\textstyle {\vec {\sigma }}_{n_{A}}} is the Pauli spin matrix vector at site n A {\textstyle n_{A}} , the eigenvalues are ( − 1 ) μ {\displaystyle (-1)^{\mu }} with μ = 0 , 1 {\textstyle \mu =0,1} . Alice can perform a measurement on spin at site n A {\textstyle n_{A}} using this local operator to measures μ = 0 or 1 {\textstyle \mu =0{\text{ or }}1} . The expression for σ ^ A {\textstyle {\hat {\sigma }}_{A}} has spectral expansion σ ^ A = ∑ μ = 0 , 1 ( − 1 ) μ P ^ A ( μ ) {\displaystyle {\hat {\sigma }}_{A}=\sum _{\mu =0,1}(-1)^{\mu }{\hat {P}}_{A}(\mu )} where P ^ A ( μ ) {\textstyle {\hat {P}}_{A}(\mu )} is a projective operator which projects onto the eigensubspace with μ {\textstyle \mu } . After Alice has made the measurement with the σ ^ A {\textstyle {\hat {\sigma }}_{A}} operator, the spin is left in the post-measurement state 1 p A ( μ ) P ^ A ( μ ) | g ⟩ {\textstyle {\frac {1}{\sqrt {p_{A}(\mu )}}}{\hat {P}}_{A}(\mu )|g\rangle } where p A ( μ ) = ⟨ g | P ^ A ( μ ) | g ⟩ {\textstyle p_{A}(\mu )=\langle g|{\hat {P}}_{A}(\mu )|g\rangle } . This is a mixed quantum state with density matrix: ρ ^ ′ = ∑ μ = 0 , 1 p A ( μ ) 1 p A ( μ ) P ^ A ( μ ) | g ⟩ ⟨ g | P ^ A ( μ ) 1 p A ( μ ) = P ^ A ( μ ) | g ⟩ ⟨ g | P ^ A ( μ ) . {\displaystyle {\begin{aligned}{\hat {\rho }}'&=\sum _{\mu =0,1}p_{A}(\mu ){\frac {1}{\sqrt {p_{A}(\mu )}}}{\hat {P}}_{A}(\mu )|g\rangle \langle g|{\hat {P}}_{A}(\mu ){\frac {1}{\sqrt {p_{A}(\mu )}}}\\&={\hat {P}}_{A}(\mu )|g\rangle \langle g|{\hat {P}}_{A}(\mu ).\end{aligned}}} This density matrix satisfies the relation: Tr n A [ ρ ′ ] = Tr n A [ | g ⟩ ⟨ g | ] {\textstyle {\text{Tr}}_{n_{A}}[\rho ']={\text{Tr}}_{n_{A}}[|g\rangle \langle g|]} which shows that the quantum fluctuation of ρ ′ {\textstyle \rho '} is the same as that of the ground state except at site n A {\textstyle n_{A}} . This measurement requires Alice to input energy E A {\textstyle E_{A}} into the spin chain. Since the ground state has zero energy, E A {\textstyle E_{A}} is related by the difference in energy between the final quantum state ρ ′ {\textstyle \rho '} and the initial ground state | g ⟩ {\textstyle |g\rangle } : E A = Tr [ ρ ^ ′ H ^ ] − ⟨ g | H ^ | g ⟩ = ∑ μ = 0 , 1 ⟨ g | P ^ A ( μ ) H ^ P ^ A ( μ ) | g ⟩ . {\displaystyle E_{A}={\text{Tr}}[{\hat {\rho }}'{\hat {H}}]-\langle g|{\hat {H}}|g\rangle =\sum _{\mu =0,1}\langle g|{\hat {P}}_{A}(\mu ){\hat {H}}{\hat {P}}_{A}(\mu )|g\rangle .} The energy Alice needs to input is non-negative since H ^ {\displaystyle {\hat {H}}} is non-negative. H ^ {\displaystyle {\hat {H}}} is shown to be non-negative in the source material. This is an important result of the measurement process as the point of the QET protocol is for Alice to inject a positive quantity of energy into the spin chain. ==== Emergence of negative energy density ==== The Hamiltonian for the spin chain system H ^ {\displaystyle {\hat {H}}} can be expressed as the sum of the local energy operators T n ^ {\textstyle {\hat {T_{n}}}} over all n {\textstyle n} spins: H ^ = ∑ n T ^ n {\displaystyle {\hat {H}}=\sum _{n}{\hat {T}}_{n}} . 
The local energy operators T n ^ {\textstyle {\hat {T_{n}}}} can be shifted by adding constants such that the expectation value of the local energy operators are each zero, ⟨ g | T ^ n | g ⟩ = 0 {\textstyle \langle g|{\hat {T}}_{n}|g\rangle =0} . Due to entanglement, the ground state | g ⟩ {\textstyle |g\rangle } is not an eigenstate of T n ^ {\textstyle {\hat {T_{n}}}} . Since the expectation value of the local energy operators are zero, it implies that the lowest eigenvalue of T n ^ {\textstyle {\hat {T_{n}}}} must be negative. The expectation value of T n ^ {\textstyle {\hat {T_{n}}}} involves eigenstates of T n ^ {\textstyle {\hat {T_{n}}}} with positive and negative energy densities, but will average to 0 across all eigenstates. Therefore, some of the spins in the spin chain that possess a negative energy density lead to spins possessing positive energy density to balance them out. This implies that energy can be withdrawn from certain spin sites with positive energy density, which is the process Bob will use to receive the teleported energy from Alice. === Classical communication between Alice and Bob === Alice then informs Bob of the value of the measurement μ {\textstyle \mu } over a classical channel. The time interval over which this information is transferred is considered to be very short such that the system does not evolve over this time and no emergence of energy flux occurs. === Application of a local unitary by Bob === Bob then applies the local unitary U ^ B ( μ ) {\textstyle {\hat {U}}_{B}(\mu )} to the spin at site n B {\textstyle n_{B}} where U ^ B ( μ ) = I ^ cos θ + i ( − 1 ) μ σ ^ B sin θ {\displaystyle {\hat {U}}_{B}(\mu )={\hat {I}}{\text{cos}}\theta +i(-1)^{\mu }{\hat {\sigma }}_{B}{\text{sin}}\theta } . Here σ ^ B = u → B ⋅ σ → n B {\textstyle {\hat {\sigma }}_{B}={\vec {u}}_{B}\cdot {\vec {\sigma }}_{n_{B}}} where u → B {\textstyle {\vec {u}}_{B}} is a 3D unit vector and σ → n B {\textstyle {\vec {\sigma }}_{n_{B}}} is the Pauli spin matrix vector at site n B {\textstyle n_{B}} . Two real coefficients are introduced ξ = ⟨ g | σ ^ B H ^ σ ^ B | g ⟩ {\textstyle \xi =\langle g|{\hat {\sigma }}_{B}{\hat {H}}{\hat {\sigma }}_{B}|g\rangle } and η = ⟨ g | σ ^ A σ ^ ˙ B | g ⟩ {\displaystyle \eta =\langle g|{\hat {\sigma }}_{A}{\dot {\hat {\sigma }}}_{B}|g\rangle } , where σ ^ ˙ B = i [ H ^ n B , σ ^ B ] {\textstyle {\dot {\hat {\sigma }}}_{B}=i[{\hat {H}}_{n_{B}},{\hat {\sigma }}_{B}]} , which can be used to define the real angle parameter θ {\textstyle \theta } by cos ( 2 θ ) = ξ ξ 2 + η 2 {\textstyle {\text{cos}}(2\theta )={\frac {\xi }{\sqrt {\xi ^{2}+\eta ^{2}}}}} and sin ( 2 θ ) = − η ξ 2 + η 2 {\textstyle {\text{sin}}(2\theta )=-{\frac {\eta }{\sqrt {\xi ^{2}+\eta ^{2}}}}} . Using [ T ^ n , σ ^ B ] = 0 {\textstyle [{\hat {T}}_{n},{\hat {\sigma }}_{B}]=0} for | n − n B | > L {\textstyle |n-n_{B}|>L} , σ ^ ˙ B {\textstyle {\dot {\hat {\sigma }}}_{B}} can be expressed as σ ^ ˙ B = i [ H ^ , σ ^ B ] {\textstyle {\dot {\hat {\sigma }}}_{B}=i[{\hat {H}},{\hat {\sigma }}_{B}]} . T n ^ {\textstyle {\hat {T_{n}}}} refers to the local energy at site n {\textstyle n} . The full derivation can be found in the source material. Essentially, Bob's application of the local unitary U ^ B ( μ ) {\textstyle {\hat {U}}_{B}(\mu )} leaves his state in the quantum state ρ ^ {\displaystyle {\hat {\rho }}} . 
By using the relations for θ {\textstyle \theta } and other simplifications, the expectation value of the energy at site n B {\textstyle n_{B}} can be expressed as Tr [ ρ ^ H ^ n B ] {\textstyle {\text{Tr}}[{\hat {\rho }}{\hat {H}}_{n_{B}}]} or Tr [ ρ ^ H ^ n B ] = 1 2 [ ξ − ξ 2 + η 2 ] . {\displaystyle {\text{Tr}}[{\hat {\rho }}{\hat {H}}_{n_{B}}]={\frac {1}{2}}\left[\xi -{\sqrt {\xi ^{2}+\eta ^{2}}}\right].} If η ≠ 0 {\textstyle \eta \neq 0} then Tr [ ρ ^ H ^ n B ] {\textstyle {\text{Tr}}[{\hat {\rho }}{\hat {H}}_{n_{B}}]} becomes negative. Before Bob acts with the local unitary U ^ B ( μ ) {\textstyle {\hat {U}}_{B}(\mu )} , the energy around Bob is zero: Tr [ ρ ^ ′ H ^ n B ] = 0 {\textstyle {\text{Tr}}[{\hat {\rho }}'{\hat {H}}_{n_{B}}]=0} . This implies that some positive energy E B {\textstyle E_{B}} must be emitted from the spin chain as from the local energy conservation around site n B {\textstyle n_{B}} : E B + Tr [ ρ ^ H ^ n B ] = Tr [ ρ ^ ′ H ^ n B ] = 0 {\textstyle E_{B}+{\text{Tr}}[{\hat {\rho }}{\hat {H}}_{n_{B}}]={\text{Tr}}[{\hat {\rho }}'{\hat {H}}_{n_{B}}]=0} . Which then follows that: E B = Tr [ ρ ^ ′ H ^ n B ] − Tr [ ρ ^ H ^ n B ] = 1 2 [ ξ 2 + η 2 − ξ ] . {\displaystyle {\begin{aligned}E_{B}&={\text{Tr}}[{\hat {\rho }}'{\hat {H}}_{n_{B}}]-{\text{Tr}}[{\hat {\rho }}{\hat {H}}_{n_{B}}]\\&={\frac {1}{2}}\left[{\sqrt {\xi ^{2}+\eta ^{2}}}-\xi \right].\end{aligned}}} So some positive quantity of energy E B {\textstyle E_{B}} has been extracted from site n B {\textstyle n_{B}} , completing the QET protocol. === Constraints === ==== Entanglement of the spin chain system ==== One of the constraints on the protocol is that Alice and Bob must share an entangled state. This can be proved mathematically. If the ground state is separable and can be expressed as | g ⟩ = | g ⟩ A ⊗ | g ⟩ B {\displaystyle |g\rangle =|g\rangle _{A}\otimes |g\rangle _{B}} and the relations σ ^ ˙ B = i [ H , σ ^ B ] {\textstyle {\dot {\hat {\sigma }}}_{B}=i[H,{\hat {\sigma }}_{B}]} and H ^ | g ⟩ = 0 {\textstyle {\hat {H}}|g\rangle =0} are used then it follows that: η = ⟨ g | σ ^ A σ ^ ˙ B | g ⟩ = ⟨ g | σ ^ A | g ⟩ ⟨ g | σ ^ ˙ B | g ⟩ = i ⟨ g | σ ^ A | g ⟩ ⟨ g | ( H ^ σ ^ B − σ ^ B H ^ ) | g ⟩ = 0. {\displaystyle {\begin{aligned}\eta &=\langle g|{\hat {\sigma }}_{A}{\dot {\hat {\sigma }}}_{B}|g\rangle =\langle g|{\hat {\sigma }}_{A}|g\rangle \langle g|{\dot {\hat {\sigma }}}_{B}|g\rangle \\&=i\langle g|{\hat {\sigma }}_{A}|g\rangle \langle g|({\hat {H}}{\hat {\sigma }}_{B}-{\hat {\sigma }}_{B}{\hat {H}})|g\rangle =0.\end{aligned}}} Therefore, Alice and Bob must share an entangled state for energy to be transported from Alice to Bob otherwise η {\textstyle \eta } vanishes which causes E B {\textstyle E_{B}} to vanish. ==== Zero-cost energy ==== One could postulate that Alice could withdraw the energy she puts into the system when measuring σ ^ A {\textstyle {\hat {\sigma }}_{A}} , E A {\textstyle E_{A}} , thus making the energy Bob extracts, E B {\textstyle E_{B}} , have zero-cost. Mathematically, this is not possible. First, when Alice measures σ ^ A {\textstyle {\hat {\sigma }}_{A}} at site n A {\textstyle n_{A}} the entanglement between the spin at site n A {\textstyle n_{A}} and the rest of the chain is broken since Alice has collapsed the local state. So, for Alice to extract the energy she first deposited to the system during the measurement process she must first restore the ground state. 
This implies that Alice would have to recreate the entanglement between the spin at site n A {\textstyle n_{A}} and the rest of the chain which is not possible with only local operators. To recreate the entanglement, Alice would need to use non-local operators which inherently require energy. Therefore, it is impossible for Alice to extract the energy E A {\textstyle E_{A}} while only using local operators. == Quantum energy distribution == Quantum energy distribution (QED) is a protocol proposed by Masahiro Hotta in "A Protocol for Quantum Energy Distribution" which proposes an extension of QET with quantum key distribution (QKD). This protocol allows an energy supplier S {\textstyle S} to distribute energy to M {\textstyle M} consumers denoted by C m {\textstyle C_{m}} . === Quantum energy distribution protocol === The supplier S {\textstyle S} and any consumer C m {\textstyle C_{m}} share common short keys k {\textstyle k} which are used for identification. Using the short keys k {\textstyle k} , S {\textstyle S} and C m {\textstyle C_{m}} can perform secure QKD which allows S {\textstyle S} to send classical information to the consumers. It is assumed that S {\textstyle S} and C m {\textstyle C_{m}} share a set of many spin states in the ground state | g ⟩ {\textstyle |g\rangle } . The protocol follows six steps: S {\textstyle S} performs a local measurement of the observable U ^ S = ∑ μ = 0 , 1 ( − 1 ) μ P ^ S ( μ ) {\textstyle {\hat {U}}_{S}=\sum _{\mu =0,1}(-1)^{\mu }{\hat {P}}_{S}(\mu )} on the ground state | g ⟩ {\textstyle |g\rangle } and measures μ {\textstyle \mu } . S {\textstyle S} must input energy E S = ∑ μ = 0 , 1 ⟨ g | P ^ S ( μ ) H ^ P ^ S ( μ ) | g ⟩ {\textstyle E_{S}=\sum _{\mu =0,1}\langle g|{\hat {P}}_{S}(\mu ){\hat {H}}{\hat {P}}_{S}(\mu )|g\rangle } into the spin chain. S {\textstyle S} confirms the identity of C m {\textstyle C_{m}} through use of the shared secret short keys k {\textstyle k} . S {\textstyle S} and C m {\textstyle C_{m}} share pseudo-random secret keys K {\textstyle K} by use of a QKD protocol. S {\textstyle S} encodes the measurement result μ {\textstyle \mu } using secret key K {\textstyle K} and sends it to C m {\textstyle C_{m}} . C m {\textstyle C_{m}} decodes the measurement result μ {\textstyle \mu } using secret key K {\textstyle K} . C m {\textstyle C_{m}} performs the local unitary operation V ^ m ( μ ) {\textstyle {\hat {V}}_{m}(\mu )} to their spin. C m {\textstyle C_{m}} receives energy E m = 1 2 [ ξ m 2 + η m 2 − ξ m ] {\textstyle E_{m}={\frac {1}{2}}\left[{\sqrt {\xi _{m}^{2}+\eta _{m}^{2}}}-\xi _{m}\right]} where ξ m = ⟨ g | U ^ m † H ^ U ^ m | g ⟩ {\textstyle \xi _{m}=\langle g|{\hat {U}}_{m}^{\dagger }{\hat {H}}{\hat {U}}_{m}|g\rangle } , η m = ⟨ g | U ^ s U ^ ˙ m | g ⟩ {\textstyle \eta _{m}=\langle g|{\hat {U}}_{s}{\dot {\hat {U}}}_{m}|g\rangle } , U ^ m = n → m ⋅ σ → n C m {\displaystyle {\hat {U}}_{m}={\vec {n}}_{m}\cdot {\vec {\sigma }}_{n_{C_{m}}}} , U ^ ˙ m = i [ H ^ C m , U ^ m ] {\textstyle {\dot {\hat {U}}}_{m}=i[{\hat {H}}_{C_{m}},{\hat {U}}_{m}]} , n → m {\textstyle {\vec {n}}_{m}} is a unit vector, and σ → n C m {\textstyle {\vec {\sigma }}_{n_{C_{m}}}} is the Pauli spin matrix vector at site n C m {\textstyle n_{C_{m}}} . === Robustness against thieves === This process is robust against an unidentified consumer, a thief D {\textstyle D} , at site n D {\textstyle n_{D}} attempting to steal energy from the spin chain. 
After step 6, the post-measurement state is given by ρ ^ = ∑ μ = 0 , 1 ( ∏ m U ^ m ( μ ) ) P ^ S ( μ ) | g ⟩ ⟨ g | P ^ S ( μ ) ( ∏ m U ^ m † ( μ ) ) . {\displaystyle {\hat {\rho }}=\sum _{\mu =0,1}\left(\prod _{m}{\hat {U}}_{m}(\mu )\right){\hat {P}}_{S}(\mu )|g\rangle \langle g|{\hat {P}}_{S}(\mu )\left(\prod _{m}{\hat {U}}_{m}^{\dagger }(\mu )\right).} Since D {\textstyle D} has no information on μ {\textstyle \mu } , it randomly acts with either U ^ D ( 0 ) {\textstyle {\hat {U}}_{D}(0)} or U ^ D ( 1 ) {\textstyle {\hat {U}}_{D}(1)} where U ^ D ( μ ) = I ^ cos θ + i ( − 1 ) μ n → D ⋅ σ → n D sin θ {\textstyle {\hat {U}}_{D}(\mu )={\hat {I}}{\text{cos}}\theta +i(-1)^{\mu }{\vec {n}}_{D}\cdot {\vec {\sigma }}_{n_{D}}{\text{sin}}\theta } . The post-measurement state becomes a sum over the possible guesses D makes of μ {\textstyle \mu } , 0 or 1. Taking the expectation value of the localized energy operator H ^ D {\textstyle {\hat {H}}_{D}} yields: Tr [ ρ ^ D H ^ D ] = 1 2 ∑ μ = 0 , 1 ⟨ g | P ^ S ( μ ) ( ∏ m U ^ m † ( μ ) ) U ^ D † ( μ ) H ^ D U ^ D ( μ ) ( ∏ m U ^ m ( μ ) ) P ^ S ( μ ) | g ⟩ . {\displaystyle {\text{Tr}}[{\hat {\rho }}_{D}{\hat {H}}_{D}]={\frac {1}{2}}\sum _{\mu =0,1}\langle g|{\hat {P}}_{S}(\mu )\left(\prod _{m}{\hat {U}}_{m}^{\dagger }(\mu )\right){\hat {U}}_{D}^{\dagger }(\mu ){\hat {H}}_{D}{\hat {U}}_{D}(\mu )\left(\prod _{m}{\hat {U}}_{m}(\mu )\right){\hat {P}}_{S}(\mu )|g\rangle .} H ^ D {\textstyle {\hat {H}}_{D}} is positive semi-definite by definition. This means that all expectation values of H ^ D {\textstyle {\hat {H}}_{D}} , even the ones altered by U ^ D ( μ ) {\textstyle {\hat {U}}_{D}(\mu )} , are greater than or equal to zero. At least one of the values in the sum of the trace will be positive, the one where D {\textstyle D} guesses the wrong value of μ {\textstyle \mu } . This is because the operation U ^ D ( μ ) | g ⟩ {\textstyle {\hat {U}}_{D}(\mu )|g\rangle } will add energy to the system when μ {\textstyle \mu } does not match the value measured by Alice. Therefore, Tr [ ρ ^ D H ^ D ] > 0 {\textstyle {\text{Tr}}[{\hat {\rho }}_{D}{\hat {H}}_{D}]>0} , which implies that on average D {\textstyle D} will have to input energy to the spin chains without gain. This protocol is not perfect, as theoretically D {\textstyle D} could guess μ {\textstyle \mu } correctly on the first attempt (a 50% chance) and would immediately profit energy. However, the idea is that over multiple attempts D {\textstyle D} will lose energy, since the energy gained from a correct guess is lower than the energy input required when making an incorrect guess. == Experimental implementation == QET was experimentally demonstrated in 2022 by the IQC group in the publication "Experimental Activation of Strong Local Passive States with Quantum Information", and in 2023 by Kazuki Ikeda in the publication "Demonstration of Quantum Energy Teleportation on Superconducting Quantum Hardware". The basic QET protocol discussed earlier was verified using several IBM superconducting quantum computers. Some of the quantum computers that were used include ibmq_lima, ibm_cairo, and ibmq_jakarta, which provided the most accurate results for the experiment. These quantum computers provide two connected qubits with high precision for controlled gate operation. The Hamiltonian used accounted for interactions between the two qubits using the X ^ {\textstyle {\hat {X}}} and Z ^ {\textstyle {\hat {Z}}} Pauli operators.
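The energy bookkeeping of the protocol can be checked numerically on a minimal two-qubit model. In the Python sketch below, the Hamiltonian H = h(Z_1 + Z_2) + 2k X_1 X_2 (shifted so the entangled ground state has zero energy), the operator choices σ_A = X on Alice's qubit and σ_B = Y on Bob's qubit, and the parameter values are all illustrative assumptions rather than the settings used in the cited experiments; the script simply evaluates the expressions for E_A, ξ, η and E_B given earlier.

```python
import numpy as np

# Pauli matrices and two-qubit helpers.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kron = np.kron

# Illustrative two-qubit Hamiltonian (assumed parameters, not from the experiments).
h, k = 1.0, 0.5
H = h * (kron(Z, I2) + kron(I2, Z)) + 2 * k * kron(X, X)

# Shift H so the (entangled) ground state has zero energy, as in the text.
evals, evecs = np.linalg.eigh(H)
H = H - evals[0] * np.eye(4)
g = evecs[:, 0]                       # ground state |g>

# Alice's observable and Bob's operator (illustrative choices).
sigma_A = kron(X, I2)
sigma_B = kron(I2, Y)

# Energy Alice injects: E_A = sum_mu <g| P_A(mu) H P_A(mu) |g>.
E_A = 0.0
for sign in (+1, -1):
    P = (np.eye(4) + sign * sigma_A) / 2
    E_A += np.real(g.conj() @ P @ H @ P @ g)

# xi, eta and the teleported energy E_B from the formulas in the text,
# using sigma_B_dot = i [H, sigma_B].
sigma_B_dot = 1j * (H @ sigma_B - sigma_B @ H)
xi = np.real(g.conj() @ sigma_B @ H @ sigma_B @ g)
eta = np.real(g.conj() @ sigma_A @ sigma_B_dot @ g)
E_B = 0.5 * (np.sqrt(xi**2 + eta**2) - xi)

print(f"E_A (injected by Alice)  = {E_A:.4f}")
print(f"E_B (extractable by Bob) = {E_B:.4f}")   # positive whenever eta != 0
```

For these (assumed) operator choices η is nonzero, so the script reports a positive E_B that is only a fraction of E_A, in line with the qualitative behaviour described above.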
=== Protocol === The entangled ground state was first prepared using the CNOT ^ {\displaystyle {\widehat {\text{CNOT}}}} and R ^ Y {\textstyle {\hat {R}}_{Y}} quantum gates. Alice then measured her state using the Pauli operator X ^ {\textstyle {\hat {X}}} , injecting energy E 0 {\textstyle E_{0}} into the system. Alice then told Bob her measurement result over a classical channel. The classical communication of measurement results was on the order of 10 nanoseconds and was much faster than the energy propagation timescale of the system. Bob then applied a conditional rotation operation on his qubit, dependent on Alice's measurement result. Bob then performed a local measurement on his state to extract energy E 1 {\textstyle E_{1}} from the system. === Results === The observed experimental values are dimensionless and the energy values correspond to the eigenvalues of the Hamiltonian. For quantum computers, energy scales tend to be limited by the qubit transition frequency, which is often on the order of GHz. Therefore, the typical energy scale is on the order of 10 − 24 {\textstyle 10^{-24}} joules. Ikeda experimented with varying the parameters in the Hamiltonian, specifically the local energy h {\textstyle h} and interaction strength k {\textstyle k} , to see if the QET protocol improved under certain conditions. For differing experimental parameters, the experimental values for Alice's input energy E 0 {\textstyle E_{0}} were around 1 and matched the theoretical values very closely when error mitigation was applied. Bob's extracted energy E 1 {\textstyle E_{1}} , for certain experimental parameters, was observed to be negative when error mitigation was applied. This indicates that the QET protocol was successful for certain experimental parameters. Depending on the experimental parameters, Bob would receive around 1–5% of Alice's inputted energy. === Quantum error correction === Quantum computers are currently the most viable platform for experimentally realizing QET. This is mainly due to their ability to implement quantum error correction. Quantum error correction is important specifically for implementing QET protocols experimentally due to the high precision needed to calculate the negative energy Bob receives in the QET protocol. Error correction in this experiment greatly improved the amount of energy Bob could extract from the system. In some cases, without error correction, Bob's extracted energy would be positive, indicating the QET protocol did not work. However, after error correction, these values could be brought closer to the theoretical values and in some cases even become negative, causing the QET protocol to function. The quantum error correction employed in this experiment allowed Ikeda to observe negative expectation values of the extracted energy E 1 {\textstyle E_{1}} , which had not been experimentally observed before. High precision is also required for experimental implementation of QET due to the subtle effects of negative energy density. Since negative energy densities are a consequence of vacuum fluctuations, they can easily be overshadowed by measurement noise in the instrumentation. So, higher precision can lead to better distinguishability between negative energy signals and noise. == See also == Quantum teleportation Quantum entanglement Spin chains Quantum key distribution Quantum information science == References == == Further reading == Hotta, Masahiro (8 August 2008). "Quantum measurement information as a key to energy extraction from local vacuums".
Physical Review D. 78 (4): 045006. arXiv:0803.2272. Bibcode:2008PhRvD..78d5006H. doi:10.1103/PhysRevD.78.045006. Hotta, Masahiro (15 March 2009). "Quantum Energy Teleportation in Spin Chain Systems". Journal of the Physical Society of Japan. 78 (3): 034001. arXiv:0803.0348. Bibcode:2009JPSJ...78c4001H. doi:10.1143/JPSJ.78.034001. Hotta, Masahiro; Matsumoto, Jiro; Yusa, Go (13 January 2014). "Quantum energy teleportation without a limit of distance". Physical Review A. 89 (1): 012311. arXiv:1305.3955. Bibcode:2014PhRvA..89a2311H. doi:10.1103/PhysRevA.89.012311. Hotta, Masahiro (22 October 2009). "Quantum energy teleportation with trapped ions". Physical Review A. 80 (4): 042323. arXiv:0908.2824. Bibcode:2009PhRvA..80d2323H. doi:10.1103/PhysRevA.80.042323. == External links == Physicists Use Quantum Mechanics to Pull Energy out of Nothing First Demonstration of Energy Teleportation How to build a teleportation machine: Teleportation protocol Introductory article about the protocol: arXiv:1101.3954 Demonstration code of quantum energy teleportation using Qiskit is available in GitHub
Wikipedia/Quantum_energy_teleportation
Circuit quantum electrodynamics (circuit QED) provides a means of studying the fundamental interaction between light and matter (quantum optics). As in the field of cavity quantum electrodynamics, a single photon within a single mode cavity coherently couples to a quantum object (atom). In contrast to cavity QED, the photon is stored in a one-dimensional on-chip resonator and the quantum object is not a natural atom but an artificial one. These artificial atoms are usually mesoscopic devices which exhibit an atom-like energy spectrum. The field of circuit QED is a prominent example of quantum information processing and a promising candidate for future quantum computation. In the late 2010s, experiments involving cQED in three dimensions demonstrated deterministic gate teleportation and other operations on multiple qubits. == Resonator == The resonant devices in the circuit QED architecture can be implemented using a superconducting LC resonator, a high purity cavity, or superconducting coplanar waveguide microwave resonators, which are two-dimensional microwave analogues of the Fabry–Pérot interferometer, in which the capacitance and inductance are distributed. Coplanar waveguides consist of a signal-carrying centerline flanked by two grounded planes. This planar structure is put on a dielectric substrate by a photolithographic process. Superconducting materials used are mostly aluminium (Al), niobium (Nb) and, more recently, tantalum (Ta). Dielectrics typically used as substrates are either surface oxidized silicon (Si) or sapphire (Al2O3). The line impedance is given by the geometric properties, which are chosen to match the 50 Ω {\displaystyle \Omega } impedance of the peripheral microwave equipment to avoid partial reflection of the signal. The electric field is basically confined between the center conductor and the ground planes, resulting in a very small mode volume V m {\displaystyle V_{m}} , which gives rise to very high electric fields per photon E 0 {\displaystyle E_{0}} (compared to three-dimensional cavities). Mathematically, the field E 0 {\displaystyle E_{0}} can be found as E 0 = ℏ ω r 2 ε 0 V m {\displaystyle E_{0}={\sqrt {\frac {\hbar \omega _{r}}{2\varepsilon _{0}V_{m}}}}} , where ℏ {\displaystyle \hbar } is the reduced Planck constant, ω r {\displaystyle \omega _{r}} is the angular frequency, and ε 0 {\displaystyle \varepsilon _{0}} is the permittivity of free space. One can distinguish between two different types of resonators: λ / 2 {\displaystyle \lambda /2} and λ / 4 {\displaystyle \lambda /4} resonators. Half-wavelength resonators are made by breaking the center conductor at two spots separated by the distance ℓ {\displaystyle \ell } . The resulting piece of center conductor is in this way capacitively coupled to the input and output and represents a resonator with E {\displaystyle E} -field antinodes at its ends. Quarter-wavelength resonators are short pieces of a coplanar line, which are shorted to ground on one end and capacitively coupled to a feed line on the other. The resonance frequencies are given by λ / 2 : ν n = c ε eff n 2 ℓ ( n = 1 , 2 , 3 , … ) λ / 4 : ν n = c ε eff 2 n + 1 4 ℓ ( n = 0 , 1 , 2 , … ) {\displaystyle \lambda /2:\quad \nu _{n}={\frac {c}{\sqrt {\varepsilon _{\text{eff}}}}}{\frac {n}{2\ell }}\quad (n=1,2,3,\ldots )\qquad \lambda /4:\quad \nu _{n}={\frac {c}{\sqrt {\varepsilon _{\text{eff}}}}}{\frac {2n+1}{4\ell }}\quad (n=0,1,2,\ldots )} with ε eff {\displaystyle \varepsilon _{\text{eff}}} being the effective dielectric permittivity of the device.
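As a numerical illustration of the formulas above, the short Python script below evaluates the fundamental λ/2 and λ/4 resonance frequencies and the per-photon field E_0 for assumed values of the effective permittivity, resonator length, and mode volume; none of these numbers refer to a specific published device.

```python
import numpy as np

hbar = 1.054_571_817e-34      # J*s
eps0 = 8.854_187_8128e-12     # F/m
c = 299_792_458.0             # m/s

# Illustrative values for a coplanar waveguide resonator (assumed, not from a
# specific device): effective permittivity and resonator length.
eps_eff = 5.9                 # roughly (1 + eps_r)/2 for a silicon substrate
length = 12e-3                # 12 mm centre conductor

# Fundamental resonance frequencies from the formulas above.
f_half_wave = c / np.sqrt(eps_eff) / (2 * length)       # lambda/2, n = 1
f_quarter_wave = c / np.sqrt(eps_eff) / (4 * length)    # lambda/4, n = 0
print(f"lambda/2 fundamental: {f_half_wave/1e9:.2f} GHz")
print(f"lambda/4 fundamental: {f_quarter_wave/1e9:.2f} GHz")

# Zero-point electric field per photon for an assumed (very small) mode volume.
V_m = 1e-16                   # m^3, illustrative order of magnitude
omega_r = 2 * np.pi * f_half_wave
E0 = np.sqrt(hbar * omega_r / (2 * eps0 * V_m))
print(f"E0 per photon: {E0:.2f} V/m")
```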
== Artificial atoms and qubits == The first realized artificial atom in circuit QED was the so-called Cooper-pair box, also known as the charge qubit. In this device, a reservoir of Cooper pairs is coupled via Josephson junctions to a gated superconducting island. The state of the Cooper-pair box (qubit) is given by the number of Cooper pairs on the island ( N {\displaystyle N} Cooper pairs for the ground state ∣ g ⟩ {\displaystyle \mid g\rangle } and N + 1 {\displaystyle N+1} for the excited state ∣ e ⟩ {\displaystyle \mid e\rangle } ). By controlling the Coulomb energy (bias voltage) and the Josephson energy (flux bias), the transition frequency ω a {\displaystyle \omega _{a}} is tuned. Due to the nonlinearity of the Josephson junctions, the Cooper-pair box shows an atom-like energy spectrum. Other, more recent examples of qubits used in circuit QED are the so-called transmon qubits (which are far less sensitive to charge noise than the Cooper-pair box) and flux qubits (whose state is given by the direction of a supercurrent in a superconducting loop intersected by Josephson junctions). All of these devices feature very large dipole moments d {\displaystyle d} (up to 103 times that of large n {\displaystyle n} Rydberg atoms), which makes them extremely suitable coupling counterparts for the light field in circuit QED. == Theory == The full quantum description of matter-light interaction is given by the Jaynes–Cummings model. The three terms of the Jaynes–Cummings model can be ascribed to a cavity term, which is mimicked by a harmonic oscillator, an atomic term and an interaction term. H JC = ℏ ω r ( a † a + 1 2 ) ⏟ cavity term + 1 2 ℏ ω a σ z ⏟ atomic term + ℏ g ( σ + a + a † σ − ) ⏟ interaction term {\displaystyle {\mathcal {H}}_{\text{JC}}=\underbrace {\hbar \omega _{r}\left(a^{\dagger }a+{\frac {1}{2}}\right)} _{\text{cavity term}}+\underbrace {{\frac {1}{2}}\hbar \omega _{a}\sigma _{z}} _{\text{atomic term}}+\underbrace {\hbar g\left(\sigma _{+}a+a^{\dagger }\sigma _{-}\right)} _{\text{interaction term}}} In this formulation, ω r {\displaystyle \omega _{r}} is the resonance frequency of the cavity and a † {\displaystyle a^{\dagger }} and a {\displaystyle a} are photon creation and annihilation operators, respectively. The atomic term is given by the Hamiltonian of a spin-1/2 system with ω a {\displaystyle \omega _{a}} being the transition frequency and σ z {\displaystyle \sigma _{z}} the Pauli matrix. The operators σ ± {\displaystyle \sigma _{\pm }} are raising and lowering operators (ladder operators) for the atomic states. For the case of zero detuning ( ω r = ω a {\displaystyle \omega _{r}=\omega _{a}} ), the interaction lifts the degeneracy of the photon number state ∣ n ⟩ {\displaystyle \mid n\rangle } and the atomic states ∣ g ⟩ {\displaystyle \mid g\rangle } and ∣ e ⟩ {\displaystyle \mid e\rangle } , and pairs of dressed states are formed. These new states are superpositions of cavity and atom states ∣ n , ± ⟩ = 1 2 ( ∣ g ⟩ ∣ n ⟩ ± ∣ e ⟩ ∣ n − 1 ⟩ ) {\displaystyle \mid n,\pm \rangle ={\frac {1}{\sqrt {2}}}\left(\mid g\rangle \mid n\rangle \pm \mid e\rangle \mid n-1\rangle \right)} and are energetically split by 2 g n {\displaystyle 2g{\sqrt {n}}} . If the detuning is significantly larger than the combined cavity and atomic linewidth, the cavity states are merely shifted by ± g 2 / Δ {\displaystyle \pm g^{2}/\Delta } (with the detuning Δ = ω a − ω r {\displaystyle \Delta =\omega _{a}-\omega _{r}} ) depending on the atomic state.
This makes it possible to read out the atomic (qubit) state by measuring the shifted cavity resonance frequency (dispersive readout). The coupling is given by g = E ⋅ d {\displaystyle g=E\cdot d} (for electric dipolar coupling). If the coupling is much larger than the cavity loss rate κ = ω r Q {\displaystyle \kappa ={\frac {\omega _{r}}{Q}}} (quality factor Q {\displaystyle Q} ; the higher Q {\displaystyle Q} , the longer the photon remains inside the resonator) as well as the decoherence rate γ {\displaystyle \gamma } (rate at which the qubit relaxes into modes other than the resonator mode), the strong coupling regime is reached. Due to the high fields and low losses of the coplanar resonators together with the large dipole moments and long decoherence times of the qubits, the strong coupling regime can easily be reached in circuit QED. Combining the Jaynes–Cummings model with coupled cavities leads to the Jaynes–Cummings–Hubbard model. == See also == Superconducting radio frequency == References ==
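The dressed-state splitting 2g√n quoted in the Theory section above can be checked by diagonalizing the Jaynes–Cummings Hamiltonian numerically in a truncated Fock space. This is a minimal NumPy sketch, not taken from the article; the parameter values are arbitrary illustrative assumptions and ħ is set to 1:

import numpy as np

def jaynes_cummings(omega_r, omega_a, g, n_max):
    # Jaynes-Cummings Hamiltonian (hbar = 1) on the space of Fock states
    # 0..n_max-1 tensored with the two-level atom ordered as {|g>, |e>}.
    a = np.diag(np.sqrt(np.arange(1, n_max)), 1)   # photon annihilation operator
    sz = np.diag([-1.0, 1.0])                      # sigma_z: |g> -> -1, |e> -> +1
    sm = np.array([[0.0, 1.0], [0.0, 0.0]])        # sigma_-: |e> -> |g>
    sp = sm.T                                      # sigma_+
    H = (omega_r * np.kron(a.T @ a + 0.5 * np.eye(n_max), np.eye(2))
         + 0.5 * omega_a * np.kron(np.eye(n_max), sz)
         + g * (np.kron(a, sp) + np.kron(a.T, sm)))
    return H

omega_r = omega_a = 5.0        # zero detuning, arbitrary units
g = 0.1
evals = np.sort(np.linalg.eigvalsh(jaynes_cummings(omega_r, omega_a, g, n_max=10)))
# the two dressed states of the n = 1 manifold are split by 2 g sqrt(1)
print(evals[2] - evals[1], "expected:", 2 * g * np.sqrt(1))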
Wikipedia/Circuit_quantum_electrodynamics
In quantum computing, the Brassard–Høyer–Tapp algorithm or BHT algorithm is a quantum algorithm that solves the collision problem. In this problem, one is given n and an r-to-1 function f : { 1 , … , n } → { 1 , … , n } {\displaystyle f:\,\{1,\ldots ,n\}\rightarrow \{1,\ldots ,n\}} and needs to find two inputs that f maps to the same output. The BHT algorithm only makes O ( n 1 / 3 ) {\displaystyle O(n^{1/3})} queries to f, which matches the lower bound of Ω ( n 1 / 3 ) {\displaystyle \Omega (n^{1/3})} in the black box model. The algorithm was discovered by Gilles Brassard, Peter Høyer, and Alain Tapp in 1997. It uses Grover's algorithm, which was discovered the year before. == Algorithm == Intuitively, the algorithm combines the square root speedup from the birthday paradox using (classical) randomness with the square root speedup from Grover's (quantum) algorithm. First, n1/3 inputs to f are selected at random and f is queried at all of them. If there is a collision among these inputs, then we return the colliding pair of inputs. Otherwise, all these inputs map to distinct values by f. Then Grover's algorithm is used to find a new input to f that collides. Since there are n inputs to f and n1/3 of these could form a collision with the already queried values, Grover's algorithm can find a collision with O ( n n 1 / 3 ) = O ( n 1 / 3 ) {\displaystyle O\left({\sqrt {\frac {n}{n^{1/3}}}}\right)=O(n^{1/3})} extra queries to f. == See also == Element distinctness problem Grover's algorithm == References ==
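As a purely classical illustration of the structure just described (not the quantum algorithm itself), the following Python sketch samples about n^(1/3) inputs, checks them for a collision, and then searches for a further input that collides with the sampled set. In the actual BHT algorithm this second phase is carried out with Grover's algorithm, so the sketch reproduces only the bookkeeping, not the query speedup; the 2-to-1 function used is an arbitrary illustrative choice.

import random

def bht_structure(f, n):
    # Phase 1: query f on roughly n^(1/3) randomly chosen inputs.
    k = max(1, round(n ** (1 / 3)))
    sample = random.sample(range(n), k)
    table = {}
    for x in sample:
        y = f(x)
        if y in table:                 # collision already within the sample
            return table[y], x
        table[y] = x
    # Phase 2: find a new input that collides with the sampled set.
    # BHT performs this search with Grover's algorithm using
    # O(sqrt(n / n^(1/3))) = O(n^(1/3)) quantum queries; here it is
    # done by plain classical enumeration.
    sampled = set(sample)
    for x in range(n):
        if x not in sampled and f(x) in table:
            return table[f(x)], x
    return None

n = 2 ** 12
f = lambda x: x // 2       # a simple 2-to-1 function on {0, ..., n-1}
print(bht_structure(f, n)) # prints some colliding pair (x, x') with f(x) == f(x')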
Wikipedia/BHT_algorithm
In theoretical physics, the logarithmic Schrödinger equation (sometimes abbreviated as LNSE or LogSE) is one of the nonlinear modifications of Schrödinger's equation, first proposed by Gerald H. Rosen in its relativistic version (with the D'Alembertian instead of the Laplacian and a first-order time derivative) in 1969. It is a classical wave equation with applications to extensions of quantum mechanics, quantum optics, nuclear physics, transport and diffusion phenomena, open quantum systems and information theory, effective quantum gravity and physical vacuum models, and the theory of superfluidity and Bose–Einstein condensation. It is an example of an integrable model. == The equation == The logarithmic Schrödinger equation is a partial differential equation. In mathematics and mathematical physics one often uses its dimensionless form: i ∂ ψ ∂ t + ∇ 2 ψ + ψ ln ⁡ | ψ | 2 = 0. {\displaystyle i{\frac {\partial \psi }{\partial t}}+\nabla ^{2}\psi +\psi \ln |\psi |^{2}=0.} for the complex-valued function ψ = ψ(x, t) of the particle's position vector x = (x, y, z) at time t, and ∇ 2 ψ = ∂ 2 ψ ∂ x 2 + ∂ 2 ψ ∂ y 2 + ∂ 2 ψ ∂ z 2 {\displaystyle \nabla ^{2}\psi ={\frac {\partial ^{2}\psi }{\partial x^{2}}}+{\frac {\partial ^{2}\psi }{\partial y^{2}}}+{\frac {\partial ^{2}\psi }{\partial z^{2}}}} is the Laplacian of ψ in Cartesian coordinates. The logarithmic term ψ ln ⁡ | ψ | 2 {\displaystyle \psi \ln |\psi |^{2}} has been shown to be indispensable for ensuring that the speed of sound scales as the cube root of pressure for Helium-4 at very low temperatures. The logarithmic term also turns out to be necessary for describing cold sodium atoms. In spite of the logarithmic term, it has been shown that, in the case of central potentials and even for non-zero angular momentum, the LogSE retains certain symmetries similar to those found in its linear counterpart, making it potentially applicable to atomic and nuclear systems. The relativistic version of this equation can be obtained by replacing the derivative operator with the D'Alembertian, similarly to the Klein–Gordon equation. Soliton-like solutions known as Gaussons figure prominently as analytical solutions to this equation for a number of cases. == See also == Galaxy rotation curve Nonlinear Schrödinger equation Superfluid Helium-4 Superfluid vacuum theory == References == == External links == Weisstein, Eric W. "SchroedingerEquation". MathWorld.
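The Gausson solutions mentioned above are easy to verify numerically. For the one-dimensional version of the dimensionless equation, substituting the stationary ansatz ψ(x, t) = A exp(−x²/2) exp(iωt) gives ω = 2 ln A − 1; this particular normalization is worked out here purely for illustration and is not quoted from the literature. A short finite-difference check in Python:

import numpy as np

# Gausson ansatz psi(x, t) = A * exp(-x**2 / 2) * exp(1j * omega * t) for the
# 1D dimensionless equation  i psi_t + psi_xx + psi ln|psi|^2 = 0,
# which is satisfied when omega = 2 ln A - 1 (illustrative normalization).
A = 2.0
omega = 2 * np.log(A) - 1.0

x = np.linspace(-6, 6, 2001)
dx = x[1] - x[0]
f = A * np.exp(-x**2 / 2)

# second spatial derivative by central finite differences (interior points)
f_xx = (f[2:] - 2 * f[1:-1] + f[:-2]) / dx**2

# for this ansatz, i psi_t = -omega * psi, so the residual of the equation is
residual = -omega * f[1:-1] + f_xx + f[1:-1] * np.log(f[1:-1] ** 2)

print("max |residual| =", np.max(np.abs(residual)))  # ~0, up to discretization error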
Wikipedia/Logarithmic_Schrödinger_equation
In physics, the energy–momentum relation, or relativistic dispersion relation, is the relativistic equation relating total energy (which is also called relativistic energy) to invariant mass (which is also called rest mass) and momentum. It is the extension of mass–energy equivalence for bodies or systems with non-zero momentum. It can be formulated as: E 2 = ( p c ) 2 + ( m 0 c 2 ) 2 {\displaystyle E^{2}=(pc)^{2}+\left(m_{0}c^{2}\right)^{2}} (1) This equation holds for a body or system, such as one or more particles, with total energy E, invariant mass m0, and momentum of magnitude p; the constant c is the speed of light. It assumes the special relativity case of flat spacetime and that the particles are free. Total energy is the sum of rest energy E 0 = m 0 c 2 {\displaystyle E_{0}=m_{0}c^{2}} and relativistic kinetic energy: E K = E − E 0 = ( p c ) 2 + ( m 0 c 2 ) 2 − m 0 c 2 {\displaystyle E_{K}=E-E_{0}={\sqrt {(pc)^{2}+\left(m_{0}c^{2}\right)^{2}}}-m_{0}c^{2}} Invariant mass is mass measured in a center-of-momentum frame. For bodies or systems with zero momentum, it simplifies to the mass–energy equation E 0 = m 0 c 2 {\displaystyle E_{0}=m_{0}c^{2}} , where total energy in this case is equal to rest energy. The Dirac sea model, which was used to predict the existence of antimatter, is closely related to the energy–momentum relation. == Connection to E = mc2 == The energy–momentum relation is consistent with the familiar mass–energy relation in both its interpretations: E = mc2 relates total energy E to the (total) relativistic mass m (alternatively denoted mrel or mtot), while E0 = m0c2 relates rest energy E0 to (invariant) rest mass m0. Unlike either of those equations, the energy–momentum equation (1) relates the total energy to the rest mass m0. All three equations hold true simultaneously. == Special cases == If the body is a massless particle (m0 = 0), then (1) reduces to E = pc. For photons, this is the relation, discovered in 19th century classical electromagnetism, between radiant momentum (causing radiation pressure) and radiant energy. If the body's speed v is much less than c, then (1) reduces to E = 1/2 m0v2 + m0c2; that is, the body's total energy is simply its classical kinetic energy (1/2 m0v2) plus its rest energy. If the body is at rest (v = 0), i.e. in its center-of-momentum frame (p = 0), we have E = E0 and m = m0; thus the energy–momentum relation and both forms of the mass–energy relation (mentioned above) all become the same. A more general form of relation (1) holds in general relativity. The invariant mass (or rest mass) is an invariant for all frames of reference (hence the name), not just in inertial frames in flat spacetime, but also in accelerated frames traveling through curved spacetime (see below). However, the total energy of the particle E and its relativistic momentum p are frame-dependent; relative motion between two frames causes the observers in those frames to measure different values of the particle's energy and momentum; one frame measures E and p, while the other frame measures E′ and p′, where E′ ≠ E and p′ ≠ p, unless there is no relative motion between observers, in which case each observer measures the same energy and momenta. Nevertheless, in flat spacetime we still have: E ′ 2 − ( p ′ c ) 2 = ( m 0 c 2 ) 2 . {\displaystyle {E'}^{2}-\left(p'c\right)^{2}=\left(m_{0}c^{2}\right)^{2}\,.} The quantities E, p, E′, p′ are all related by a Lorentz transformation. The relation allows one to sidestep Lorentz transformations when determining only the magnitudes of the energy and momenta by equating the relations in the different frames.
Again in flat spacetime, this translates to; E 2 − ( p c ) 2 = E ′ 2 − ( p ′ c ) 2 = ( m 0 c 2 ) 2 . {\displaystyle {E}^{2}-\left(pc\right)^{2}={E'}^{2}-\left(p'c\right)^{2}=\left(m_{0}c^{2}\right)^{2}\,.} Since m0 does not change from frame to frame, the energy–momentum relation is used in relativistic mechanics and particle physics calculations, as energy and momentum are given in a particle's rest frame (that is, E′ and p′ as an observer moving with the particle would conclude to be) and measured in the lab frame (i.e. E and p as determined by particle physicists in a lab, and not moving with the particles). In relativistic quantum mechanics, it is the basis for constructing relativistic wave equations, since if the relativistic wave equation describing the particle is consistent with this equation – it is consistent with relativistic mechanics, and is Lorentz invariant. In relativistic quantum field theory, it is applicable to all particles and fields. == Origins and derivation of the equation == The energy–momentum relation goes back to Max Planck's article published in 1906. It was used by Walter Gordon in 1926 and then by Paul Dirac in 1928 under the form E = c 2 p 2 + ( m 0 c 2 ) 2 + V {\textstyle E={\sqrt {c^{2}p^{2}+(m_{0}c^{2})^{2}}}+V} , where V is the amount of potential energy. The equation can be derived in a number of ways, two of the simplest include: From the relativistic dynamics of a massive particle, By evaluating the norm of the four-momentum of the system. This method applies to both massive and massless particles, and can be extended to multi-particle systems with relatively little effort (see § Many-particle systems below). === Heuristic approach for massive particles === For a massive object moving at three-velocity u = (ux, uy, uz) with magnitude |u| = u in the lab frame: E = γ ( u ) m 0 c 2 {\displaystyle E=\gamma _{(\mathbf {u} )}m_{0}c^{2}} is the total energy of the moving object in the lab frame, p = γ ( u ) m 0 u {\displaystyle \mathbf {p} =\gamma _{(\mathbf {u} )}m_{0}\mathbf {u} } is the three dimensional relativistic momentum of the object in the lab frame with magnitude |p| = p. The relativistic energy E and momentum p include the Lorentz factor defined by: γ ( u ) = 1 1 − u ⋅ u c 2 = 1 1 − ( u c ) 2 {\displaystyle \gamma _{(\mathbf {u} )}={\frac {1}{\sqrt {1-{\frac {\mathbf {u} \cdot \mathbf {u} }{c^{2}}}}}}={\frac {1}{\sqrt {1-\left({\frac {u}{c}}\right)^{2}}}}} Some authors use relativistic mass defined by: m = γ ( u ) m 0 {\displaystyle m=\gamma _{(\mathbf {u} )}m_{0}} although rest mass m0 has a more fundamental significance, and will be used primarily over relativistic mass m in this article. Squaring the 3-momentum gives: p 2 = p ⋅ p = m 0 2 u ⋅ u 1 − u ⋅ u c 2 = m 0 2 u 2 1 − ( u c ) 2 {\displaystyle p^{2}=\mathbf {p} \cdot \mathbf {p} ={\frac {m_{0}^{2}\mathbf {u} \cdot \mathbf {u} }{1-{\frac {\mathbf {u} \cdot \mathbf {u} }{c^{2}}}}}={\frac {m_{0}^{2}u^{2}}{1-\left({\frac {u}{c}}\right)^{2}}}} then solving for u2 and substituting into the Lorentz factor one obtains its alternative form in terms of 3-momentum and mass, rather than 3-velocity: γ = 1 + ( p m 0 c ) 2 {\displaystyle \gamma ={\sqrt {1+\left({\frac {p}{m_{0}c}}\right)^{2}}}} Inserting this form of the Lorentz factor into the energy equation gives: E = m 0 c 2 1 + ( p m 0 c ) 2 {\displaystyle E=m_{0}c^{2}{\sqrt {1+\left({\frac {p}{m_{0}c}}\right)^{2}}}} followed by more rearrangement it yields (1). 
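As a quick numerical cross-check of the derivation above, the velocity form E = γ(u) m0 c², the momentum form E = m0 c² √(1 + (p/m0c)²), and relation (1) all give the same total energy. A small Python sketch; the electron mass and the speed u = 0.6c are arbitrary illustrative values:

import math

c = 299_792_458.0        # speed of light (m/s)
m0 = 9.109_383_7015e-31  # electron rest mass (kg), an illustrative choice
u = 0.6 * c              # an arbitrary speed for the check

gamma = 1.0 / math.sqrt(1.0 - (u / c) ** 2)
E_velocity_form = gamma * m0 * c**2                                 # E = gamma m0 c^2
p = gamma * m0 * u                                                  # p = gamma m0 u
E_momentum_form = m0 * c**2 * math.sqrt(1.0 + (p / (m0 * c)) ** 2)  # momentum form above
E_relation = math.sqrt((p * c) ** 2 + (m0 * c**2) ** 2)             # relation (1)

print(E_velocity_form, E_momentum_form, E_relation)   # all three agree
print("kinetic energy:", E_relation - m0 * c**2, "J")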
The elimination of the Lorentz factor also eliminates implicit velocity dependence of the particle in (1), as well as any inferences to the "relativistic mass" of a massive particle. This approach is not general as massless particles are not considered. Naively setting m0 = 0 would mean that E = 0 and p = 0 and no energy–momentum relation could be derived, which is not correct. === Norm of the four-momentum === ==== Special relativity ==== In Minkowski space, energy (divided by c) and momentum are two components of a Minkowski four-vector, namely the four-momentum; P = ( E c , p ) , {\displaystyle \mathbf {P} =\left({\frac {E}{c}},\mathbf {p} \right)\,,} (these are the contravariant components). The Minkowski inner product ⟨ , ⟩ of this vector with itself gives the square of the norm of this vector, it is proportional to the square of the rest mass m of the body: ⟨ P , P ⟩ = | P | 2 = ( m 0 c ) 2 , {\displaystyle \left\langle \mathbf {P} ,\mathbf {P} \right\rangle =|\mathbf {P} |^{2}=\left(m_{0}c\right)^{2}\,,} a Lorentz invariant quantity, and therefore independent of the frame of reference. Using the Minkowski metric η with metric signature (− + + +), the inner product is ⟨ P , P ⟩ = | P | 2 = − ( m 0 c ) 2 , {\displaystyle \left\langle \mathbf {P} ,\mathbf {P} \right\rangle =|\mathbf {P} |^{2}=-\left(m_{0}c\right)^{2}\,,} and ⟨ P , P ⟩ {\displaystyle \left\langle \mathbf {P} ,\mathbf {P} \right\rangle } = P α η α β P β {\displaystyle =P^{\alpha }\eta _{\alpha \beta }P^{\beta }} = ( E c p x p y p z ) ( − 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ) ( E c p x p y p z ) {\displaystyle ={\begin{pmatrix}{\frac {E}{c}}&p_{x}&p_{y}&p_{z}\end{pmatrix}}{\begin{pmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\\\end{pmatrix}}{\begin{pmatrix}{\frac {E}{c}}\\p_{x}\\p_{y}\\p_{z}\end{pmatrix}}} = − ( E c ) 2 + p 2 , {\displaystyle =-\left({\frac {E}{c}}\right)^{2}+p^{2}\,,} so − ( m 0 c ) 2 = − ( E c ) 2 + p 2 {\displaystyle -\left(m_{0}c\right)^{2}=-\left({\frac {E}{c}}\right)^{2}+p^{2}} or, in natural units where c = 1, | P | 2 + ( m 0 ) 2 = 0. {\displaystyle |\mathbf {P} |^{2}+(m_{0})^{2}=0.} ==== General relativity ==== In general relativity, the 4-momentum is a four-vector defined in a local coordinate frame, although by definition the inner product is similar to that of special relativity, ⟨ P , P ⟩ = | P | 2 = ( m 0 c ) 2 , {\displaystyle \left\langle \mathbf {P} ,\mathbf {P} \right\rangle =|\mathbf {P} |^{2}=\left(m_{0}c\right)^{2}\,,} in which the Minkowski metric η is replaced by the metric tensor field g: ⟨ P , P ⟩ = | P | 2 = P α g α β P β , {\displaystyle \left\langle \mathbf {P} ,\mathbf {P} \right\rangle =|\mathbf {P} |^{2}=P^{\alpha }g_{\alpha \beta }P^{\beta }\,,} solved from the Einstein field equations. Then: P α g α β P β = ( m 0 c ) 2 . {\displaystyle P^{\alpha }g_{\alpha \beta }P^{\beta }=\left(m_{0}c\right)^{2}\,.} == Units of energy, mass and momentum == In natural units where c = 1, the energy–momentum equation reduces to E 2 = p 2 + m 0 2 . {\displaystyle E^{2}=p^{2}+m_{0}^{2}\,.} In particle physics, energy is typically given in units of electron volts (eV), momentum in units of eV·c−1, and mass in units of eV·c−2. In electromagnetism, and because of relativistic invariance, it is useful to have the electric field E and the magnetic field B in the same unit (Gauss), using the cgs (Gaussian) system of units, where energy is given in units of erg, mass in grams (g), and momentum in g·cm·s−1. 
Energy may also in theory be expressed in units of grams, though in practice it requires a large amount of energy to be equivalent to masses in this range. For example, the first atomic bomb liberated about 1 gram of heat, and the largest thermonuclear bombs have generated a kilogram or more of heat. Energies of thermonuclear bombs are usually given in tens of kilotons and megatons referring to the energy liberated by exploding that amount of trinitrotoluene (TNT). == Special cases == === Centre-of-momentum frame (one particle) === For a body in its rest frame, the momentum is zero, so the equation simplifies to E 0 = m 0 c 2 , {\displaystyle E_{0}=m_{0}c^{2}\,,} where m0 is the rest mass of the body. === Massless particles === If the object is massless, as is the case for a photon, then the equation reduces to E = p c . {\displaystyle E=pc\,.} This is a useful simplification. It can be rewritten in other ways using the de Broglie relations: E = h c λ = ℏ c k . {\displaystyle E={\frac {hc}{\lambda }}=\hbar ck\,.} if the wavelength λ or wavenumber k are given. === Correspondence principle === Rewriting the relation for massive particles as: E = m 0 c 2 1 + ( p m 0 c ) 2 , {\displaystyle E=m_{0}c^{2}{\sqrt {1+\left({\frac {p}{m_{0}c}}\right)^{2}}}\,,} and expanding into power series by the binomial theorem (or a Taylor series): E = m 0 c 2 [ 1 + 1 2 ( p m 0 c ) 2 − 1 8 ( p m 0 c ) 4 + ⋯ ] , {\displaystyle E=m_{0}c^{2}\left[1+{\frac {1}{2}}\left({\frac {p}{m_{0}c}}\right)^{2}-{\frac {1}{8}}\left({\frac {p}{m_{0}c}}\right)^{4}+\cdots \right]\,,} in the limit that u ≪ c, we have γ(u) ≈ 1 so the momentum has the classical form p ≈ m0u, then to first order in (⁠p/m0c⁠)2 (i.e. retain the term (⁠p/m0c⁠)2n for n = 1 and neglect all terms for n ≥ 2) we have E ≈ m 0 c 2 [ 1 + 1 2 ( m 0 u m 0 c ) 2 ] , {\displaystyle E\approx m_{0}c^{2}\left[1+{\frac {1}{2}}\left({\frac {m_{0}u}{m_{0}c}}\right)^{2}\right]\,,} or E ≈ m 0 c 2 + 1 2 m 0 u 2 , {\displaystyle E\approx m_{0}c^{2}+{\frac {1}{2}}m_{0}u^{2}\,,} where the second term is the classical kinetic energy, and the first is the rest energy of the particle. This approximation is not valid for massless particles, since the expansion required the division of momentum by mass. Incidentally, there are no massless particles in classical mechanics. == Many-particle systems == === Addition of four momenta === In the case of many particles with relativistic momenta pn and energy En, where n = 1, 2, ... (up to the total number of particles) simply labels the particles, as measured in a particular frame, the four-momenta in this frame can be added; ∑ n P n = ∑ n ( E n c , p n ) = ( ∑ n E n c , ∑ n p n ) , {\displaystyle \sum _{n}\mathbf {P} _{n}=\sum _{n}\left({\frac {E_{n}}{c}},\mathbf {p} _{n}\right)=\left(\sum _{n}{\frac {E_{n}}{c}},\sum _{n}\mathbf {p} _{n}\right)\,,} and then take the norm; to obtain the relation for a many particle system: | ( ∑ n P n ) | 2 = ( ∑ n E n c ) 2 − ( ∑ n p n ) 2 = ( M 0 c ) 2 , {\displaystyle \left|\left(\sum _{n}\mathbf {P} _{n}\right)\right|^{2}=\left(\sum _{n}{\frac {E_{n}}{c}}\right)^{2}-\left(\sum _{n}\mathbf {p} _{n}\right)^{2}=\left(M_{0}c\right)^{2}\,,} where M0 is the invariant mass of the whole system, and is not equal to the sum of the rest masses of the particles unless all particles are at rest (see Mass in special relativity § The mass of composite systems for more detail). 
Substituting and rearranging gives the generalization of (1); The energies and momenta in the equation are all frame-dependent, while M0 is frame-independent. === Center-of-momentum frame === In the center-of-momentum frame (COM frame), by definition we have: ∑ n p n = 0 , {\displaystyle \sum _{n}\mathbf {p} _{n}={\boldsymbol {0}}\,,} with the implication from (2) that the invariant mass is also the centre of momentum (COM) mass–energy, aside from the c2 factor: ( ∑ n E n ) 2 = ( M 0 c 2 ) 2 ⇒ ∑ n E C O M n = E C O M = M 0 c 2 , {\displaystyle \left(\sum _{n}E_{n}\right)^{2}=\left(M_{0}c^{2}\right)^{2}\Rightarrow \sum _{n}E_{\mathrm {COM} \,n}=E_{\mathrm {COM} }=M_{0}c^{2}\,,} and this is true for all frames since M0 is frame-independent. The energies ECOM n are those in the COM frame, not the lab frame. However, many familiar bound systems have the lab frame as COM frame, since the system itself is not in motion and so the momenta all cancel to zero. An example would be a simple object (where vibrational momenta of atoms cancel) or a container of gas where the container is at rest. In such systems, all the energies of the system are measured as mass. For example, the heat in an object on a scale, or the total of kinetic energies in a container of gas on the scale, all are measured by the scale as the mass of the system. === Rest masses and the invariant mass === Either the energies or momenta of the particles, as measured in some frame, can be eliminated using the energy momentum relation for each particle: E n 2 − ( p n c ) 2 = ( m n c 2 ) 2 , {\displaystyle E_{n}^{2}-\left(\mathbf {p} _{n}c\right)^{2}=\left(m_{n}c^{2}\right)^{2}\,,} allowing M0 to be expressed in terms of the energies and rest masses, or momenta and rest masses. In a particular frame, the squares of sums can be rewritten as sums of squares (and products): ( ∑ n E n ) 2 = ( ∑ n E n ) ( ∑ k E k ) = ∑ n , k E n E k = 2 ∑ n < k E n E k + ∑ n E n 2 , {\displaystyle \left(\sum _{n}E_{n}\right)^{2}=\left(\sum _{n}E_{n}\right)\left(\sum _{k}E_{k}\right)=\sum _{n,k}E_{n}E_{k}=2\sum _{n<k}E_{n}E_{k}+\sum _{n}E_{n}^{2}\,,} ( ∑ n p n ) 2 = ( ∑ n p n ) ⋅ ( ∑ k p k ) = ∑ n , k p n ⋅ p k = 2 ∑ n < k p n ⋅ p k + ∑ n p n 2 , {\displaystyle \left(\sum _{n}\mathbf {p} _{n}\right)^{2}=\left(\sum _{n}\mathbf {p} _{n}\right)\cdot \left(\sum _{k}\mathbf {p} _{k}\right)=\sum _{n,k}\mathbf {p} _{n}\cdot \mathbf {p} _{k}=2\sum _{n<k}\mathbf {p} _{n}\cdot \mathbf {p} _{k}+\sum _{n}\mathbf {p} _{n}^{2}\,,} so substituting the sums, we can introduce their rest masses mn in (2): ∑ n ( m n c 2 ) 2 + 2 ∑ n < k ( E n E k − c 2 p n ⋅ p k ) = ( M 0 c 2 ) 2 . 
{\displaystyle \sum _{n}\left(m_{n}c^{2}\right)^{2}+2\sum _{n<k}\left(E_{n}E_{k}-c^{2}\mathbf {p} _{n}\cdot \mathbf {p} _{k}\right)=\left(M_{0}c^{2}\right)^{2}\,.} The energies can be eliminated by: E n = ( p n c ) 2 + ( m n c 2 ) 2 , E k = ( p k c ) 2 + ( m k c 2 ) 2 , {\displaystyle E_{n}={\sqrt {\left(\mathbf {p} _{n}c\right)^{2}+\left(m_{n}c^{2}\right)^{2}}}\,,\quad E_{k}={\sqrt {\left(\mathbf {p} _{k}c\right)^{2}+\left(m_{k}c^{2}\right)^{2}}}\,,} similarly the momenta can be eliminated by: p n ⋅ p k = | p n | | p k | cos ⁡ θ n k , | p n | = 1 c E n 2 − ( m n c 2 ) 2 , | p k | = 1 c E k 2 − ( m k c 2 ) 2 , {\displaystyle \mathbf {p} _{n}\cdot \mathbf {p} _{k}=\left|\mathbf {p} _{n}\right|\left|\mathbf {p} _{k}\right|\cos \theta _{nk}\,,\quad |\mathbf {p} _{n}|={\frac {1}{c}}{\sqrt {E_{n}^{2}-\left(m_{n}c^{2}\right)^{2}}}\,,\quad |\mathbf {p} _{k}|={\frac {1}{c}}{\sqrt {E_{k}^{2}-\left(m_{k}c^{2}\right)^{2}}}\,,} where θnk is the angle between the momentum vectors pn and pk. Rearranging: ( M 0 c 2 ) 2 − ∑ n ( m n c 2 ) 2 = 2 ∑ n < k ( E n E k − c 2 p n ⋅ p k ) . {\displaystyle \left(M_{0}c^{2}\right)^{2}-\sum _{n}\left(m_{n}c^{2}\right)^{2}=2\sum _{n<k}\left(E_{n}E_{k}-c^{2}\mathbf {p} _{n}\cdot \mathbf {p} _{k}\right)\,.} Since the invariant mass of the system and the rest masses of each particle are frame-independent, the right hand side is also an invariant (even though the energies and momenta are all measured in a particular frame). == Matter waves == Using the de Broglie relations for energy and momentum for matter waves, E = ℏ ω , p = ℏ k , {\displaystyle E=\hbar \omega \,,\quad \mathbf {p} =\hbar \mathbf {k} \,,} where ω is the angular frequency and k is the wavevector with magnitude |k| = k, equal to the wave number, the energy–momentum relation can be expressed in terms of wave quantities: ( ℏ ω ) 2 = ( c ℏ k ) 2 + ( m 0 c 2 ) 2 , {\displaystyle \left(\hbar \omega \right)^{2}=\left(c\hbar k\right)^{2}+\left(m_{0}c^{2}\right)^{2}\,,} and tidying up by dividing by (ħc)2 throughout: This can also be derived from the magnitude of the four-wavevector K = ( ω c , k ) , {\displaystyle \mathbf {K} =\left({\frac {\omega }{c}},\mathbf {k} \right)\,,} in a similar way to the four-momentum above. Since the reduced Planck constant ħ and the speed of light c both appear and clutter this equation, this is where natural units are especially helpful. Normalizing them so that ħ = c = 1, we have: ω 2 = k 2 + m 0 2 . {\displaystyle \omega ^{2}=k^{2}+m_{0}^{2}\,.} == Tachyon and exotic matter == The velocity of a bradyon with the relativistic energy–momentum relation E 2 = p 2 c 2 + m 0 2 c 4 . {\displaystyle E^{2}=p^{2}c^{2}+m_{0}^{2}c^{4}\,.} can never exceed c. On the contrary, it is always greater than c for a tachyon whose energy–momentum equation is E 2 = p 2 c 2 − m 0 2 c 4 . {\displaystyle E^{2}=p^{2}c^{2}-m_{0}^{2}c^{4}\,.} By contrast, the hypothetical exotic matter has a negative mass and the energy–momentum equation is E 2 = − p 2 c 2 + m 0 2 c 4 . {\displaystyle E^{2}=-p^{2}c^{2}+m_{0}^{2}c^{4}\,.} == See also == Mass–energy equivalence Four-momentum Mass in special relativity == References == A. Halpern (1988). 3000 Solved Problems in Physics, Schaum Series. McGraw-Hill. pp. 704–705. ISBN 978-0-07-025734-4. G. Woan (2010). The Cambridge Handbook of Physics Formulas. Cambridge University Press. p. 65. ISBN 978-0-521-57507-2. C.B. Parker (1994). McGraw-Hill Encyclopaedia of Physics (2nd ed.). McGraw-Hill. pp. 1192, 1193. ISBN 0-07-051400-3. R.G. Lerner; G.L. Trigg (1991). 
Encyclopaedia of Physics (2nd ed.). VHC Publishers. p. 1052. ISBN 0-89573-752-3.
Wikipedia/Energy–momentum_relation
The Principles of Quantum Mechanics is an influential monograph on quantum mechanics written by Paul Dirac and first published by Oxford University Press in 1930. Dirac gives an account of quantum mechanics by "demonstrating how to construct a completely new theoretical framework from scratch"; "problems were tackled top-down, by working on the great principles, with the details left to look after themselves". It leaves classical physics behind after the first chapter, presenting the subject with a logical structure. Its 82 sections contain 785 equations with no diagrams. Dirac is credited with developing the subject "particularly in the University of Cambridge and University of Göttingen between 1925–1927", according to Graham Farmelo. It is considered one of the most influential texts on quantum mechanics, with theoretical physicist Laurie M. Brown stating that it "set the stage, the tone, and much of the language of the quantum-mechanical revolution". == History == The first and second editions of the book were published in 1930 and 1935. In 1947 the third edition of the book was published, in which the chapter on quantum electrodynamics was rewritten particularly with the inclusion of electron-positron creation. In the fourth edition, 1958, the same chapter was revised, adding new sections on interpretation and applications. Later a revised fourth edition appeared in 1967. Beginning with the third edition (1947), the mathematical descriptions of quantum states and operators were changed to use the Bra–ket notation, introduced in 1939 and largely developed by Dirac himself. Laurie Brown wrote an article describing the book's evolution through its different editions, and Helge Kragh surveyed reviews by physicists (including Werner Heisenberg, Wolfgang Pauli, and others) from the time of Dirac's book's publication. == Contents == The principle of superposition Dynamical variables and observables Representations The quantum conditions The equations of motion Elementary applications Perturbation theory Collision problems Systems containing several similar particles Theory of radiation Relativistic theory of the electron Quantum electrodynamics == See also == The Evolution of Physics (Einstein and Infeld) The Feynman Lectures on Physics Vol. III (Feynman) The Physical Principles of the Quantum Theory (Heisenberg) Mathematical Foundations of Quantum Mechanics (von Neumann) == References ==
Wikipedia/The_Principles_of_Quantum_Mechanics
In quantum computing, the quantum phase estimation algorithm is a quantum algorithm to estimate the phase corresponding to an eigenvalue of a given unitary operator. Because the eigenvalues of a unitary operator always have unit modulus, they are characterized by their phase, and therefore the algorithm can be equivalently described as retrieving either the phase or the eigenvalue itself. The algorithm was initially introduced by Alexei Kitaev in 1995.: 246  Phase estimation is frequently used as a subroutine in other quantum algorithms, such as Shor's algorithm,: 131  the quantum algorithm for linear systems of equations, and the quantum counting algorithm. == Overview of the algorithm == The algorithm operates on two sets of qubits, referred to in this context as registers. The two registers contain n {\displaystyle n} and m {\displaystyle m} qubits, respectively. Let U {\displaystyle U} be a unitary operator acting on the m {\displaystyle m} -qubit register. The eigenvalues of a unitary operator have unit modulus, and are therefore characterized by their phase. Thus if | ψ ⟩ {\displaystyle |\psi \rangle } is an eigenvector of U {\displaystyle U} , then U | ψ ⟩ = e 2 π i θ | ψ ⟩ {\displaystyle U|\psi \rangle =e^{2\pi i\theta }\left|\psi \right\rangle } for some θ ∈ R {\displaystyle \theta \in \mathbb {R} } . Due to the periodicity of the complex exponential, we can always assume 0 ≤ θ < 1 {\displaystyle 0\leq \theta <1} . The goal is producing a good approximation for θ {\displaystyle \theta } with a small number of gates and a high probability of success. The quantum phase estimation algorithm achieves this assuming oracular access to U {\displaystyle U} , and having | ψ ⟩ {\displaystyle |\psi \rangle } available as a quantum state. This means that when discussing the efficiency of the algorithm we only worry about the number of times U {\displaystyle U} needs to be used, but not about the cost of implementing U {\displaystyle U} itself. More precisely, the algorithm returns with high probability an approximation for θ {\displaystyle \theta } , within additive error ε {\displaystyle \varepsilon } , using n = O ( log ⁡ ( 1 / ε ) ) {\displaystyle n=O(\log(1/\varepsilon ))} qubits in the first register, and O ( 1 / ε ) {\displaystyle O(1/\varepsilon )} controlled-U operations. Furthermore, we can improve the success probability to 1 − Δ {\displaystyle 1-\Delta } for any Δ > 0 {\displaystyle \Delta >0} by using a total of O ( log ⁡ ( 1 / Δ ) / ε ) {\displaystyle O(\log(1/\Delta )/\varepsilon )} uses of controlled-U, and this is optimal. == Detailed description of the algorithm == === State preparation === The initial state of the system is: | Ψ 0 ⟩ = | 0 ⟩ ⊗ n | ψ ⟩ , {\displaystyle |\Psi _{0}\rangle =|0\rangle ^{\otimes n}|\psi \rangle ,} where | ψ ⟩ {\displaystyle |\psi \rangle } is the m {\displaystyle m} -qubit state that evolves through U {\displaystyle U} . We first apply the n-qubit Hadamard gate operation H ⊗ n {\displaystyle H^{\otimes n}} on the first register, which produces the state: | Ψ 1 ⟩ = ( H ⊗ n ⊗ I m ) | Ψ 0 ⟩ = 1 2 n 2 ( | 0 ⟩ + | 1 ⟩ ) ⊗ n | ψ ⟩ = 1 2 n / 2 ∑ j = 0 2 n − 1 | j ⟩ | ψ ⟩ . 
{\displaystyle |\Psi _{1}\rangle =(H^{\otimes n}\otimes I_{m})|\Psi _{0}\rangle ={\frac {1}{2^{\frac {n}{2}}}}(|0\rangle +|1\rangle )^{\otimes n}|\psi \rangle ={\frac {1}{2^{n/2}}}\sum _{j=0}^{2^{n}-1}|j\rangle |\psi \rangle .} Note that here we are switching between binary and n {\displaystyle n} -ary representation for the n {\displaystyle n} -qubit register: the ket | j ⟩ {\displaystyle |j\rangle } on the right-hand side is shorthand for the n {\displaystyle n} -qubit state | j ⟩ ≡ ⨂ ℓ = 0 n − 1 | j ℓ ⟩ {\displaystyle |j\rangle \equiv \bigotimes _{\ell =0}^{n-1}|j_{\ell }\rangle } , where j = ∑ ℓ = 0 n − 1 j ℓ 2 ℓ {\displaystyle j=\sum _{\ell =0}^{n-1}j_{\ell }2^{\ell }} is the binary decomposition of j {\displaystyle j} . === Controlled-U operations === This state | Ψ 1 ⟩ {\displaystyle |\Psi _{1}\rangle } is then evolved through the controlled-unitary evolution U C {\displaystyle U_{C}} whose action can be written as U C ( | k ⟩ ⊗ | ψ ⟩ ) = | k ⟩ ⊗ ( U k | ψ ⟩ ) , {\displaystyle U_{C}(|k\rangle \otimes |\psi \rangle )=|k\rangle \otimes (U^{k}|\psi \rangle ),} for all k = 0 , . . . , 2 n − 1 {\displaystyle k=0,...,2^{n}-1} . This evolution can also be written concisely as U C = ∑ k = 0 2 n − 1 | k ⟩ ⟨ k | ⊗ U k , {\displaystyle U_{C}=\sum _{k=0}^{2^{n}-1}|k\rangle \!\langle k|\otimes U^{k},} which highlights its controlled nature: it applies U k {\displaystyle U^{k}} to the second register conditionally to the first register being | k ⟩ {\displaystyle |k\rangle } . Remembering the eigenvalue condition holding for | ψ ⟩ {\displaystyle |\psi \rangle } , applying U C {\displaystyle U_{C}} to | Ψ 1 ⟩ {\displaystyle |\Psi _{1}\rangle } thus gives | Ψ 2 ⟩ ≡ U C | Ψ 1 ⟩ = ( 1 2 n / 2 ∑ k = 0 2 n − 1 e 2 π i θ k | k ⟩ ) ⊗ | ψ ⟩ , {\displaystyle |\Psi _{2}\rangle \equiv U_{C}|\Psi _{1}\rangle =\left({\frac {1}{2^{n/2}}}\sum _{k=0}^{2^{n}-1}e^{2\pi i\theta k}|k\rangle \right)\otimes |\psi \rangle ,} where we used U k | ψ ⟩ = e 2 π i k θ | ψ ⟩ {\displaystyle U^{k}|\psi \rangle =e^{2\pi ik\theta }|\psi \rangle } . To show that U C {\displaystyle U_{C}} can also be implemented efficiently, observe that we can write U C = ∏ ℓ = 0 n − 1 C ℓ ( U 2 ℓ ) {\displaystyle U_{C}=\prod _{\ell =0}^{n-1}C_{\ell }(U^{2^{\ell }})} , where C ℓ ( U 2 ℓ ) {\displaystyle C_{\ell }(U^{2^{\ell }})} denotes the operation of applying U 2 ℓ {\displaystyle U^{2^{\ell }}} to the second register conditionally to the ℓ {\displaystyle \ell } -th qubit of the first register being | 1 ⟩ {\displaystyle |1\rangle } . Formally, these gates can be characterized by their action as C ℓ ( U k ) ( | j ⟩ ⊗ | ψ ⟩ ) = | j ⟩ ⊗ ( U j ℓ k | ψ ⟩ ) . {\displaystyle C_{\ell }(U^{k})(|j\rangle \otimes |\psi \rangle )=|j\rangle \otimes (U^{j_{\ell }k}|\psi \rangle ).} This equation can be interpreted as saying that the state is left unchanged when j ℓ = 0 {\displaystyle j_{\ell }=0} , that is, when the ℓ {\displaystyle \ell } -th qubit is | 0 ⟩ {\displaystyle |0\rangle } , while the gate U k {\displaystyle U^{k}} is applied to the second register when the ℓ {\displaystyle \ell } -th qubit is | 1 ⟩ {\displaystyle |1\rangle } . 
The composition of these controlled gates thus gives ∏ ℓ = 0 n − 1 C ℓ ( U 2 ℓ ) ( | j ⟩ ⊗ | ψ ⟩ ) = | j ⟩ ⊗ ( U ∑ ℓ = 0 n − 1 j ℓ 2 ℓ | ψ ⟩ ) = U C ( | j ⟩ ⊗ | ψ ⟩ ) , {\displaystyle \prod _{\ell =0}^{n-1}C_{\ell }(U^{2^{\ell }})(|j\rangle \otimes |\psi \rangle )=|j\rangle \otimes \left(U^{\sum _{\ell =0}^{n-1}j_{\ell }2^{\ell }}|\psi \rangle \right)=U_{C}(|j\rangle \otimes |\psi \rangle ),} with the last step directly following from the binary decomposition j = ∑ ℓ = 0 n − 1 j ℓ 2 ℓ {\displaystyle j=\sum _{\ell =0}^{n-1}j_{\ell }2^{\ell }} ; the product of these controlled gates therefore implements U C {\displaystyle U_{C}} . From this point onwards, the second register is left untouched, and thus it is convenient to write | Ψ 2 ⟩ = | Ψ ~ 2 ⟩ ⊗ | ψ ⟩ {\displaystyle |\Psi _{2}\rangle =|{\tilde {\Psi }}_{2}\rangle \otimes |\psi \rangle } , with | Ψ ~ 2 ⟩ {\displaystyle |{\tilde {\Psi }}_{2}\rangle } denoting the state of the n {\displaystyle n} -qubit register, which is the only one we need to consider for the rest of the algorithm. === Apply inverse quantum Fourier transform === The final part of the circuit involves applying the inverse quantum Fourier transform (QFT) Q F T {\displaystyle {\mathcal {QFT}}} to the first register of | Ψ 2 ⟩ {\displaystyle |\Psi _{2}\rangle } : | Ψ ~ 3 ⟩ = Q F T 2 n − 1 | Ψ ~ 2 ⟩ . {\displaystyle |{\tilde {\Psi }}_{3}\rangle ={\mathcal {QFT}}_{2^{n}}^{-1}|{\tilde {\Psi }}_{2}\rangle .} The QFT and its inverse are characterized by their action on basis states as Q F T N | k ⟩ = N − 1 / 2 ∑ j = 0 N − 1 e 2 π i N j k | j ⟩ , Q F T N − 1 | k ⟩ = N − 1 / 2 ∑ j = 0 N − 1 e − 2 π i N j k | j ⟩ . {\displaystyle {\begin{aligned}{\mathcal {QFT}}_{N}|k\rangle &=N^{-1/2}\sum _{j=0}^{N-1}e^{{\frac {2\pi i}{N}}jk}|j\rangle ,\\{\mathcal {QFT}}_{N}^{-1}|k\rangle &=N^{-1/2}\sum _{j=0}^{N-1}e^{-{\frac {2\pi i}{N}}jk}|j\rangle .\end{aligned}}} It follows that | Ψ ~ 3 ⟩ = 1 2 n 2 ∑ k = 0 2 n − 1 e 2 π i θ k ( 1 2 n 2 ∑ x = 0 2 n − 1 e − 2 π i k x 2 n | x ⟩ ) = 1 2 n ∑ x = 0 2 n − 1 ∑ k = 0 2 n − 1 e − 2 π i k 2 n ( x − 2 n θ ) | x ⟩ . {\displaystyle |{\tilde {\Psi }}_{3}\rangle ={\frac {1}{2^{\frac {n}{2}}}}\sum _{k=0}^{2^{n}-1}e^{2\pi i\theta k}\left({\frac {1}{2^{\frac {n}{2}}}}\sum _{x=0}^{2^{n}-1}e^{\frac {-2\pi ikx}{2^{n}}}|x\rangle \right)={\frac {1}{2^{n}}}\sum _{x=0}^{2^{n}-1}\sum _{k=0}^{2^{n}-1}e^{-{\frac {2\pi ik}{2^{n}}}\left(x-2^{n}\theta \right)}|x\rangle .} Decomposing the state in the computational basis as | Ψ ~ 3 ⟩ = ∑ x = 0 2 n − 1 c x | x ⟩ , {\textstyle |{\tilde {\Psi }}_{3}\rangle =\sum _{x=0}^{2^{n}-1}c_{x}|x\rangle ,} the coefficients thus equal c x ≡ 1 2 n ∑ k = 0 2 n − 1 e − 2 π i k 2 n ( x − 2 n θ ) = 1 2 n ∑ k = 0 2 n − 1 e − 2 π i k 2 n ( x − a ) e 2 π i δ k , {\displaystyle c_{x}\equiv {\frac {1}{2^{n}}}\sum _{k=0}^{2^{n}-1}e^{-{\frac {2\pi ik}{2^{n}}}(x-2^{n}\theta )}={\frac {1}{2^{n}}}\sum _{k=0}^{2^{n}-1}e^{-{\frac {2\pi ik}{2^{n}}}\left(x-a\right)}e^{2\pi i\delta k},} where we wrote 2 n θ = a + 2 n δ , {\displaystyle 2^{n}\theta =a+2^{n}\delta ,} with a {\displaystyle a} being the nearest integer to 2 n θ {\displaystyle 2^{n}\theta } . The difference 2 n δ {\displaystyle 2^{n}\delta } must by definition satisfy 0 ⩽ | 2 n δ | ⩽ 1 2 {\displaystyle 0\leqslant |2^{n}\delta |\leqslant {\tfrac {1}{2}}} . This amounts to approximating the value of θ ∈ [ 0 , 1 ] {\displaystyle \theta \in [0,1]} by rounding 2 n θ {\displaystyle 2^{n}\theta } to the nearest integer. === Measurement === The final step involves performing a measurement in the computational basis on the first register.
This yields the outcome | y ⟩ {\displaystyle |y\rangle } with probability Pr ( y ) = | c y | 2 = | 1 2 n ∑ k = 0 2 n − 1 e − 2 π i k 2 n ( y − a ) e 2 π i δ k | 2 . {\displaystyle \Pr(y)=|c_{y}|^{2}=\left|{\frac {1}{2^{n}}}\sum _{k=0}^{2^{n}-1}e^{{\frac {-2\pi ik}{2^{n}}}(y-a)}e^{2\pi i\delta k}\right|^{2}.} It follows that Pr ⁡ ( a ) = 1 {\displaystyle \operatorname {Pr} (a)=1} if δ = 0 {\displaystyle \delta =0} , that is, when θ {\displaystyle \theta } can be written as θ = a / 2 n {\displaystyle \theta =a/2^{n}} , one always finds the outcome y = a {\displaystyle y=a} . On the other hand, if δ ≠ 0 {\displaystyle \delta \neq 0} , the probability reads Pr ⁡ ( a ) = 1 2 2 n | ∑ k = 0 2 n − 1 e 2 π i δ k | 2 = 1 2 2 n | 1 − e 2 π i 2 n δ 1 − e 2 π i δ | 2 . {\displaystyle \operatorname {Pr} (a)={\frac {1}{2^{2n}}}\left|\sum _{k=0}^{2^{n}-1}e^{2\pi i\delta k}\right|^{2}={\frac {1}{2^{2n}}}\left|{\frac {1-{e^{2\pi i2^{n}\delta }}}{1-{e^{2\pi i\delta }}}}\right|^{2}.} From this expression we can see that Pr ( a ) ⩾ 4 π 2 ≈ 0.405 {\displaystyle \Pr(a)\geqslant {\frac {4}{\pi ^{2}}}\approx 0.405} when δ ≠ 0 {\displaystyle \delta \neq 0} . To see this, we observe that from the definition of δ {\displaystyle \delta } we have the inequality | δ | ⩽ 1 2 n + 1 {\displaystyle |\delta |\leqslant {\tfrac {1}{2^{n+1}}}} , and thus:: 157 : 348  Pr ( a ) = 1 2 2 n | 1 − e 2 π i 2 n δ 1 − e 2 π i δ | 2 for δ ≠ 0 = 1 2 2 n | 2 sin ⁡ ( π 2 n δ ) 2 sin ⁡ ( π δ ) | 2 | 1 − e 2 i x | 2 = 4 | sin ⁡ ( x ) | 2 = 1 2 2 n | sin ⁡ ( π 2 n δ ) | 2 | sin ⁡ ( π δ ) | 2 ⩾ 1 2 2 n | sin ⁡ ( π 2 n δ ) | 2 | π δ | 2 | sin ⁡ ( π δ ) | ⩽ | π δ | ⩾ 1 2 2 n | 2 ⋅ 2 n δ | 2 | π δ | 2 | 2 ⋅ 2 n δ | ⩽ | sin ⁡ ( π 2 n δ ) | for | δ | ⩽ 1 2 n + 1 ⩾ 4 π 2 . {\displaystyle {\begin{aligned}\Pr(a)&={\frac {1}{2^{2n}}}\left|{\frac {1-{e^{2\pi i2^{n}\delta }}}{1-{e^{2\pi i\delta }}}}\right|^{2}&&{\text{for }}\delta \neq 0\\&={\frac {1}{2^{2n}}}\left|{\frac {2\sin \left(\pi 2^{n}\delta \right)}{2\sin(\pi \delta )}}\right|^{2}&&\left|1-e^{2ix}\right|^{2}=4\left|\sin(x)\right|^{2}\\&={\frac {1}{2^{2n}}}{\frac {\left|\sin \left(\pi 2^{n}\delta \right)\right|^{2}}{|\sin(\pi \delta )|^{2}}}\\&\geqslant {\frac {1}{2^{2n}}}{\frac {\left|\sin \left(\pi 2^{n}\delta \right)\right|^{2}}{|\pi \delta |^{2}}}&&|\sin(\pi \delta )|\leqslant |\pi \delta |\\&\geqslant {\frac {1}{2^{2n}}}{\frac {|2\cdot 2^{n}\delta |^{2}}{|\pi \delta |^{2}}}&&|2\cdot 2^{n}\delta |\leqslant |\sin(\pi 2^{n}\delta )|{\text{ for }}|\delta |\leqslant {\frac {1}{2^{n+1}}}\\&\geqslant {\frac {4}{\pi ^{2}}}.\end{aligned}}} We conclude that the algorithm provides the best n {\displaystyle n} -bit estimate (i.e., one that is within 1 / 2 n {\displaystyle 1/2^{n}} of the correct answer) of θ {\displaystyle \theta } with probability at least 4 / π 2 {\displaystyle 4/\pi ^{2}} . By adding a number of extra qubits on the order of O ( log ⁡ ( 1 / ϵ ) ) {\displaystyle O(\log(1/\epsilon ))} and truncating the extra qubits the probability can increase to 1 − ϵ {\displaystyle 1-\epsilon } . == Toy examples == Consider the simplest possible instance of the algorithm, where only n = 1 {\displaystyle n=1} qubit, on top of the qubits required to encode | ψ ⟩ {\displaystyle |\psi \rangle } , is involved. Suppose the eigenvalue of | ψ ⟩ {\displaystyle |\psi \rangle } reads λ = e 2 π i θ {\displaystyle \lambda =e^{2\pi i\theta }} , θ ∈ [ 0 , 1 ) {\displaystyle \theta \in [0,1)} . 
The first part of the algorithm generates the one-qubit state | ϕ ⟩ ≡ 1 2 ( | 0 ⟩ + λ | 1 ⟩ ) {\textstyle |\phi \rangle \equiv {\frac {1}{\sqrt {2}}}(|0\rangle +\lambda |1\rangle )} . Applying the inverse QFT amounts in this case to applying a Hadamard gate. The final outcome probabilities are thus p ± = | ⟨ ± | ϕ ⟩ | 2 {\displaystyle p_{\pm }=|\langle \pm |\phi \rangle |^{2}} where | ± ⟩ ≡ 1 2 ( | 0 ⟩ ± | 1 ⟩ ) {\textstyle |\pm \rangle \equiv {\frac {1}{\sqrt {2}}}(|0\rangle \pm |1\rangle )} , or more explicitly, p ± = | 1 ± λ | 2 4 = 1 ± cos ⁡ ( 2 π θ ) 2 . {\displaystyle p_{\pm }={\frac {|1\pm \lambda |^{2}}{4}}={\frac {1\pm \cos(2\pi \theta )}{2}}.} Suppose λ = 1 {\displaystyle \lambda =1} , meaning | ϕ ⟩ = | + ⟩ {\displaystyle |\phi \rangle =|+\rangle } . Then p + = 1 {\displaystyle p_{+}=1} , p − = 0 {\displaystyle p_{-}=0} , and we recover deterministically the precise value of λ {\displaystyle \lambda } from the measurement outcomes. The same applies if λ = − 1 {\displaystyle \lambda =-1} . If on the other hand λ = e 2 π i / 3 {\displaystyle \lambda =e^{2\pi i/3}} , then p ± = [ 1 ± cos ⁡ ( 2 π / 3 ) ] / 2 {\displaystyle p_{\pm }=[1\pm \cos(2\pi /3)]/2} , that is, p + = 1 / 4 {\displaystyle p_{+}=1/4} and p − = 3 / 4 {\displaystyle p_{-}=3/4} . In this case the result is not deterministic, but we still find the outcome | − ⟩ {\displaystyle |-\rangle } as more likely, compatibly with the fact that 2 / 3 {\displaystyle 2/3} is closer to 1 than to 0. More generally, if λ = e 2 π i θ {\displaystyle \lambda =e^{2\pi i\theta }} , then p + ≥ 1 / 2 {\displaystyle p_{+}\geq 1/2} if and only if | θ | ≤ 1 / 4 {\displaystyle |\theta |\leq 1/4} . This is consistent with the results above because in the cases λ = ± 1 {\displaystyle \lambda =\pm 1} , corresponding to θ = 0 , 1 / 2 {\displaystyle \theta =0,1/2} , the phase is retrieved deterministically, and the other phases are retrieved with higher accuracy the closer they are to these two. == See also == Shor's algorithm Quantum counting algorithm Parity measurement == References ==
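The behaviour described above — a deterministic outcome when 2^n θ is an integer, and otherwise a distribution peaked at the nearest integer with probability at least 4/π² — can be reproduced with a small statevector calculation. The following NumPy sketch is an illustrative construction, not taken from the article: it builds the counting-register amplitudes after the controlled-U stage directly from the eigenphase and applies the inverse QFT as an explicit matrix.

import numpy as np

def phase_estimation_probs(theta, n):
    # Outcome distribution of the n-qubit counting register for an
    # eigenphase theta, following the amplitudes derived above.
    N = 2 ** n
    k = np.arange(N)
    # state of the counting register after the controlled-U^k stage
    psi = np.exp(2j * np.pi * theta * k) / np.sqrt(N)
    # inverse quantum Fourier transform as an explicit N x N matrix
    x = k.reshape(-1, 1)
    iqft = np.exp(-2j * np.pi * x * k / N) / np.sqrt(N)
    return np.abs(iqft @ psi) ** 2

n = 4
for theta in (0.1875, 0.2):     # 0.1875 = 3/16 is exactly representable with n = 4
    p = phase_estimation_probs(theta, n)
    best = int(np.argmax(p))
    print(f"theta={theta}: most likely outcome {best} -> estimate {best / 2**n},"
          f" probability {p[best]:.3f}")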
Wikipedia/Quantum_phase_estimation_algorithm
Principles of Quantum Mechanics is a textbook by Ramamurti Shankar. The book has been through two editions. It is used in many college courses around the world. == Contents == Mathematical Introduction Linear Vector Spaces: Basics Inner Product Spaces Dual Spaces and the Dirac Notation Subspaces Linear Operators Matrix Elements of Linear Operators Active and Passive Transformations The Eigenvalue Problem Functions of Operators and Related Concepts Generalization to Infinite Dimensions Review of Classical Mechanics The Principle of Least Action and Lagrangian Mechanics The Electromagnetic Lagrangian The Two-Body Problem How Smart Is a Particle? The Hamiltonian Formalism The Electromagnetic Force in the Hamiltonian Scheme Cyclic Coordinates, Poisson Brackets, and Canonical Transformations Symmetries and Their Consequences All Is Not Well with Classical Mechanics Particles and Waves in Classical Physics An Experiment with Waves and Particles (Classical) The Double-Slit Experiment with Light Matter Waves (de Broglie Waves) Conclusions The Postulates – a General Discussion The Postulates Discussion of Postulates I-III The Schrödinger Equation (Dotting Your i's and Crossing your ℏ {\displaystyle \hbar } 's) Simple Problems in One Dimension The Free Particle The Particle in a Box The Continuity Equation for Probability The Single-Step Potential: a Problem in Scattering The Double-Slit Experiment Some Theorems The Classical Limit The Harmonic Oscillator Why Study the Harmonic Oscillator? Review of the Classical Oscillator Quantization of the Oscillator (Coordinate Basis) The Oscillator in the Energy Basis Passage from the Energy Basis to the X Basis The Path Integral Formulation of Quantum Theory The Path Integral Recipe Analysis of the Recipe An Approximation to U(t) for the Free Particle Path Integral Evaluation of the Free-Particle Propagator Equivalence to the Schrödinger Equation Potentials of the Form V = a + b x + c x 2 + d ẋ + e x ẋ {\displaystyle V=a+bx+cx^{2}+d{\dot {x}}+ex{\dot {x}}} The Heisenberg Uncertainty Relations Introduction Derivation of the Uncertainty Relations The Minimum Uncertainty Packet Applications of the Uncertainty Principle The Energy-Time Uncertainty Relation Systems with N {\displaystyle N} Degrees of Freedom N {\displaystyle N} Particles in One Dimension More Particles in More Dimensions Identical Particles Symmetries and Their Consequences Overview Translational Invariance in Quantum Theory Time Translational Invariance Parity Invariance Time-Reversal Symmetry Rotational Invariance and Angular Momentum Translations in Two Dimensions Rotations in Two Dimensions The Eigenvalue Problem of L {\displaystyle L} Angular Momentum in Three Dimensions The Eigenvalue Problem of L 2 {\displaystyle L^{2}} and L {\displaystyle L} Solution of Rotationally Invariant Problems The Hydrogen Atom The Eigenvalue Problem The Degeneracy of the Hydrogen Spectrum Numerical Estimates and Comparison with Experiment Multielectron Atoms and the Periodic Table Spin Introduction What is the Nature of Spin?
Kinematics of Spin Spin Dynamics Return of Orbital Degrees of Freedom Addition of Angular Momenta A Simple Example The General Problem Irreducible Tensor Operators Explanation of Some "Accidental" Degeneracies Variational and WKB Methods The Variational Method The Wentzel-Kramers-Brillouin Method Time-Independent Perturbation Theory The Formalism Some Examples Degenerate Perturbation Theory Time-Dependent Perturbation Theory The Problem First-Order Perturbation Theory Higher Orders in Perturbation Theory A General Discussion of Electromagnetic Interactions Interaction of Atoms with Electromagnetic Radiation Scattering Theory Introduction Recapitulation of One-Dimensional Scattering and Overview The Born Approximation (Time-Dependent Description) Born Again (The Time-Independent Approximation) The Partial Wave Expansion Two-Particle Scattering The Dirac Equation The Free-Particle Dirac Equation Electromagnetic Interaction of the Dirac Particle More on Relativistic Quantum Mechanics Path Integrals – II Derivation of the Path Integral Imaginary Time Formalism Spin and Fermion Path Integrals Summary Appendix Matrix Inversion Gaussian Integrals Complex Numbers The i ε {\displaystyle i\varepsilon } Prescription == Reviews == Physics Bulletin said about the book, "No matter how gently one introduces students to the concept of Dirac’s bras and kets, many are turned off. Shankar attacks the problem head-on in the first chapter, and in a very informal style suggests that there is nothing to be frightened of". American Scientist called it "An excellent text … The postulates of quantum mechanics and the mathematical underpinnings are discussed in a clear, succinct manner". == See also == Modern Quantum Mechanics by J. J. Sakurai List of textbooks on classical and quantum mechanics == References ==
Wikipedia/Principles_of_Quantum_Mechanics
The Deutsch–Jozsa algorithm is a deterministic quantum algorithm proposed by David Deutsch and Richard Jozsa in 1992 with improvements by Richard Cleve, Artur Ekert, Chiara Macchiavello, and Michele Mosca in 1998. Although of little practical use, it is one of the first examples of a quantum algorithm that is exponentially faster than any possible deterministic classical algorithm. The Deutsch–Jozsa problem is specifically designed to be easy for a quantum algorithm and hard for any deterministic classical algorithm. It is a black box problem that can be solved efficiently by a quantum computer with no error, whereas a deterministic classical computer would need an exponential number of queries to the black box to solve the problem. More formally, it yields an oracle relative to which EQP, the class of problems that can be solved exactly in polynomial time on a quantum computer, and P are different. Since the problem is easy to solve on a probabilistic classical computer, it does not yield an oracle separation with BPP, the class of problems that can be solved with bounded error in polynomial time on a probabilistic classical computer. Simon's problem is an example of a problem that yields an oracle separation between BQP and BPP. == Problem statement == In the Deutsch–Jozsa problem, we are given a black box quantum computer known as an oracle that implements some function: f : { 0 , 1 } n → { 0 , 1 } {\displaystyle f\colon \{0,1\}^{n}\to \{0,1\}} The function takes n-bit binary values as input and produces either a 0 or a 1 as output for each such value. We are promised that the function is either constant (0 on all inputs or 1 on all inputs) or balanced (1 for exactly half of the input domain and 0 for the other half). The task then is to determine if f {\displaystyle f} is constant or balanced by using the oracle. == Classical solution == For a conventional deterministic algorithm where n {\displaystyle n} is the number of bits, 2 n − 1 + 1 {\displaystyle 2^{n-1}+1} evaluations of f {\displaystyle f} are required in the worst case. To prove that f {\displaystyle f} is constant, just over half the set of inputs must be evaluated and their outputs found to be identical (because the function is guaranteed to be either balanced or constant, not somewhere in between). The best case occurs when the function is balanced and the first two output values are different. For a conventional randomized algorithm, a constant number k {\displaystyle k} of evaluations of the function suffices to produce the correct answer with high probability (failing with probability ϵ ≤ 1 / 2 k {\displaystyle \epsilon \leq 1/2^{k}} for k ≥ 1 {\displaystyle k\geq 1} ). However, k = 2 n − 1 + 1 {\displaystyle k=2^{n-1}+1} evaluations are still required if we want an answer that has no possibility of error. The Deutsch–Jozsa quantum algorithm produces an answer that is always correct with a single evaluation of f {\displaystyle f} . == History == The Deutsch–Jozsa algorithm generalizes earlier (1985) work by David Deutsch, which provided a solution for the simple case where n = 1 {\displaystyle n=1} : deciding whether a given Boolean function whose input is one bit, f : { 0 , 1 } → { 0 , 1 } {\displaystyle f:\{0,1\}\to \{0,1\}} , is constant. The algorithm, as Deutsch had originally proposed it, was not deterministic; it succeeded with probability one half.
In 1992, Deutsch and Jozsa produced a deterministic algorithm which was generalized to a function which takes n {\displaystyle n} bits for its input. Unlike Deutsch's algorithm, this algorithm required two function evaluations instead of only one. Further improvements to the Deutsch–Jozsa algorithm were made by Cleve et al., resulting in an algorithm that is both deterministic and requires only a single query of f {\displaystyle f} . This algorithm is still referred to as Deutsch–Jozsa algorithm in honour of the groundbreaking techniques they employed. == Algorithm == For the Deutsch–Jozsa algorithm to work, the oracle computing f ( x ) {\displaystyle f(x)} from x {\displaystyle x} must be a quantum oracle which does not decohere x {\displaystyle x} . In its computation, it cannot make a copy of x {\displaystyle x} , because that would violate the no cloning theorem. The point of view of the Deutsch-Jozsa algorithm of f {\displaystyle f} as an oracle means that it does not matter what the oracle does, since it just has to perform its promised transformation. The algorithm begins with the n + 1 {\displaystyle n+1} bit state | 0 ⟩ ⊗ n | 1 ⟩ {\displaystyle |0\rangle ^{\otimes n}|1\rangle } . That is, the first n bits are each in the state | 0 ⟩ {\displaystyle |0\rangle } and the final bit is | 1 ⟩ {\displaystyle |1\rangle } . A Hadamard gate is applied to each bit to obtain the state 1 2 n + 1 ∑ x = 0 2 n − 1 | x ⟩ ( | 0 ⟩ − | 1 ⟩ ) , {\displaystyle {\frac {1}{\sqrt {2^{n+1}}}}\sum _{x=0}^{2^{n}-1}|x\rangle (|0\rangle -|1\rangle ),} where x {\displaystyle x} runs over all n {\displaystyle n} -bit strings, which each may be represented by a number from 0 {\displaystyle 0} to 2 n − 1 {\displaystyle 2^{n}-1} . We have the function f {\displaystyle f} implemented as a quantum oracle. The oracle maps its input state | x ⟩ | y ⟩ {\displaystyle |x\rangle |y\rangle } to | x ⟩ | y ⊕ f ( x ) ⟩ {\displaystyle |x\rangle |y\oplus f(x)\rangle } , where ⊕ {\displaystyle \oplus } denotes addition modulo 2. Applying the quantum oracle gives; 1 2 n + 1 ∑ x = 0 2 n − 1 | x ⟩ ( | 0 ⊕ f ( x ) ⟩ − | 1 ⊕ f ( x ) ⟩ ) . {\displaystyle {\frac {1}{\sqrt {2^{n+1}}}}\sum _{x=0}^{2^{n}-1}|x\rangle (|0\oplus f(x)\rangle -|1\oplus f(x)\rangle ).} For each x , f ( x ) {\displaystyle x,f(x)} is either 0 or 1. Testing these two possibilities, we see the above state is equal to 1 2 n + 1 ∑ x = 0 2 n − 1 ( − 1 ) f ( x ) | x ⟩ ( | 0 ⟩ − | 1 ⟩ ) . {\displaystyle {\frac {1}{\sqrt {2^{n+1}}}}\sum _{x=0}^{2^{n}-1}(-1)^{f(x)}|x\rangle (|0\rangle -|1\rangle ).} At this point the last qubit | 0 ⟩ − | 1 ⟩ 2 {\displaystyle {\frac {|0\rangle -|1\rangle }{\sqrt {2}}}} may be ignored and the following remains: 1 2 n ∑ x = 0 2 n − 1 ( − 1 ) f ( x ) | x ⟩ . {\displaystyle {\frac {1}{\sqrt {2^{n}}}}\sum _{x=0}^{2^{n}-1}(-1)^{f(x)}|x\rangle .} Next, we will have each qubit go through a Hadamard gate. The total transformation over all n {\displaystyle n} qubits can be expressed with the following identity: H ⊗ n | k ⟩ = 1 2 n ∑ j = 0 2 n − 1 ( − 1 ) k ⋅ j | j ⟩ {\displaystyle H^{\otimes n}|k\rangle ={\frac {1}{\sqrt {2^{n}}}}\sum _{j=0}^{2^{n}-1}(-1)^{k\cdot j}|j\rangle } ( j ⋅ k = j 0 k 0 ⊕ j 1 k 1 ⊕ ⋯ ⊕ j n − 1 k n − 1 {\displaystyle j\cdot k=j_{0}k_{0}\oplus j_{1}k_{1}\oplus \cdots \oplus j_{n-1}k_{n-1}} is the sum of the bitwise product). This results in 1 2 n ∑ x = 0 2 n − 1 ( − 1 ) f ( x ) [ 1 2 n ∑ y = 0 2 n − 1 ( − 1 ) x ⋅ y | y ⟩ ] = ∑ y = 0 2 n − 1 [ 1 2 n ∑ x = 0 2 n − 1 ( − 1 ) f ( x ) ( − 1 ) x ⋅ y ] | y ⟩ . 
{\displaystyle {\frac {1}{\sqrt {2^{n}}}}\sum _{x=0}^{2^{n}-1}(-1)^{f(x)}\left[{\frac {1}{\sqrt {2^{n}}}}\sum _{y=0}^{2^{n}-1}{\left(-1\right)}^{x\cdot y}|y\rangle \right]=\sum _{y=0}^{2^{n}-1}\left[{\frac {1}{2^{n}}}\sum _{x=0}^{2^{n}-1}(-1)^{f(x)}(-1)^{x\cdot y}\right]|y\rangle .} From this, we can see that the probability for a state k {\displaystyle k} to be measured is | 1 2 n ∑ x = 0 2 n − 1 ( − 1 ) f ( x ) ( − 1 ) x ⋅ k | 2 {\displaystyle \left|{\frac {1}{2^{n}}}\sum _{x=0}^{2^{n}-1}{\left(-1\right)}^{f(x)}{\left(-1\right)}^{x\cdot k}\right|^{2}} The probability of measuring k = 0 {\displaystyle k=0} , corresponding to | 0 ⟩ ⊗ n {\displaystyle |0\rangle ^{\otimes n}} , is | 1 2 n ∑ x = 0 2 n − 1 ( − 1 ) f ( x ) | 2 {\displaystyle {\bigg |}{\frac {1}{2^{n}}}\sum _{x=0}^{2^{n}-1}(-1)^{f(x)}{\bigg |}^{2}} which evaluates to 1 if f ( x ) {\displaystyle f(x)} is constant (constructive interference) and 0 if f ( x ) {\displaystyle f(x)} is balanced (destructive interference). In other words, the final measurement will be | 0 ⟩ ⊗ n {\displaystyle |0\rangle ^{\otimes n}} (all zeros) if and only if f ( x ) {\displaystyle f(x)} is constant and will yield some other state if f ( x ) {\displaystyle f(x)} is balanced. == Deutsch's algorithm == Deutsch's algorithm is a special case of the general Deutsch–Jozsa algorithm where n = 1 in f : { 0 , 1 } n → { 0 , 1 } {\displaystyle f\colon \{0,1\}^{n}\rightarrow \{0,1\}} . We need to check the condition f ( 0 ) = f ( 1 ) {\displaystyle f(0)=f(1)} . It is equivalent to check f ( 0 ) ⊕ f ( 1 ) {\displaystyle f(0)\oplus f(1)} (where ⊕ {\displaystyle \oplus } is addition modulo 2, which can also be viewed as a quantum XOR gate implemented as a Controlled NOT gate), if zero, then f {\displaystyle f} is constant, otherwise f {\displaystyle f} is not constant. We begin with the two-qubit state | 0 ⟩ | 1 ⟩ {\displaystyle |0\rangle |1\rangle } and apply a Hadamard gate to each qubit. This yields 1 2 ( | 0 ⟩ + | 1 ⟩ ) ( | 0 ⟩ − | 1 ⟩ ) . {\displaystyle {\frac {1}{2}}(|0\rangle +|1\rangle )(|0\rangle -|1\rangle ).} We are given a quantum implementation of the function f {\displaystyle f} that maps | x ⟩ | y ⟩ {\displaystyle |x\rangle |y\rangle } to | x ⟩ | f ( x ) ⊕ y ⟩ {\displaystyle |x\rangle |f(x)\oplus y\rangle } . Applying this function to our current state we obtain 1 2 ( | 0 ⟩ ( | f ( 0 ) ⊕ 0 ⟩ − | f ( 0 ) ⊕ 1 ⟩ ) + | 1 ⟩ ( | f ( 1 ) ⊕ 0 ⟩ − | f ( 1 ) ⊕ 1 ⟩ ) ) = 1 2 ( ( − 1 ) f ( 0 ) | 0 ⟩ ( | 0 ⟩ − | 1 ⟩ ) + ( − 1 ) f ( 1 ) | 1 ⟩ ( | 0 ⟩ − | 1 ⟩ ) ) = ( − 1 ) f ( 0 ) 1 2 ( | 0 ⟩ + ( − 1 ) f ( 0 ) ⊕ f ( 1 ) | 1 ⟩ ) ( | 0 ⟩ − | 1 ⟩ ) . {\displaystyle {\begin{aligned}&{\frac {1}{2}}(|0\rangle (|f(0)\oplus 0\rangle -|f(0)\oplus 1\rangle )+|1\rangle (|f(1)\oplus 0\rangle -|f(1)\oplus 1\rangle ))\\&={\frac {1}{2}}((-1)^{f(0)}|0\rangle (|0\rangle -|1\rangle )+(-1)^{f(1)}|1\rangle (|0\rangle -|1\rangle ))\\&=(-1)^{f(0)}{\frac {1}{2}}\left(|0\rangle +(-1)^{f(0)\oplus f(1)}|1\rangle \right)(|0\rangle -|1\rangle ).\end{aligned}}} We ignore the last bit and the global phase and therefore have the state 1 2 ( | 0 ⟩ + ( − 1 ) f ( 0 ) ⊕ f ( 1 ) | 1 ⟩ ) . {\displaystyle {\frac {1}{\sqrt {2}}}(|0\rangle +(-1)^{f(0)\oplus f(1)}|1\rangle ).} Applying a Hadamard gate to this state we have 1 2 ( | 0 ⟩ + | 1 ⟩ + ( − 1 ) f ( 0 ) ⊕ f ( 1 ) | 0 ⟩ − ( − 1 ) f ( 0 ) ⊕ f ( 1 ) | 1 ⟩ ) = 1 2 ( ( 1 + ( − 1 ) f ( 0 ) ⊕ f ( 1 ) ) | 0 ⟩ + ( 1 − ( − 1 ) f ( 0 ) ⊕ f ( 1 ) ) | 1 ⟩ ) . 
{\displaystyle {\begin{aligned}&{\frac {1}{2}}(|0\rangle +|1\rangle +(-1)^{f(0)\oplus f(1)}|0\rangle -(-1)^{f(0)\oplus f(1)}|1\rangle )\\&={\frac {1}{2}}((1+(-1)^{f(0)\oplus f(1)})|0\rangle +(1-(-1)^{f(0)\oplus f(1)})|1\rangle ).\end{aligned}}} f ( 0 ) ⊕ f ( 1 ) = 0 {\displaystyle f(0)\oplus f(1)=0} if and only if we measure | 0 ⟩ {\displaystyle |0\rangle } and f ( 0 ) ⊕ f ( 1 ) = 1 {\displaystyle f(0)\oplus f(1)=1} if and only if we measure | 1 ⟩ {\displaystyle |1\rangle } . So with certainty we know whether f ( x ) {\displaystyle f(x)} is constant or balanced. == Deutsch–Jozsa algorithm Qiskit implementation == A simple example shows how the Deutsch–Jozsa algorithm can be implemented as a quantum circuit in Python using Qiskit, an open-source quantum computing software development framework by IBM. == See also == Bernstein–Vazirani algorithm == References == == External links == Deutsch's lecture about the Deutsch–Jozsa algorithm
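To make the interference argument above concrete, the following is a minimal sketch of the algorithm's state evolution written in plain NumPy rather than Qiskit, so it does not depend on any particular Qiskit API version; the helper names and the two test functions are illustrative choices, not part of the original presentation.

import numpy as np

def hadamard_n(n):
    # n-qubit Hadamard transform (H tensored with itself n times) as a 2^n x 2^n matrix
    H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    H = np.array([[1.0]])
    for _ in range(n):
        H = np.kron(H, H1)
    return H

def deutsch_jozsa(f, n):
    # Simulate the circuit for f: {0,1}^n -> {0,1} using the phase-kickback
    # form of the oracle, |x> -> (-1)^f(x) |x> (the |-> ancilla is left implicit).
    dim = 2 ** n
    state = np.ones(dim) / np.sqrt(dim)                    # after the first Hadamards
    state = np.array([(-1) ** f(x) for x in range(dim)]) * state
    state = hadamard_n(n) @ state                          # final Hadamards
    p_all_zeros = abs(state[0]) ** 2                       # probability of |0...0>
    return np.isclose(p_all_zeros, 1.0)                    # True -> judged constant

n = 3
constant_f = lambda x: 0                                   # constant function
balanced_f = lambda x: bin(x).count("1") % 2               # parity, balanced on {0,1}^n
print(deutsch_jozsa(constant_f, n))                        # True
print(deutsch_jozsa(balanced_f, n))                        # False

For a constant function the all-zeros outcome is obtained with probability 1, while for a balanced function (here, the parity function) that outcome has probability 0, matching the constructive and destructive interference described above.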
Wikipedia/Deutsch–Jozsa_algorithm
An interpretation of quantum mechanics is an attempt to explain how the mathematical theory of quantum mechanics might correspond to experienced reality. Quantum mechanics has held up to rigorous and extremely precise tests in an extraordinarily broad range of experiments. However, there exist a number of contending schools of thought over its interpretation. These views on interpretation differ on such fundamental questions as whether quantum mechanics is deterministic or stochastic, local or non-local, which elements of quantum mechanics can be considered real, and what the nature of measurement is, among other matters. While some variation of the Copenhagen interpretation is commonly presented in textbooks, many other interpretations have been developed. Despite nearly a century of debate and experiment, no consensus has been reached among physicists and philosophers of physics concerning which interpretation best "represents" reality. == History == The definition of quantum theorists' terms, such as wave function and matrix mechanics, progressed through many stages. For instance, Erwin Schrödinger originally viewed the electron's wave function as its charge density smeared across space, but Max Born reinterpreted the absolute square value of the wave function as the electron's probability density distributed across space;: 24–33  the Born rule, as it is now called, matched experiment, whereas Schrödinger's charge density view did not. The views of several early pioneers of quantum mechanics, such as Niels Bohr and Werner Heisenberg, are often grouped together as the "Copenhagen interpretation", though physicists and historians of physics have argued that this terminology obscures differences between the views so designated. Copenhagen-type ideas were never universally embraced, and challenges to a perceived Copenhagen orthodoxy gained increasing attention in the 1950s with the pilot-wave interpretation of David Bohm and the many-worlds interpretation of Hugh Everett III. The physicist N. David Mermin once quipped, "New interpretations appear every year. None ever disappear." (Mermin also coined the saying "Shut up and calculate" to describe many physicists' attitude to quantum theory, a remark which is often misattributed to Richard Feynman.) As a rough guide to development of the mainstream view during the 1990s and 2000s, a "snapshot" of opinions was collected in a poll by Schlosshauer et al. at the "Quantum Physics and the Nature of Reality" conference of July 2011. The authors reference a similarly informal poll carried out by Max Tegmark at the "Fundamental Problems in Quantum Theory" conference in August 1997. The main conclusion of the authors is that "the Copenhagen interpretation still reigns supreme", receiving the most votes in their poll (42%), alongside a rise to mainstream notability of the many-worlds interpretations: "The Copenhagen interpretation still reigns supreme here, especially if we lump it together with intellectual offsprings such as information-based interpretations and the quantum Bayesian interpretation. In Tegmark's poll, the Everett interpretation received 17% of the vote, which is similar to the number of votes (18%) in our poll." Some concepts originating from studies of interpretations have found more practical application in quantum information science.
== Interpretive challenges == Abstract, mathematical nature of quantum field theories: the mathematical structure of quantum mechanics is abstract and does not result in a single, clear interpretation of its quantities. Apparent indeterministic and irreversible processes: in classical field theory, a physical property at a given location in the field is readily derived. In most mathematical formulations of quantum mechanics, measurement (understood as an interaction with a given state) has a special role in the theory, as it is the sole process that can cause a nonunitary, irreversible evolution of the state. Role of the observer in determining outcomes. Copenhagen-type interpretations imply that the wavefunction is a calculational tool, and represents reality only immediately after a measurement performed by an observer. Everettian interpretations grant that all possible outcomes are real, and that measurement-type interactions cause a branching process in which each possibility is realised. Classically unexpected correlations between remote objects: entangled quantum systems, as illustrated in the EPR paradox, obey statistics that seem to violate principles of local causality by action at a distance. Complementarity of proffered descriptions: complementarity holds that no set of classical physical concepts can simultaneously refer to all properties of a quantum system. For instance, wave description A and particulate description B can each describe quantum system S, but not simultaneously. This implies the composition of physical properties of S does not obey the rules of classical propositional logic when using propositional connectives (see "Quantum logic"). Like contextuality, the "origin of complementarity lies in the non-commutativity of operators" that describe quantum objects. Contextual behaviour of systems locally: Quantum contextuality demonstrates that classical intuitions, in which properties of a system hold definite values independent of the manner of their measurement, fail even for local systems. Also, physical principles such as Leibniz's Principle of the identity of indiscernibles no longer apply in the quantum domain, signaling that most classical intuitions may be incorrect about the quantum world. == Influential interpretations == === Copenhagen interpretation === The Copenhagen interpretation is a collection of views about the meaning of quantum mechanics principally attributed to Niels Bohr and Werner Heisenberg. It is one of the oldest attitudes towards quantum mechanics, as features of it date to the development of quantum mechanics during 1925–1927, and it remains one of the most commonly taught. There is no definitive historical statement of what is the Copenhagen interpretation, and there were in particular fundamental disagreements between the views of Bohr and Heisenberg. For example, Heisenberg emphasized a sharp "cut" between the observer (or the instrument) and the system being observed,: 133  while Bohr offered an interpretation that is independent of a subjective observer or measurement or collapse, which relies on an "irreversible" or effectively irreversible process that imparts the classical behavior of "observation" or "measurement". Features common to Copenhagen-type interpretations include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule, and the principle of complementarity, which states certain pairs of complementary properties cannot all be observed or measured simultaneously. 
Moreover, properties only result from the act of "observing" or "measuring"; the theory avoids assuming definite values from unperformed experiments. Copenhagen-type interpretations hold that quantum descriptions are objective, in that they are independent of physicists' mental arbitrariness.: 85–90  The statistical interpretation of wavefunctions due to Max Born differs sharply from Schrödinger's original intent, which was to have a theory with continuous time evolution and in which wavefunctions directly described physical reality.: 24–33  === Many worlds === The many-worlds interpretation is an interpretation of quantum mechanics in which a universal wavefunction obeys the same deterministic, reversible laws at all times; in particular there is no (indeterministic and irreversible) wavefunction collapse associated with measurement. The phenomena associated with measurement are claimed to be explained by decoherence, which occurs when states interact with the environment. More precisely, the parts of the wavefunction describing observers become increasingly entangled with the parts of the wavefunction describing their experiments. Although all possible outcomes of experiments continue to lie in the wavefunction's support, the times at which they become correlated with observers effectively "split" the universe into mutually unobservable alternate histories. === Quantum information theories === Quantum informational approaches have attracted growing support. They subdivide into two kinds. Information ontologies, such as J. A. Wheeler's "it from bit". These approaches have been described as a revival of immaterialism. Interpretations where quantum mechanics is said to describe an observer's knowledge of the world, rather than the world itself. This approach has some similarity with Bohr's thinking. Collapse (also known as reduction) is often interpreted as an observer acquiring information from a measurement, rather than as an objective event. These approaches have been appraised as similar to instrumentalism. James Hartle writes, The state is not an objective property of an individual system but is that information, obtained from a knowledge of how a system was prepared, which can be used for making predictions about future measurements. ... A quantum mechanical state being a summary of the observer's information about an individual physical system changes both by dynamical laws, and whenever the observer acquires new information about the system through the process of measurement. The existence of two laws for the evolution of the state vector ... becomes problematical only if it is believed that the state vector is an objective property of the system ... The "reduction of the wavepacket" does take place in the consciousness of the observer, not because of any unique physical process which takes place there, but only because the state is a construct of the observer and not an objective property of the physical system. === Relational quantum mechanics === The essential idea behind relational quantum mechanics, following the precedent of special relativity, is that different observers may give different accounts of the same series of events: for example, to one observer at a given point in time, a system may be in a single, "collapsed" eigenstate, while to another observer at the same time, it may be in a superposition of two or more states. 
Consequently, if quantum mechanics is to be a complete theory, relational quantum mechanics argues that the notion of "state" describes not the observed system itself, but the relationship, or correlation, between the system and its observer(s). The state vector of conventional quantum mechanics becomes a description of the correlation of some degrees of freedom in the observer, with respect to the observed system. However, it is held by relational quantum mechanics that this applies to all physical objects, whether or not they are conscious or macroscopic. Any "measurement event" is seen simply as an ordinary physical interaction, an establishment of the sort of correlation discussed above. Thus the physical content of the theory has to do not with objects themselves, but the relations between them. === QBism === QBism, which originally stood for "quantum Bayesianism", is an interpretation of quantum mechanics that takes an agent's actions and experiences as the central concerns of the theory. This interpretation is distinguished by its use of a subjective Bayesian account of probabilities to understand the quantum mechanical Born rule as a normative addition to good decision-making. QBism draws from the fields of quantum information and Bayesian probability and aims to eliminate the interpretational conundrums that have beset quantum theory. QBism deals with common questions in the interpretation of quantum theory about the nature of wavefunction superposition, quantum measurement, and entanglement. According to QBism, many, but not all, aspects of the quantum formalism are subjective in nature. For example, in this interpretation, a quantum state is not an element of reality—instead it represents the degrees of belief an agent has about the possible outcomes of measurements. For this reason, some philosophers of science have deemed QBism a form of anti-realism. The originators of the interpretation disagree with this characterization, proposing instead that the theory more properly aligns with a kind of realism they call "participatory realism", wherein reality consists of more than can be captured by any putative third-person account of it. === Consistent histories === The consistent histories interpretation generalizes the conventional Copenhagen interpretation and attempts to provide a natural interpretation of quantum cosmology. The theory is based on a consistency criterion that allows the history of a system to be described so that the probabilities for each history obey the additive rules of classical probability. It is claimed to be consistent with the Schrödinger equation. According to this interpretation, the purpose of a quantum-mechanical theory is to predict the relative probabilities of various alternative histories (for example, of a particle). === Ensemble interpretation === The ensemble interpretation, also called the statistical interpretation, can be viewed as a minimalist interpretation. That is, it claims to make the fewest assumptions associated with the standard mathematics. It takes the statistical interpretation of Born to the fullest extent. The interpretation states that the wave function does not apply to an individual system – for example, a single particle – but is an abstract statistical quantity that only applies to an ensemble (a vast multitude) of similarly prepared systems or particles. 
In the words of Einstein: The attempt to conceive the quantum-theoretical description as the complete description of the individual systems leads to unnatural theoretical interpretations, which become immediately unnecessary if one accepts the interpretation that the description refers to ensembles of systems and not to individual systems. The most prominent current advocate of the ensemble interpretation is Leslie E. Ballentine, professor at Simon Fraser University, author of the textbook Quantum Mechanics, A Modern Development. === De Broglie–Bohm theory === The de Broglie–Bohm theory of quantum mechanics (also known as the pilot wave theory) is a theory proposed by Louis de Broglie and later extended by David Bohm to include measurements. Particles, which always have positions, are guided by the wavefunction. The wavefunction evolves according to the Schrödinger wave equation, and the wavefunction never collapses. The theory takes place in a single spacetime, is non-local, and is deterministic. The simultaneous determination of a particle's position and velocity is subject to the usual uncertainty principle constraint. The theory is considered to be a hidden-variable theory, and by embracing explicit non-locality it is consistent with Bell's theorem, which rules out only local hidden-variable theories. The measurement problem is resolved, since the particles have definite positions at all times. Collapse is explained as phenomenological. === Transactional interpretation === The transactional interpretation of quantum mechanics (TIQM) by John G. Cramer is an interpretation of quantum mechanics inspired by the Wheeler–Feynman absorber theory. It describes the collapse of the wave function as resulting from a time-symmetric transaction between a possibility wave from the source to the receiver (the wave function) and a possibility wave from the receiver to the source (the complex conjugate of the wave function). This interpretation of quantum mechanics is unique in that it views not only the wave function as a real entity, but also the complex conjugate of the wave function, which appears in the Born rule for calculating the expected value for an observable, as real. === Consciousness causes collapse === Eugene Wigner argued that human experimenter consciousness (or maybe even animal consciousness) was critical for the collapse of the wavefunction, but he later abandoned this interpretation after learning about quantum decoherence. Some specific proposals for consciousness-caused wave-function collapse have been shown to be unfalsifiable, and, more broadly, reasonable assumptions about consciousness lead to the same conclusion. === Quantum logic === Quantum logic can be regarded as a kind of propositional logic suitable for understanding the apparent anomalies regarding quantum measurement, most notably those concerning composition of measurement operations of complementary variables. This research area and its name originated in the 1936 paper by Garrett Birkhoff and John von Neumann, who attempted to reconcile some of the apparent inconsistencies of classical Boolean logic with the facts related to measurement and observation in quantum mechanics. === Modal interpretations of quantum theory === Modal interpretations of quantum mechanics were first conceived of in 1972 by Bas van Fraassen, in his paper "A formal approach to the philosophy of science".
Van Fraassen introduced a distinction between a dynamical state, which describes what might be true about a system and which always evolves according to the Schrödinger equation, and a value state, which indicates what is actually true about a system at a given time. The term "modal interpretation" now is used to describe a larger set of models that grew out of this approach. The Stanford Encyclopedia of Philosophy describes several versions, including proposals by Kochen, Dieks, Clifton, Dickson, and Bub. According to Michel Bitbol, Schrödinger's views on how to interpret quantum mechanics progressed through as many as four stages, ending with a non-collapse view that in respects resembles the interpretations of Everett and van Fraassen. Because Schrödinger subscribed to a kind of post-Machian neutral monism, in which "matter" and "mind" are only different aspects or arrangements of the same common elements, treating the wavefunction as ontic and treating it as epistemic became interchangeable. === Time-symmetric theories === Time-symmetric interpretations of quantum mechanics were first suggested by Walter Schottky in 1921. Several theories have been proposed that modify the equations of quantum mechanics to be symmetric with respect to time reversal. (See Wheeler–Feynman time-symmetric theory.) This creates retrocausality: events in the future can affect ones in the past, exactly as events in the past can affect ones in the future. In these theories, a single measurement cannot fully determine the state of a system (making them a type of hidden-variables theory), but given two measurements performed at different times, it is possible to calculate the exact state of the system at all intermediate times. The collapse of the wavefunction is therefore not a physical change to the system, just a change in our knowledge of it due to the second measurement. Similarly, they explain entanglement as not being a true physical state but just an illusion created by ignoring retrocausality. The point where two particles appear to "become entangled" is simply a point where each particle is being influenced by events that occur to the other particle in the future. Not all advocates of time-symmetric causality favour modifying the unitary dynamics of standard quantum mechanics. Thus a leading exponent of the two-state vector formalism, Lev Vaidman, states that the two-state vector formalism dovetails well with Hugh Everett's many-worlds interpretation. === Other interpretations === As well as the mainstream interpretations discussed above, a number of other interpretations have been proposed that have not made a significant scientific impact for whatever reason. These range from proposals by mainstream physicists to the more occult ideas of quantum mysticism. == Related concepts == Some ideas are discussed in the context of interpreting quantum mechanics but are not necessarily regarded as interpretations themselves. === Quantum Darwinism === Quantum Darwinism is a theory meant to explain the emergence of the classical world from the quantum world as due to a process of Darwinian natural selection induced by the environment interacting with the quantum system; where the many possible quantum states are selected against in favor of a stable pointer state. It was proposed in 2003 by Wojciech Zurek and a group of collaborators including Ollivier, Poulin, Paz and Blume-Kohout. 
The development of the theory is due to the integration of a number of Zurek's research topics pursued over the course of twenty-five years, including pointer states, einselection and decoherence. === Objective-collapse theories === Objective-collapse theories differ from the Copenhagen interpretation by regarding both the wave function and the process of collapse as ontologically objective (meaning these exist and occur independent of the observer). In objective theories, collapse occurs either randomly ("spontaneous localization") or when some physical threshold is reached, with observers having no special role. Thus, objective-collapse theories are realistic, indeterministic, no-hidden-variables theories. Standard quantum mechanics does not specify any mechanism of collapse; quantum mechanics would need to be extended if objective collapse is correct. The requirement for an extension means that objective-collapse theories are alternatives to quantum mechanics rather than interpretations of it. Examples include the Ghirardi–Rimini–Weber theory, the continuous spontaneous localization model, and the Penrose interpretation. == Comparisons == The most common interpretations are summarized in the table below. The values shown in the cells of the table are not without controversy, for the precise meanings of some of the concepts involved are unclear and, in fact, are themselves at the center of the controversy surrounding the given interpretation. For another table comparing interpretations of quantum theory, see reference. No experimental evidence exists that distinguishes among these interpretations. To that extent, the physical theory stands, and is consistent with itself and with reality. Nevertheless, designing experiments that would test the various interpretations is the subject of active research. Most of these interpretations have variants. For example, it is difficult to get a precise definition of the Copenhagen interpretation as it was developed and argued by many people. == The silent approach == Although interpretational opinions are openly and widely discussed today, that was not always the case. A notable exponent of a tendency of silence was Paul Dirac who once wrote: "The interpretation of quantum mechanics has been dealt with by many authors, and I do not want to discuss it here. I want to deal with more fundamental things." This position is not uncommon among practitioners of quantum mechanics. Similarly Richard Feynman wrote many popularizations of quantum mechanics without ever publishing about interpretation issues like quantum measurement. Others, like Nico van Kampen and Willis Lamb, have openly criticized non-orthodox interpretations of quantum mechanics. == See also == == References == == Sources == Bub, J.; Clifton, R. (1996). "A uniqueness theorem for interpretations of quantum mechanics". Studies in History and Philosophy of Modern Physics. 27B: 181–219. doi:10.1016/1355-2198(95)00019-4. Rudolf Carnap, 1939, "The interpretation of physics", in Foundations of Logic and Mathematics of the International Encyclopedia of Unified Science. Chicago, Illinois: University of Chicago Press. Dickson, M., 1994, "Wavefunction tails in the modal interpretation" in Hull, D., Forbes, M., and Burian, R., eds., Proceedings of the PSA 1: 366–376. East Lansing, Michigan: Philosophy of Science Association. --------, and Clifton, R., 1998, "Lorentz-invariance in modal interpretations" in Dieks, D. and Vermaas, P., eds., The Modal Interpretation of Quantum Mechanics.
Dordrecht: Kluwer Academic Publishers: 9–48. Fuchs, Christopher, 2002, "Quantum Mechanics as Quantum Information (and only a little more)". arXiv:quant-ph/0205039 --------, and A. Peres, 2000, "Quantum theory needs no 'interpretation'", Physics Today. Herbert, N., 1985. Quantum Reality: Beyond the New Physics. New York: Doubleday. ISBN 0-385-23569-0. Hey, Anthony, and Walters, P., 2003. The New Quantum Universe, 2nd ed. Cambridge University Press. ISBN 0-521-56457-3. Jackiw, Roman; Kleppner, D. (2000). "One Hundred Years of Quantum Physics". Science. 289 (5481): 893–898. arXiv:quant-ph/0008092. Bibcode:2000quant.ph..8092K. doi:10.1126/science.289.5481.893. PMID 17839156. S2CID 6604344. Max Jammer, 1966. The Conceptual Development of Quantum Mechanics. McGraw-Hill. --------, 1974. The Philosophy of Quantum Mechanics. Wiley & Sons. Al-Khalili, 2003. Quantum: A Guide for the Perplexed. London: Weidenfeld & Nicolson. de Muynck, W. M., 2002. Foundations of quantum mechanics, an empiricist approach. Dordrecht: Kluwer Academic Publishers. ISBN 1-4020-0932-1. Roland Omnès, 1999. Understanding Quantum Mechanics. Princeton, New Jersey: Princeton University Press. Karl Popper, 1963. Conjectures and Refutations. London: Routledge and Kegan Paul. The chapter "Three views Concerning Human Knowledge" addresses, among other things, instrumentalism in the physical sciences. Hans Reichenbach, 1944. Philosophic Foundations of Quantum Mechanics. University of California Press. Tegmark, Max; Wheeler, J. A. (2001). "100 Years of Quantum Mysteries". Scientific American. 284 (2): 68–75. Bibcode:2001SciAm.284b..68T. doi:10.1038/scientificamerican0201-68. S2CID 119375538. Bas van Fraassen, 1972, "A formal approach to the philosophy of science", in R. Colodny, ed., Paradigms and Paradoxes: The Philosophical Challenge of the Quantum Domain. Univ. of Pittsburgh Press: 303–366. John A. Wheeler and Wojciech Hubert Zurek (eds), Quantum Theory and Measurement, Princeton, New Jersey: Princeton University Press, ISBN 0-691-08316-9, LoC QC174.125.Q38 1983. == Further reading == Almost all authors below are professional physicists. David Z Albert, 1992. Quantum Mechanics and Experience. Cambridge, Massachusetts: Harvard University Press. ISBN 0-674-74112-9. John S. Bell, 1987. Speakable and Unspeakable in Quantum Mechanics. Cambridge University Press, ISBN 0-521-36869-3. The 2004 edition (ISBN 0-521-52338-9) includes two additional papers and an introduction by Alain Aspect. Dmitrii Ivanovich Blokhintsev, 1968. The Philosophy of Quantum Mechanics. D. Reidel Publishing Company. ISBN 90-277-0105-9. David Bohm, 1980. Wholeness and the Implicate Order. London: Routledge. ISBN 0-7100-0971-2. Adan Cabello (15 November 2004). "Bibliographic guide to the foundations of quantum mechanics and quantum information". arXiv:quant-ph/0012089. David Deutsch, 1997. The Fabric of Reality. London: Allen Lane. ISBN 0-14-027541-X; ISBN 0-7139-9061-9. Argues forcefully against instrumentalism. For general readers. F. J. Duarte (2014). Quantum Optics for Engineers. New York: CRC. ISBN 978-1439888537. Provides a pragmatic perspective on interpretations. For general readers. Bernard d'Espagnat, 1976. Conceptual Foundation of Quantum Mechanics, 2nd ed. Addison Wesley. ISBN 0-8133-4087-X. Bernard d'Espagnat, 1983. In Search of Reality. Springer. ISBN 0-387-11399-1. Bernard d'Espagnat, 2003. Veiled Reality: An Analysis of Quantum Mechanical Concepts. Westview Press. Bernard d'Espagnat, 2006. On Physics and Philosophy. 
Princetone, New Jersey: Princeton University Press. Arthur Fine, 1986. The Shaky Game: Einstein Realism and the Quantum Theory. Science and its Conceptual Foundations. Chicago, Illinois: University of Chicago Press. ISBN 0-226-24948-4. Ghirardi, Giancarlo, 2004. Sneaking a Look at God's Cards. Princeton, New Jersey: Princeton University Press. Gregg Jaeger (2009) Entanglement, Information, and the Interpretation of Quantum Mechanics. Springer. ISBN 978-3-540-92127-1. N. David Mermin (1990) Boojums all the way through. Cambridge University Press. ISBN 0-521-38880-5. Roland Omnès, 1994. The Interpretation of Quantum Mechanics. Princeton, New Jersey: Princeton University Press. ISBN 0-691-03669-1. Roland Omnès, 1999. Understanding Quantum Mechanics. Princeton, New Jersey: Princeton University Press. Roland Omnès, 1999. Quantum Philosophy: Understanding and Interpreting Contemporary Science. Princeton, New Jersey: Princeton University Press. Roger Penrose, 1989. The Emperor's New Mind. Oxford University Press. ISBN 0-19-851973-7. Especially chapter 6. Roger Penrose, 1994. Shadows of the Mind. Oxford University Press. ISBN 0-19-853978-9. Roger Penrose, 2004. The Road to Reality. New York: Alfred A. Knopf. Argues that quantum theory is incomplete. Lee Phillips, 2017. A brief history of quantum alternatives. Ars Technica. Styer, Daniel F.; Balkin, Miranda S.; Becker, Kathryn M.; Burns, Matthew R.; Dudley, Christopher E.; Forth, Scott T.; Gaumer, Jeremy S.; Kramer, Mark A.; et al. (March 2002). "Nine formulations of quantum mechanics" (PDF). American Journal of Physics. 70 (3): 288–297. Bibcode:2002AmJPh..70..288S. doi:10.1119/1.1445404. Baggott, Jim (25 April 2024). "'Shut up and calculate': how Einstein lost the battle to explain quantum reality". Nature. 629 (8010): 29–32. Bibcode:2024Natur.629...29B. doi:10.1038/d41586-024-01216-z. PMID 38664517. == External links == Stanford Encyclopedia of Philosophy: "Bohmian mechanics" by Sheldon Goldstein. "Collapse Theories." by Giancarlo Ghirardi. "Copenhagen Interpretation of Quantum Mechanics" by Jan Faye. "Everett's Relative State Formulation of Quantum Mechanics" by Jeffrey Barrett. "Many-Worlds Interpretation of Quantum Mechanics" by Lev Vaidman. "Modal Interpretation of Quantum Mechanics" by Michael Dickson and Dennis Dieks. "Philosophical Issues in Quantum Theory" by Wayne Myrvold. "Quantum-Bayesian and Pragmatist Views of Quantum Theory" by Richard Healey. "Quantum Entanglement and Information" by Jeffrey Bub. "Quantum mechanics" by Jenann Ismael. "Quantum Logic and Probability Theory" by Alexander Wilce. "Relational Quantum Mechanics" by Federico Laudisa and Carlo Rovelli. "The Role of Decoherence in Quantum Mechanics" by Guido Bacciagaluppi. Internet Encyclopedia of Philosophy: "Interpretations of Quantum Mechanics" by Peter J. Lewis. "Everettian Interpretations of Quantum Mechanics" by Christina Conroy.
Wikipedia/Interpretation_of_quantum_mechanics
Algorithmic cooling is an algorithmic method for transferring heat (or entropy) from some qubits to others or outside the system and into the environment, which results in a cooling effect. This method uses regular quantum operations on ensembles of qubits, and it can be shown that it can succeed beyond Shannon's bound on data compression. The phenomenon is a result of the connection between thermodynamics and information theory. The cooling itself is done in an algorithmic manner using ordinary quantum operations. The input is a set of qubits, and the output is a subset of qubits cooled to a desired threshold determined by the user. This cooling effect may have usages in initializing cold (highly pure) qubits for quantum computation and in increasing polarization of certain spins in nuclear magnetic resonance. Therefore, it can be used in the initializing process taking place before a regular quantum computation. == Overview == Quantum computers need qubits (quantum bits) on which they operate. Generally, in order to make the computation more reliable, the qubits must be as pure as possible, minimizing possible fluctuations. Since the purity of a qubit is related to von Neumann entropy and to temperature, making the qubits as pure as possible is equivalent to making them as cold as possible (or having as little entropy as possible). One method of cooling qubits is extracting entropy from them, thus purifying them. This can be done in two general ways: reversibly (namely, using unitary operations) or irreversibly (for example, using a heat bath). Algorithmic cooling is the name of a family of algorithms that are given a set of qubits and purify (cool) a subset of them to a desirable level. This can also be viewed in a probabilistic manner. Since qubits are two-level systems, they can be regarded as coins, unfair ones in general. Purifying a qubit means (in this context) making the coin as unfair as possible: increasing the difference between the probabilities for tossing different results as much as possible. Moreover, the entropy previously mentioned can be viewed using the prism of information theory, which assigns entropy to any random variable. The purification can, therefore, be considered as using probabilistic operations (such as classical logical gates and conditional probability) for minimizing the entropy of the coins, making them more unfair. The case in which the algorithmic method is reversible, such that the total entropy of the system is not changed, was first named "molecular scale heat engine", and is also named "reversible algorithmic cooling". This process cools some qubits while heating the others. It is limited by a variant of Shannon's bound on data compression and it can asymptotically reach quite close to the bound. A more general method, "irreversible algorithmic cooling", makes use of irreversible transfer of heat outside of the system and into the environment (and therefore may bypass the Shannon bound). Such an environment can be a heat bath, and the family of algorithms which use it is named "heat-bath algorithmic cooling". In this algorithmic process entropy is transferred reversibly to specific qubits (named reset spins) that are coupled with the environment much more strongly than others. After a sequence of reversible steps that let the entropy of these reset qubits increase they become hotter than the environment. Then the strong coupling results in a heat transfer (irreversibly) from these reset spins to the environment. 
The entire process may be repeated and may be applied recursively to reach low temperatures for some qubits. == Background == === Thermodynamics === Algorithmic cooling can be discussed using classical and quantum thermodynamics points of view. ==== Cooling ==== The classical interpretation of "cooling" is transferring heat from one object to the other. However, the same process can be viewed as entropy transfer. For example, if two gas containers that are both in thermal equilibrium with two different temperatures are put in contact, entropy will be transferred from the "hotter" object (with higher entropy) to the "colder" one. This approach can be used when discussing the cooling of an object whose temperature is not always intuitively defined, e.g. a single particle. Therefore, the process of cooling spins can be thought of as a process of transferring entropy between spins, or outside of the system. ==== Heat reservoir ==== The concept of heat reservoir is discussed extensively in classical thermodynamics (for instance in Carnot cycle). For the purposes of algorithmic cooling, it is sufficient to consider heat reservoirs, or "heat baths", as large objects whose temperature remains unchanged even when in contact with other ("normal" sized) objects. Intuitively, this can be pictured as a bath filled with room-temperature water that practically retains its temperature even when a small piece of hot metal is put in it. Using the entropy form of thinking from the previous subsection, an object which is considered hot (whose entropy is large) can transfer heat (and entropy) to a colder heat bath, thus lowering its own entropy. This process results in cooling. Unlike entropy transfer between two "regular" objects which preserves the entropy of the system, entropy transfer to a heat bath is normally regarded as non-preserving. This is because the bath is normally not considered as a part of the relevant system, due to its size. Therefore, when transferring entropy to a heat bath, one can essentially lower the entropy of their system, or equivalently, cool it. Continuing this approach, the goal of algorithmic cooling is to reduce as much as possible the entropy of the system of qubits, thus cooling it. === Quantum mechanics === ==== General introduction ==== Algorithmic cooling applies to quantum systems. Therefore, it is important to be familiar with both the core principles and the relevant notations. A qubit (or quantum bit) is a unit of information that can be in a superposition of two states, denoted as | 0 ⟩ {\displaystyle |0\rangle } and | 1 ⟩ {\displaystyle |1\rangle } . The general superposition can be written as | ψ ⟩ = α | 0 ⟩ + β | 1 ⟩ , {\displaystyle |\psi \rangle =\alpha |0\rangle +\beta |1\rangle ,} where | α | 2 + | β | 2 = 1 {\displaystyle |\alpha |^{2}+|\beta |^{2}=1} and α , β ∈ C {\displaystyle \alpha ,\beta \in \mathbb {C} } . If one measures the state of the qubit in the orthonormal basis composed of | 0 ⟩ {\displaystyle |0\rangle } and | 1 ⟩ {\displaystyle |1\rangle } , one gets the result | 0 ⟩ {\displaystyle |0\rangle } with probability | α | 2 {\displaystyle |\alpha |^{2}} and the result | 1 ⟩ {\displaystyle |1\rangle } with probability | β | 2 {\displaystyle |\beta |^{2}} . The above description is known as a quantum pure state. 
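As a small numerical illustration of the measurement rule just described (a sketch assuming NumPy; the particular amplitudes are arbitrary and chosen only for the example):

import numpy as np

# An arbitrary single-qubit pure state |psi> = alpha|0> + beta|1>
alpha = np.sqrt(0.7)
beta = np.sqrt(0.3) * np.exp(1j * np.pi / 5)
psi = np.array([alpha, beta])

print(np.vdot(psi, psi).real)   # 1.0, i.e. |alpha|^2 + |beta|^2 = 1
print(abs(alpha) ** 2)          # 0.7, probability of measuring |0>
print(abs(beta) ** 2)           # 0.3, probability of measuring |1>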
A general mixed quantum state can be prepared as a probability distribution over pure states, and is represented by a density matrix of the general form ρ = ∑ i p i | ψ i ⟩ ⟨ ψ i | {\textstyle \rho =\sum _{i}p_{i}|\psi _{i}\rangle \langle \psi _{i}|} , where each | ψ i ⟩ {\displaystyle |\psi _{i}\rangle } is a pure state (see ket-bra notations) and each p i {\displaystyle p_{i}} is the probability of | ψ i ⟩ {\displaystyle |\psi _{i}\rangle } in the distribution. The quantum states that play a major role in algorithmic cooling are mixed states in the diagonal form ρ = ( 1 + ε 2 0 0 1 − ε 2 ) {\displaystyle \rho ={\begin{pmatrix}{\frac {1+\varepsilon }{2}}&0\\0&{\frac {1-\varepsilon }{2}}\\\end{pmatrix}}} for ε ∈ [ − 1 , 1 ] {\displaystyle \varepsilon \in [-1,1]} . Essentially, this means that the state is the pure | 0 ⟩ {\displaystyle |0\rangle } state with probability 1 + ε 2 {\textstyle {\frac {1+\varepsilon }{2}}} and is pure | 1 ⟩ {\displaystyle |1\rangle } with probability 1 − ε 2 {\textstyle {\frac {1-\varepsilon }{2}}} . In the ket-bra notations, the density matrix is 1 + ε 2 | 0 ⟩ ⟨ 0 | + 1 − ε 2 | 1 ⟩ ⟨ 1 | {\textstyle {\frac {1+\varepsilon }{2}}|0\rangle \langle 0|+{\frac {1-\varepsilon }{2}}|1\rangle \langle 1|} . For ε = ± 1 {\displaystyle \varepsilon =\pm 1} the state is called pure, and for ε = 0 {\displaystyle \varepsilon =0} the state is called completely mixed (represented by the normalized identity matrix). The completely mixed state represents a uniform probability distribution over the states | 0 ⟩ {\displaystyle |0\rangle } and | 1 ⟩ {\displaystyle |1\rangle } . ==== Polarization or bias of a state ==== The state ρ {\displaystyle \rho } above is called ε {\displaystyle \varepsilon } -polarized, or ε {\displaystyle \varepsilon } -biased, since it deviates by ε {\displaystyle \varepsilon } in the diagonal entries from the completely mixed state. Another approach for the definition of bias or polarization is using Bloch sphere (or generally Bloch ball). Restricted to a diagonal density matrix, a state can be on the straight line connecting the antipodal points representing the states | 0 ⟩ {\displaystyle |0\rangle } and | 1 ⟩ {\displaystyle |1\rangle } ("north and south poles" of the sphere). In this approach, the ε {\displaystyle \varepsilon } parameter ( ε ∈ [ − 1 , 1 ] {\displaystyle \varepsilon \in [-1,1]} ) is exactly the distance (up to a sign) of the state from the center of the ball, which represents the completely mixed state. For ε = ± 1 {\displaystyle \varepsilon =\pm 1} the state is exactly on the poles and for ε = 0 {\displaystyle \varepsilon =0} the state is exactly in the center. A bias can be negative (for example − 1 2 {\textstyle -{\frac {1}{2}}} ), and in this case the state is in the middle between the center and the south pole. In the Pauli matrices representation form, an ε {\displaystyle \varepsilon } -biased quantum state is ρ = 1 2 ( 1 + ε 0 0 1 − ε ) = 1 2 ( I + ( 0 , 0 , ε ) ⋅ σ → ) = 1 2 ( I + ε σ z ) . {\displaystyle \rho ={\frac {1}{2}}{\begin{pmatrix}1+\varepsilon &0\\0&1-\varepsilon \end{pmatrix}}={\frac {1}{2}}\left(I+(0,0,\varepsilon )\cdot {\vec {\sigma }}\right)={\frac {1}{2}}(I+\varepsilon \sigma _{z}).} ==== Entropy ==== Since quantum systems are involved, the entropy used here is von Neumann entropy. 
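Before stating the closed form given just below, here is a short numerical sketch (assuming NumPy; the helper names are ours) that builds the ε-biased density matrix, recovers the bias as the expectation value of σz, and computes the von Neumann entropy directly from the eigenvalues:

import numpy as np

def biased_state(eps):
    # Diagonal density matrix of an eps-biased qubit: diag((1+eps)/2, (1-eps)/2)
    return np.diag([(1 + eps) / 2, (1 - eps) / 2])

def von_neumann_entropy(rho):
    # S(rho) = -Tr(rho log2 rho), computed from the eigenvalues (with 0 log 0 = 0)
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

sigma_z = np.diag([1.0, -1.0])
for eps in (0.0, 0.3, 0.9):
    rho = biased_state(eps)
    bias = np.trace(rho @ sigma_z).real          # recovers eps
    print(eps, bias, von_neumann_entropy(rho))

The printed entropies agree with the binary-entropy expression H(ε) stated in the next paragraph: 1 for the completely mixed state (ε = 0), decreasing toward 0 as ε approaches ±1.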
For a single qubit represented by the (diagonal) density matrix above, its entropy is H ( ε ) = − ( 1 + ε 2 log ⁡ 1 + ε 2 + 1 − ε 2 log ⁡ 1 − ε 2 ) {\displaystyle H(\varepsilon )=-\left({\frac {1+\varepsilon }{2}}\log {\frac {1+\varepsilon }{2}}+{\frac {1-\varepsilon }{2}}\log {\frac {1-\varepsilon }{2}}\right)} (where the logarithm is to base 2 {\displaystyle 2} ). This expression coincides with the entropy of an unfair coin with "bias" ε {\displaystyle \varepsilon } , meaning probability 1 + ε 2 {\textstyle {\frac {1+\varepsilon }{2}}} for tossing heads. A coin with bias ε = ± 1 {\displaystyle \varepsilon =\pm 1} is deterministic with zero entropy, and a coin with bias ε = 0 {\displaystyle \varepsilon =0} is fair with maximal entropy ( H ( ε = 0 ) = log ⁡ 2 = 1 ) {\displaystyle H(\varepsilon =0)=\log 2=1)} . The relation between the coins approach and von Neumann entropy is an example of the relation between entropy in thermodynamics and in information theory. == Intuition == An intuition for this family of algorithms can come from various fields and mindsets, which are not necessarily quantum. This is due to the fact that these algorithms do not explicitly use quantum phenomena in their operations or analysis, and mainly rely on information theory. Therefore, the problem can be inspected from a classical (physical, computational, etc.) point of view. === Physics === The physical intuition for this family of algorithms comes from classical thermodynamics. ==== Reversible case ==== The basic scenario is an array of qubits with equal initial biases. This means that the array contains small thermodynamic systems, each with the same entropy. The goal is to transfer entropy from some qubits to others, eventually resulting in a sub-array of "cold" qubits and another sub-array of "hot" qubits (the sub-arrays being distinguished by their qubits' entropies, as in the background section). The entropy transfers are restricted to be reversible, which means that the total entropy is conserved. Therefore, reversible algorithmic cooling can be seen as an act of redistributing the entropy of all the qubits to obtain a set of colder ones while the others are hotter. To see the analogy from classical thermodynamics, two qubits can be considered as a gas container with two compartments, separated by a movable and heat-insulating partition. If external work is applied in order to move the partition in a reversible manner, the gas in one compartment is compressed, resulting in higher temperature (and entropy), while the gas in the other is expanding, similarly resulting in lower temperature (and entropy). Since it is reversible, the opposite action can be done, returning the container and the gases to the initial state. The entropy transfer here is analogous to the entropy transfer in algorithmic cooling, in the sense that by applying external work entropy can be transferred reversibly between qubits. ==== Irreversible case ==== The basic scenario remains the same, however an additional object is present – a heat bath. This means that entropy can be transferred from the qubits to an external reservoir and some operations can be irreversible, which can be used for cooling some qubits without heating the others. In particular, hot qubits (hotter than the bath) that were on the receiving side of reversible entropy transfer can be cooled by letting them interact with the heat bath. 
The classical analogy for this situation is the Carnot refrigerator, specifically the stage in which the engine is in contact with the cold reservoir and heat (and entropy) flows from the engine to the reservoir. === Information theory === The intuition for this family of algorithms can come from an extension of von Neumann's solution for the problem of obtaining fair results from a biased coin. In this approach to algorithmic cooling, the bias of the qubits is merely a probability bias, or the "unfairness" of a coin. == Applications == Two typical applications that require a large number of pure qubits are quantum error correction (QEC) and ensemble computing. In realizations of quantum computing (implementing and applying the algorithms on actual qubits), algorithmic cooling has been used in implementations based on optical lattices. In addition, algorithmic cooling can be applied to in vivo magnetic resonance spectroscopy. === Quantum error correction === Quantum error correction is a quantum algorithm for protection from errors. The algorithm operates on the relevant qubits (which operate within the computation) and needs a supply of new pure qubits for each round. This requirement can be weakened to purity above a certain threshold instead of requiring fully pure qubits. For this, algorithmic cooling can be used to produce qubits with the desired purity for quantum error correction. === Ensemble computing === Ensemble computing is a computational model that uses a macroscopic number of identical computers. Each computer contains a certain number of qubits, and the computational operations are performed simultaneously on all the computers. The output of the computation can be obtained by measuring the state of the entire ensemble, which would be the average output of each computer in it. Since the number of computers is macroscopic, the output signal is easier to detect and measure than the output signal of each single computer. This model is widely used in NMR quantum computing: each computer is represented by a single (identical) molecule, and the qubits of each computer are the nuclear spins of its atoms. The obtained (averaged) output is a detectable magnetic signal. === NMR spectroscopy === Nuclear magnetic resonance spectroscopy (sometimes called MRS, magnetic resonance spectroscopy) is a non-invasive technique related to MRI (magnetic resonance imaging) for analyzing metabolic changes in vivo (from Latin: "within the living organism"), which can potentially be used for diagnosing brain tumors, Parkinson's disease, depression, etc. It uses some magnetic properties of the relevant metabolites to measure their concentrations in the body, which are correlated with certain diseases. For example, the difference between the concentrations of the metabolites glutamate and glutamine can be linked to some stages of neurodegenerative diseases, such as Alzheimer's disease. Some uses of MRS focus on the carbon atoms of the metabolites (see carbon-13 nuclear magnetic resonance). One major reason for this is the presence of carbon in a large portion of all tested metabolites. Another reason is the ability to label certain metabolites with the 13C isotope, which is easier to measure than the commonly used hydrogen atoms, mainly because of its magnetic properties (such as its gyromagnetic ratio). In MRS, the nuclear spins of the atoms of the metabolites are required to have a certain degree of polarization for the spectroscopy to succeed.
Algorithmic cooling can be applied in vivo, increasing the resolution and precision of the MRS. Realizations (not in vivo) of algorithmic cooling on metabolites with the 13C isotope have been shown to increase the polarization of 13C in amino acids and other metabolites. MRS can be used to obtain biochemical information about certain body tissues in a non-invasive manner. This means that the operation must be carried out at room temperature. Some methods of increasing the polarization of spins (such as hyperpolarization, and in particular dynamic nuclear polarization) are not able to operate under this condition, since they require a cold environment (a typical value is 1 K, about -272 degrees Celsius). On the other hand, algorithmic cooling can operate at room temperature and be used in MRS in vivo, while methods that require lower temperatures can be used on biopsies, outside of the living body. == Reversible algorithmic cooling - basic compression subroutine == The algorithm operates on an array of equally (and independently) biased qubits. After the algorithm transfers heat (and entropy) from some qubits to the others, the resulting qubits are rearranged in increasing order of bias. Then this array is divided into two sub-arrays: "cold" qubits (with bias exceeding a certain threshold chosen by the user) and "hot" qubits (with bias lower than that threshold). Only the "cold" qubits are used for further quantum computation. The basic procedure is called "Basic Compression Subroutine" or "3 Bit Compression". The reversible case can be demonstrated on 3 qubits, using the probabilistic approach. Each qubit is represented by a "coin" (two-level system) whose sides are labeled 0 and 1, and with a certain bias: each coin independently has bias ε {\displaystyle \varepsilon } , meaning probability 1 + ε 2 {\displaystyle {\frac {1+\varepsilon }{2}}} for tossing 0. The coins are A , B , C {\displaystyle A,B,C} and the goal is to use coins B , C {\displaystyle B,C} to cool coin (qubit) A {\displaystyle A} . The procedure: Toss coins A , B , C {\displaystyle A,B,C} independently. Apply C-NOT on B , C {\displaystyle B,C} . Use coin B {\displaystyle B} to condition a C-SWAP of coins A , C {\displaystyle A,C} . After this procedure, the average (expected value) of the bias of coin A {\displaystyle A} is, to leading order, ε new average = 3 2 ε {\textstyle \varepsilon _{\text{new}}^{\text{average}}={\frac {3}{2}}\varepsilon } . === C-NOT step === Coins B , C {\displaystyle B,C} are used for the C-NOT operation, also known as XOR (exclusive or). The operation is applied in the following manner: A new = A , B new = B ⊕ C , C new = C {\displaystyle A_{\text{new}}=A,B_{\text{new}}=B\oplus C,C_{\text{new}}=C} , which means that B ⊕ C {\displaystyle B\oplus C} is computed and replaces the old value of B {\displaystyle B} , while A , C {\displaystyle A,C} remain unchanged. More specifically, the following operation is applied: if the result of coin C {\displaystyle C} is 1, flip coin B {\displaystyle B} without looking at the result; else (the result of coin C {\displaystyle C} is 0), do nothing (still without looking at the result of B {\displaystyle B} ). Now, the result of coin B new {\displaystyle B_{\text{new}}} is checked (without looking at A new , C new {\displaystyle A_{\text{new}},C_{\text{new}}} ). Classically, this means that the result of coin C {\displaystyle C} must be "forgotten" (cannot be used anymore).
This is somewhat problematic classically, because the result of coin C {\displaystyle C} is no longer probabilistic; however, the equivalent quantum operators (which are the ones that are actually used in realizations and implementations of the algorithm) are capable of doing so. After the C-NOT operation is over, the bias of coin C new {\displaystyle C_{\text{new}}} is computed using conditional probability: If B new = 0 {\displaystyle B_{\text{new}}=0} (meaning B = C {\displaystyle B=C} ): P ( C new = 0 | B = C ) = P ( B = C = 0 ) P ( B = C ) = ( 1 + ε ) 2 4 ( 1 + ε ) 2 4 + ( 1 − ε ) 2 4 = ( 1 + ε ) 2 4 1 + ε 2 2 = 1 + 2 ε 1 + ε 2 2 . {\displaystyle P(C_{\text{new}}=0|B=C)={\frac {P(B=C=0)}{P(B=C)}}={\frac {\frac {(1+\varepsilon )^{2}}{4}}{{\frac {(1+\varepsilon )^{2}}{4}}+{\frac {(1-\varepsilon )^{2}}{4}}}}={\frac {\frac {(1+\varepsilon )^{2}}{4}}{\frac {1+\varepsilon ^{2}}{2}}}={\frac {1+{\frac {2\varepsilon }{1+\varepsilon ^{2}}}}{2}}.} Therefore, the new bias of coin C new {\displaystyle C_{\text{new}}} is 2 ε 1 + ε 2 {\displaystyle {\frac {2\varepsilon }{1+\varepsilon ^{2}}}} . If B new = 1 {\displaystyle B_{\text{new}}=1} (meaning B ≠ C {\displaystyle B\neq C} ): P ( C new = 0 | B ≠ C ) = P ( C = 0 , B = 1 ) P ( B ≠ C ) = 1 + ε 2 1 − ε 2 1 + ε 2 1 − ε 2 + 1 − ε 2 1 + ε 2 = 1 − ε 2 4 1 − ε 2 2 = 1 2 . {\displaystyle P(C_{\text{new}}=0|B\neq C)={\frac {P(C=0,B=1)}{P(B\neq C)}}={\frac {{\frac {1+\varepsilon }{2}}{\frac {1-\varepsilon }{2}}}{{\frac {1+\varepsilon }{2}}{\frac {1-\varepsilon }{2}}+{\frac {1-\varepsilon }{2}}{\frac {1+\varepsilon }{2}}}}={\frac {\frac {1-\varepsilon ^{2}}{4}}{\frac {1-\varepsilon ^{2}}{2}}}={\frac {1}{2}}.} Therefore, the new bias of coin C new {\displaystyle C_{\text{new}}} is 0 {\displaystyle 0} . === C-SWAP step === Coins A new , B new , C new {\displaystyle A_{\text{new}},B_{\text{new}},C_{\text{new}}} are used for C-SWAP operation. The operation is applied in the following manner: A ′ = A new B new + B new ¯ C new , B ′ = B new , C ′ = A new B new ¯ + B new C new , {\displaystyle A'=A_{\text{new}}B_{\text{new}}+{\overline {B_{\text{new}}}}C_{\text{new}},\;B'=B_{\text{new}},\;C'=A_{\text{new}}{\overline {B_{\text{new}}}}+B_{\text{new}}C_{\text{new}},} which means that A new , C new {\displaystyle A_{\text{new}},C_{\text{new}}} are swapped if B new = 0 {\displaystyle B_{\text{new}}=0} . After the C-SWAP operation is over: If B new = 0 {\displaystyle B_{\text{new}}=0} : coins A new {\displaystyle A_{\text{new}}} and C new {\displaystyle C_{\text{new}}} have been swapped, hence coin A ′ {\displaystyle A'} is now 2 ε 1 + ε 2 {\textstyle {\frac {2\varepsilon }{1+\varepsilon ^{2}}}} -biased and coin C ′ {\displaystyle C'} is ε {\displaystyle \varepsilon } -biased. Else ( B new = 1 {\displaystyle B_{\text{new}}=1} ): coin A ′ {\displaystyle A'} remains unchanged (still of bias ε {\displaystyle \varepsilon } ) and coin C ′ {\displaystyle C'} remains with bias 0 {\displaystyle 0} . In this case, coin C ′ {\displaystyle C'} can be discarded from the system, as it is too "hot" (its bias is too low, or, equivalently, its entropy is too high). 
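The conditional biases just derived, and the average computed in the next paragraph, can be reproduced empirically with a short Monte-Carlo sketch of the coin procedure (assuming NumPy; the variable names and the sample size are illustrative choices):

import numpy as np

rng = np.random.default_rng(0)
eps, shots = 0.2, 1_000_000

def toss(size):
    # eps-biased coin: 0 with probability (1+eps)/2, 1 otherwise
    return (rng.random(size) >= (1 + eps) / 2).astype(int)

A, B, C = toss(shots), toss(shots), toss(shots)

B_new = B ^ C                      # C-NOT step: B_new = B xor C, A and C unchanged
swap = B_new == 0                  # C-SWAP step: swap A and C whenever B_new = 0
A_final = np.where(swap, C, A)

bias = lambda x: 1 - 2 * x.mean()  # bias = P(0) - P(1)
print(bias(A_final[swap]), 2 * eps / (1 + eps ** 2))   # branch with B_new = 0
print(bias(A_final[~swap]), eps)                       # branch with B_new = 1
print(bias(A_final), 1.5 * eps - eps ** 3 / 2)         # overall average bias of A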
The average bias of coin A ′ {\displaystyle A'} can be calculated by looking at those two cases, using the final bias in each case and the probability of each case: ε new average = P ( B new = 0 ) ⋅ 2 ε 1 + ε 2 + P ( B new = 1 ) ⋅ ε = ( ( 1 + ε ) 2 4 + ( 1 − ε ) 2 4 ) ⋅ 2 ε 1 + ε 2 + ( 1 + ε 2 1 − ε 2 + 1 − ε 2 1 + ε 2 ) ⋅ ε = 3 ε 2 − ε 3 2 {\displaystyle \varepsilon _{\text{new}}^{\text{average}}=P(B_{\text{new}}=0)\cdot {\frac {2\varepsilon }{1+\varepsilon ^{2}}}+P(B_{\text{new}}=1)\cdot \varepsilon =\left({\frac {(1+\varepsilon )^{2}}{4}}+{\frac {(1-\varepsilon )^{2}}{4}}\right)\cdot {\frac {2\varepsilon }{1+\varepsilon ^{2}}}+\left({\frac {1+\varepsilon }{2}}{\frac {1-\varepsilon }{2}}+{\frac {1-\varepsilon }{2}}{\frac {1+\varepsilon }{2}}\right)\cdot \varepsilon ={\frac {3\varepsilon }{2}}-{\frac {\varepsilon ^{3}}{2}}} Using the approximation ε ≪ 1 {\displaystyle \varepsilon \ll 1} , the new average bias of coin A ′ {\displaystyle A'} is ε new average = 3 2 ε {\textstyle \varepsilon _{\text{new}}^{\text{average}}={\frac {3}{2}}\varepsilon } . Therefore, these two steps increase the polarization of coin A {\displaystyle A} on average. === Alternative explanation: quantum operations === The algorithm can be written using quantum operations on qubits, as opposed to the classical treatment. In particular, the C-NOT and C-SWAP steps can be replaced by a single unitary quantum operator that operates on the 3 qubits. Although this operation changes qubits B , C {\displaystyle B,C} in a different manner than the two classical steps, it yields the same final bias for qubit A {\displaystyle A} . The operator U {\displaystyle U} can be uniquely defined by its action on the computational basis of the Hilbert space of 3 qubits: | 000 ⟩ ↦ | 000 ⟩ , {\displaystyle |000\rangle \mapsto |000\rangle ,} | 001 ⟩ ↦ | 001 ⟩ , {\displaystyle |001\rangle \mapsto |001\rangle ,} | 010 ⟩ ↦ | 010 ⟩ , {\displaystyle |010\rangle \mapsto |010\rangle ,} | 011 ⟩ ↦ | 100 ⟩ , {\displaystyle |011\rangle \mapsto |100\rangle ,} | 100 ⟩ ↦ | 011 ⟩ , {\displaystyle |100\rangle \mapsto |011\rangle ,} | 101 ⟩ ↦ | 101 ⟩ , {\displaystyle |101\rangle \mapsto |101\rangle ,} | 110 ⟩ ↦ | 110 ⟩ , {\displaystyle |110\rangle \mapsto |110\rangle ,} | 111 ⟩ ↦ | 111 ⟩ . {\displaystyle |111\rangle \mapsto |111\rangle .} In matrix form, this operator is the identity matrix of size 8, except that the 4th and 5th rows are swapped. The result of this operation can be obtained by writing the product state of the 3 qubits, ρ A , B , C = ρ A ⊗ ρ B ⊗ ρ C {\displaystyle \rho _{A,B,C}=\rho _{A}\otimes \rho _{B}\otimes \rho _{C}} , and applying U {\displaystyle U} on it. Afterwards, the bias of qubit A {\displaystyle A} can be calculated by projecting its state on the state | 0 ⟩ {\displaystyle |0\rangle } (without projecting qubits B , C {\displaystyle B,C} ) and taking the trace of the result (see density matrix measurement): 1 + ε new average 2 = tr ⁡ [ ( P 0 ⊗ I ⊗ I ) ( U ρ A , B , C U † ) ] = 1 + 3 ε 2 − ε 3 2 2 , {\displaystyle {\frac {1+\varepsilon _{\text{new}}^{\text{average}}}{2}}=\operatorname {tr} [(P_{0}\otimes I\otimes I)(U\rho _{A,B,C}U^{\dagger })]={\frac {1+{\frac {3\varepsilon }{2}}-{\frac {\varepsilon ^{3}}{2}}}{2}},} where P 0 = ( 1 0 0 0 ) = | 0 ⟩ ⟨ 0 | {\displaystyle P_{0}={\begin{pmatrix}1&0\\0&0\end{pmatrix}}=|0\rangle \langle 0|} is the projection on the state | 0 ⟩ {\displaystyle |0\rangle } . 
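This calculation can be reproduced numerically. The following NumPy sketch (the value of ε and the variable names are illustrative assumptions, not from the original sources) builds the product state ρ_A ⊗ ρ_B ⊗ ρ_C, applies the permutation matrix U described above (the 8×8 identity with the 4th and 5th rows swapped), and extracts the new bias of qubit A:

```python
import numpy as np

def bias_state(eps):
    """Single-qubit density matrix with bias eps: diag((1+eps)/2, (1-eps)/2)."""
    return np.diag([(1 + eps) / 2, (1 - eps) / 2])

eps = 0.05                                   # illustrative bias value (assumption)
rho = np.kron(np.kron(bias_state(eps), bias_state(eps)), bias_state(eps))

# U is the 8x8 identity with rows 4 and 5 (i.e. |011> and |100>) swapped.
U = np.eye(8)
U[[3, 4]] = U[[4, 3]]

rho_new = U @ rho @ U.conj().T

# Project qubit A on |0> (leaving qubits B and C untouched) and take the trace.
P0 = np.diag([1.0, 0.0])
p0 = np.trace(np.kron(P0, np.eye(4)) @ rho_new).real

new_bias = 2 * p0 - 1
print(new_bias, 1.5 * eps - 0.5 * eps**3)    # the two printed values agree
```

The two printed numbers agree, confirming the bias 3ε/2 − ε³/2 derived above.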
Again, using the approximation ε ≪ 1 {\displaystyle \varepsilon \ll 1} , the new average bias of coin A {\displaystyle A} is ε new average = 3 2 ε {\textstyle \varepsilon _{\text{new}}^{\text{average}}={\frac {3}{2}}\varepsilon } . == Heat-bath algorithmic cooling (irreversible algorithmic cooling) == The irreversible case is an extension of the reversible case: it uses the reversible algorithm as a subroutine. The irreversible algorithm contains another procedure called "Refresh" and extends the reversible one by using a heat bath. This allows for cooling certain qubits (called "reset qubits") without affecting the others, which results in an overall cooling of all the qubits as a system. The cooled reset qubits are used for cooling the rest (called "computational qubits") by applying a compression on them which is similar to the basic compression subroutine from the reversible case. The "insulation" of the computational qubits from the heat bath is a theoretical idealization that does not always hold when implementing the algorithm. However, with a proper choice of the physical implementation of each type of qubit, this assumption fairly holds. There are many different versions of this algorithm, with different uses of the reset qubits and different achievable biases. The common idea behind them can be demonstrated using three qubits: two computational qubits A , B {\displaystyle A,B} and one reset qubit C {\displaystyle C} . Each of the three qubits is initially in a completely mixed state with bias 0 {\displaystyle 0} (see the background section). The following steps are then applied: Refresh: the reset qubit C {\displaystyle C} interacts with the heat bath. Compression: a reversible compression (entropy transfer) is applied on the three qubits. Each round of the algorithm consists of three iterations, and each iteration consists of these two steps (refresh, and then compression). The compression step in each iteration is slightly different, but its goal is to sort the qubits in descending order of bias, so that the reset qubit would have the smallest bias (namely, the highest temperature) of all qubits. This serves two goals: Transferring as much entropy as possible away from the computational qubits. Transferring as much entropy as possible away from the whole system (and in particular the reset qubit) and into the bath in the following refresh step. When writing the density matrices after each iteration, the compression step in the 1st round can be effectively treated as follows: 1st iteration: swap qubit A {\displaystyle A} with the previously refreshed reset qubit C {\displaystyle C} . 2nd iteration: swap qubit B {\displaystyle B} with the previously refreshed reset qubit C {\displaystyle C} . 3rd iteration: boost the bias of qubit A {\displaystyle A} . The description of the compression step in the following rounds depends on the state of the system before the round has begun and may be more complicated than the above description. In this illustrative description of the algorithm, the boosted bias of qubit A {\displaystyle A} (obtained after the end of the first round) is 3 ε b 2 − ε b 3 2 {\textstyle {\frac {3\varepsilon _{b}}{2}}-{\frac {\varepsilon _{b}^{3}}{2}}} , where ε b {\displaystyle \varepsilon _{b}} is the bias of the qubits within the heat bath. 
This result is obtained after the last compression step; just before this step, the qubits were each ε b {\displaystyle \varepsilon _{b}} -biased, which is exactly the state of the qubits before the reversible algorithm is applied. === Refresh step === The contact that is established between the reset qubit and the heat bath can be modeled in several possible ways: A physical interaction between two thermodynamic systems, which eventually results in a reset qubit whose temperature is identical to the bath temperature (equivalently - with bias equal to the bias of the qubits in the bath, ε b {\displaystyle \varepsilon _{b}} ). A mathematical trace-out on the reset qubit, followed by taking the system in a product state with a fresh new qubit from the bath. This means that we lose the former reset qubit and gain a refreshed new one. Formally, this can be written as ρ new = tr C ⁡ ( ρ ) ⊗ ρ ε b {\displaystyle \rho _{\text{new}}=\operatorname {tr} _{C}(\rho )\otimes \rho _{\varepsilon _{b}}} , where ρ new {\displaystyle \rho _{\text{new}}} is the new density matrix (after the operation is held), tr C ⁡ ( ρ ) {\displaystyle \operatorname {tr} _{C}(\rho )} is the partial trace operation on the reset qubit C {\displaystyle C} , and ρ ε b {\displaystyle \rho _{\varepsilon _{b}}} is the density matrix describing a (new) qubit from the bath, with bias ε b {\displaystyle \varepsilon _{b}} . In both ways, the result is a reset qubit whose bias is identical to the bias of the qubits in the bath. In addition, the resulted reset qubit is uncorrelated with the other ones, independently of the correlations between them before the refresh step was held. Therefore, the refresh step can be viewed as discarding the information about the current reset qubit and gaining information about a fresh new one from the bath. === Compression step === The goal of this step is to reversibly redistribute the entropy of all qubits, such that the biases of the qubits are in descending (or non-ascending) order. The operation is done reversibly in order to prevent the entropy of the entire system from increasing (as it cannot decrease in a closed system, see entropy). In terms of temperature, this step rearranges the qubits in ascending order of temperature, so that the reset qubits are the hottest. In the example of the three qubits A , B , C {\displaystyle A,B,C} , this means that after the compression is done, the bias of qubit A {\displaystyle A} is the highest and the bias of C {\displaystyle C} is the lowest. In addition, the compression is used for the cooling of the computational qubits. The state of the system will be denoted by ( ε A , ε B , ε C ) {\displaystyle (\varepsilon _{A},\varepsilon _{B},\varepsilon _{C})} if the qubits A , B , C {\displaystyle A,B,C} are uncorrelated with each other (namely, if the system is in a product state) and their corresponding biases are ε A , ε B , ε C {\displaystyle \varepsilon _{A},\varepsilon _{B},\varepsilon _{C}} . The compression can be described as a sort operation on the diagonal entries of the density matrix which describes the system. 
For instance, if the state of the system after a certain reset step is ( 2 ε b , 0 , ε b ) {\displaystyle (2\varepsilon _{b},0,\varepsilon _{b})} , then the compression operates on the state as follows: ρ A B C = 1 8 diag ⁡ ( ( 1 + 2 ε b ) ( 1 + ε b ) ( 1 + 2 ε b ) ( 1 − ε b ) ( 1 + 2 ε b ) ( 1 + ε b ) ( 1 + 2 ε b ) ( 1 − ε b ) ( 1 − 2 ε b ) ( 1 + ε b ) ( 1 − 2 ε b ) ( 1 − ε b ) ( 1 − 2 ε b ) ( 1 + ε b ) ( 1 − 2 ε b ) ( 1 − ε b ) ) → compression ρ A B C ′ = 1 8 diag ⁡ ( ( 1 + 2 ε b ) ( 1 + ε b ) ( 1 + 2 ε b ) ( 1 + ε b ) ( 1 + 2 ε b ) ( 1 − ε b ) ( 1 + 2 ε b ) ( 1 − ε b ) ( 1 − 2 ε b ) ( 1 + ε b ) ( 1 − 2 ε b ) ( 1 + ε b ) ( 1 − 2 ε b ) ( 1 − ε b ) ( 1 − 2 ε b ) ( 1 − ε b ) ) {\displaystyle \rho _{ABC}={\frac {1}{8}}\operatorname {diag} {\begin{pmatrix}(1+2\varepsilon _{b})(1+\varepsilon _{b})\\(1+2\varepsilon _{b})(1-\varepsilon _{b})\\(1+2\varepsilon _{b})(1+\varepsilon _{b})\\(1+2\varepsilon _{b})(1-\varepsilon _{b})\\(1-2\varepsilon _{b})(1+\varepsilon _{b})\\(1-2\varepsilon _{b})(1-\varepsilon _{b})\\(1-2\varepsilon _{b})(1+\varepsilon _{b})\\(1-2\varepsilon _{b})(1-\varepsilon _{b})\end{pmatrix}}\xrightarrow {\text{compression}} \rho _{ABC}'={\frac {1}{8}}\operatorname {diag} {\begin{pmatrix}(1+2\varepsilon _{b})(1+\varepsilon _{b})\\(1+2\varepsilon _{b})(1+\varepsilon _{b})\\(1+2\varepsilon _{b})(1-\varepsilon _{b})\\(1+2\varepsilon _{b})(1-\varepsilon _{b})\\(1-2\varepsilon _{b})(1+\varepsilon _{b})\\(1-2\varepsilon _{b})(1+\varepsilon _{b})\\(1-2\varepsilon _{b})(1-\varepsilon _{b})\\(1-2\varepsilon _{b})(1-\varepsilon _{b})\end{pmatrix}}} This notation denotes a diagonal matrix whose diagonal entries are listed within the parentheses. The density matrices ρ A B C , ρ A B C ′ {\displaystyle \rho _{ABC},\rho _{ABC}'} represent the state of the system (including possible correlations between the qubits) before and after the compression step, respectively. In the above notations, the state after compression is ( 2 ε b , ε b , 0 ) {\displaystyle (2\varepsilon _{b},\varepsilon _{b},0)} . This sort operation is used for the rearrangement of the qubits in descending order of bias. As in the example, for some cases the sort operation can be described by a simpler operation, such as swap. However, the general form of the compression operation is a sort operation on the diagonal entries of the density matrix. For an intuitive demonstration of the compression step, the flow of the algorithm in the 1st round is presented below: 1st Iteration: After the refresh step, the state is ( 0 , 0 , ε b ) {\displaystyle (0,0,\varepsilon _{b})} . After the compression step (which swaps qubits A , C {\displaystyle A,C} ), the state is ( ε b , 0 , 0 ) {\displaystyle (\varepsilon _{b},0,0)} . 2nd Iteration: After the refresh step, the state is ( ε b , 0 , ε b ) {\displaystyle (\varepsilon _{b},0,\varepsilon _{b})} . After the compression step (which swaps qubits B , C {\displaystyle B,C} ), the state is ( ε b , ε b , 0 ) {\displaystyle (\varepsilon _{b},\varepsilon _{b},0)} . 3rd Iteration: After the refresh step, the state is ( ε b , ε b , ε b ) {\displaystyle (\varepsilon _{b},\varepsilon _{b},\varepsilon _{b})} . 
After the compression step (which boosts the bias of qubit A {\displaystyle A} ), the biases of the qubits are 3 ε b 2 − ε b 3 2 , ε b 2 + ε b 3 2 , ε b 2 + ε b 3 2 {\textstyle {\frac {3\varepsilon _{b}}{2}}-{\frac {\varepsilon _{b}^{3}}{2}},{\frac {\varepsilon _{b}}{2}}+{\frac {\varepsilon _{b}^{3}}{2}},{\frac {\varepsilon _{b}}{2}}+{\frac {\varepsilon _{b}^{3}}{2}}} , which can be approximated (to leading order) by 3 ε b 2 , ε b 2 , ε b 2 {\textstyle {\frac {3\varepsilon _{b}}{2}},{\frac {\varepsilon _{b}}{2}},{\frac {\varepsilon _{b}}{2}}} . Here, each bias is independently defined as the bias of the matching qubit when discarding the rest of the system (using partial trace), even when there are correlations between them. Therefore, this notation cannot fully describe the system, but can only be used as an intuitive demonstration of the steps of the algorithm. After the 1st round is over, the bias of the reset qubit ( ε b 2 + ε b 3 2 {\textstyle {\frac {\varepsilon _{b}}{2}}+{\frac {\varepsilon _{b}^{3}}{2}}} ) is smaller than the bias of the heat bath ( ε b {\displaystyle \varepsilon _{b}} ). This means that in the next refresh step (in the 2nd round of the algorithm), the reset qubit will be replaced by a fresh qubit with bias ε b {\displaystyle \varepsilon _{b}} : this cools the entire system, similarly to the previous refresh steps. Afterwards, the algorithm continues in a similar way. === General results === The number of rounds is not bounded: since the biases of the reset qubits asymptotically reach the bias of the bath after each round, the bias of the target computational qubit asymptotically reaches its limit as the algorithm proceeds. The target qubit is the computational qubit that the algorithm aims to cool the most. The "cooling limit" (the maximum bias the target qubit can reach) depends on the bias of the bath and the number of qubits of each kind in the system. If the number of the computational qubits (excluding the target one) is n ′ {\displaystyle n'} and the number of reset qubits is m {\displaystyle m} , then the cooling limit is ε max = ( 1 + ε b ) m 2 n ′ − ( 1 − ε b ) m 2 n ′ ( 1 + ε b ) m 2 n ′ + ( 1 − ε b ) m 2 n ′ {\textstyle \varepsilon _{\max }={\frac {(1+\varepsilon _{b})^{m2^{n'}}-(1-\varepsilon _{b})^{m2^{n'}}}{(1+\varepsilon _{b})^{m2^{n'}}+(1-\varepsilon _{b})^{m2^{n'}}}}} . In the case where m 2 n ′ ε b ≪ 1 {\displaystyle m2^{n'}\varepsilon _{b}\ll 1} , the maximal polarization that can be obtained is approximately m 2 n ′ ε b {\displaystyle m2^{n'}\varepsilon _{b}} , i.e. proportional to m 2 n ′ {\displaystyle m2^{n'}} . Otherwise, the maximal bias gets arbitrarily close to 1 {\displaystyle 1} . The number of rounds required in order to reach a certain bias depends on the desired bias, the bias of the bath and the number of qubits, and moreover varies between different versions of the algorithm. There are other theoretical results which give bounds on the number of iterations required to reach a certain bias. For example, if the bias of the bath is ε b ≪ 1 {\displaystyle \varepsilon _{b}\ll 1} , then the number of iterations required to cool a certain qubit to bias k ε b ≪ 1 {\displaystyle k\varepsilon _{b}\ll 1} is at least k 2 {\displaystyle k^{2}} . == References ==
Wikipedia/Algorithmic_cooling
The Quantum counting algorithm is a quantum algorithm for efficiently counting the number of solutions for a given search problem. The algorithm is based on the quantum phase estimation algorithm and on Grover's search algorithm. Counting problems are common in diverse fields such as statistical estimation, statistical physics, networking, etc. As for quantum computing, the ability to perform quantum counting efficiently is needed in order to use Grover's search algorithm (because running Grover's search algorithm requires knowing how many solutions exist). Moreover, this algorithm solves the quantum existence problem (namely, deciding whether any solution exists) as a special case. The algorithm was devised by Gilles Brassard, Peter Høyer and Alain Tapp in 1998. == The problem == Consider a finite set { 0 , 1 } n {\displaystyle \{0,1\}^{n}} of size N = 2 n {\displaystyle N=2^{n}} and a set B {\displaystyle B} of "solutions" (that is a subset of { 0 , 1 } n {\displaystyle \{0,1\}^{n}} ). Define: { f : { 0 , 1 } n → { 0 , 1 } f ( x ) = { 1 x ∈ B 0 x ∉ B {\displaystyle {\begin{cases}f:\left\{0,1\right\}^{n}\to \{0,1\}\\f(x)={\begin{cases}1&x\in B\\0&x\notin B\end{cases}}\end{cases}}} In other words, f {\displaystyle f} is the indicator function of B {\displaystyle B} . Calculate the number of solutions M = | f − 1 ( 1 ) | = | B | {\displaystyle M=\left\vert f^{-1}(1)\right\vert =\vert B\vert } . === Classical solution === Without any prior knowledge on the set of solutions B {\displaystyle B} (or the structure of the function f {\displaystyle f} ), a classical deterministic solution cannot perform better than Ω ( N ) {\displaystyle \Omega (N)} , because all the N {\displaystyle N} elements of { 0 , 1 } n {\displaystyle \{0,1\}^{n}} must be inspected (consider a case where the last element to be inspected is a solution). == The algorithm == === Setup === The input consists of two registers (namely, two parts): the upper p {\displaystyle p} qubits comprise the first register, and the lower n {\displaystyle n} qubits are the second register. === Create superposition === The initial state of the system is | 0 ⟩ ⊗ p | 0 ⟩ ⊗ n {\displaystyle |0\rangle ^{\otimes p}|0\rangle ^{\otimes n}} . After applying multiple bit Hadamard gate operation on each of the registers separately, the state of the first register is 1 2 p / 2 ( | 0 ⟩ + | 1 ⟩ ) ⊗ p {\displaystyle {\frac {1}{2^{p/2}}}(|0\rangle +|1\rangle )^{\otimes p}} and the state of the second register is 1 2 n / 2 ( | 0 ⟩ + | 1 ⟩ ) ⊗ n = 1 N ∑ x = 0 N − 1 | x ⟩ {\displaystyle {\frac {1}{2^{n/2}}}(|0\rangle +|1\rangle )^{\otimes n}={\frac {1}{\sqrt {N}}}\sum _{x=0}^{N-1}|x\rangle } an equal superposition state in the computational basis. === Grover operator === Because the size of the space is | { 0 , 1 } n | = 2 n = N {\displaystyle \left\vert \{0,1\}^{n}\right\vert =2^{n}=N} and the number of solutions is | B | = M {\displaystyle \left\vert B\right\vert =M} , we can define the normalized states:: 252  | α ⟩ = 1 N − M ∑ x ∉ B | x ⟩ , and | β ⟩ = 1 M ∑ x ∈ B | x ⟩ . {\displaystyle |\alpha \rangle ={\frac {1}{\sqrt {N-M}}}\sum _{x\notin B}{|x\rangle },\qquad {\text{and}}\qquad |\beta \rangle ={\frac {1}{\sqrt {M}}}\sum _{x\in B}{|x\rangle }.} Note that N − M N | α ⟩ + M N | β ⟩ = 1 N ∑ x = 0 N − 1 | x ⟩ , {\displaystyle {\sqrt {\frac {N-M}{N}}}|\alpha \rangle +{\sqrt {\frac {M}{N}}}|\beta \rangle ={\frac {1}{\sqrt {N}}}\sum _{x=0}^{N-1}{|x\rangle },} which is the state of the second register after the Hadamard transform. 
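As a quick numerical sanity check of this decomposition (a sketch; the values of n and the solution set B are arbitrary illustrations, not from the source), one can verify that the weighted combination of |α⟩ and |β⟩ reproduces the uniform superposition prepared by the Hadamard transform:

```python
import numpy as np

n = 3
N = 2 ** n
B = {1, 5, 6}            # an arbitrary illustrative set of "solutions" (assumption)
M = len(B)

alpha = np.array([0.0 if x in B else 1.0 for x in range(N)]) / np.sqrt(N - M)
beta = np.array([1.0 if x in B else 0.0 for x in range(N)]) / np.sqrt(M)

combined = np.sqrt((N - M) / N) * alpha + np.sqrt(M / N) * beta
uniform = np.full(N, 1 / np.sqrt(N))

print(np.allclose(combined, uniform))   # True: the decomposition holds
```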
Geometric visualization of Grover's algorithm shows that in the two-dimensional space spanned by | α ⟩ {\displaystyle |\alpha \rangle } and | β ⟩ {\displaystyle |\beta \rangle } , the Grover operator is a counterclockwise rotation; hence, it can be expressed as G = [ cos ⁡ θ − sin ⁡ θ sin ⁡ θ cos ⁡ θ ] {\displaystyle G={\begin{bmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \end{bmatrix}}} in the orthonormal basis { | α ⟩ , | β ⟩ } {\displaystyle \{|\alpha \rangle ,|\beta \rangle \}} .: 252 : 149  From the properties of rotation matrices we know that G {\displaystyle G} is a unitary matrix with the two eigenvalues e ± i θ {\displaystyle e^{\pm i\theta }} .: 253  === Estimating the value of θ === From here onwards, we follow the quantum phase estimation algorithm scheme: we apply controlled Grover operations followed by inverse quantum Fourier transform; and according to the analysis, we will find the best p {\displaystyle p} -bit approximation to the real number θ {\displaystyle \theta } (belonging to the eigenvalues e ± i θ {\displaystyle e^{\pm i\theta }} of the Grover operator) with probability higher than 4 π 2 {\displaystyle {\frac {4}{\pi ^{2}}}} .: 348 : 157  Note that the second register is actually in a superposition of the eigenvectors of the Grover operator (while in the original quantum phase estimation algorithm, the second register is the required eigenvector). This means that with some probability, we approximate θ {\displaystyle \theta } , and with some probability, we approximate 2 π − θ {\displaystyle 2\pi -\theta } ; those two approximations are equivalent.: 224–225  === Analysis === Assuming that the size N {\displaystyle N} of the space is at least twice the number of solutions (namely, assuming that M ≤ N 2 {\displaystyle M\leq {\tfrac {N}{2}}} ), a result of the analysis of Grover's algorithm is:: 254  sin ⁡ θ 2 = M N . {\displaystyle \sin {\frac {\theta }{2}}={\sqrt {\frac {M}{N}}}.} Thus, if we find θ {\displaystyle \theta } , we can also find the value of M {\displaystyle M} (because N {\displaystyle N} is known). The error | Δ M | N = | sin 2 ⁡ ( θ + Δ θ 2 ) − sin 2 ⁡ ( θ 2 ) | {\displaystyle {\frac {\vert \Delta M\vert }{N}}=\left\vert \sin ^{2}\left({\frac {\theta +\Delta \theta }{2}}\right)-\sin ^{2}\left({\frac {\theta }{2}}\right)\right\vert } is determined by the error within estimation of the value of θ {\displaystyle \theta } . The quantum phase estimation algorithm finds, with high probability, the best p {\displaystyle p} -bit approximation of θ {\displaystyle \theta } ; this means that if p {\displaystyle p} is large enough, we will have Δ θ ≈ 0 {\displaystyle \Delta \theta \approx 0} , hence | Δ M | ≈ 0 {\displaystyle \vert \Delta M\vert \approx 0} .: 263  == Uses == === Grover's search algorithm for an initially-unknown number of solutions === In Grover's search algorithm, the number of iterations that should be done is π 4 N M {\displaystyle {\frac {\pi }{4}}{\sqrt {\frac {N}{M}}}} .: 254  : 150  Thus, if N {\displaystyle N} is known and M {\displaystyle M} is calculated by the quantum counting algorithm, the number of iterations for Grover's algorithm is easily calculated. === Speeding up NP-complete problems === The quantum counting algorithm can be used to speed up solution to problems which are NP-complete. An example of an NP-complete problem is the Hamiltonian cycle problem, which is the problem of determining whether a graph G = ( V , E ) {\displaystyle G=(V,E)} has a Hamiltonian cycle. 
A simple solution to the Hamiltonian cycle problem is checking, for each ordering of the vertices of G {\displaystyle G} , whether it is a Hamiltonian cycle or not. Searching through all the possible orderings of the graph's vertices can be done with quantum counting followed by Grover's algorithm, achieving a square-root speedup, as in Grover's algorithm.: 264  This approach finds a Hamiltonian cycle (if one exists); for determining whether a Hamiltonian cycle exists, the quantum counting algorithm itself is sufficient (and even the quantum existence algorithm, described below, is sufficient). === Quantum existence problem === The quantum existence problem is a special case of quantum counting where we do not want to calculate the value of M {\displaystyle M} , but only wish to know whether M ≠ 0 {\displaystyle M\neq 0} or not.: 147  A trivial solution to this problem is to use the quantum counting algorithm directly: the algorithm yields M {\displaystyle M} , so by checking whether M ≠ 0 {\displaystyle M\neq 0} we get the answer to the existence problem. This approach involves some unnecessary overhead, because we are not interested in the exact value of M {\displaystyle M} . Quantum phase estimation can be optimized to eliminate this overhead.: 148  If one is not interested in controlling the error probability, then a setup with a small number of qubits in the upper register will not produce an accurate estimate of the value of θ {\displaystyle \theta } , but will suffice to determine whether M {\displaystyle M} equals zero or not.: 263  === Quantum relation testing problem === Quantum relation testing Q R T ( v a l u e , r e l a t i o n ) {\displaystyle QRT(value,relation)} is an extension of quantum existence testing: it decides whether at least one entry can be found in the database which fulfils the relation to a certain reference value. For example, Q R T ( 5 , > ) {\displaystyle QRT(5,>)} returns YES if the database contains any value larger than 5, and NO otherwise. Quantum relation testing combined with classical logarithmic search forms an efficient quantum min/max searching algorithm.: 152  == See also == Quantum phase estimation algorithm Grover's algorithm Counting problem (complexity) == References ==
Wikipedia/Quantum_counting_algorithm
In quantum computing, Grover's algorithm, also known as the quantum search algorithm, is a quantum algorithm for unstructured search that finds with high probability the unique input to a black box function that produces a particular output value, using just O ( N ) {\displaystyle O({\sqrt {N}})} evaluations of the function, where N {\displaystyle N} is the size of the function's domain. It was devised by Lov Grover in 1996. The analogous problem in classical computation would have a query complexity O ( N ) {\displaystyle O(N)} (i.e., the function would have to be evaluated O ( N ) {\displaystyle O(N)} times: there is no better approach than trying out all input values one after the other, which, on average, takes N / 2 {\displaystyle N/2} steps). Charles H. Bennett, Ethan Bernstein, Gilles Brassard, and Umesh Vazirani proved that any quantum solution to the problem needs to evaluate the function Ω ( N ) {\displaystyle \Omega ({\sqrt {N}})} times, so Grover's algorithm is asymptotically optimal. Since classical algorithms for NP-complete problems require exponentially many steps, and Grover's algorithm provides at most a quadratic speedup over the classical solution for unstructured search, this suggests that Grover's algorithm by itself will not provide polynomial-time solutions for NP-complete problems (as the square root of an exponential function is still an exponential, not a polynomial function). Unlike other quantum algorithms, which may provide exponential speedup over their classical counterparts, Grover's algorithm provides only a quadratic speedup. However, even quadratic speedup is considerable when N {\displaystyle N} is large, and Grover's algorithm can be applied to speed up broad classes of algorithms. Grover's algorithm could brute-force a 128-bit symmetric cryptographic key in roughly 2^64 iterations, or a 256-bit key in roughly 2^128 iterations. It may not be the case that Grover's algorithm poses a significantly increased risk to encryption over existing classical algorithms, however. == Applications and limitations == Grover's algorithm, along with variants like amplitude amplification, can be used to speed up a broad range of algorithms. In particular, algorithms for NP-complete problems which contain exhaustive search as a subroutine can be sped up by Grover's algorithm. The current theoretical best algorithm, in terms of worst-case complexity, for 3SAT is one such example. Generic constraint satisfaction problems also see quadratic speedups with Grover. These algorithms do not require that the input be given in the form of an oracle, since Grover's algorithm is being applied with an explicit function, e.g. the function checking that a set of bits satisfies a 3SAT instance. However, it is unclear whether Grover's algorithm could speed up best practical algorithms for these problems. Grover's algorithm can also give provable speedups for black-box problems in quantum query complexity, including element distinctness and the collision problem (solved with the Brassard–Høyer–Tapp algorithm). In these types of problems, one treats the oracle function f as a database, and the goal is to use the quantum query to this function as few times as possible. === Cryptography === Grover's algorithm essentially solves the task of function inversion. Roughly speaking, if we have a function y = f ( x ) {\displaystyle y=f(x)} that can be evaluated on a quantum computer, Grover's algorithm allows us to calculate x {\displaystyle x} when given y {\displaystyle y} .
Consequently, Grover's algorithm gives broad asymptotic speed-ups to many kinds of brute-force attacks on symmetric-key cryptography, including collision attacks and pre-image attacks. However, this may not necessarily be the most efficient algorithm since, for example, the Pollard's rho algorithm is able to find a collision in SHA-2 more efficiently than Grover's algorithm. === Limitations === Grover's original paper described the algorithm as a database search algorithm, and this description is still common. The database in this analogy is a table of all of the function's outputs, indexed by the corresponding input. However, this database is not represented explicitly. Instead, an oracle is invoked to evaluate an item by its index. Reading a full database item by item and converting it into such a representation may take a lot longer than Grover's search. To account for such effects, Grover's algorithm can be viewed as solving an equation or satisfying a constraint. In such applications, the oracle is a way to check the constraint and is not related to the search algorithm. This separation usually prevents algorithmic optimizations, whereas conventional search algorithms often rely on such optimizations and avoid exhaustive search. Fortunately, fast Grover's oracle implementation is possible for many constraint satisfaction and optimization problems. The major barrier to instantiating a speedup from Grover's algorithm is that the quadratic speedup achieved is too modest to overcome the large overhead of near-term quantum computers. However, later generations of fault-tolerant quantum computers with better hardware performance may be able to realize these speedups for practical instances of data. == Problem description == As input for Grover's algorithm, suppose we have a function f : { 0 , 1 , … , N − 1 } → { 0 , 1 } {\displaystyle f\colon \{0,1,\ldots ,N-1\}\to \{0,1\}} . In the "unstructured database" analogy, the domain represent indices to a database, and f(x) = 1 if and only if the data that x points to satisfies the search criterion. We additionally assume that only one index satisfies f(x) = 1, and we call this index ω. Our goal is to identify ω. We can access f with a subroutine (sometimes called an oracle) in the form of a unitary operator Uω that acts as follows: { U ω | x ⟩ = − | x ⟩ for x = ω , that is, f ( x ) = 1 , U ω | x ⟩ = | x ⟩ for x ≠ ω , that is, f ( x ) = 0. {\displaystyle {\begin{cases}U_{\omega }|x\rangle =-|x\rangle &{\text{for }}x=\omega {\text{, that is, }}f(x)=1,\\U_{\omega }|x\rangle =|x\rangle &{\text{for }}x\neq \omega {\text{, that is, }}f(x)=0.\end{cases}}} This uses the N {\displaystyle N} -dimensional state space H {\displaystyle {\mathcal {H}}} , which is supplied by a register with n = ⌈ log 2 ⁡ N ⌉ {\displaystyle n=\lceil \log _{2}N\rceil } qubits. This is often written as U ω | x ⟩ = ( − 1 ) f ( x ) | x ⟩ . {\displaystyle U_{\omega }|x\rangle =(-1)^{f(x)}|x\rangle .} Grover's algorithm outputs ω with probability at least 1/2 using O ( N ) {\displaystyle O({\sqrt {N}})} applications of Uω. This probability can be made arbitrarily large by running Grover's algorithm multiple times. If one runs Grover's algorithm until ω is found, the expected number of applications is still O ( N ) {\displaystyle O({\sqrt {N}})} , since it will only be run twice on average. === Alternative oracle definition === This section compares the above oracle U ω {\displaystyle U_{\omega }} with an oracle U f {\displaystyle U_{f}} . 
Uω is different from the standard quantum oracle for a function f. This standard oracle, denoted here as Uf, uses an ancillary qubit system. The operation then represents an inversion (NOT gate) on the main system conditioned by the value of f(x) from the ancillary system: { U f | x ⟩ | y ⟩ = | x ⟩ | ¬ y ⟩ for x = ω , that is, f ( x ) = 1 , U f | x ⟩ | y ⟩ = | x ⟩ | y ⟩ for x ≠ ω , that is, f ( x ) = 0 , {\displaystyle {\begin{cases}U_{f}|x\rangle |y\rangle =|x\rangle |\neg y\rangle &{\text{for }}x=\omega {\text{, that is, }}f(x)=1,\\U_{f}|x\rangle |y\rangle =|x\rangle |y\rangle &{\text{for }}x\neq \omega {\text{, that is, }}f(x)=0,\end{cases}}} or briefly, U f | x ⟩ | y ⟩ = | x ⟩ | y ⊕ f ( x ) ⟩ . {\displaystyle U_{f}|x\rangle |y\rangle =|x\rangle |y\oplus f(x)\rangle .} These oracles are typically realized using uncomputation. If we are given Uf as our oracle, then we can also implement Uω, since Uω is Uf when the ancillary qubit is in the state | − ⟩ = 1 2 ( | 0 ⟩ − | 1 ⟩ ) = H | 1 ⟩ {\displaystyle |-\rangle ={\frac {1}{\sqrt {2}}}{\big (}|0\rangle -|1\rangle {\big )}=H|1\rangle } : U f ( | x ⟩ ⊗ | − ⟩ ) = 1 2 ( U f | x ⟩ | 0 ⟩ − U f | x ⟩ | 1 ⟩ ) = 1 2 ( | x ⟩ | 0 ⊕ f ( x ) ⟩ − | x ⟩ | 1 ⊕ f ( x ) ⟩ ) = { 1 2 ( − | x ⟩ | 0 ⟩ + | x ⟩ | 1 ⟩ ) if f ( x ) = 1 , 1 2 ( | x ⟩ | 0 ⟩ − | x ⟩ | 1 ⟩ ) if f ( x ) = 0 = ( U ω | x ⟩ ) ⊗ | − ⟩ {\displaystyle {\begin{aligned}U_{f}{\big (}|x\rangle \otimes |-\rangle {\big )}&={\frac {1}{\sqrt {2}}}\left(U_{f}|x\rangle |0\rangle -U_{f}|x\rangle |1\rangle \right)\\&={\frac {1}{\sqrt {2}}}\left(|x\rangle |0\oplus f(x)\rangle -|x\rangle |1\oplus f(x)\rangle \right)\\&={\begin{cases}{\frac {1}{\sqrt {2}}}\left(-|x\rangle |0\rangle +|x\rangle |1\rangle \right)&{\text{if }}f(x)=1,\\{\frac {1}{\sqrt {2}}}\left(|x\rangle |0\rangle -|x\rangle |1\rangle \right)&{\text{if }}f(x)=0\end{cases}}\\&=(U_{\omega }|x\rangle )\otimes |-\rangle \end{aligned}}} So, Grover's algorithm can be run regardless of which oracle is given. If Uf is given, then we must maintain an additional qubit in the state | − ⟩ {\displaystyle |-\rangle } and apply Uf in place of Uω. == Algorithm == The steps of Grover's algorithm are given as follows: Initialize the system to the uniform superposition over all states | s ⟩ = 1 N ∑ x = 0 N − 1 | x ⟩ . {\displaystyle |s\rangle ={\frac {1}{\sqrt {N}}}\sum _{x=0}^{N-1}|x\rangle .} Perform the following "Grover iteration" r ( N ) {\displaystyle r(N)} times: Apply the operator U ω {\displaystyle U_{\omega }} Apply the Grover diffusion operator U s = 2 | s ⟩ ⟨ s | − I {\displaystyle U_{s}=2\left|s\right\rangle \!\!\left\langle s\right|-I} Measure the resulting quantum state in the computational basis. For the correctly chosen value of r {\displaystyle r} , the output will be | ω ⟩ {\displaystyle |\omega \rangle } with probability approaching 1 for N ≫ 1. Analysis shows that this eventual value for r ( N ) {\displaystyle r(N)} satisfies r ( N ) ≤ ⌈ π 4 N ⌉ {\displaystyle r(N)\leq {\Big \lceil }{\frac {\pi }{4}}{\sqrt {N}}{\Big \rceil }} . Implementing the steps for this algorithm can be done using a number of gates linear in the number of qubits. Thus, the gate complexity of this algorithm is O ( log ⁡ ( N ) r ( N ) ) {\displaystyle O(\log(N)r(N))} , or O ( log ⁡ ( N ) ) {\displaystyle O(\log(N))} per iteration. == Geometric proof of correctness == There is a geometric interpretation of Grover's algorithm, following from the observation that the quantum state of Grover's algorithm stays in a two-dimensional subspace after each step. 
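Before turning to that geometric picture, the steps listed above can be checked with a small state-vector simulation (a NumPy sketch, not a circuit-level implementation; the values of n and ω are arbitrary choices for illustration). It applies the oracle phase flip and the diffusion operator 2|s⟩⟨s| − I for r ≈ π√N/4 iterations and prints the probability of measuring ω:

```python
import numpy as np

n = 8                      # number of qubits (assumption for the demo)
N = 2 ** n
omega = 3                  # index of the marked item (arbitrary choice)

# Uniform superposition |s>.
s = np.full(N, 1 / np.sqrt(N))
psi = s.copy()

r = int(np.floor(np.pi / 4 * np.sqrt(N)))   # number of Grover iterations
for _ in range(r):
    psi[omega] *= -1                        # oracle U_omega: phase flip on |omega>
    psi = 2 * s * (s @ psi) - psi           # diffusion U_s = 2|s><s| - I

print(r, abs(psi[omega]) ** 2)              # success probability close to 1
```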
Consider the plane spanned by | s ⟩ {\displaystyle |s\rangle } and | ω ⟩ {\displaystyle |\omega \rangle } ; equivalently, the plane spanned by | ω ⟩ {\displaystyle |\omega \rangle } and the perpendicular ket | s ′ ⟩ = 1 N − 1 ∑ x ≠ ω | x ⟩ {\displaystyle \textstyle |s'\rangle ={\frac {1}{\sqrt {N-1}}}\sum _{x\neq \omega }|x\rangle } . Grover's algorithm begins with the initial ket | s ⟩ {\displaystyle |s\rangle } , which lies in the subspace. The operator U ω {\displaystyle U_{\omega }} is a reflection at the hyperplane orthogonal to | ω ⟩ {\displaystyle |\omega \rangle } for vectors in the plane spanned by | s ′ ⟩ {\displaystyle |s'\rangle } and | ω ⟩ {\displaystyle |\omega \rangle } , i.e. it acts as a reflection across | s ′ ⟩ {\displaystyle |s'\rangle } . This can be seen by writing U ω {\displaystyle U_{\omega }} in the form of a Householder reflection: U ω = I − 2 | ω ⟩ ⟨ ω | . {\displaystyle U_{\omega }=I-2|\omega \rangle \langle \omega |.} The operator U s = 2 | s ⟩ ⟨ s | − I {\displaystyle U_{s}=2|s\rangle \langle s|-I} is a reflection through | s ⟩ {\displaystyle |s\rangle } . Both operators U s {\displaystyle U_{s}} and U ω {\displaystyle U_{\omega }} take states in the plane spanned by | s ′ ⟩ {\displaystyle |s'\rangle } and | ω ⟩ {\displaystyle |\omega \rangle } to states in the plane. Therefore, Grover's algorithm stays in this plane for the entire algorithm. It is straightforward to check that the operator U s U ω {\displaystyle U_{s}U_{\omega }} of each Grover iteration step rotates the state vector by an angle of θ = 2 arcsin ⁡ 1 N {\displaystyle \theta =2\arcsin {\tfrac {1}{\sqrt {N}}}} . So, with enough iterations, one can rotate from the initial state | s ⟩ {\displaystyle |s\rangle } to the desired output state | ω ⟩ {\displaystyle |\omega \rangle } . The initial ket is close to the state orthogonal to | ω ⟩ {\displaystyle |\omega \rangle } : ⟨ s ′ | s ⟩ = N − 1 N . {\displaystyle \langle s'|s\rangle ={\sqrt {\frac {N-1}{N}}}.} In geometric terms, the angle θ / 2 {\displaystyle \theta /2} between | s ⟩ {\displaystyle |s\rangle } and | s ′ ⟩ {\displaystyle |s'\rangle } is given by sin ⁡ θ 2 = 1 N . {\displaystyle \sin {\frac {\theta }{2}}={\frac {1}{\sqrt {N}}}.} We need to stop when the state vector passes close to | ω ⟩ {\displaystyle |\omega \rangle } ; after this, subsequent iterations rotate the state vector away from | ω ⟩ {\displaystyle |\omega \rangle } , reducing the probability of obtaining the correct answer. The exact probability of measuring the correct answer is sin 2 ⁡ ( ( r + 1 2 ) θ ) , {\displaystyle \sin ^{2}\left({\Big (}r+{\frac {1}{2}}{\Big )}\theta \right),} where r is the (integer) number of Grover iterations. The earliest time that we get a near-optimal measurement is therefore r ≈ π N / 4 {\displaystyle r\approx \pi {\sqrt {N}}/4} . == Algebraic proof of correctness == To complete the algebraic analysis, we need to find out what happens when we repeatedly apply U s U ω {\displaystyle U_{s}U_{\omega }} . A natural way to do this is by eigenvalue analysis of a matrix. Notice that during the entire computation, the state of the algorithm is a linear combination of s {\displaystyle s} and ω {\displaystyle \omega } . We can write the action of U s {\displaystyle U_{s}} and U ω {\displaystyle U_{\omega }} in the space spanned by { | s ⟩ , | ω ⟩ } {\displaystyle \{|s\rangle ,|\omega \rangle \}} as: U s : a | ω ⟩ + b | s ⟩ ↦ [ | ω ⟩ | s ⟩ ] [ − 1 0 2 / N 1 ] [ a b ] . U ω : a | ω ⟩ + b | s ⟩ ↦ [ | ω ⟩ | s ⟩ ] [ − 1 − 2 / N 0 1 ] [ a b ] . 
{\displaystyle {\begin{aligned}U_{s}:a|\omega \rangle +b|s\rangle &\mapsto [|\omega \rangle \,|s\rangle ]{\begin{bmatrix}-1&0\\2/{\sqrt {N}}&1\end{bmatrix}}{\begin{bmatrix}a\\b\end{bmatrix}}.\\U_{\omega }:a|\omega \rangle +b|s\rangle &\mapsto [|\omega \rangle \,|s\rangle ]{\begin{bmatrix}-1&-2/{\sqrt {N}}\\0&1\end{bmatrix}}{\begin{bmatrix}a\\b\end{bmatrix}}.\end{aligned}}} So in the basis { | ω ⟩ , | s ⟩ } {\displaystyle \{|\omega \rangle ,|s\rangle \}} (which is neither orthogonal nor a basis of the whole space) the action U s U ω {\displaystyle U_{s}U_{\omega }} of applying U ω {\displaystyle U_{\omega }} followed by U s {\displaystyle U_{s}} is given by the matrix U s U ω = [ − 1 0 2 / N 1 ] [ − 1 − 2 / N 0 1 ] = [ 1 2 / N − 2 / N 1 − 4 / N ] . {\displaystyle U_{s}U_{\omega }={\begin{bmatrix}-1&0\\2/{\sqrt {N}}&1\end{bmatrix}}{\begin{bmatrix}-1&-2/{\sqrt {N}}\\0&1\end{bmatrix}}={\begin{bmatrix}1&2/{\sqrt {N}}\\-2/{\sqrt {N}}&1-4/N\end{bmatrix}}.} This matrix happens to have a very convenient Jordan form. If we define t = arcsin ⁡ ( 1 / N ) {\displaystyle t=\arcsin(1/{\sqrt {N}})} , it is U s U ω = M [ e 2 i t 0 0 e − 2 i t ] M − 1 {\displaystyle U_{s}U_{\omega }=M{\begin{bmatrix}e^{2it}&0\\0&e^{-2it}\end{bmatrix}}M^{-1}} where M = [ − i i e i t e − i t ] . {\displaystyle M={\begin{bmatrix}-i&i\\e^{it}&e^{-it}\end{bmatrix}}.} It follows that r-th power of the matrix (corresponding to r iterations) is ( U s U ω ) r = M [ e 2 r i t 0 0 e − 2 r i t ] M − 1 . {\displaystyle (U_{s}U_{\omega })^{r}=M{\begin{bmatrix}e^{2rit}&0\\0&e^{-2rit}\end{bmatrix}}M^{-1}.} Using this form, we can use trigonometric identities to compute the probability of observing ω after r iterations mentioned in the previous section, | [ ⟨ ω | ω ⟩ ⟨ ω | s ⟩ ] ( U s U ω ) r [ 0 1 ] | 2 = sin 2 ⁡ ( ( 2 r + 1 ) t ) . {\displaystyle \left|{\begin{bmatrix}\langle \omega |\omega \rangle &\langle \omega |s\rangle \end{bmatrix}}(U_{s}U_{\omega })^{r}{\begin{bmatrix}0\\1\end{bmatrix}}\right|^{2}=\sin ^{2}\left((2r+1)t\right).} Alternatively, one might reasonably imagine that a near-optimal time to distinguish would be when the angles 2rt and −2rt are as far apart as possible, which corresponds to 2 r t ≈ π / 2 {\displaystyle 2rt\approx \pi /2} , or r = π / 4 t = π / 4 arcsin ⁡ ( 1 / N ) ≈ π N / 4 {\displaystyle r=\pi /4t=\pi /4\arcsin(1/{\sqrt {N}})\approx \pi {\sqrt {N}}/4} . Then the system is in state [ | ω ⟩ | s ⟩ ] ( U s U ω ) r [ 0 1 ] ≈ [ | ω ⟩ | s ⟩ ] M [ i 0 0 − i ] M − 1 [ 0 1 ] = | ω ⟩ 1 cos ⁡ ( t ) − | s ⟩ sin ⁡ ( t ) cos ⁡ ( t ) . {\displaystyle [|\omega \rangle \,|s\rangle ](U_{s}U_{\omega })^{r}{\begin{bmatrix}0\\1\end{bmatrix}}\approx [|\omega \rangle \,|s\rangle ]M{\begin{bmatrix}i&0\\0&-i\end{bmatrix}}M^{-1}{\begin{bmatrix}0\\1\end{bmatrix}}=|\omega \rangle {\frac {1}{\cos(t)}}-|s\rangle {\frac {\sin(t)}{\cos(t)}}.} A short calculation now shows that the observation yields the correct answer ω with error O ( 1 N ) {\displaystyle O\left({\frac {1}{N}}\right)} . == Extensions and variants == === Multiple matching entries === If, instead of 1 matching entry, there are k matching entries, the same algorithm works, but the number of iterations must be π 4 ( N k ) 1 / 2 {\textstyle {\frac {\pi }{4}}{\left({\frac {N}{k}}\right)^{1/2}}} instead of π 4 N 1 / 2 . {\textstyle {\frac {\pi }{4}}{N^{1/2}}.} There are several ways to handle the case if k is unknown. 
A simple solution performs optimally up to a constant factor: run Grover's algorithm repeatedly for increasingly small values of k, e.g., taking k = N, N/2, N/4, ..., and so on, taking k = N / 2 t {\displaystyle k=N/2^{t}} for iteration t until a matching entry is found. With sufficiently high probability, a marked entry will be found by iteration t = log 2 ⁡ ( N / k ) + c {\displaystyle t=\log _{2}(N/k)+c} for some constant c. Thus, the total number of iterations taken is at most π 4 ( 1 + 2 + 4 + ⋯ + N k 2 c ) = O ( N / k ) . {\displaystyle {\frac {\pi }{4}}{\Big (}1+{\sqrt {2}}+{\sqrt {4}}+\cdots +{\sqrt {\frac {N}{k2^{c}}}}{\Big )}=O{\big (}{\sqrt {N/k}}{\big )}.} Another approach if k is unknown is to derive it via the quantum counting algorithm prior. If k = N / 2 {\displaystyle k=N/2} (or the traditional one marked state Grover's Algorithm if run with N = 2 {\displaystyle N=2} ), the algorithm will provide no amplification. If k > N / 2 {\displaystyle k>N/2} , increasing k will begin to increase the number of iterations necessary to obtain a solution. On the other hand, if k ≥ N / 2 {\displaystyle k\geq N/2} , a classical running of the checking oracle on a single random choice of input will more likely than not give a correct solution. A version of this algorithm is used in order to solve the collision problem. === Quantum partial search === A modification of Grover's algorithm called quantum partial search was described by Grover and Radhakrishnan in 2004. In partial search, one is not interested in finding the exact address of the target item, only the first few digits of the address. Equivalently, we can think of "chunking" the search space into blocks, and then asking "in which block is the target item?". In many applications, such a search yields enough information if the target address contains the information wanted. For instance, to use the example given by L. K. Grover, if one has a list of students organized by class rank, we may only be interested in whether a student is in the lower 25%, 25–50%, 50–75% or 75–100% percentile. To describe partial search, we consider a database separated into K {\displaystyle K} blocks, each of size b = N / K {\displaystyle b=N/K} . The partial search problem is easier. Consider the approach we would take classically – we pick one block at random, and then perform a normal search through the rest of the blocks (in set theory language, the complement). If we do not find the target, then we know it is in the block we did not search. The average number of iterations drops from N / 2 {\displaystyle N/2} to ( N − b ) / 2 {\displaystyle (N-b)/2} . Grover's algorithm requires π 4 N {\textstyle {\frac {\pi }{4}}{\sqrt {N}}} iterations. Partial search will be faster by a numerical factor that depends on the number of blocks K {\displaystyle K} . Partial search uses n 1 {\displaystyle n_{1}} global iterations and n 2 {\displaystyle n_{2}} local iterations. The global Grover operator is designated G 1 {\displaystyle G_{1}} and the local Grover operator is designated G 2 {\displaystyle G_{2}} . The global Grover operator acts on the blocks. Essentially, it is given as follows: Perform j 1 {\displaystyle j_{1}} standard Grover iterations on the entire database. Perform j 2 {\displaystyle j_{2}} local Grover iterations. A local Grover iteration is a direct sum of Grover iterations over each block. Perform one standard Grover iteration. 
The optimal values of j 1 {\displaystyle j_{1}} and j 2 {\displaystyle j_{2}} are discussed in the paper by Grover and Radhakrishnan. One might also wonder what happens if one applies successive partial searches at different levels of "resolution". This idea was studied in detail by Vladimir Korepin and Xu, who called it binary quantum search. They proved that it is not in fact any faster than performing a single partial search. == Optimality == Grover's algorithm is optimal up to sub-constant factors. That is, any algorithm that accesses the database only by using the operator Uω must apply Uω at least a 1 − o ( 1 ) {\displaystyle 1-o(1)} fraction as many times as Grover's algorithm. The extension of Grover's algorithm to k matching entries, which requires π(N/k)^{1/2}/4 iterations, is also optimal. This result is important in understanding the limits of quantum computation. If Grover's search problem were solvable with log^c N applications of Uω, that would imply that NP is contained in BQP, by transforming problems in NP into Grover-type search problems. The optimality of Grover's algorithm suggests that quantum computers cannot solve NP-Complete problems in polynomial time, and thus NP is not contained in BQP. It has been shown that a class of non-local hidden variable quantum computers could implement a search of an N {\displaystyle N} -item database in at most O ( N 3 ) {\displaystyle O({\sqrt[{3}]{N}})} steps. This is faster than the O ( N ) {\displaystyle O({\sqrt {N}})} steps taken by Grover's algorithm. == See also == Amplitude amplification Brassard–Høyer–Tapp algorithm (for solving the collision problem) Shor's algorithm (for factorization) Quantum walk search == Notes == == References == Grover L.K.: A fast quantum mechanical algorithm for database search, Proceedings, 28th Annual ACM Symposium on the Theory of Computing, (May 1996) p. 212 Grover L.K.: From Schrödinger's equation to quantum search algorithm, American Journal of Physics, 69(7): 769–777, 2001. Pedagogical review of the algorithm and its history. Grover L.K.: QUANTUM COMPUTING: How the weird logic of the subatomic world could make it possible for machines to calculate millions of times faster than they do today The Sciences, July/August 1999, pp. 24–30. Nielsen, M.A. and Chuang, I.L. Quantum computation and quantum information. Cambridge University Press, 2000. Chapter 6. What's a Quantum Phone Book?, Lov Grover, Lucent Technologies == External links == Davy Wybiral. "Quantum Circuit Simulator". Archived from the original on 2017-01-16. Retrieved 2017-01-13. Craig Gidney (2013-03-05). "Grover's Quantum Search Algorithm". Archived from the original on 2020-11-17. Retrieved 2013-03-08. François Schwarzentruber (2013-05-18). "Grover's algorithm". Alexander Prokopenya. "Quantum Circuit Implementing Grover's Search Algorithm". Wolfram Alpha. "Quantum computation, theory of", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Roberto Maestre (2018-05-11). "Grover's Algorithm implemented in R and C". GitHub. Bernhard Ömer. "QCL - A Programming Language for Quantum Computers". Retrieved 2022-04-30. Implemented in /qcl-0.6.4/lib/grover.qcl
Wikipedia/Grover's_algorithm
Quantum optimization algorithms are quantum algorithms that are used to solve optimization problems. Mathematical optimization deals with finding the best solution to a problem (according to some criteria) from a set of possible solutions. Mostly, the optimization problem is formulated as a minimization problem, where one tries to minimize an error which depends on the solution: the optimal solution has the minimal error. Different optimization techniques are applied in various fields such as mechanics, economics and engineering, and as the complexity and amount of data involved rise, more efficient ways of solving optimization problems are needed. Quantum computing may allow problems which are not practically feasible on classical computers to be solved, or suggest a considerable speed up with respect to the best known classical algorithm. == Quantum data fitting == Data fitting is a process of constructing a mathematical function that best fits a set of data points. The fit's quality is measured by some criteria, usually the distance between the function and the data points. === Quantum least squares fitting === One of the most common types of data fitting is solving the least squares problem, minimizing the sum of the squares of differences between the data points and the fitted function. The algorithm is given N {\displaystyle N} input data points ( x 1 , y 1 ) , ( x 2 , y 2 ) , . . . , ( x N , y N ) {\displaystyle (x_{1},y_{1}),(x_{2},y_{2}),...,(x_{N},y_{N})} and M {\displaystyle M} continuous functions f 1 , f 2 , . . . , f M {\displaystyle f_{1},f_{2},...,f_{M}} . The algorithm finds and gives as output a continuous function f λ → {\displaystyle f_{\vec {\lambda }}} that is a linear combination of f j {\displaystyle f_{j}} : f λ → ( x ) = ∑ j = 1 M f j ( x ) λ j {\displaystyle f_{\vec {\lambda }}(x)=\sum _{j=1}^{M}f_{j}(x)\lambda _{j}} In other words, the algorithm finds the complex coefficients λ j {\displaystyle \lambda _{j}} , and thus the vector λ → = ( λ 1 , λ 2 , . . . , λ M ) {\displaystyle {\vec {\lambda }}=(\lambda _{1},\lambda _{2},...,\lambda _{M})} . The algorithm is aimed at minimizing the error, which is given by: E = ∑ i = 1 N | f λ → ( x i ) − y i | 2 = ∑ i = 1 N | ∑ j = 1 M f j ( x i ) λ j − y i | 2 = | F λ → − y → | 2 {\displaystyle E=\sum _{i=1}^{N}\left\vert f_{\vec {\lambda }}(x_{i})-y_{i}\right\vert ^{2}=\sum _{i=1}^{N}\left\vert \sum _{j=1}^{M}f_{j}(x_{i})\lambda _{j}-y_{i}\right\vert ^{2}=\left\vert F{\vec {\lambda }}-{\vec {y}}\right\vert ^{2}} where F {\displaystyle F} is defined to be the following matrix: F = ( f 1 ( x 1 ) ⋯ f M ( x 1 ) f 1 ( x 2 ) ⋯ f M ( x 2 ) ⋮ ⋱ ⋮ f 1 ( x N ) ⋯ f M ( x N ) ) {\displaystyle {F}={\begin{pmatrix}f_{1}(x_{1})&\cdots &f_{M}(x_{1})\\f_{1}(x_{2})&\cdots &f_{M}(x_{2})\\\vdots &\ddots &\vdots \\f_{1}(x_{N})&\cdots &f_{M}(x_{N})\\\end{pmatrix}}} The quantum least-squares fitting algorithm makes use of a version of Harrow, Hassidim, and Lloyd's quantum algorithm for linear systems of equations (HHL), and outputs the coefficients λ j {\displaystyle \lambda _{j}} and the fit quality estimation E {\displaystyle E} . It consists of three subroutines: an algorithm for performing a pseudo-inverse operation, one routine for the fit quality estimation, and an algorithm for learning the fit parameters. 
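For reference, the quantities involved can be illustrated with a purely classical computation. The following NumPy sketch (the data points and basis functions are arbitrary assumptions; this is the classical pseudo-inverse calculation, not the quantum algorithm) builds the matrix F defined above, solves for the coefficients λ, and evaluates the error E:

```python
import numpy as np

# Illustrative data points and basis functions (assumptions, not from the source).
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.array([1.1, 1.9, 3.2, 4.8, 7.1])
basis = [lambda t: np.ones_like(t), lambda t: t, lambda t: t ** 2]

# F[i, j] = f_j(x_i), exactly as in the definition above.
F = np.column_stack([f(x) for f in basis])

lam = np.linalg.pinv(F) @ y              # least-squares coefficients via the pseudo-inverse
E = np.linalg.norm(F @ lam - y) ** 2     # fit-quality (residual) estimate

print(lam, E)
```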
Because the quantum algorithm is mainly based on the HHL algorithm, it suggests an exponential improvement in the case where F {\displaystyle F} is sparse and the condition number (namely, the ratio between the largest and the smallest eigenvalues) of both F F † {\displaystyle FF^{\dagger }} and F † F {\displaystyle F^{\dagger }F} is small. == Quantum semidefinite programming == Semidefinite programming (SDP) is an optimization subfield dealing with the optimization of a linear objective function (a user-specified function to be minimized or maximized), over the intersection of the cone of positive semidefinite matrices with an affine space. The objective function is an inner product of a matrix C {\displaystyle C} (given as an input) with the variable X {\displaystyle X} . Denote by S n {\displaystyle \mathbb {S} ^{n}} the space of all n × n {\displaystyle n\times n} symmetric matrices. The variable X {\displaystyle X} must lie in the (closed convex) cone of positive semidefinite symmetric matrices S + n {\displaystyle \mathbb {S} _{+}^{n}} . The inner product of two matrices is defined as: ⟨ A , B ⟩ S n = t r ( A T B ) = ∑ i = 1 , j = 1 n A i j B i j . {\displaystyle \langle A,B\rangle _{\mathbb {S} ^{n}}={\rm {tr}}(A^{T}B)=\sum _{i=1,j=1}^{n}A_{ij}B_{ij}.} The problem may have additional constraints (given as inputs), also usually formulated as inner products. Each constraint forces the inner product of the matrices A k {\displaystyle A_{k}} (given as an input) with the optimization variable X {\displaystyle X} to be smaller than a specified value b k {\displaystyle b_{k}} (given as an input). Finally, the SDP problem can be written as: min X ∈ S n ⟨ C , X ⟩ S n subject to ⟨ A k , X ⟩ S n ≤ b k , k = 1 , … , m X ⪰ 0 {\displaystyle {\begin{array}{rl}{\displaystyle \min _{X\in \mathbb {S} ^{n}}}&\langle C,X\rangle _{\mathbb {S} ^{n}}\\{\text{subject to}}&\langle A_{k},X\rangle _{\mathbb {S} ^{n}}\leq b_{k},\quad k=1,\ldots ,m\\&X\succeq 0\end{array}}} The best classical algorithm is not known to unconditionally run in polynomial time. The corresponding feasibility problem is known to either lie outside of the union of the complexity classes NP and co-NP, or in the intersection of NP and co-NP. === The quantum algorithm === The algorithm inputs are A 1 . . . A m , C , b 1 . . . b m {\displaystyle A_{1}...A_{m},C,b_{1}...b_{m}} and parameters regarding the solution's trace, precision and optimal value (the objective function's value at the optimal point). The quantum algorithm consists of several iterations. In each iteration, it solves a feasibility problem, namely, finds any solution satisfying the following conditions (giving a threshold t {\displaystyle t} ): ⟨ C , X ⟩ S n ≤ t ⟨ A k , X ⟩ S n ≤ b k , k = 1 , … , m X ⪰ 0 {\displaystyle {\begin{array}{lr}\langle C,X\rangle _{\mathbb {S} ^{n}}\leq t\\\langle A_{k},X\rangle _{\mathbb {S} ^{n}}\leq b_{k},\quad k=1,\ldots ,m\\X\succeq 0\end{array}}} In each iteration, a different threshold t {\displaystyle t} is chosen, and the algorithm outputs either a solution X {\displaystyle X} such that ⟨ C , X ⟩ S n ≤ t {\displaystyle \langle C,X\rangle _{\mathbb {S} ^{n}}\leq t} (and the other constraints are satisfied, too) or an indication that no such solution exists. The algorithm performs a binary search to find the minimal threshold t {\displaystyle t} for which a solution X {\displaystyle X} still exists: this gives the minimal solution to the SDP problem. 
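The feasibility test at the heart of each iteration can be phrased classically as follows (a minimal sketch with arbitrary example matrices; it only checks whether a given candidate X satisfies the constraints for a threshold t, it does not solve the SDP):

```python
import numpy as np

def is_feasible(X, C, A_list, b_list, t, tol=1e-9):
    """Check <C,X> <= t, <A_k,X> <= b_k and X positive semidefinite."""
    inner = lambda P, Q: np.trace(P.T @ Q)
    if inner(C, X) > t + tol:
        return False
    if any(inner(A, X) > b + tol for A, b in zip(A_list, b_list)):
        return False
    return bool(np.all(np.linalg.eigvalsh((X + X.T) / 2) >= -tol))

# Tiny illustrative instance (assumption).
C = np.array([[1.0, 0.0], [0.0, 2.0]])
A1 = np.eye(2)
X = np.array([[0.5, 0.1], [0.1, 0.3]])
print(is_feasible(X, C, [A1], [1.0], t=1.5))   # True for this candidate
```

Embedding such a check inside a binary search over t mirrors the structure of the quantum algorithm described above, with the quantum routine playing the role of the feasibility oracle.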
The quantum algorithm provides a quadratic improvement over the best classical algorithm in the general case, and an exponential improvement when the input matrices are of low rank. == Quantum combinatorial optimization == The combinatorial optimization problem is aimed at finding an optimal object from a finite set of objects. The problem can be phrased as a maximization of an objective function which is a sum of Boolean functions. Each Boolean function C α : { 0 , 1 } n → { 0 , 1 } {\displaystyle \,C_{\alpha }\colon \lbrace {0,1\rbrace }^{n}\rightarrow \lbrace {0,1}\rbrace } gets as input the n {\displaystyle n} -bit string z = z 1 z 2 … z n {\displaystyle z=z_{1}z_{2}\ldots z_{n}} and gives as output one bit (0 or 1). The combinatorial optimization problem of n {\displaystyle n} bits and m {\displaystyle m} clauses is finding an n {\displaystyle n} -bit string z {\displaystyle z} that maximizes the function C ( z ) = ∑ α = 1 m C α ( z ) {\displaystyle C(z)=\sum _{\alpha =1}^{m}C_{\alpha }(z)} Approximate optimization is a way of finding an approximate solution to an optimization problem, which is often NP-hard. The approximated solution of the combinatorial optimization problem is a string z {\displaystyle z} that is close to maximizing C ( z ) {\displaystyle C(z)} . === Quantum approximate optimization algorithm === For combinatorial optimization, the quantum approximate optimization algorithm (QAOA) briefly had a better approximation ratio than any known polynomial time classical algorithm (for a certain problem), until a more effective classical algorithm was proposed. The relative speed-up of the quantum algorithm is an open research question. QAOA consists of the following steps: Defining a cost Hamiltonian H C {\displaystyle H_{C}} such that its ground state encodes the solution to the optimization problem. Defining a mixer Hamiltonian H M {\displaystyle H_{M}} . Defining the oracles U C ( γ ) = exp ⁡ ( − ı γ H C ) {\displaystyle U_{C}(\gamma )=\exp(-\imath \gamma H_{C})} and U M ( α ) = exp ⁡ ( − ı α H M ) {\displaystyle U_{M}(\alpha )=\exp(-\imath \alpha H_{M})} , with parameters γ {\displaystyle \gamma } and α. Repeated application of the oracles U C {\displaystyle U_{C}} and U M {\displaystyle U_{M}} , in the order: U ( γ , α ) = ∐ i = 1 N ( U C ( γ i ) U M ( α i ) ) {\displaystyle U({\boldsymbol {\gamma }},{\boldsymbol {\alpha }})=\coprod _{i=1}^{N}(U_{C}(\gamma _{i})U_{M}(\alpha _{i}))} Preparing an initial state, that is a superposition of all possible states and apply U ( γ , α ) {\displaystyle U({\boldsymbol {\gamma }},{\boldsymbol {\alpha }})} to the state. Using classical methods to optimize the parameters γ , α {\displaystyle {\boldsymbol {\gamma }},{\boldsymbol {\alpha }}} and measure the output state of the optimized circuit to obtain the approximate optimal solution to the cost Hamiltonian. An optimal solution will be one that maximizes the expectation value of the cost Hamiltonian H C {\displaystyle H_{C}} . The layout of the algorithm, viz, the use of cost and mixer Hamiltonians are inspired from the Quantum Adiabatic theorem, which states that starting in a ground state of a time-dependent Hamiltonian, if the Hamiltonian evolves slowly enough, the final state will be a ground state of the final Hamiltonian. Moreover, the adiabatic theorem can be generalized to any other eigenstate as long as there is no overlap (degeneracy) between different eigenstates across the evolution. 
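The alternating structure of these steps can be made concrete on a toy instance (a sketch; the two-qubit cost Hamiltonian Z⊗Z, the mixer, and the angle values are arbitrary assumptions, and no claim is made that these angles are optimal):

```python
import numpy as np
from scipy.linalg import expm

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)

H_C = np.kron(Z, Z)                      # toy cost Hamiltonian (assumption)
H_M = np.kron(X, I) + np.kron(I, X)      # transverse-field mixer

def qaoa_state(gammas, alphas):
    """Apply p layers of U_C(gamma_i) and U_M(alpha_i) to the uniform superposition."""
    psi = np.full(4, 0.5, dtype=complex)     # |+>|+>
    for g, a in zip(gammas, alphas):
        psi = expm(-1j * g * H_C) @ psi      # U_C(gamma_i)
        psi = expm(-1j * a * H_M) @ psi      # U_M(alpha_i)
    return psi

psi = qaoa_state([0.4, 0.7], [0.3, 0.6])       # p = 2 layers, arbitrary angles
expectation = np.real(psi.conj() @ H_C @ psi)  # <H_C>, the quantity the classical loop optimizes
print(expectation)
```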
Identifying the initial Hamiltonian with H M {\displaystyle H_{M}} and the final Hamiltonian with H C {\displaystyle H_{C}} , whose ground states encode the solution to the optimization problem of interest, one can approximate the optimization problem as the adiabatic evolution of the Hamiltonian from an initial to the final one, whose ground (eigen)state gives the optimal solution. In general, QAOA relies on the use of unitary operators dependent on 2 p {\displaystyle 2p} angles (parameters), where p > 1 {\displaystyle p>1} is an input integer, which can be identified with the number of layers of the oracle U ( γ , α ) {\displaystyle U({\boldsymbol {\gamma }},{\boldsymbol {\alpha }})} . These operators are iteratively applied on a state that is an equal-weighted quantum superposition of all the possible states in the computational basis. In each iteration, the state is measured in the computational basis and the Boolean function C ( z ) {\displaystyle C(z)} is estimated. The angles are then updated classically to increase C ( z ) {\displaystyle C(z)} . After this procedure is repeated a sufficient number of times, the value of C ( z ) {\displaystyle C(z)} is almost optimal, and the state being measured is close to being optimal as well. A sample circuit that implements QAOA on a quantum computer is given in the figure. This procedure is highlighted using the following example of finding the minimum vertex cover of a graph. === QAOA for finding the minimum vertex cover of a graph === The goal here is to find a minimum vertex cover of a graph: a collection of vertices such that each edge in the graph contains at least one of the vertices in the cover. Hence, these vertices “cover” all the edges. We wish to find a vertex cover that has the smallest possible number of vertices. Vertex covers can be represented by a bit string where each bit denotes whether the corresponding vertex is present in the cover. For example, the bit string 0101 represents a cover consisting of the second and fourth vertex in a graph with four vertices. Consider the graph given in the figure. It has four vertices and there are two minimum vertex covers for this graph: vertices 0 and 2, and vertices 1 and 2. These can be respectively represented by the bit strings 1010 and 0110. The goal of the algorithm is to sample these bit strings with high probability. In this case, the cost Hamiltonian has two ground states, |1010⟩ and |0110⟩, coinciding with the solutions of the problem. The mixer Hamiltonian is the simple, non-commuting sum of Pauli-X operations on each node of the graph; the cost and mixer Hamiltonians are given by: H C = − 0.25 Z 3 + 0.5 Z 0 + 0.5 Z 1 + 1.25 Z 2 + 0.75 ( Z 0 Z 1 + Z 0 Z 2 + Z 2 Z 3 + Z 1 Z 2 ) {\displaystyle H_{C}=-0.25Z_{3}+0.5Z_{0}+0.5Z_{1}+1.25Z_{2}+0.75(Z_{0}Z_{1}+Z_{0}Z_{2}+Z_{2}Z_{3}+Z_{1}Z_{2})} H M = X 0 + X 1 + X 2 + X 3 {\displaystyle H_{M}=X_{0}+X_{1}+X_{2}+X_{3}} Implementing the QAOA algorithm for this four-qubit circuit with two layers of the ansatz in Qiskit (see figure) and optimizing leads to a probability distribution for the states given in the figure. This shows that the states |0110⟩ and |1010⟩ have the highest probabilities of being measured, just as expected. === Generalization of QAOA to constrained combinatorial optimisation === In principle the optimal value of C ( z ) {\displaystyle C(z)} can be reached up to arbitrary precision; this is guaranteed by the adiabatic theorem or alternatively by the universality of the QAOA unitaries.
However, it is an open question whether this can be done in a feasible way. For example, it was shown that QAOA exhibits a strong dependence on the ratio of a problem's constraints to variables (problem density), placing a limiting restriction on the algorithm's capacity to minimize a corresponding objective function. It was soon recognized that a generalization of the QAOA process is essentially an alternating application of a continuous-time quantum walk on an underlying graph followed by a quality-dependent phase shift applied to each solution state. This generalized QAOA was termed QWOA (Quantum Walk-based Optimisation Algorithm). In the paper "How many qubits are needed for quantum computational supremacy?", submitted to arXiv, the authors conclude that a QAOA circuit with 420 qubits and 500 constraints would require at least one century to be simulated using a classical simulation algorithm running on state-of-the-art supercomputers, and that this would therefore be sufficient for quantum computational supremacy. A rigorous comparison of QAOA with classical algorithms can give estimates on the depth p {\displaystyle p} and number of qubits required for quantum advantage. A study of QAOA applied to MaxCut shows that p > 11 {\displaystyle p>11} is required for a scalable advantage. === Variations of QAOA === Several variations to the basic structure of QAOA have been proposed, which include variations to the ansatz of the basic algorithm. The choice of ansatz typically depends on the problem type, such as combinatorial problems represented as graphs, or problems strongly influenced by hardware design. However, ansatz design must balance specificity and generality to avoid overfitting and maintain applicability to a wide range of problems. For this reason, designing optimal ansätze for QAOA is an extensively researched and widely investigated topic. Some of the proposed variants are: multi-angle QAOA, expressive QAOA (XQAOA), QAOA+, digitised counteradiabatic QAOA, and the quantum alternating operator ansatz, which allows for constraints on the optimization problem. Another variation of QAOA focuses on techniques for parameter optimization, which aims at selecting the optimal set of initial parameters for a given problem and avoiding barren plateaus, i.e. regions of parameter space in which the energy landscape of the cost Hamiltonian becomes flat. Finally, there has been significant research interest in leveraging specific hardware to enhance the performance of QAOA across various platforms, such as trapped ions, neutral atoms, superconducting qubits, and photonic quantum computers. The goals of these approaches include overcoming hardware connectivity limitations and mitigating noise-related issues to broaden the applicability of QAOA to a wide range of combinatorial optimization problems. == QAOA algorithm Qiskit implementation == The quantum circuit shown in the figure is from a simple example of how the QAOA algorithm can be implemented in Python using Qiskit, an open-source quantum computing software development framework by IBM. == See also == Adiabatic quantum computation Quantum annealing == References == == External links == Implementation of the QAOA algorithm for the knapsack problem with Classiq
Wikipedia/Quantum_optimization_algorithms
In mathematical physics, some approaches to quantum field theory are more popular than others. For historical reasons, the Schrödinger representation is less favored than Fock space methods. In the early days of quantum field theory, maintaining symmetries such as Lorentz invariance, displaying them manifestly, and proving renormalisation were of paramount importance. The Schrödinger representation is not manifestly Lorentz invariant and its renormalisability was only shown as recently as the 1980s by Kurt Symanzik (1981). The Schrödinger functional is, in its most basic form, the time translation generator of state wavefunctionals. In layman's terms, it defines how a system of quantum particles evolves through time and what the subsequent systems look like. == Background == Quantum mechanics is defined over the spatial coordinates x {\displaystyle \mathbf {x} } upon which the Galilean group acts, and the corresponding operators act on its state as x ^ ψ ( x ) = x ψ ( x ) {\displaystyle {\hat {\mathbf {x} }}\psi (\mathbf {x} )=\mathbf {x} \psi (\mathbf {x} )} . The state is characterized by a wave function ψ ( x ) = ⟨ x | ψ ⟩ {\displaystyle \psi (\mathbf {x} )=\langle \mathbf {x} |\psi \rangle } obtained by projecting it onto the coordinate eigenstates defined by x ^ | x ⟩ = x | x ⟩ {\displaystyle {\hat {\mathbf {x} }}\left|\mathbf {x} \right\rangle =\mathbf {x} \left|\mathbf {x} \right\rangle } . These eigenstates are not stationary. Time evolution is generated by the Hamiltonian, yielding the Schrödinger equation i ∂ 0 | ψ ( t ) ⟩ = H ^ | ψ ( t ) ⟩ {\displaystyle i\partial _{0}\left|\psi (t)\right\rangle ={\hat {H}}\left|\psi (t)\right\rangle } . However, in quantum field theory, the coordinate is the field operator ϕ ^ x = ϕ ^ ( x ) {\displaystyle {\hat {\phi }}_{\mathbf {x} }={\hat {\phi }}(\mathbf {x} )} , which acts on the state's wave functional as ϕ ^ ( x ) Ψ [ ϕ ( ⋅ ) ] = ϕ ⁡ ( x ) Ψ [ ϕ ( ⋅ ) ] , {\displaystyle {\hat {\phi }}(\mathbf {x} )\Psi \left[\phi (\cdot )\right]=\operatorname {\phi } \left(\mathbf {x} \right)\Psi \left[\phi (\cdot )\right],} where "⋅" indicates an unbound spatial argument. This wave functional Ψ [ ϕ ( ⋅ ) ] = ⟨ ϕ ( ⋅ ) | Ψ ⟩ {\displaystyle \Psi \left[\phi (\cdot )\right]=\left\langle \phi (\cdot )|\Psi \right\rangle } is obtained by means of the field eigenstates ϕ ^ ( x ) | Φ ( ⋅ ) ⟩ = Φ ( x ) | Φ ( ⋅ ) ⟩ , {\displaystyle {\hat {\phi }}(\mathbf {x} )\left|\Phi (\cdot )\right\rangle =\Phi (\mathbf {x} )\left|\Phi (\cdot )\right\rangle ,} which are indexed by unapplied "classical field" configurations Φ ( ⋅ ) {\displaystyle \Phi (\cdot )} . These eigenstates, like the position eigenstates above, are not stationary. Time evolution is generated by the Hamiltonian, yielding the Schrödinger equation, i ∂ 0 | Ψ ( t ) ⟩ = H ^ | Ψ ( t ) ⟩ . {\displaystyle i\partial _{0}\left|\Psi (t)\right\rangle ={\hat {H}}\left|\Psi (t)\right\rangle .} Thus the state in quantum field theory is conceptually a functional superposition of field configurations. 
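As a rough numerical picture of what a wave functional is, the sketch below discretizes a free scalar field on a small periodic 1D lattice (the lattice size, spacing, and mass are hypothetical toy choices). A "field configuration" then becomes an ordinary vector of field values, and the Gaussian functional anticipates the free-field vacuum discussed in the next section.

```python
import numpy as np

# Hypothetical toy lattice: n sites, spacing a, scalar field of mass m.
n, a, m = 32, 1.0, 0.5
k = 2 * np.pi * np.fft.fftfreq(n, d=a)                        # allowed lattice momenta
omega = np.sqrt(m ** 2 + (2 * np.sin(k * a / 2) / a) ** 2)    # lattice dispersion relation

# Discretized covariance K(x, y) ~ (1/(n a)) * sum_k omega_k * exp(i k (x - y)).
x = a * np.arange(n)
F = np.exp(1j * np.outer(k, x))                               # plane-wave matrix
K = (F.conj().T @ np.diag(omega) @ F).real / (n * a)          # real, symmetric, positive definite

def vacuum_functional(phi):
    """Un-normalized Gaussian vacuum wave functional Psi_0[phi] on the lattice."""
    return np.exp(-0.5 * a * a * phi @ K @ phi)

# A field configuration is simply a vector of field values phi(x_j).
phi_flat = np.zeros(n)
phi_bump = 0.5 * np.exp(-((x - x.mean()) ** 2) / 4.0)
print(vacuum_functional(phi_flat))   # 1.0: the zero configuration has the largest amplitude
print(vacuum_functional(phi_bump))   # < 1: excursions of the field are suppressed
```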
== Example: scalar field == In the quantum field theory of (as example) a quantum scalar field ϕ ^ ( x ) {\displaystyle {\hat {\phi }}(x)} , in complete analogy with the one-particle quantum harmonic oscillator, the eigenstate of this quantum field with the "classical field" ϕ ( x ) {\displaystyle \phi (x)} (c-number) as its eigenvalue, ϕ ^ ( x ) | ϕ ⟩ = ϕ ( x ) | ϕ ⟩ {\displaystyle {\hat {\phi }}(x)\left|\phi \right\rangle =\phi \left(x\right)\left|\phi \right\rangle } is (Schwartz, 2013) | ϕ ⟩ ∝ e − ∫ d x 1 2 ( ϕ ( x ) − Φ ^ + ( x ) ) 2 | 0 ⟩ {\displaystyle \left|\phi \right\rangle \propto e^{-\int dx{\frac {1}{2}}~(\phi (x)-{\hat {\Phi }}_{+}(x))^{2}}\left|0\right\rangle } where Φ ^ + ( x ) {\displaystyle {\hat {\Phi }}_{+}\left(x\right)} is the part of ϕ ^ ( x ) {\displaystyle {\hat {\phi }}\left(x\right)} that only includes creation operators a k † {\displaystyle a_{k}^{\dagger }} . For the oscillator, this corresponds to the representation change/map to the |x⟩ state from Fock states. For a time-independent Hamiltonian H, the Schrödinger functional is defined as S [ ϕ 2 , t 2 ; ϕ 1 , t 1 ] = ⟨ ϕ 2 | e − i H ( t 2 − t 1 ) / ℏ | ϕ 1 ⟩ . {\displaystyle {\mathcal {S}}[\phi _{2},t_{2};\phi _{1},t_{1}]=\langle \,\phi _{2}\,|e^{-iH(t_{2}-t_{1})/\hbar }|\,\phi _{1}\,\rangle .} In the Schrödinger representation, this functional generates time translations of state wave functionals, through Ψ [ ϕ 2 , t 2 ] = ∫ D ϕ 1 S [ ϕ 2 , t 2 ; ϕ 1 , t 1 ] Ψ [ ϕ 1 , t 1 ] . {\displaystyle \Psi [\phi _{2},t_{2}]=\int \!{\mathcal {D}}\phi _{1}\,\,{\mathcal {S}}[\phi _{2},t_{2};\phi _{1},t_{1}]\Psi [\phi _{1},t_{1}].} === States === The normalized, vacuum state, free field wave-functional is the Gaussian Ψ 0 [ ϕ ] = det 1 4 ( K π ) e − 1 2 ∫ d x → ∫ d y → ϕ ( x → ) K ( x → , y → ) ϕ ( y → ) = det 1 4 ( K π ) e − 1 2 ϕ ⋅ K ⋅ ϕ , {\displaystyle \Psi _{0}[\phi ]=\det {}^{\frac {1}{4}}\left({\frac {K}{\pi }}\right)\;e^{-{\frac {1}{2}}\int d{\vec {x}}\int d{\vec {y}}\,\phi ({\vec {x}})K({\vec {x}},{\vec {y}})\phi ({\vec {y}})}=\det {}^{\frac {1}{4}}\left({\frac {K}{\pi }}\right)\;e^{-{\frac {1}{2}}\phi \cdot K\cdot \phi },} where the covariance K is K ( x → , y → ) = ∫ d 3 k ( 2 π ) 3 ω k → e i k → ⋅ ( x → − y → ) . {\displaystyle K({\vec {x}},{\vec {y}})=\int {\frac {d^{3}k}{(2\pi )^{3}}}\omega _{\vec {k}}\,e^{i{\vec {k}}\cdot ({\vec {x}}-{\vec {y}})}.} This is analogous to (the Fourier transform of) the product of each k-mode's ground state in the continuum limit, roughly (Hatfield 1992) Ψ 0 [ ϕ ~ ] = lim Δ k → 0 ∏ k → ( ω k → π ) 1 4 e − 1 2 ω k → ϕ ~ ( k → ) 2 Δ k 3 ( 2 π ) 3 → ( ∏ k → ( ω k → π ) 1 4 ) e − 1 2 ∫ d 3 k ( 2 π ) 3 ω k → ϕ ~ ( | k → | ) 2 . {\displaystyle \Psi _{0}[{\tilde {\phi }}]=\lim _{\Delta k\to 0}\;\prod _{\vec {k}}\left({\frac {\omega _{\vec {k}}}{\pi }}\right)^{\frac {1}{4}}e^{-{\frac {1}{2}}\omega _{\vec {k}}{\tilde {\phi }}({\vec {k}})^{2}{\frac {\Delta k^{3}}{(2\pi )^{3}}}}\to \left(\prod _{\vec {k}}\left({\frac {\omega _{\vec {k}}}{\pi }}\right)^{\frac {1}{4}}\right)e^{-{\frac {1}{2}}\int {\frac {d^{3}k}{(2\pi )^{3}}}\omega _{\vec {k}}{\tilde {\phi }}(|{\vec {k}}|)^{2}}.} Each k-mode enters as an independent quantum harmonic oscillator. One-particle states are obtained by exciting a single mode, and have the form, Ψ [ ϕ ] ∝ ∫ d x → ∫ d y → ϕ ( x → ) K ( x → , y → ) f ( y → ) Ψ 0 [ ϕ ] = ϕ ⋅ K ⋅ f e − 1 2 ϕ ⋅ K ⋅ ϕ . 
{\displaystyle \Psi [\phi ]\propto \int d{\vec {x}}\int d{\vec {y}}\,\phi ({\vec {x}})K({\vec {x}},{\vec {y}})f({\vec {y}})\Psi _{0}[\phi ]=\phi \cdot K\cdot f\,e^{-{\frac {1}{2}}\phi \cdot K\cdot \phi }.} For example, putting an excitation in k → 1 {\displaystyle {\vec {k}}_{1}} yields (Hatfield 1992) Ψ 1 [ ϕ ~ ] = ( 2 ω k 1 ( 2 π ) 3 ) 1 2 ϕ ~ ( k → 1 ) Ψ 0 [ ϕ ~ ] {\displaystyle \Psi _{1}[{\tilde {\phi }}]=\left({\frac {2\omega _{k_{1}}}{(2\pi )^{3}}}\right)^{\frac {1}{2}}{\tilde {\phi }}({\vec {k}}_{1})\Psi _{0}[{\tilde {\phi }}]} Ψ 1 [ ϕ ] = ( 2 ω k 1 ( 2 π ) 3 ) 1 2 ∫ d 3 y e − i k → 1 ⋅ y → ϕ ( y → ) Ψ 0 [ ϕ ] . {\displaystyle \Psi _{1}[\phi ]=\left({\frac {2\omega _{k_{1}}}{(2\pi )^{3}}}\right)^{\frac {1}{2}}\int d^{3}y\,e^{-i{\vec {k}}_{1}\cdot {\vec {y}}}\phi ({\vec {y}})\Psi _{0}[\phi ].} (The factor of ( 2 π ) − 3 / 2 {\displaystyle (2\pi )^{-3/2}} stems from Hatfield's setting Δ k = 1 {\displaystyle \Delta k=1} .) == Example: fermion field == For clarity, we consider a massless Weyl–Majorana field ψ ^ ( x ) {\displaystyle {\hat {\psi }}(x)} in 2D space in SO+(1, 1), but this solution generalizes to any massive Dirac bispinor in SO+(1, 3). The configuration space consists of functionals Ψ [ u ] {\displaystyle \Psi [u]} of anti-commuting Grassmann-valued fields u(x). The effect of ψ ^ ( x ) {\displaystyle {\hat {\psi }}(x)} is ψ ^ ( x ) | Ψ ⟩ = 1 2 ( u ( x ) + δ δ u ( x ) ) | Ψ ⟩ . {\displaystyle {\hat {\psi }}(x)|\Psi \rangle ={\frac {1}{\sqrt {2}}}\left(u(x)+{\frac {\delta }{\delta u(x)}}\right)|\Psi \rangle .} == References == Brian Hatfield, Quantum Field Theory of Point Particles and Strings. Addison Wesley Longman, 1992. See Chapter 10 "Free Fields in the Schrödinger Representation". I.V. Kanatchikov, "Precanonical Quantization and the Schrödinger Wave Functional." Phys. Lett. A 283 (2001) 25–36. Eprint arXiv:hep-th/0012084, 16 pages. R. Jackiw, "Schrödinger Picture for Boson and Fermion Quantum Field Theories." In Mathematical Quantum Field Theory and Related Topics: Proceedings of the 1987 Montréal Conference Held September 1–5, 1987 (eds. J.S. Feldman and L.M. Rosen, American Mathematical Society 1988). H. Reinhardt, C. Feuchter, "On the Yang-Mills wave functional in Coulomb gauge." Phys. Rev. D 71 (2005) 105002. Eprint arXiv:hep-th/0408237, 9 pages. D.V. Long, G.M. Shore, "The Schrödinger Wave Functional and Vacuum States in Curved Spacetime." Nucl.Phys. B 530 (1998) 247–278. Eprint arXiv:hep-th/9605004, 41 pages. Kurt Symanzik, "Schrödinger representation and Casimir effect in renormalizable quantum field theory". Nucl. Phys.B 190 (1981) 1–44, doi:10.1016/0550-3213(81)90482-X. K. Symanzik, "Schrödinger Representation in Renormalizable Quantum Field Theory". Chapter in Structural Elements in Particle Physics and Statistical Mechanics, NATO Advanced Study Institutes Series 82 (1983) pp 287–299, doi:10.1007/978-1-4613-3509-2_20. Martin Lüscher, Rajamani Narayanan, Peter Weisz, Ulli Wolff, "The Schrödinger Functional - a Renormalizable Probe for Non-Abelian Gauge Theories". Nucl.Phys.B 384 (1992) 168–228, doi:10.1016/0550-3213(92)90466-O. Eprint arXiv:hep-lat/9207009. Matthew Schwartz (2013). Quantum Field Theory and the Standard Model, Cambridge University Press, Ch.14.
Wikipedia/Schrödinger_functional
The Harrow–Hassidim–Lloyd (HHL) algorithm is a quantum algorithm for numerically solving a system of linear equations, designed by Aram Harrow, Avinatan Hassidim, and Seth Lloyd. The algorithm estimates the result of a scalar measurement on the solution vector to a given linear system of equations. The algorithm is one of the main fundamental algorithms expected to provide a speedup over their classical counterparts, along with Shor's factoring algorithm and Grover's search algorithm. Provided the linear system is sparse and has a low condition number κ {\displaystyle \kappa } , and that the user is interested in the result of a scalar measurement on the solution vector, instead of the values of the solution vector itself, then the algorithm has a runtime of O ( log ⁡ ( N ) κ 2 ) {\displaystyle O(\log(N)\kappa ^{2})} , where N {\displaystyle N} is the number of variables in the linear system. This offers an exponential speedup over the fastest classical algorithm, which runs in O ( N κ ) {\displaystyle O(N\kappa )} (or O ( N κ ) {\displaystyle O(N{\sqrt {\kappa }})} for positive semidefinite matrices). An implementation of the quantum algorithm for linear systems of equations was first demonstrated in 2013 by three independent publications. The demonstrations consisted of simple linear equations on specially designed quantum devices. The first demonstration of a general-purpose version of the algorithm appeared in 2018. Due to the prevalence of linear systems in virtually all areas of science and engineering, the quantum algorithm for linear systems of equations has the potential for widespread applicability. == Procedure == The HHL algorithm tackles the following problem: given a N × N {\displaystyle N\times N} Hermitian matrix A {\displaystyle A} and a unit vector b → ∈ R N {\displaystyle {\vec {b}}\in \mathbb {R} ^{N}} , prepare the quantum state | x ⟩ {\displaystyle |x\rangle } corresponding to the vector x → ∈ R N {\displaystyle {\vec {x}}\in \mathbb {R} ^{N}} that solves the linear system A x → = b → {\displaystyle A{\vec {x}}={\vec {b}}} . More precisely, the goal is to prepare a state | x ⟩ {\displaystyle |x\rangle } whose amplitudes equal the elements of x → {\displaystyle {\vec {x}}} . This means, in particular, that the algorithm cannot be used to efficiently retrieve the vector x → {\displaystyle {\vec {x}}} itself. It does, however, allow to efficiently compute expectation values of the form ⟨ x | M | x ⟩ {\displaystyle \langle x|M|x\rangle } for some observable M {\displaystyle M} . First, the algorithm represents the vector b → {\displaystyle {\vec {b}}} as a quantum state of the form: | b ⟩ = ∑ i = ⁡ 1 N b i | i ⟩ . {\displaystyle |b\rangle =\sum _{i\mathop {=} 1}^{N}b_{i}|i\rangle .} Next, Hamiltonian simulation techniques are used to apply the unitary operator e i A t {\displaystyle e^{iAt}} to | b ⟩ {\displaystyle |b\rangle } for a superposition of different times t {\displaystyle t} . The ability to decompose | b ⟩ {\displaystyle |b\rangle } into the eigenbasis of A {\displaystyle A} and to find the corresponding eigenvalues λ j {\displaystyle \lambda _{j}} is facilitated by the use of quantum phase estimation. 
The state of the system after this decomposition is approximately: ∑ j = ⁡ 1 N β j | u j ⟩ | λ j ⟩ , {\displaystyle \sum _{j\mathop {=} 1}^{N}\beta _{j}|u_{j}\rangle |\lambda _{j}\rangle ,} where u j {\displaystyle u_{j}} is the eigenvector basis of A {\displaystyle A} , and | b ⟩ = ∑ j = ⁡ 1 N β j | u j ⟩ {\displaystyle |b\rangle =\sum _{j\mathop {=} 1}^{N}\beta _{j}|u_{j}\rangle } . We would then like to perform the linear map taking | λ j ⟩ {\displaystyle |\lambda _{j}\rangle } to C λ j − 1 | λ j ⟩ {\displaystyle C\lambda _{j}^{-1}|\lambda _{j}\rangle } , where C {\displaystyle C} is a normalizing constant. The linear mapping operation is not unitary and thus will require a number of repetitions as it has some probability of failing. After it succeeds, we uncompute the | λ j ⟩ {\displaystyle |\lambda _{j}\rangle } register and are left with a state proportional to: ∑ j = ⁡ 1 N β j λ j − 1 | u j ⟩ = A − 1 | b ⟩ = | x ⟩ , {\displaystyle \sum _{j\mathop {=} 1}^{N}\beta _{j}\lambda _{j}^{-1}|u_{j}\rangle =A^{-1}|b\rangle =|x\rangle ,} where | x ⟩ {\displaystyle |x\rangle } is a quantum-mechanical representation of the desired solution vector x. To read out all components of x would require the procedure to be repeated at least N times. However, it is often the case that one is not interested in x {\displaystyle x} itself, but rather some expectation value of a linear operator M acting on x. By mapping M to a quantum-mechanical operator and performing the quantum measurement corresponding to M, we obtain an estimate of the expectation value ⟨ x | M | x ⟩ {\displaystyle \langle x|M|x\rangle } . This allows for a wide variety of features of the vector x to be extracted, including normalization, weights in different parts of the state space, and moments, without actually computing all the values of the solution vector x. == Explanation == === Initialization === Firstly, the algorithm requires that the matrix A {\displaystyle A} be Hermitian so that it can be converted into a unitary operator. In the case where A {\displaystyle A} is not Hermitian, define C = [ 0 A A † 0 ] . {\displaystyle \mathbf {C} ={\begin{bmatrix}0&A\\A^{\dagger }&0\end{bmatrix}}.} As C {\displaystyle C} is Hermitian, the algorithm can now be used to solve C y = [ b 0 ] {\displaystyle Cy={\begin{bmatrix}b\\0\end{bmatrix}}} to obtain y = [ 0 x ] {\displaystyle y={\begin{bmatrix}0\\x\end{bmatrix}}} . Secondly, the algorithm requires an efficient procedure to prepare | b ⟩ {\displaystyle |b\rangle } , the quantum representation of b. It is assumed that there exists some linear operator B {\displaystyle B} that can take some arbitrary quantum state | i n i t i a l ⟩ {\displaystyle |\mathrm {initial} \rangle } to | b ⟩ {\displaystyle |b\rangle } efficiently, or that this algorithm is a subroutine in a larger algorithm and is given | b ⟩ {\displaystyle |b\rangle } as input. Any error in the preparation of state | b ⟩ {\displaystyle |b\rangle } is ignored. Finally, the algorithm assumes that the state | ψ 0 ⟩ {\displaystyle |\psi _{0}\rangle } can be prepared efficiently, where | ψ 0 ⟩ := 2 / T ∑ τ = ⁡ 0 T − 1 sin ⁡ π ( τ + 1 2 T ) | τ ⟩ {\displaystyle |\psi _{0}\rangle :={\sqrt {2/T}}\sum _{\tau \mathop {=} 0}^{T-1}\sin \pi \left({\tfrac {\tau +{\tfrac {1}{2}}}{T}}\right)|\tau \rangle } for some large T {\displaystyle T} .
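The linear algebra that the procedure and the Hermitian embedding rely on can be checked classically. The sketch below uses a hypothetical 2×2 non-Hermitian matrix and observable M; it mirrors the steps just described by embedding A into the Hermitian block matrix C, decomposing the right-hand side in the eigenbasis, inverting the eigenvalues, and reading off the "scalar measurement" ⟨x|M|x⟩. It is a classical check of the arithmetic, not the quantum algorithm itself, which performs the same inversion on amplitudes rather than explicit vectors.

```python
import numpy as np

# Hypothetical toy problem: a non-Hermitian A, right-hand side b, observable M.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
b = np.array([1.0, 1.0]) / np.sqrt(2.0)      # unit vector, as the algorithm assumes
M = np.diag([1.0, -1.0])

# Hermitian embedding C = [[0, A], [A^dagger, 0]]; solve C y = (b, 0) to get y = (0, x).
C = np.block([[np.zeros((2, 2)), A],
              [A.conj().T, np.zeros((2, 2))]])
rhs = np.concatenate([b, np.zeros(2)])

# Eigen-decompose, express the right-hand side in the eigenbasis, invert the eigenvalues.
lam, U = np.linalg.eigh(C)                   # C is Hermitian by construction
beta = U.conj().T @ rhs                      # beta_j = <u_j | rhs>
y = U @ (beta / lam)                         # sum_j beta_j * lambda_j^{-1} * |u_j>

x = y[2:]                                    # lower block carries the solution x = A^{-1} b
print(np.allclose(A @ x, b))                 # True: x solves A x = b
x_state = x / np.linalg.norm(x)              # |x>: normalized, as prepared by HHL
print(x_state.conj() @ M @ x_state)          # the scalar <x|M|x> that HHL estimates
```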
The coefficients of | ψ 0 ⟩ {\displaystyle |\psi _{0}\rangle } are chosen to minimize a certain quadratic loss function which induces error in the U i n v e r t {\displaystyle U_{\mathrm {invert} }} subroutine described below. === Hamiltonian simulation === Hamiltonian simulation is used to transform the Hermitian matrix A {\displaystyle A} into a unitary operator, which can then be applied at will. This is possible if A is s-sparse and efficiently row computable, meaning it has at most s nonzero entries per row and given a row index these entries can be computed in time O(s). Under these assumptions, quantum Hamiltonian simulation allows e i A t {\displaystyle e^{iAt}} to be simulated in time O ( log ⁡ ( N ) s 2 t ) {\displaystyle O(\log(N)s^{2}t)} . === Uinvert subroutine === The key subroutine to the algorithm, denoted U i n v e r t {\displaystyle U_{\mathrm {invert} }} , is defined as follows and incorporates a phase estimation subroutine: 1. Prepare | ψ 0 ⟩ C {\displaystyle |\psi _{0}\rangle ^{C}} on register C 2. Apply the conditional Hamiltonian evolution ∑ τ = 0 T − 1 | τ ⟩ ⟨ τ | C ⊗ e i A τ t 0 / T {\displaystyle \sum _{\tau =0}^{T-1}|\tau \rangle \langle \tau |^{C}\otimes e^{iA\tau t_{0}/T}} 3. Apply the Fourier transform to the register C. Denote the resulting basis states with | k ⟩ {\displaystyle |k\rangle } for k = 0, ..., T − 1. Define λ k := 2 π k / t 0 {\displaystyle \lambda _{k}:=2\pi k/t_{0}} . 4. Adjoin a three-dimensional register S in the state | h ( λ k ) ⟩ S := 1 − f ( λ k ) 2 − g ( λ k ) 2 | n o t h i n g ⟩ S + f ( λ k ) | w e l l ⟩ S + g ( λ k ) | i l l ⟩ S , {\displaystyle |h(\lambda _{k})\rangle ^{S}:={\sqrt {1-f(\lambda _{k})^{2}-g(\lambda _{k})^{2}}}|\mathrm {nothing} \rangle ^{S}+f(\lambda _{k})|\mathrm {well} \rangle ^{S}+g(\lambda _{k})|\mathrm {ill} \rangle ^{S},} 5. Reverse steps 1–3, uncomputing any garbage produced along the way. The phase estimation procedure in steps 1–3 allows for the estimation of eigenvalues of A up to error ϵ {\displaystyle \epsilon } . The ancilla register in step 4 is necessary to construct a final state with inverted eigenvalues corresponding to the diagonalized inverse of A. In this register, the functions f and g are called filter functions. The states 'nothing', 'well' and 'ill' are used to instruct the loop body on how to proceed; 'nothing' indicates that the desired matrix inversion has not yet taken place, 'well' indicates that the inversion has taken place and the loop should halt, and 'ill' indicates that part of | b ⟩ {\displaystyle |b\rangle } is in the ill-conditioned subspace of A and the algorithm will not be able to produce the desired inversion. Producing a state proportional to the inverse of A requires 'well' to be measured, after which the overall state of the system collapses to the desired state by the extended Born rule. === Main loop === The body of the algorithm follows the amplitude amplification procedure: starting with U i n v e r t B | i n i t i a l ⟩ {\displaystyle U_{\mathrm {invert} }B|\mathrm {initial} \rangle } , the following operation is repeatedly applied: U i n v e r t B R i n i t B † U i n v e r t † R s u c c , {\displaystyle U_{\mathrm {invert} }BR_{\mathrm {init} }B^{\dagger }U_{\mathrm {invert} }^{\dagger }R_{\mathrm {succ} },} where R s u c c = I − 2 | w e l l ⟩ ⟨ w e l l | {\displaystyle R_{\mathrm {succ} }=I-2|\mathrm {well} \rangle \langle \mathrm {well} |} and R i n i t = I − 2 | i n i t i a l ⟩ ⟨ i n i t i a l | .
{\displaystyle R_{\mathrm {init} }=I-2|\mathrm {initial} \rangle \langle \mathrm {initial} |.} After each repetition, S {\displaystyle S} is measured and will produce a value of 'nothing', 'well', or 'ill' as described above. This loop is repeated until | w e l l ⟩ {\displaystyle |\mathrm {well} \rangle } is measured, which occurs with a probability p {\displaystyle p} . Rather than repeating 1 p {\displaystyle {\frac {1}{p}}} times to minimize error, amplitude amplification is used to achieve the same error resilience using only O ( 1 p ) {\displaystyle O\left({\frac {1}{\sqrt {p}}}\right)} repetitions. === Scalar measurement === After successfully measuring 'well' on S {\displaystyle S} the system will be in a state proportional to: ∑ j = ⁡ 1 N β j λ j − 1 | u j ⟩ = A − 1 | b ⟩ = | x ⟩ . {\displaystyle \sum _{j\mathop {=} 1}^{N}\beta _{j}\lambda _{j}^{-1}|u_{j}\rangle =A^{-1}|b\rangle =|x\rangle .} Finally, we apply the quantum-mechanical operator corresponding to M and obtain an estimate of the value of ⟨ x | M | x ⟩ {\displaystyle \langle x|M|x\rangle } . == Run time analysis == === Classical efficiency === The best classical algorithm which produces the actual solution vector x → {\displaystyle {\vec {x}}} is Gaussian elimination, which runs in O ( N 3 ) {\displaystyle O(N^{3})} time. If A is s-sparse and positive semi-definite, then the Conjugate Gradient method can be used to find the solution vector x → {\displaystyle {\vec {x}}} , which can be found in O ( N s κ ) {\displaystyle O(Ns\kappa )} time by minimizing the quadratic function | A x → − b → | 2 {\displaystyle |A{\vec {x}}-{\vec {b}}|^{2}} . When only a summary statistic of the solution vector x → {\displaystyle {\vec {x}}} is needed, as is the case for the quantum algorithm for linear systems of equations, a classical computer can find an estimate of x → † M x → {\displaystyle {\vec {x}}^{\dagger }M{\vec {x}}} in O ( N κ ) {\displaystyle O(N{\sqrt {\kappa }})} . === Quantum efficiency === The runtime of the quantum algorithm for solving systems of linear equations originally proposed by Harrow et al. was shown to be O ( κ 2 log ⁡ N / ε ) {\displaystyle O(\kappa ^{2}\log N/\varepsilon )} , where ε > 0 {\displaystyle \varepsilon >0} is the error parameter and κ {\displaystyle \kappa } is the condition number of A {\displaystyle A} . This was subsequently improved to O ( κ log 3 ⁡ κ log ⁡ N / ε 3 ) {\displaystyle O(\kappa \log ^{3}\kappa \log N/\varepsilon ^{3})} by Andris Ambainis, and a quantum algorithm with runtime polynomial in log ⁡ ( 1 / ε ) {\displaystyle \log(1/\varepsilon )} was developed by Childs et al. Since the HHL algorithm maintains its logarithmic scaling in N {\displaystyle N} only for sparse or low rank matrices, Wossnig et al. extended the HHL algorithm based on a quantum singular value estimation technique and provided a linear system algorithm for dense matrices which runs in O ( N log ⁡ N κ 2 ) {\displaystyle O({\sqrt {N}}\log N\kappa ^{2})} time compared to the O ( N log ⁡ N κ 2 ) {\displaystyle O(N\log N\kappa ^{2})} of the standard HHL algorithm. === Optimality === An important factor in the performance of the matrix inversion algorithm is the condition number κ {\displaystyle \kappa } , which represents the ratio of A {\displaystyle A} 's largest and smallest eigenvalues.
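A brief numerical aside on this quantity, using a hypothetical 2×2 Hermitian example: for a Hermitian matrix the condition number is simply the ratio of the extreme eigenvalue magnitudes, which agrees with the 2-norm condition number computed by NumPy.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])                    # hypothetical Hermitian example
eigvals = np.linalg.eigvalsh(A)
kappa = np.abs(eigvals).max() / np.abs(eigvals).min()
print(kappa, np.linalg.cond(A, 2))            # both give the same condition number
```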
As the condition number increases, the ease with which the solution vector can be found using gradient descent methods such as the conjugate gradient method decreases, as A {\displaystyle A} becomes closer to a matrix which cannot be inverted and the solution vector becomes less stable. This algorithm assumes that all singular values of the matrix A {\displaystyle A} lie between 1 κ {\displaystyle {\frac {1}{\kappa }}} and 1, in which case the claimed run-time proportional to κ 2 {\displaystyle \kappa ^{2}} will be achieved. Therefore, the speedup over classical algorithms is increased further when κ {\displaystyle \kappa } is p o l y ( log ⁡ ( N ) ) {\displaystyle \mathrm {poly} (\log(N))} . If the run-time of the algorithm were made poly-logarithmic in κ {\displaystyle \kappa } then problems solvable on n qubits could be solved in poly(n) time, causing the complexity class BQP to be equal to PSPACE. == Error analysis == Performing the Hamiltonian simulation, which is the dominant source of error, is done by simulating e i A t {\displaystyle e^{iAt}} . Assuming that A {\displaystyle A} is s-sparse, this can be done with an error bounded by a constant ε {\displaystyle \varepsilon } , which will translate to the additive error achieved in the output state | x ⟩ {\displaystyle |x\rangle } . The phase estimation step errs by O ( 1 t 0 ) {\displaystyle O\left({\frac {1}{t_{0}}}\right)} in estimating λ {\displaystyle \lambda } , which translates into a relative error of O ( 1 λ t 0 ) {\displaystyle O\left({\frac {1}{\lambda t_{0}}}\right)} in λ − 1 {\displaystyle \lambda ^{-1}} . If λ ≥ 1 / κ {\displaystyle \lambda \geq 1/\kappa } , taking t 0 = O ( κ / ε ) {\displaystyle t_{0}=O(\kappa /\varepsilon )} induces a final error of ε {\displaystyle \varepsilon } . This requires that the overall run-time efficiency be increased proportional to O ( 1 ε ) {\displaystyle O\left({\frac {1}{\varepsilon }}\right)} to minimize error. == Experimental realization == While there does not yet exist a quantum computer that can truly offer a speedup over a classical computer, implementation of a "proof of concept" remains an important milestone in the development of a new quantum algorithm. Demonstrating the quantum algorithm for linear systems of equations remained a challenge for years after its proposal until 2013, when it was demonstrated by Cai et al., Barz et al. and Pan et al. in parallel. === Cai et al. === Published in Physical Review Letters 110, 230501 (2013), Cai et al. reported an experimental demonstration of the simplest meaningful instance of this algorithm, that is, solving 2 × 2 {\displaystyle 2\times 2} linear equations for various input vectors. The quantum circuit is optimized and compiled into a linear optical network with four photonic quantum bits (qubits) and four controlled logic gates, which is used to coherently implement every subroutine for this algorithm. For various input vectors, the quantum computer gives solutions for the linear equations with reasonably high precision, ranging from fidelities of 0.825 to 0.993. === Barz et al. === On February 5, 2013, Stefanie Barz and co-workers demonstrated the quantum algorithm for linear systems of equations on a photonic quantum computing architecture. This implementation used two consecutive entangling gates on the same pair of polarization-encoded qubits. Two separately controlled NOT gates were realized where the successful operation of the first was heralded by a measurement of two ancillary photons. Barz et al.
found that the fidelity in the obtained output state ranged from 64.7% to 98.1% due to the influence of higher-order emissions from spontaneous parametric down-conversion. === Pan et al. === On February 8, 2013, Pan et al. reported a proof-of-concept experimental demonstration of the quantum algorithm using a 4-qubit nuclear magnetic resonance quantum information processor. The implementation was tested using simple linear systems of only 2 variables. Across three experiments they obtain the solution vector with over 96% fidelity. === Wen et al. === Another experimental demonstration using NMR for solving an 8*8 system was reported by Wen et al. in 2018 using the algorithm developed by Subaşı et al. == Applications == Quantum computers are devices that harness quantum mechanics to perform computations in ways that classical computers cannot. For certain problems, quantum algorithms supply exponential speedups over their classical counterparts, the most famous example being Shor's factoring algorithm. Few such exponential speedups are known, and those that are (such as the use of quantum computers to simulate other quantum systems) have so far found limited practical use due to the current small size of quantum computers. This algorithm provides an exponentially faster method of estimating features of the solution of a set of linear equations, which is a problem ubiquitous in science and engineering, both on its own and as a subroutine in more complex problems. === Electromagnetic scattering === Clader et al. provided a preconditioned version of the linear systems algorithm that provided two advances. First, they demonstrated how a preconditioner could be included within the quantum algorithm. This expands the class of problems that can achieve the promised exponential speedup, since the scaling of HHL and the best classical algorithms are both polynomial in the condition number. The second advance was the demonstration of how to use HHL to solve for the radar cross-section of a complex shape. This was one of the first end to end examples of how to use HHL to solve a concrete problem exponentially faster than the best known classical algorithm. === Linear differential equation solving === Dominic Berry proposed a new algorithm for solving linear time dependent differential equations as an extension of the quantum algorithm for solving linear systems of equations. Berry provides an efficient algorithm for solving the full-time evolution under sparse linear differential equations on a quantum computer. === Nonlinear differential equation solving === Two groups proposed efficient algorithms for numerically integrating dissipative nonlinear ordinary differential equations. Liu et al. utilized Carleman linearization technique for second order equations and Lloyd et al. employed a mean field linearization method inspired by nonlinear Schrödinger equation for general order nonlinearities. The resulting linear equations are solved using quantum algorithms for linear differential equations. === Finite element method === The Finite Element Method uses large systems of linear equations to find approximate solutions to various physical and mathematical models. Montanaro and Pallister demonstrate that the HHL algorithm, when applied to certain FEM problems, can achieve a polynomial quantum speedup. They suggest that an exponential speedup is not possible in problems with fixed dimensions, and for which the solution meets certain smoothness conditions. 
Quantum speedups for the finite element method are higher for problems which include solutions with higher-order derivatives and large spatial dimensions. For example, problems in many-body dynamics require the solution of equations containing derivatives on orders scaling with the number of bodies, and some problems in computational finance, such as Black-Scholes models, require large spatial dimensions. === Least-squares fitting === Wiebe et al. provide a new quantum algorithm to determine the quality of a least-squares fit in which a continuous function is used to approximate a set of discrete points by extending the quantum algorithm for linear systems of equations. As the number of discrete points increases, the time required to produce a least-squares fit using even a quantum computer running a quantum state tomography algorithm becomes very large. Wiebe et al. find that in many cases, their algorithm can efficiently find a concise approximation of the data points, eliminating the need for the higher-complexity tomography algorithm. === Machine learning and big data analysis === Machine learning is the study of systems that can identify trends in data. Tasks in machine learning frequently involve manipulating and classifying a large volume of data in high-dimensional vector spaces. The runtime of classical machine learning algorithms is limited by a polynomial dependence on both the volume of data and the dimensions of the space. Quantum computers are capable of manipulating high-dimensional vectors using tensor product spaces and thus are well-suited platforms for machine learning algorithms. The quantum algorithm for linear systems of equations has been applied to a support vector machine, which is an optimized linear or non-linear binary classifier. A support vector machine can be used for supervised machine learning, in which training set of already classified data is available, or unsupervised machine learning, in which all data given to the system is unclassified. Rebentrost et al. show that a quantum support vector machine can be used for big data classification and achieve an exponential speedup over classical computers. In June 2018, Zhao et al. developed an algorithm for performing Bayesian training of deep neural networks in quantum computers with an exponential speedup over classical training due to the use of the quantum algorithm for linear systems of equations, providing also the first general-purpose implementation of the algorithm to be run in cloud-based quantum computers. === Finance === Proposals for using HHL in finance include solving partial differential equations for the Black–Scholes equation and determining portfolio optimization via a Markowitz solution. === Quantum chemistry === In 2023, Baskaran et al. proposed the use of HHL algorithm to quantum chemistry calculations, via the linearized coupled cluster method (LCC). The connection between the HHL algorithm and the LCC method is due to the fact that the latter can be recast in the form of system of linear equations. A key factor that makes this approach useful for quantum chemistry is that the number of state register qubits is the natural logarithm of the number of excitations, thus offering an exponential suppression in the number of required qubits when compared to variational quantum eigensolver or the quantum phase estimation algorithms. 
This leads to a 'coexistence across scales', where in a given quantum computing era, HHL-LCC could be applied to much larger systems whereas QPE-CASCI could be employed for smaller molecular systems but with better accuracy in predicting molecular properties. On the algorithmic side, the authors introduce the 'AdaptHHL' approach, which circumvents the need to expend a ~O(N³) classical overhead associated with fixing a value for the parameter 'c' in the controlled-rotation module of the algorithm. == Implementation difficulties == Recognizing the importance of the HHL algorithm in the field of quantum machine learning, Scott Aaronson analyzes the caveats and factors that could limit the actual quantum advantage of the algorithm. The vector | b ⟩ {\displaystyle |b\rangle } has to be efficiently prepared as a quantum state. If the vector is not close to uniform, the state preparation is likely to be costly, and if it takes O ( n c ) {\displaystyle O(n^{c})} steps the exponential advantage of HHL would vanish. The QPE step calls for the generation of the unitary e i A t {\displaystyle e^{iAt}} , and its controlled application. The efficiency of this step depends on the A {\displaystyle A} matrix being sparse and 'well conditioned' (low κ {\displaystyle \kappa } ). Otherwise, the application of e i A t {\displaystyle e^{iAt}} would grow as O ( n c ) {\displaystyle O(n^{c})} and once again, the algorithm's quantum advantage would vanish. Lastly, the vector | x ⟩ {\displaystyle |x\rangle } is not readily accessible. The HHL algorithm enables learning a 'summary' of the vector, namely the result of measuring the expectation of an operator ⟨ x | M | x ⟩ {\displaystyle \langle x|M|x\rangle } . If actual values of x → {\displaystyle {\vec {x}}} are needed, then HHL would need to be repeated O ( n ) {\displaystyle O(n)} times, killing the exponential speed-up. However, three ways of avoiding the need for the actual values have been proposed: first, if only some properties of the solution are needed; second, if the results are needed only to feed downstream matrix operations; third, if only a sample of the solution is needed. == See also == Differentiable programming == References ==
Wikipedia/HHL_algorithm
In theoretical physics, quantum field theory (QFT) is a theoretical framework that combines field theory and the principle of relativity with ideas behind quantum mechanics.: xi  QFT is used in particle physics to construct physical models of subatomic particles and in condensed matter physics to construct models of quasiparticles. The current standard model of particle physics is based on QFT. == History == Quantum field theory emerged from the work of generations of theoretical physicists spanning much of the 20th century. Its development began in the 1920s with the description of interactions between light and electrons, culminating in the first quantum field theory—quantum electrodynamics. A major theoretical obstacle soon followed with the appearance and persistence of various infinities in perturbative calculations, a problem only resolved in the 1950s with the invention of the renormalization procedure. A second major barrier came with QFT's apparent inability to describe the weak and strong interactions, to the point where some theorists called for the abandonment of the field theoretic approach. The development of gauge theory and the completion of the Standard Model in the 1970s led to a renaissance of quantum field theory. === Theoretical background === Quantum field theory results from the combination of classical field theory, quantum mechanics, and special relativity.: xi  A brief overview of these theoretical precursors follows. The earliest successful classical field theory is one that emerged from Newton's law of universal gravitation, despite the complete absence of the concept of fields from his 1687 treatise Philosophiæ Naturalis Principia Mathematica. The force of gravity as described by Isaac Newton is an "action at a distance"—its effects on faraway objects are instantaneous, no matter the distance. In an exchange of letters with Richard Bentley, however, Newton stated that "it is inconceivable that inanimate brute matter should, without the mediation of something else which is not material, operate upon and affect other matter without mutual contact".: 4  It was not until the 18th century that mathematical physicists discovered a convenient description of gravity based on fields—a numerical quantity (a vector in the case of gravitational field) assigned to every point in space indicating the action of gravity on any particle at that point. However, this was considered merely a mathematical trick.: 18  Fields began to take on an existence of their own with the development of electromagnetism in the 19th century. Michael Faraday coined the English term "field" in 1845. He introduced fields as properties of space (even when it is devoid of matter) having physical effects. He argued against "action at a distance", and proposed that interactions between objects occur via space-filling "lines of force". This description of fields remains to this day.: 301 : 2  The theory of classical electromagnetism was completed in 1864 with Maxwell's equations, which described the relationship between the electric field, the magnetic field, electric current, and electric charge. Maxwell's equations implied the existence of electromagnetic waves, a phenomenon whereby electric and magnetic fields propagate from one spatial point to another at a finite speed, which turns out to be the speed of light. 
Action-at-a-distance was thus conclusively refuted.: 19  Despite the enormous success of classical electromagnetism, it was unable to account for the discrete lines in atomic spectra, nor for the distribution of blackbody radiation in different wavelengths. Max Planck's study of blackbody radiation marked the beginning of quantum mechanics. He treated atoms, which absorb and emit electromagnetic radiation, as tiny oscillators with the crucial property that their energies can only take on a series of discrete, rather than continuous, values. These are known as quantum harmonic oscillators. This process of restricting energies to discrete values is called quantization.: Ch.2  Building on this idea, Albert Einstein proposed in 1905 an explanation for the photoelectric effect, that light is composed of individual packets of energy called photons (the quanta of light). This implied that the electromagnetic radiation, while being waves in the classical electromagnetic field, also exists in the form of particles. In 1913, Niels Bohr introduced the Bohr model of atomic structure, wherein electrons within atoms can only take on a series of discrete, rather than continuous, energies. This is another example of quantization. The Bohr model successfully explained the discrete nature of atomic spectral lines. In 1924, Louis de Broglie proposed the hypothesis of wave–particle duality, that microscopic particles exhibit both wave-like and particle-like properties under different circumstances. Uniting these scattered ideas, a coherent discipline, quantum mechanics, was formulated between 1925 and 1926, with important contributions from Max Planck, Louis de Broglie, Werner Heisenberg, Max Born, Erwin Schrödinger, Paul Dirac, and Wolfgang Pauli.: 22–23  In the same year as his paper on the photoelectric effect, Einstein published his theory of special relativity, built on Maxwell's electromagnetism. New rules, called Lorentz transformations, were given for the way time and space coordinates of an event change under changes in the observer's velocity, and the distinction between time and space was blurred.: 19  It was proposed that all physical laws must be the same for observers at different velocities, i.e. that physical laws be invariant under Lorentz transformations. Two difficulties remained. Observationally, the Schrödinger equation underlying quantum mechanics could explain the stimulated emission of radiation from atoms, where an electron emits a new photon under the action of an external electromagnetic field, but it was unable to explain spontaneous emission, where an electron spontaneously decreases in energy and emits a photon even without the action of an external electromagnetic field. Theoretically, the Schrödinger equation could not describe photons and was inconsistent with the principles of special relativity—it treats time as an ordinary number while promoting spatial coordinates to linear operators. 
=== Quantum electrodynamics === Quantum field theory naturally began with the study of electromagnetic interactions, as the electromagnetic field was the only known classical field as of the 1920s.: 1  Through the works of Born, Heisenberg, and Pascual Jordan in 1925–1926, a quantum theory of the free electromagnetic field (one with no interactions with matter) was developed via canonical quantization by treating the electromagnetic field as a set of quantum harmonic oscillators.: 1  With the exclusion of interactions, however, such a theory was yet incapable of making quantitative predictions about the real world.: 22  In his seminal 1927 paper The quantum theory of the emission and absorption of radiation, Dirac coined the term quantum electrodynamics (QED), a theory that adds upon the terms describing the free electromagnetic field an additional interaction term between electric current density and the electromagnetic vector potential. Using first-order perturbation theory, he successfully explained the phenomenon of spontaneous emission. According to the uncertainty principle in quantum mechanics, quantum harmonic oscillators cannot remain stationary, but they have a non-zero minimum energy and must always be oscillating, even in the lowest energy state (the ground state). Therefore, even in a perfect vacuum, there remains an oscillating electromagnetic field having zero-point energy. It is this quantum fluctuation of electromagnetic fields in the vacuum that "stimulates" the spontaneous emission of radiation by electrons in atoms. Dirac's theory was hugely successful in explaining both the emission and absorption of radiation by atoms; by applying second-order perturbation theory, it was able to account for the scattering of photons, resonance fluorescence and non-relativistic Compton scattering. Nonetheless, the application of higher-order perturbation theory was plagued with problematic infinities in calculations.: 71  In 1928, Dirac wrote down a wave equation that described relativistic electrons: the Dirac equation. It had the following important consequences: the spin of an electron is 1/2; the electron g-factor is 2; it led to the correct Sommerfeld formula for the fine structure of the hydrogen atom; and it could be used to derive the Klein–Nishina formula for relativistic Compton scattering. Although the results were fruitful, the theory also apparently implied the existence of negative energy states, which would cause atoms to be unstable, since they could always decay to lower energy states by the emission of radiation.: 71–72  The prevailing view at the time was that the world was composed of two very different ingredients: material particles (such as electrons) and quantum fields (such as photons). Material particles were considered to be eternal, with their physical state described by the probabilities of finding each particle in any given region of space or range of velocities. On the other hand, photons were considered merely the excited states of the underlying quantized electromagnetic field, and could be freely created or destroyed. It was between 1928 and 1930 that Jordan, Eugene Wigner, Heisenberg, Pauli, and Enrico Fermi discovered that material particles could also be seen as excited states of quantum fields. Just as photons are excited states of the quantized electromagnetic field, so each type of particle had its corresponding quantum field: an electron field, a proton field, etc. Given enough energy, it would now be possible to create material particles. 
Building on this idea, Fermi proposed in 1932 an explanation for beta decay known as Fermi's interaction. Atomic nuclei do not contain electrons per se, but in the process of decay, an electron is created out of the surrounding electron field, analogous to the photon created from the surrounding electromagnetic field in the radiative decay of an excited atom.: 22–23  It was realized in 1929 by Dirac and others that negative energy states implied by the Dirac equation could be removed by assuming the existence of particles with the same mass as electrons but opposite electric charge. This not only ensured the stability of atoms, but it was also the first proposal of the existence of antimatter. Indeed, the evidence for positrons was discovered in 1932 by Carl David Anderson in cosmic rays. With enough energy, such as by absorbing a photon, an electron-positron pair could be created, a process called pair production; the reverse process, annihilation, could also occur with the emission of a photon. This showed that particle numbers need not be fixed during an interaction. Historically, however, positrons were at first thought of as "holes" in an infinite electron sea, rather than a new kind of particle, and this theory was referred to as the Dirac hole theory.: 72 : 23  QFT naturally incorporated antiparticles in its formalism.: 24  === Infinities and renormalization === Robert Oppenheimer showed in 1930 that higher-order perturbative calculations in QED always resulted in infinite quantities, such as the electron self-energy and the vacuum zero-point energy of the electron and photon fields, suggesting that the computational methods at the time could not properly deal with interactions involving photons with extremely high momenta.: 25  It was not until 20 years later that a systematic approach to remove such infinities was developed. A series of papers was published between 1934 and 1938 by Ernst Stueckelberg that established a relativistically invariant formulation of QFT. In 1947, Stueckelberg also independently developed a complete renormalization procedure. Such achievements were not understood and recognized by the theoretical community. Faced with these infinities, John Archibald Wheeler and Heisenberg proposed, in 1937 and 1943 respectively, to supplant the problematic QFT with the so-called S-matrix theory. Since the specific details of microscopic interactions are inaccessible to observations, the theory should only attempt to describe the relationships between a small number of observables (e.g. the energy of an atom) in an interaction, rather than be concerned with the microscopic minutiae of the interaction. In 1945, Richard Feynman and Wheeler daringly suggested abandoning QFT altogether and proposed action-at-a-distance as the mechanism of particle interactions.: 26  In 1947, Willis Lamb and Robert Retherford measured the minute difference in the 2S1/2 and 2P1/2 energy levels of the hydrogen atom, also called the Lamb shift. By ignoring the contribution of photons whose energy exceeds the electron mass, Hans Bethe successfully estimated the numerical value of the Lamb shift.: 28  Subsequently, Norman Myles Kroll, Lamb, James Bruce French, and Victor Weisskopf again confirmed this value using an approach in which infinities cancelled other infinities to result in finite quantities. However, this method was clumsy and unreliable and could not be generalized to other calculations. 
The breakthrough eventually came around 1950 when a more robust method for eliminating infinities was developed by Julian Schwinger, Richard Feynman, Freeman Dyson, and Shinichiro Tomonaga. The main idea is to replace the calculated values of mass and charge, infinite though they may be, by their finite measured values. This systematic computational procedure is known as renormalization and can be applied to arbitrary order in perturbation theory. As Tomonaga said in his Nobel lecture:Since those parts of the modified mass and charge due to field reactions [become infinite], it is impossible to calculate them by the theory. However, the mass and charge observed in experiments are not the original mass and charge but the mass and charge as modified by field reactions, and they are finite. On the other hand, the mass and charge appearing in the theory are… the values modified by field reactions. Since this is so, and particularly since the theory is unable to calculate the modified mass and charge, we may adopt the procedure of substituting experimental values for them phenomenologically... This procedure is called the renormalization of mass and charge… After long, laborious calculations, less skillful than Schwinger's, we obtained a result... which was in agreement with [the] Americans'. By applying the renormalization procedure, calculations were finally made to explain the electron's anomalous magnetic moment (the deviation of the electron g-factor from 2) and vacuum polarization. These results agreed with experimental measurements to a remarkable degree, thus marking the end of a "war against infinities". At the same time, Feynman introduced the path integral formulation of quantum mechanics and Feynman diagrams.: 2  The latter can be used to visually and intuitively organize and to help compute terms in the perturbative expansion. Each diagram can be interpreted as paths of particles in an interaction, with each vertex and line having a corresponding mathematical expression, and the product of these expressions gives the scattering amplitude of the interaction represented by the diagram.: 5  It was with the invention of the renormalization procedure and Feynman diagrams that QFT finally arose as a complete theoretical framework.: 2  === Non-renormalizability === Given the tremendous success of QED, many theorists believed, in the few years after 1949, that QFT could soon provide an understanding of all microscopic phenomena, not only the interactions between photons, electrons, and positrons. Contrary to this optimism, QFT entered yet another period of depression that lasted for almost two decades.: 30  The first obstacle was the limited applicability of the renormalization procedure. In perturbative calculations in QED, all infinite quantities could be eliminated by redefining a small (finite) number of physical quantities (namely the mass and charge of the electron). Dyson proved in 1949 that this is only possible for a small class of theories called "renormalizable theories", of which QED is an example. However, most theories, including the Fermi theory of the weak interaction, are "non-renormalizable". Any perturbative calculation in these theories beyond the first order would result in infinities that could not be removed by redefining a finite number of physical quantities.: 30  The second major problem stemmed from the limited validity of the Feynman diagram method, which is based on a series expansion in perturbation theory. 
In order for the series to converge and low-order calculations to be a good approximation, the coupling constant, in which the series is expanded, must be a sufficiently small number. The coupling constant in QED is the fine-structure constant α ≈ 1/137, which is small enough that only the simplest, lowest order, Feynman diagrams need to be considered in realistic calculations. In contrast, the coupling constant in the strong interaction is roughly of the order of one, making complicated, higher order, Feynman diagrams just as important as simple ones. There was thus no way of deriving reliable quantitative predictions for the strong interaction using perturbative QFT methods.: 31  With these difficulties looming, many theorists began to turn away from QFT. Some focused on symmetry principles and conservation laws, while others picked up the old S-matrix theory of Wheeler and Heisenberg. QFT was used heuristically as a guiding principle, but not as a basis for quantitative calculations.: 31  === Source theory === Schwinger, however, took a different route. For more than a decade he and his students had been nearly the only exponents of field theory,: 454  but in 1951 he found a way around the problem of the infinities with a new method using external sources as currents coupled to gauge fields. Motivated by these earlier findings, Schwinger kept pursuing this approach in order to "quantumly" generalize the classical process of coupling external forces to the configuration space parameters known as Lagrange multipliers. He summarized his source theory in 1966, then expanded its applications to quantum electrodynamics in his three-volume set titled Particles, Sources, and Fields. Developments in pion physics, in which the new viewpoint was most successfully applied, convinced him of the great advantages of mathematical simplicity and conceptual clarity that its use bestowed. In source theory there are no divergences, and no renormalization. It may be regarded as the calculational tool of field theory, but it is more general. Using source theory, Schwinger was able to calculate the anomalous magnetic moment of the electron, which he had done in 1947, but this time with no ‘distracting remarks’ about infinite quantities.: 467  Schwinger also applied source theory to his QFT theory of gravity, and was able to reproduce all four of Einstein's classic results: gravitational red shift, deflection and slowing of light by gravity, and the perihelion precession of Mercury. The neglect of source theory by the physics community was a major disappointment for Schwinger: "The lack of appreciation of these facts by others was depressing, but understandable." – J. Schwinger. See "the shoes incident" between J. Schwinger and S. Weinberg. === Standard model === In 1954, Yang Chen-Ning and Robert Mills generalized the local symmetry of QED, leading to non-Abelian gauge theories (also known as Yang–Mills theories), which are based on more complicated local symmetry groups.: 5  In QED, (electrically) charged particles interact via the exchange of photons, while in non-Abelian gauge theory, particles carrying a new type of "charge" interact via the exchange of massless gauge bosons. Unlike photons, these gauge bosons themselves carry charge.: 32  Sheldon Glashow developed a non-Abelian gauge theory that unified the electromagnetic and weak interactions in 1960. In 1964, Abdus Salam and John Clive Ward arrived at the same theory through a different path. This theory, nevertheless, was non-renormalizable.
Peter Higgs, Robert Brout, François Englert, Gerald Guralnik, Carl Hagen, and Tom Kibble proposed in their famous Physical Review Letters papers that the gauge symmetry in Yang–Mills theories could be broken by a mechanism called spontaneous symmetry breaking, through which originally massless gauge bosons could acquire mass.: 5–6  By combining the earlier theory of Glashow, Salam, and Ward with the idea of spontaneous symmetry breaking, Steven Weinberg wrote down in 1967 a theory describing electroweak interactions between all leptons and the effects of the Higgs boson. His theory was at first mostly ignored,: 6  until it was brought back to light in 1971 by Gerard 't Hooft's proof that non-Abelian gauge theories are renormalizable. The electroweak theory of Weinberg and Salam was extended from leptons to quarks in 1970 by Glashow, John Iliopoulos, and Luciano Maiani, marking its completion. Harald Fritzsch, Murray Gell-Mann, and Heinrich Leutwyler discovered in 1971 that certain phenomena involving the strong interaction could also be explained by non-Abelian gauge theory. Quantum chromodynamics (QCD) was born. In 1973, David Gross, Frank Wilczek, and Hugh David Politzer showed that non-Abelian gauge theories are "asymptotically free", meaning that under renormalization, the coupling constant of the strong interaction decreases as the interaction energy increases. (Similar discoveries had been made numerous times previously, but they had been largely ignored.) : 11  Therefore, at least in high-energy interactions, the coupling constant in QCD becomes sufficiently small to warrant a perturbative series expansion, making quantitative predictions for the strong interaction possible.: 32  These theoretical breakthroughs brought about a renaissance in QFT. The full theory, which includes the electroweak theory and chromodynamics, is referred to today as the Standard Model of elementary particles. The Standard Model successfully describes all fundamental interactions except gravity, and its many predictions have been met with remarkable experimental confirmation in subsequent decades.: 3  The Higgs boson, central to the mechanism of spontaneous symmetry breaking, was finally detected in 2012 at CERN, marking the complete verification of the existence of all constituents of the Standard Model. === Other developments === The 1970s saw the development of non-perturbative methods in non-Abelian gauge theories. The 't Hooft–Polyakov monopole was discovered theoretically by 't Hooft and Alexander Polyakov, flux tubes by Holger Bech Nielsen and Poul Olesen, and instantons by Polyakov and coauthors. These objects are inaccessible through perturbation theory.: 4  Supersymmetry also appeared in the same period. The first supersymmetric QFT in four dimensions was built by Yuri Golfand and Evgeny Likhtman in 1970, but their result failed to garner widespread interest due to the Iron Curtain. Supersymmetry theories only took off in the theoretical community after the work of Julius Wess and Bruno Zumino in 1973,: 7  but to date have not been widely accepted as part of the Standard Model due to lack of experimental evidence. Among the four fundamental interactions, gravity remains the only one that lacks a consistent QFT description. Various attempts at a theory of quantum gravity led to the development of string theory,: 6  itself a type of two-dimensional QFT with conformal symmetry. Joël Scherk and John Schwarz first proposed in 1974 that string theory could be the quantum theory of gravity. 
=== Condensed matter physics === Although quantum field theory arose from the study of interactions between elementary particles, it has been successfully applied to other physical systems, particularly to many-body systems in condensed matter physics. Historically, the Higgs mechanism of spontaneous symmetry breaking was a result of Yoichiro Nambu's application of superconductor theory to elementary particles, while the concept of renormalization came out of the study of second-order phase transitions in matter. Soon after the introduction of photons, Einstein performed the quantization procedure on vibrations in a crystal, leading to the first quasiparticles: phonons. Lev Landau claimed that low-energy excitations in many condensed matter systems could be described in terms of interactions between a set of quasiparticles. The Feynman diagram method of QFT was naturally well suited to the analysis of various phenomena in condensed matter systems. Gauge theory is used to describe the quantization of magnetic flux in superconductors, the resistivity in the quantum Hall effect, as well as the relation between frequency and voltage in the AC Josephson effect. == Principles == For simplicity, natural units are used in the following sections, in which the reduced Planck constant ħ and the speed of light c are both set to one. === Classical fields === A classical field is a function of spatial and time coordinates. Examples include the gravitational field in Newtonian gravity g(x, t) and the electric field E(x, t) and magnetic field B(x, t) in classical electromagnetism. A classical field can be thought of as a numerical quantity assigned to every point in space that changes in time. Hence, it has infinitely many degrees of freedom. Many phenomena exhibiting quantum mechanical properties cannot be explained by classical fields alone. Phenomena such as the photoelectric effect are best explained by discrete particles (photons), rather than a spatially continuous field. The goal of quantum field theory is to describe various quantum mechanical phenomena using a modified concept of fields. Canonical quantization and path integrals are two common formulations of QFT.: 61  To motivate the fundamentals of QFT, an overview of classical field theory follows. The simplest classical field is a real scalar field — a real number at every point in space that changes in time. It is denoted as ϕ(x, t), where x is the position vector, and t is the time. Suppose the Lagrangian of the field, L {\displaystyle L} , is L = ∫ d 3 x L = ∫ d 3 x [ 1 2 ϕ ˙ 2 − 1 2 ( ∇ ϕ ) 2 − 1 2 m 2 ϕ 2 ] , {\displaystyle L=\int d^{3}x\,{\mathcal {L}}=\int d^{3}x\,\left[{\frac {1}{2}}{\dot {\phi }}^{2}-{\frac {1}{2}}(\nabla \phi )^{2}-{\frac {1}{2}}m^{2}\phi ^{2}\right],} where L {\displaystyle {\mathcal {L}}} is the Lagrangian density, ϕ ˙ {\displaystyle {\dot {\phi }}} is the time-derivative of the field, ∇ is the gradient operator, and m is a real parameter (the "mass" of the field).
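The equations of motion follow from this Lagrangian by a purely mechanical variation, which can be cross-checked with a computer algebra system. The following is a minimal sketch, assuming the SymPy library purely for illustration: it applies the Euler–Lagrange equation quoted in the next paragraph directly to the Lagrangian density above and returns the Klein–Gordon equation derived there.

```python
import sympy as sp

t, x, y, z, m = sp.symbols('t x y z m', real=True)
phi = sp.Function('phi')(t, x, y, z)

# Lagrangian density of the free real scalar field (natural units, hbar = c = 1)
L = (sp.Rational(1, 2) * sp.diff(phi, t)**2
     - sp.Rational(1, 2) * (sp.diff(phi, x)**2 + sp.diff(phi, y)**2 + sp.diff(phi, z)**2)
     - sp.Rational(1, 2) * m**2 * phi**2)

# Euler-Lagrange equation:
#   d/dt [dL/d(phi_t)] + sum_i d/dx^i [dL/d(phi_{x^i})] - dL/dphi = 0
eom = (sp.diff(sp.diff(L, sp.diff(phi, t)), t)
       + sum(sp.diff(sp.diff(L, sp.diff(phi, xi)), xi) for xi in (x, y, z))
       - sp.diff(L, phi))

# The expanded result is phi_tt - phi_xx - phi_yy - phi_zz + m**2 * phi,
# i.e. the Klein-Gordon equation (d^2/dt^2 - laplacian + m^2) phi = 0.
print(sp.expand(eom))
```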
Applying the Euler–Lagrange equation on the Lagrangian:: 16  ∂ ∂ t [ ∂ L ∂ ( ∂ ϕ / ∂ t ) ] + ∑ i = 1 3 ∂ ∂ x i [ ∂ L ∂ ( ∂ ϕ / ∂ x i ) ] − ∂ L ∂ ϕ = 0 , {\displaystyle {\frac {\partial }{\partial t}}\left[{\frac {\partial {\mathcal {L}}}{\partial (\partial \phi /\partial t)}}\right]+\sum _{i=1}^{3}{\frac {\partial }{\partial x^{i}}}\left[{\frac {\partial {\mathcal {L}}}{\partial (\partial \phi /\partial x^{i})}}\right]-{\frac {\partial {\mathcal {L}}}{\partial \phi }}=0,} we obtain the equations of motion for the field, which describe the way it varies in time and space: ( ∂ 2 ∂ t 2 − ∇ 2 + m 2 ) ϕ = 0. {\displaystyle \left({\frac {\partial ^{2}}{\partial t^{2}}}-\nabla ^{2}+m^{2}\right)\phi =0.} This is known as the Klein–Gordon equation.: 17  The Klein–Gordon equation is a wave equation, so its solutions can be expressed as a sum of normal modes (obtained via Fourier transform) as follows: ϕ ( x , t ) = ∫ d 3 p ( 2 π ) 3 1 2 ω p ( a p e − i ω p t + i p ⋅ x + a p ∗ e i ω p t − i p ⋅ x ) , {\displaystyle \phi (\mathbf {x} ,t)=\int {\frac {d^{3}p}{(2\pi )^{3}}}{\frac {1}{\sqrt {2\omega _{\mathbf {p} }}}}\left(a_{\mathbf {p} }e^{-i\omega _{\mathbf {p} }t+i\mathbf {p} \cdot \mathbf {x} }+a_{\mathbf {p} }^{*}e^{i\omega _{\mathbf {p} }t-i\mathbf {p} \cdot \mathbf {x} }\right),} where a is a complex number (normalized by convention), * denotes complex conjugation, and ωp is the frequency of the normal mode: ω p = | p | 2 + m 2 . {\displaystyle \omega _{\mathbf {p} }={\sqrt {|\mathbf {p} |^{2}+m^{2}}}.} Thus each normal mode corresponding to a single p can be seen as a classical harmonic oscillator with frequency ωp.: 21,26  === Canonical quantization === The quantization procedure for the above classical field to a quantum operator field is analogous to the promotion of a classical harmonic oscillator to a quantum harmonic oscillator. The displacement of a classical harmonic oscillator is described by x ( t ) = 1 2 ω a e − i ω t + 1 2 ω a ∗ e i ω t , {\displaystyle x(t)={\frac {1}{\sqrt {2\omega }}}ae^{-i\omega t}+{\frac {1}{\sqrt {2\omega }}}a^{*}e^{i\omega t},} where a is a complex number (normalized by convention), and ω is the oscillator's frequency. Note that x is the displacement of a particle in simple harmonic motion from the equilibrium position, not to be confused with the spatial label x of a quantum field. For a quantum harmonic oscillator, x(t) is promoted to a linear operator x ^ ( t ) {\displaystyle {\hat {x}}(t)} : x ^ ( t ) = 1 2 ω a ^ e − i ω t + 1 2 ω a ^ † e i ω t . {\displaystyle {\hat {x}}(t)={\frac {1}{\sqrt {2\omega }}}{\hat {a}}e^{-i\omega t}+{\frac {1}{\sqrt {2\omega }}}{\hat {a}}^{\dagger }e^{i\omega t}.} Complex numbers a and a* are replaced by the annihilation operator a ^ {\displaystyle {\hat {a}}} and the creation operator a ^ † {\displaystyle {\hat {a}}^{\dagger }} , respectively, where † denotes Hermitian conjugation. The commutation relation between the two is [ a ^ , a ^ † ] = 1. {\displaystyle \left[{\hat {a}},{\hat {a}}^{\dagger }\right]=1.} The Hamiltonian of the simple harmonic oscillator can be written as H ^ = ℏ ω a ^ † a ^ + 1 2 ℏ ω . {\displaystyle {\hat {H}}=\hbar \omega {\hat {a}}^{\dagger }{\hat {a}}+{\frac {1}{2}}\hbar \omega .} The vacuum state | 0 ⟩ {\displaystyle |0\rangle } , which is the lowest energy state, is defined by a ^ | 0 ⟩ = 0 {\displaystyle {\hat {a}}|0\rangle =0} and has energy 1 2 ℏ ω . 
{\displaystyle {\frac {1}{2}}\hbar \omega .} One can easily check that [ H ^ , a ^ † ] = ℏ ω a ^ † , {\displaystyle [{\hat {H}},{\hat {a}}^{\dagger }]=\hbar \omega {\hat {a}}^{\dagger },} which implies that a ^ † {\displaystyle {\hat {a}}^{\dagger }} increases the energy of the simple harmonic oscillator by ℏ ω {\displaystyle \hbar \omega } . For example, the state a ^ † | 0 ⟩ {\displaystyle {\hat {a}}^{\dagger }|0\rangle } is an eigenstate of energy 3 ℏ ω / 2 {\displaystyle 3\hbar \omega /2} . Any energy eigenstate state of a single harmonic oscillator can be obtained from | 0 ⟩ {\displaystyle |0\rangle } by successively applying the creation operator a ^ † {\displaystyle {\hat {a}}^{\dagger }} :: 20  and any state of the system can be expressed as a linear combination of the states | n ⟩ ∝ ( a ^ † ) n | 0 ⟩ . {\displaystyle |n\rangle \propto \left({\hat {a}}^{\dagger }\right)^{n}|0\rangle .} A similar procedure can be applied to the real scalar field ϕ, by promoting it to a quantum field operator ϕ ^ {\displaystyle {\hat {\phi }}} , while the annihilation operator a ^ p {\displaystyle {\hat {a}}_{\mathbf {p} }} , the creation operator a ^ p † {\displaystyle {\hat {a}}_{\mathbf {p} }^{\dagger }} and the angular frequency ω p {\displaystyle \omega _{\mathbf {p} }} are now for a particular p: ϕ ^ ( x , t ) = ∫ d 3 p ( 2 π ) 3 1 2 ω p ( a ^ p e − i ω p t + i p ⋅ x + a ^ p † e i ω p t − i p ⋅ x ) . {\displaystyle {\hat {\phi }}(\mathbf {x} ,t)=\int {\frac {d^{3}p}{(2\pi )^{3}}}{\frac {1}{\sqrt {2\omega _{\mathbf {p} }}}}\left({\hat {a}}_{\mathbf {p} }e^{-i\omega _{\mathbf {p} }t+i\mathbf {p} \cdot \mathbf {x} }+{\hat {a}}_{\mathbf {p} }^{\dagger }e^{i\omega _{\mathbf {p} }t-i\mathbf {p} \cdot \mathbf {x} }\right).} Their commutation relations are:: 21  [ a ^ p , a ^ q † ] = ( 2 π ) 3 δ ( p − q ) , [ a ^ p , a ^ q ] = [ a ^ p † , a ^ q † ] = 0 , {\displaystyle \left[{\hat {a}}_{\mathbf {p} },{\hat {a}}_{\mathbf {q} }^{\dagger }\right]=(2\pi )^{3}\delta (\mathbf {p} -\mathbf {q} ),\quad \left[{\hat {a}}_{\mathbf {p} },{\hat {a}}_{\mathbf {q} }\right]=\left[{\hat {a}}_{\mathbf {p} }^{\dagger },{\hat {a}}_{\mathbf {q} }^{\dagger }\right]=0,} where δ is the Dirac delta function. The vacuum state | 0 ⟩ {\displaystyle |0\rangle } is defined by a ^ p | 0 ⟩ = 0 , for all p . {\displaystyle {\hat {a}}_{\mathbf {p} }|0\rangle =0,\quad {\text{for all }}\mathbf {p} .} Any quantum state of the field can be obtained from | 0 ⟩ {\displaystyle |0\rangle } by successively applying creation operators a ^ p † {\displaystyle {\hat {a}}_{\mathbf {p} }^{\dagger }} (or by a linear combination of such states), e.g. : 22  ( a ^ p 3 † ) 3 a ^ p 2 † ( a ^ p 1 † ) 2 | 0 ⟩ . {\displaystyle \left({\hat {a}}_{\mathbf {p} _{3}}^{\dagger }\right)^{3}{\hat {a}}_{\mathbf {p} _{2}}^{\dagger }\left({\hat {a}}_{\mathbf {p} _{1}}^{\dagger }\right)^{2}|0\rangle .} While the state space of a single quantum harmonic oscillator contains all the discrete energy states of one oscillating particle, the state space of a quantum field contains the discrete energy levels of an arbitrary number of particles. The latter space is known as a Fock space, which can account for the fact that particle numbers are not fixed in relativistic quantum systems. 
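The ladder-operator algebra above lends itself to a quick numerical check. Below is a minimal sketch, assuming NumPy and truncating the state space to its lowest N levels for illustration: it builds matrix representations of a and a†, verifies the commutation relation on the retained states, and reproduces the spectrum ω(n + 1/2). Each momentum mode p of the quantized field behaves as one such oscillator.

```python
import numpy as np

# Truncated matrix representation of a single oscillator (or field mode) in the
# number basis |0>, |1>, ..., |N-1>, with a|n> = sqrt(n)|n-1>, a†|n> = sqrt(n+1)|n+1>.
N = 12
a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # annihilation operator
adag = a.T.conj()                              # creation operator

omega = 1.0                                    # mode frequency (hbar = 1)
H = omega * (adag @ a + 0.5 * np.eye(N))       # H = omega (a†a + 1/2)

# [a, a†] = 1 holds on all but the highest retained level, where the
# truncation of the infinite-dimensional state space shows up.
comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))              # True

# Energies are omega*(n + 1/2); e.g. a†|0> is an eigenstate with energy 3*omega/2.
print(np.allclose(np.diag(H), omega * (np.arange(N) + 0.5)))   # True
ket0 = np.zeros(N); ket0[0] = 1.0
print(H @ (adag @ ket0))                                       # 1.5 times the state |1>
```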
The process of quantizing an arbitrary number of particles instead of a single particle is often also called second quantization.: 19  The foregoing procedure is a direct application of non-relativistic quantum mechanics and can be used to quantize (complex) scalar fields, Dirac fields,: 52  vector fields (e.g. the electromagnetic field), and even strings. However, creation and annihilation operators are only well defined in the simplest theories that contain no interactions (so-called free theory). In the case of the real scalar field, the existence of these operators was a consequence of the decomposition of solutions of the classical equations of motion into a sum of normal modes. To perform calculations on any realistic interacting theory, perturbation theory would be necessary. The Lagrangian of any quantum field in nature would contain interaction terms in addition to the free theory terms. For example, a quartic interaction term could be introduced to the Lagrangian of the real scalar field:: 77  L = 1 2 ( ∂ μ ϕ ) ( ∂ μ ϕ ) − 1 2 m 2 ϕ 2 − λ 4 ! ϕ 4 , {\displaystyle {\mathcal {L}}={\frac {1}{2}}(\partial _{\mu }\phi )\left(\partial ^{\mu }\phi \right)-{\frac {1}{2}}m^{2}\phi ^{2}-{\frac {\lambda }{4!}}\phi ^{4},} where μ is a spacetime index, ∂ 0 = ∂ / ∂ t , ∂ 1 = ∂ / ∂ x 1 {\displaystyle \partial _{0}=\partial /\partial t,\ \partial _{1}=\partial /\partial x^{1}} , etc. The summation over the index μ has been omitted following the Einstein notation. If the parameter λ is sufficiently small, then the interacting theory described by the above Lagrangian can be considered as a small perturbation from the free theory. === Path integrals === The path integral formulation of QFT is concerned with the direct computation of the scattering amplitude of a certain interaction process, rather than the establishment of operators and state spaces. To calculate the probability amplitude for a system to evolve from some initial state | ϕ I ⟩ {\displaystyle |\phi _{I}\rangle } at time t = 0 to some final state | ϕ F ⟩ {\displaystyle |\phi _{F}\rangle } at t = T, the total time T is divided into N small intervals. The overall amplitude is the product of the amplitude of evolution within each interval, integrated over all intermediate states. Let H be the Hamiltonian (i.e. generator of time evolution), then: 10  ⟨ ϕ F | e − i H T | ϕ I ⟩ = ∫ d ϕ 1 ∫ d ϕ 2 ⋯ ∫ d ϕ N − 1 ⟨ ϕ F | e − i H T / N | ϕ N − 1 ⟩ ⋯ ⟨ ϕ 2 | e − i H T / N | ϕ 1 ⟩ ⟨ ϕ 1 | e − i H T / N | ϕ I ⟩ . {\displaystyle \langle \phi _{F}|e^{-iHT}|\phi _{I}\rangle =\int d\phi _{1}\int d\phi _{2}\cdots \int d\phi _{N-1}\,\langle \phi _{F}|e^{-iHT/N}|\phi _{N-1}\rangle \cdots \langle \phi _{2}|e^{-iHT/N}|\phi _{1}\rangle \langle \phi _{1}|e^{-iHT/N}|\phi _{I}\rangle .} Taking the limit N → ∞, the above product of integrals becomes the Feynman path integral:: 282 : 12  ⟨ ϕ F | e − i H T | ϕ I ⟩ = ∫ D ϕ ( t ) exp ⁡ { i ∫ 0 T d t L } , {\displaystyle \langle \phi _{F}|e^{-iHT}|\phi _{I}\rangle =\int {\mathcal {D}}\phi (t)\,\exp \left\{i\int _{0}^{T}dt\,L\right\},} where L is the Lagrangian involving ϕ and its derivatives with respect to spatial and time coordinates, obtained from the Hamiltonian H via Legendre transformation. The initial and final conditions of the path integral are respectively ϕ ( 0 ) = ϕ I , ϕ ( T ) = ϕ F . 
{\displaystyle \phi (0)=\phi _{I},\quad \phi (T)=\phi _{F}.} In other words, the overall amplitude is the sum over the amplitude of every possible path between the initial and final states, where the amplitude of a path is given by the exponential in the integrand. === Two-point correlation function === In calculations, one often encounters expression like ⟨ 0 | T { ϕ ( x ) ϕ ( y ) } | 0 ⟩ or ⟨ Ω | T { ϕ ( x ) ϕ ( y ) } | Ω ⟩ {\displaystyle \langle 0|T\{\phi (x)\phi (y)\}|0\rangle \quad {\text{or}}\quad \langle \Omega |T\{\phi (x)\phi (y)\}|\Omega \rangle } in the free or interacting theory, respectively. Here, x {\displaystyle x} and y {\displaystyle y} are position four-vectors, T {\displaystyle T} is the time ordering operator that shuffles its operands so the time-components x 0 {\displaystyle x^{0}} and y 0 {\displaystyle y^{0}} increase from right to left, and | Ω ⟩ {\displaystyle |\Omega \rangle } is the ground state (vacuum state) of the interacting theory, different from the free ground state | 0 ⟩ {\displaystyle |0\rangle } . This expression represents the probability amplitude for the field to propagate from y to x, and goes by multiple names, like the two-point propagator, two-point correlation function, two-point Green's function or two-point function for short.: 82  The free two-point function, also known as the Feynman propagator, can be found for the real scalar field by either canonical quantization or path integrals to be: 31,288 : 23  ⟨ 0 | T { ϕ ( x ) ϕ ( y ) } | 0 ⟩ ≡ D F ( x − y ) = lim ϵ → 0 ∫ d 4 p ( 2 π ) 4 i p μ p μ − m 2 + i ϵ e − i p μ ( x μ − y μ ) . {\displaystyle \langle 0|T\{\phi (x)\phi (y)\}|0\rangle \equiv D_{F}(x-y)=\lim _{\epsilon \to 0}\int {\frac {d^{4}p}{(2\pi )^{4}}}{\frac {i}{p_{\mu }p^{\mu }-m^{2}+i\epsilon }}e^{-ip_{\mu }(x^{\mu }-y^{\mu })}.} In an interacting theory, where the Lagrangian or Hamiltonian contains terms L I ( t ) {\displaystyle L_{I}(t)} or H I ( t ) {\displaystyle H_{I}(t)} that describe interactions, the two-point function is more difficult to define. However, through both the canonical quantization formulation and the path integral formulation, it is possible to express it through an infinite perturbation series of the free two-point function. In canonical quantization, the two-point correlation function can be written as:: 87  ⟨ Ω | T { ϕ ( x ) ϕ ( y ) } | Ω ⟩ = lim T → ∞ ( 1 − i ϵ ) ⟨ 0 | T { ϕ I ( x ) ϕ I ( y ) exp ⁡ [ − i ∫ − T T d t H I ( t ) ] } | 0 ⟩ ⟨ 0 | T { exp ⁡ [ − i ∫ − T T d t H I ( t ) ] } | 0 ⟩ , {\displaystyle \langle \Omega |T\{\phi (x)\phi (y)\}|\Omega \rangle =\lim _{T\to \infty (1-i\epsilon )}{\frac {\left\langle 0\left|T\left\{\phi _{I}(x)\phi _{I}(y)\exp \left[-i\int _{-T}^{T}dt\,H_{I}(t)\right]\right\}\right|0\right\rangle }{\left\langle 0\left|T\left\{\exp \left[-i\int _{-T}^{T}dt\,H_{I}(t)\right]\right\}\right|0\right\rangle }},} where ε is an infinitesimal number and ϕI is the field operator under the free theory. Here, the exponential should be understood as its power series expansion. For example, in ϕ 4 {\displaystyle \phi ^{4}} -theory, the interacting term of the Hamiltonian is H I ( t ) = ∫ d 3 x λ 4 ! ϕ I ( x ) 4 {\textstyle H_{I}(t)=\int d^{3}x\,{\frac {\lambda }{4!}}\phi _{I}(x)^{4}} ,: 84  and the expansion of the two-point correlator in terms of λ {\displaystyle \lambda } becomes ⟨ Ω | T { ϕ ( x ) ϕ ( y ) } | Ω ⟩ = ∑ n = 0 ∞ ( − i λ ) n ( 4 ! ) n n ! ∫ d 4 z 1 ⋯ ∫ d 4 z n ⟨ 0 | T { ϕ I ( x ) ϕ I ( y ) ϕ I ( z 1 ) 4 ⋯ ϕ I ( z n ) 4 } | 0 ⟩ ∑ n = 0 ∞ ( − i λ ) n ( 4 ! ) n n ! 
∫ d 4 z 1 ⋯ ∫ d 4 z n ⟨ 0 | T { ϕ I ( z 1 ) 4 ⋯ ϕ I ( z n ) 4 } | 0 ⟩ . {\displaystyle \langle \Omega |T\{\phi (x)\phi (y)\}|\Omega \rangle ={\frac {\displaystyle \sum _{n=0}^{\infty }{\frac {(-i\lambda )^{n}}{(4!)^{n}n!}}\int d^{4}z_{1}\cdots \int d^{4}z_{n}\langle 0|T\{\phi _{I}(x)\phi _{I}(y)\phi _{I}(z_{1})^{4}\cdots \phi _{I}(z_{n})^{4}\}|0\rangle }{\displaystyle \sum _{n=0}^{\infty }{\frac {(-i\lambda )^{n}}{(4!)^{n}n!}}\int d^{4}z_{1}\cdots \int d^{4}z_{n}\langle 0|T\{\phi _{I}(z_{1})^{4}\cdots \phi _{I}(z_{n})^{4}\}|0\rangle }}.} This perturbation expansion expresses the interacting two-point function in terms of quantities ⟨ 0 | ⋯ | 0 ⟩ {\displaystyle \langle 0|\cdots |0\rangle } that are evaluated in the free theory. In the path integral formulation, the two-point correlation function can be written: 284  ⟨ Ω | T { ϕ ( x ) ϕ ( y ) } | Ω ⟩ = lim T → ∞ ( 1 − i ϵ ) ∫ D ϕ ϕ ( x ) ϕ ( y ) exp ⁡ [ i ∫ − T T d 4 z L ] ∫ D ϕ exp ⁡ [ i ∫ − T T d 4 z L ] , {\displaystyle \langle \Omega |T\{\phi (x)\phi (y)\}|\Omega \rangle =\lim _{T\to \infty (1-i\epsilon )}{\frac {\int {\mathcal {D}}\phi \,\phi (x)\phi (y)\exp \left[i\int _{-T}^{T}d^{4}z\,{\mathcal {L}}\right]}{\int {\mathcal {D}}\phi \,\exp \left[i\int _{-T}^{T}d^{4}z\,{\mathcal {L}}\right]}},} where L {\displaystyle {\mathcal {L}}} is the Lagrangian density. As in the previous paragraph, the exponential can be expanded as a series in λ, reducing the interacting two-point function to quantities in the free theory. Wick's theorem further reduce any n-point correlation function in the free theory to a sum of products of two-point correlation functions. For example, ⟨ 0 | T { ϕ ( x 1 ) ϕ ( x 2 ) ϕ ( x 3 ) ϕ ( x 4 ) } | 0 ⟩ = ⟨ 0 | T { ϕ ( x 1 ) ϕ ( x 2 ) } | 0 ⟩ ⟨ 0 | T { ϕ ( x 3 ) ϕ ( x 4 ) } | 0 ⟩ + ⟨ 0 | T { ϕ ( x 1 ) ϕ ( x 3 ) } | 0 ⟩ ⟨ 0 | T { ϕ ( x 2 ) ϕ ( x 4 ) } | 0 ⟩ + ⟨ 0 | T { ϕ ( x 1 ) ϕ ( x 4 ) } | 0 ⟩ ⟨ 0 | T { ϕ ( x 2 ) ϕ ( x 3 ) } | 0 ⟩ . {\displaystyle {\begin{aligned}\langle 0|T\{\phi (x_{1})\phi (x_{2})\phi (x_{3})\phi (x_{4})\}|0\rangle &=\langle 0|T\{\phi (x_{1})\phi (x_{2})\}|0\rangle \langle 0|T\{\phi (x_{3})\phi (x_{4})\}|0\rangle \\&+\langle 0|T\{\phi (x_{1})\phi (x_{3})\}|0\rangle \langle 0|T\{\phi (x_{2})\phi (x_{4})\}|0\rangle \\&+\langle 0|T\{\phi (x_{1})\phi (x_{4})\}|0\rangle \langle 0|T\{\phi (x_{2})\phi (x_{3})\}|0\rangle .\end{aligned}}} Since interacting correlation functions can be expressed in terms of free correlation functions, only the latter need to be evaluated in order to calculate all physical quantities in the (perturbative) interacting theory.: 90  This makes the Feynman propagator one of the most important quantities in quantum field theory. === Feynman diagram === Correlation functions in the interacting theory can be written as a perturbation series. Each term in the series is a product of Feynman propagators in the free theory and can be represented visually by a Feynman diagram. For example, the λ1 term in the two-point correlation function in the ϕ4 theory is − i λ 4 ! ∫ d 4 z ⟨ 0 | T { ϕ ( x ) ϕ ( y ) ϕ ( z ) ϕ ( z ) ϕ ( z ) ϕ ( z ) } | 0 ⟩ . {\displaystyle {\frac {-i\lambda }{4!}}\int d^{4}z\,\langle 0|T\{\phi (x)\phi (y)\phi (z)\phi (z)\phi (z)\phi (z)\}|0\rangle .} After applying Wick's theorem, one of the terms is 12 ⋅ − i λ 4 ! ∫ d 4 z D F ( x − z ) D F ( y − z ) D F ( z − z ) . {\displaystyle 12\cdot {\frac {-i\lambda }{4!}}\int d^{4}z\,D_{F}(x-z)D_{F}(y-z)D_{F}(z-z).} This term can instead be obtained from the Feynman diagram . 
The diagram consists of external vertices connected with one edge and represented by dots (here labeled x {\displaystyle x} and y {\displaystyle y} ), internal vertices connected with four edges and represented by dots (here labeled z {\displaystyle z} ), and edges connecting the vertices and represented by lines. Every vertex corresponds to a single ϕ {\displaystyle \phi } field factor at the corresponding point in spacetime, while the edges correspond to the propagators between the spacetime points. The term in the perturbation series corresponding to the diagram is obtained by writing down the expression that follows from the so-called Feynman rules: For every internal vertex z i {\displaystyle z_{i}} , write down a factor − i λ ∫ d 4 z i {\textstyle -i\lambda \int d^{4}z_{i}} . For every edge that connects two vertices z i {\displaystyle z_{i}} and z j {\displaystyle z_{j}} , write down a factor D F ( z i − z j ) {\displaystyle D_{F}(z_{i}-z_{j})} . Divide by the symmetry factor of the diagram. With the symmetry factor 2 {\displaystyle 2} , following these rules yields exactly the expression above. By Fourier transforming the propagator, the Feynman rules can be reformulated from position space into momentum space.: 91–94  In order to compute the n-point correlation function to the k-th order, list all valid Feynman diagrams with n external points and k or fewer vertices, and then use Feynman rules to obtain the expression for each term. To be precise, ⟨ Ω | T { ϕ ( x 1 ) ⋯ ϕ ( x n ) } | Ω ⟩ {\displaystyle \langle \Omega |T\{\phi (x_{1})\cdots \phi (x_{n})\}|\Omega \rangle } is equal to the sum of (expressions corresponding to) all connected diagrams with n external points. (Connected diagrams are those in which every vertex is connected to an external point through lines. Components that are totally disconnected from external lines are sometimes called "vacuum bubbles".) In the ϕ4 interaction theory discussed above, every vertex must have four legs.: 98  In realistic applications, the scattering amplitude of a certain interaction or the decay rate of a particle can be computed from the S-matrix, which itself can be found using the Feynman diagram method.: 102–115  Feynman diagrams devoid of "loops" are called tree-level diagrams, which describe the lowest-order interaction processes; those containing n loops are referred to as n-loop diagrams, which describe higher-order contributions, or radiative corrections, to the interaction.: 44  Lines whose end points are vertices can be thought of as the propagation of virtual particles.: 31  === Renormalization === Feynman rules can be used to directly evaluate tree-level diagrams. However, naïve computation of loop diagrams such as the one shown above will result in divergent momentum integrals, which seems to imply that almost all terms in the perturbative expansion are infinite. The renormalization procedure is a systematic process for removing such infinities. Parameters appearing in the Lagrangian, such as the mass m and the coupling constant λ, have no physical meaning — m, λ, and the field strength ϕ are not experimentally measurable quantities and are referred to here as the bare mass, bare coupling constant, and bare field, respectively. The physical mass and coupling constant are measured in some interaction process and are generally different from the bare quantities.
While computing physical quantities from this interaction process, one may limit the domain of divergent momentum integrals to be below some momentum cut-off Λ, obtain expressions for the physical quantities, and then take the limit Λ → ∞. This is an example of regularization, a class of methods to treat divergences in QFT, with Λ being the regulator. The approach illustrated above is called bare perturbation theory, as calculations involve only the bare quantities such as mass and coupling constant. A different approach, called renormalized perturbation theory, is to use physically meaningful quantities from the very beginning. In the case of ϕ4 theory, the field strength is first redefined: ϕ = Z 1 / 2 ϕ r , {\displaystyle \phi =Z^{1/2}\phi _{r},} where ϕ is the bare field, ϕr is the renormalized field, and Z is a constant to be determined. The Lagrangian density becomes: L = 1 2 ( ∂ μ ϕ r ) ( ∂ μ ϕ r ) − 1 2 m r 2 ϕ r 2 − λ r 4 ! ϕ r 4 + 1 2 δ Z ( ∂ μ ϕ r ) ( ∂ μ ϕ r ) − 1 2 δ m ϕ r 2 − δ λ 4 ! ϕ r 4 , {\displaystyle {\mathcal {L}}={\frac {1}{2}}(\partial _{\mu }\phi _{r})(\partial ^{\mu }\phi _{r})-{\frac {1}{2}}m_{r}^{2}\phi _{r}^{2}-{\frac {\lambda _{r}}{4!}}\phi _{r}^{4}+{\frac {1}{2}}\delta _{Z}(\partial _{\mu }\phi _{r})(\partial ^{\mu }\phi _{r})-{\frac {1}{2}}\delta _{m}\phi _{r}^{2}-{\frac {\delta _{\lambda }}{4!}}\phi _{r}^{4},} where mr and λr are the experimentally measurable, renormalized, mass and coupling constant, respectively, and δ Z = Z − 1 , δ m = m 2 Z − m r 2 , δ λ = λ Z 2 − λ r {\displaystyle \delta _{Z}=Z-1,\quad \delta _{m}=m^{2}Z-m_{r}^{2},\quad \delta _{\lambda }=\lambda Z^{2}-\lambda _{r}} are constants to be determined. The first three terms are the ϕ4 Lagrangian density written in terms of the renormalized quantities, while the latter three terms are referred to as "counterterms". As the Lagrangian now contains more terms, so the Feynman diagrams should include additional elements, each with their own Feynman rules. The procedure is outlined as follows. First select a regularization scheme (such as the cut-off regularization introduced above or dimensional regularization); call the regulator Λ. Compute Feynman diagrams, in which divergent terms will depend on Λ. Then, define δZ, δm, and δλ such that Feynman diagrams for the counterterms will exactly cancel the divergent terms in the normal Feynman diagrams when the limit Λ → ∞ is taken. In this way, meaningful finite quantities are obtained.: 323–326  It is only possible to eliminate all infinities to obtain a finite result in renormalizable theories, whereas in non-renormalizable theories infinities cannot be removed by the redefinition of a small number of parameters. 
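The interplay between a regulator and the subtraction of divergences can be illustrated numerically. The sketch below is a toy example, assuming NumPy/SciPy; the integrand is a generic logarithmically divergent one-loop-type integral rather than the literal ϕ4 self-energy. The regulated integral itself grows without bound as the cut-off Λ is raised, while a subtracted combination, the analogue of a renormalized quantity, approaches a finite, cut-off-independent limit.

```python
import numpy as np
from scipy.integrate import quad

# A generic logarithmically divergent loop-type integral with a hard momentum cutoff:
#   J(M, Lam) = \int_0^Lam dk  k^3 / ((k^2 + m^2) (k^2 + M^2))
m = 1.0

def J(M, Lam):
    val, _ = quad(lambda k: k**3 / ((k**2 + m**2) * (k**2 + M**2)),
                  0.0, Lam, limit=200)
    return val

for Lam in (1e2, 1e3, 1e4):
    bare = J(2.0, Lam)                       # grows like log(Lam) without bound
    subtracted = J(2.0, Lam) - J(3.0, Lam)   # the divergent part cancels
    print(f"Lambda = {Lam:8.0e}   J = {bare:8.4f}   subtracted = {subtracted:9.5f}")

# The bare integral keeps growing with the cutoff, while the subtracted combination
# settles to a finite value: the divergence is independent of M and can therefore be
# absorbed into a redefinition (renormalization) of a parameter of the theory.
```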
The Standard Model of elementary particles is a renormalizable QFT,: 719–727  while quantum gravity is non-renormalizable.: 798 : 421  ==== Renormalization group ==== The renormalization group, developed by Kenneth Wilson, is a mathematical apparatus used to study the changes in physical parameters (coefficients in the Lagrangian) as the system is viewed at different scales.: 393  The way in which each parameter changes with scale is described by its β function.: 417  Correlation functions, which underlie quantitative physical predictions, change with scale according to the Callan–Symanzik equation.: 410–411  As an example, the coupling constant in QED, namely the elementary charge e, has the following β function: β ( e ) ≡ 1 Λ d e d Λ = e 3 12 π 2 + O ( e 5 ) , {\displaystyle \beta (e)\equiv {\frac {1}{\Lambda }}{\frac {de}{d\Lambda }}={\frac {e^{3}}{12\pi ^{2}}}+O{\mathord {\left(e^{5}\right)}},} where Λ is the energy scale under which the measurement of e is performed. This differential equation implies that the observed elementary charge increases as the scale increases. The renormalized coupling constant, which changes with the energy scale, is also called the running coupling constant.: 420  The coupling constant g in quantum chromodynamics, a non-Abelian gauge theory based on the symmetry group SU(3), has the following β function: β ( g ) ≡ 1 Λ d g d Λ = g 3 16 π 2 ( − 11 + 2 3 N f ) + O ( g 5 ) , {\displaystyle \beta (g)\equiv {\frac {1}{\Lambda }}{\frac {dg}{d\Lambda }}={\frac {g^{3}}{16\pi ^{2}}}\left(-11+{\frac {2}{3}}N_{f}\right)+O{\mathord {\left(g^{5}\right)}},} where Nf is the number of quark flavours. In the case where Nf ≤ 16 (the Standard Model has Nf = 6), the coupling constant g decreases as the energy scale increases. Hence, while the strong interaction is strong at low energies, it becomes very weak in high-energy interactions, a phenomenon known as asymptotic freedom.: 531  Conformal field theories (CFTs) are special QFTs that admit conformal symmetry. They are insensitive to changes in the scale, as all their coupling constants have vanishing β function. (The converse is not true, however — the vanishing of all β functions does not imply conformal symmetry of the theory.) Examples include string theory and N = 4 supersymmetric Yang–Mills theory. According to Wilson's picture, every QFT is fundamentally accompanied by its energy cut-off Λ, i.e. that the theory is no longer valid at energies higher than Λ, and all degrees of freedom above the scale Λ are to be omitted. For example, the cut-off could be the inverse of the atomic spacing in a condensed matter system, and in elementary particle physics it could be associated with the fundamental "graininess" of spacetime caused by quantum fluctuations in gravity. The cut-off scale of theories of particle interactions lies far beyond current experiments. Even if the theory were very complicated at that scale, as long as its couplings are sufficiently weak, it must be described at low energies by a renormalizable effective field theory.: 402–403  The difference between renormalizable and non-renormalizable theories is that the former are insensitive to details at high energies, whereas the latter do depend on them.: 2  According to this view, non-renormalizable theories are to be seen as low-energy effective theories of a more fundamental theory. 
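A rough numerical integration of the one-loop β functions quoted above makes the two behaviours concrete. The sketch below assumes NumPy, and the starting values of the couplings are illustrative placeholders rather than precision inputs: the QED coupling grows slowly with the energy scale, while the QCD coupling with Nf = 6 shrinks.

```python
import numpy as np

# Integrate the one-loop beta functions in t = ln(Lambda) with simple Euler steps.
def run(g0, beta, t_total, steps=100_000):
    g, dt = g0, t_total / steps
    for _ in range(steps):
        g += beta(g) * dt
    return g

# QED: beta(e) = e^3 / (12 pi^2)  ->  the coupling grows with energy.
e0 = np.sqrt(4 * np.pi / 137.0)               # e from alpha ~ 1/137 at a reference scale
e1 = run(e0, lambda e: e**3 / (12 * np.pi**2), t_total=np.log(1e6))
print("QED:  1/alpha", 4 * np.pi / e0**2, "->", 4 * np.pi / e1**2)   # decreases a little

# QCD with Nf = 6: beta(g) = g^3/(16 pi^2) * (-11 + 2*Nf/3) < 0  ->  asymptotic freedom.
Nf = 6
g0 = 2.0                                      # illustrative starting value
g1 = run(g0, lambda g: g**3 / (16 * np.pi**2) * (-11 + 2 * Nf / 3), t_total=np.log(1e3))
print("QCD:  g", g0, "->", g1)                # smaller at higher energy
```

In this Wilsonian picture, the running describes how the effective couplings change as the scale is moved; for a non-renormalizable effective theory, the cut-off cannot simply be sent to infinity.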
The failure to remove the cut-off Λ from calculations in such a theory merely indicates that new physical phenomena appear at scales above Λ, where a new theory is necessary.: 156  === Other theories === The quantization and renormalization procedures outlined in the preceding sections are performed for the free theory and ϕ4 theory of the real scalar field. A similar process can be done for other types of fields, including the complex scalar field, the vector field, and the Dirac field, as well as other types of interaction terms, including the electromagnetic interaction and the Yukawa interaction. As an example, quantum electrodynamics contains a Dirac field ψ representing the electron field and a vector field Aμ representing the electromagnetic field (photon field). (Despite its name, the quantum electromagnetic "field" actually corresponds to the classical electromagnetic four-potential, rather than the classical electric and magnetic fields.) The full QED Lagrangian density is: L = ψ ¯ ( i γ μ ∂ μ − m ) ψ − 1 4 F μ ν F μ ν − e ψ ¯ γ μ ψ A μ , {\displaystyle {\mathcal {L}}={\bar {\psi }}\left(i\gamma ^{\mu }\partial _{\mu }-m\right)\psi -{\frac {1}{4}}F_{\mu \nu }F^{\mu \nu }-e{\bar {\psi }}\gamma ^{\mu }\psi A_{\mu },} where γμ are Dirac matrices, ψ ¯ = ψ † γ 0 {\displaystyle {\bar {\psi }}=\psi ^{\dagger }\gamma ^{0}} , and F μ ν = ∂ μ A ν − ∂ ν A μ {\displaystyle F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }} is the electromagnetic field strength. The parameters in this theory are the (bare) electron mass m and the (bare) elementary charge e. The first and second terms in the Lagrangian density correspond to the free Dirac field and free vector fields, respectively. The last term describes the interaction between the electron and photon fields, which is treated as a perturbation from the free theories.: 78  Shown above is an example of a tree-level Feynman diagram in QED. It describes an electron and a positron annihilating, creating an off-shell photon, and then decaying into a new pair of electron and positron. Time runs from left to right. Arrows pointing forward in time represent the propagation of electrons, while those pointing backward in time represent the propagation of positrons. A wavy line represents the propagation of a photon. Each vertex in QED Feynman diagrams must have an incoming and an outgoing fermion (positron/electron) leg as well as a photon leg. ==== Gauge symmetry ==== If the following transformation to the fields is performed at every spacetime point x (a local transformation), then the QED Lagrangian remains unchanged, or invariant: ψ ( x ) → e i α ( x ) ψ ( x ) , A μ ( x ) → A μ ( x ) + i e − 1 e − i α ( x ) ∂ μ e i α ( x ) , {\displaystyle \psi (x)\to e^{i\alpha (x)}\psi (x),\quad A_{\mu }(x)\to A_{\mu }(x)+ie^{-1}e^{-i\alpha (x)}\partial _{\mu }e^{i\alpha (x)},} where α(x) is any function of spacetime coordinates. If a theory's Lagrangian (or more precisely the action) is invariant under a certain local transformation, then the transformation is referred to as a gauge symmetry of the theory.: 482–483  Gauge symmetries form a group at every spacetime point. In the case of QED, the successive application of two different local symmetry transformations e i α ( x ) {\displaystyle e^{i\alpha (x)}} and e i α ′ ( x ) {\displaystyle e^{i\alpha '(x)}} is yet another symmetry transformation e i [ α ( x ) + α ′ ( x ) ] {\displaystyle e^{i[\alpha (x)+\alpha '(x)]}} . 
For any α(x), e i α ( x ) {\displaystyle e^{i\alpha (x)}} is an element of the U(1) group, thus QED is said to have U(1) gauge symmetry.: 496  The photon field Aμ may be referred to as the U(1) gauge boson. U(1) is an Abelian group, meaning that the result is the same regardless of the order in which its elements are applied. QFTs can also be built on non-Abelian groups, giving rise to non-Abelian gauge theories (also known as Yang–Mills theories).: 489  Quantum chromodynamics, which describes the strong interaction, is a non-Abelian gauge theory with an SU(3) gauge symmetry. It contains three Dirac fields ψi, i = 1,2,3 representing quark fields as well as eight vector fields Aa,μ, a = 1,...,8 representing gluon fields, which are the SU(3) gauge bosons.: 547  The QCD Lagrangian density is:: 490–491  L = i ψ ¯ i γ μ ( D μ ) i j ψ j − 1 4 F μ ν a F a , μ ν − m ψ ¯ i ψ i , {\displaystyle {\mathcal {L}}=i{\bar {\psi }}^{i}\gamma ^{\mu }(D_{\mu })^{ij}\psi ^{j}-{\frac {1}{4}}F_{\mu \nu }^{a}F^{a,\mu \nu }-m{\bar {\psi }}^{i}\psi ^{i},} where Dμ is the gauge covariant derivative: D μ = ∂ μ − i g A μ a t a , {\displaystyle D_{\mu }=\partial _{\mu }-igA_{\mu }^{a}t^{a},} where g is the coupling constant, ta are the eight generators of SU(3) in the fundamental representation (3×3 matrices), F μ ν a = ∂ μ A ν a − ∂ ν A μ a + g f a b c A μ b A ν c , {\displaystyle F_{\mu \nu }^{a}=\partial _{\mu }A_{\nu }^{a}-\partial _{\nu }A_{\mu }^{a}+gf^{abc}A_{\mu }^{b}A_{\nu }^{c},} and fabc are the structure constants of SU(3). Repeated indices i,j,a are implicitly summed over following Einstein notation. This Lagrangian is invariant under the transformation: ψ i ( x ) → U i j ( x ) ψ j ( x ) , A μ a ( x ) t a → U ( x ) [ A μ a ( x ) t a + i g − 1 ∂ μ ] U † ( x ) , {\displaystyle \psi ^{i}(x)\to U^{ij}(x)\psi ^{j}(x),\quad A_{\mu }^{a}(x)t^{a}\to U(x)\left[A_{\mu }^{a}(x)t^{a}+ig^{-1}\partial _{\mu }\right]U^{\dagger }(x),} where U(x) is an element of SU(3) at every spacetime point x: U ( x ) = e i α ( x ) a t a . {\displaystyle U(x)=e^{i\alpha (x)^{a}t^{a}}.} The preceding discussion of symmetries is on the level of the Lagrangian. In other words, these are "classical" symmetries. After quantization, some theories will no longer exhibit their classical symmetries, a phenomenon called anomaly. For instance, in the path integral formulation, despite the invariance of the Lagrangian density L [ ϕ , ∂ μ ϕ ] {\displaystyle {\mathcal {L}}[\phi ,\partial _{\mu }\phi ]} under a certain local transformation of the fields, the measure ∫ D ϕ {\textstyle \int {\mathcal {D}}\phi } of the path integral may change.: 243  For a theory describing nature to be consistent, it must not contain any anomaly in its gauge symmetry. The Standard Model of elementary particles is a gauge theory based on the group SU(3) × SU(2) × U(1), in which all anomalies exactly cancel.: 705–707  The theoretical foundation of general relativity, the equivalence principle, can also be understood as a form of gauge symmetry, making general relativity a gauge theory based on the Lorentz group. Noether's theorem states that every continuous symmetry, i.e. the parameter in the symmetry transformation being continuous rather than discrete, leads to a corresponding conservation law.: 17–18 : 73  For example, the U(1) symmetry of QED implies charge conservation. Gauge-transformations do not relate distinct quantum states. Rather, it relates two equivalent mathematical descriptions of the same quantum state. 
As an example, the photon field Aμ, being a four-vector, has four apparent degrees of freedom, but the actual state of a photon is described by its two degrees of freedom corresponding to the polarization. The remaining two degrees of freedom are said to be "redundant" — apparently different ways of writing Aμ can be related to each other by a gauge transformation and in fact describe the same state of the photon field. In this sense, gauge invariance is not a "real" symmetry, but a reflection of the "redundancy" of the chosen mathematical description.: 168  To account for the gauge redundancy in the path integral formulation, one must perform the so-called Faddeev–Popov gauge fixing procedure. In non-Abelian gauge theories, such a procedure introduces new fields called "ghosts". Particles corresponding to the ghost fields are called ghost particles, which cannot be detected externally.: 512–515  A more rigorous generalization of the Faddeev–Popov procedure is given by BRST quantization.: 517  ==== Spontaneous symmetry-breaking ==== Spontaneous symmetry breaking is a mechanism whereby the symmetry of the Lagrangian is violated by the system described by it.: 347  To illustrate the mechanism, consider a linear sigma model containing N real scalar fields, described by the Lagrangian density: L = 1 2 ( ∂ μ ϕ i ) ( ∂ μ ϕ i ) + 1 2 μ 2 ϕ i ϕ i − λ 4 ( ϕ i ϕ i ) 2 , {\displaystyle {\mathcal {L}}={\frac {1}{2}}\left(\partial _{\mu }\phi ^{i}\right)\left(\partial ^{\mu }\phi ^{i}\right)+{\frac {1}{2}}\mu ^{2}\phi ^{i}\phi ^{i}-{\frac {\lambda }{4}}\left(\phi ^{i}\phi ^{i}\right)^{2},} where μ and λ are real parameters. The theory admits an O(N) global symmetry: ϕ i → R i j ϕ j , R ∈ O ( N ) . {\displaystyle \phi ^{i}\to R^{ij}\phi ^{j},\quad R\in \mathrm {O} (N).} The lowest energy state (ground state or vacuum state) of the classical theory is any uniform field ϕ0 satisfying ϕ 0 i ϕ 0 i = μ 2 λ . {\displaystyle \phi _{0}^{i}\phi _{0}^{i}={\frac {\mu ^{2}}{\lambda }}.} Without loss of generality, let the ground state be in the N-th direction: ϕ 0 i = ( 0 , ⋯ , 0 , μ λ ) . {\displaystyle \phi _{0}^{i}=\left(0,\cdots ,0,{\frac {\mu }{\sqrt {\lambda }}}\right).} The original N fields can be rewritten as: ϕ i ( x ) = ( π 1 ( x ) , ⋯ , π N − 1 ( x ) , μ λ + σ ( x ) ) , {\displaystyle \phi ^{i}(x)=\left(\pi ^{1}(x),\cdots ,\pi ^{N-1}(x),{\frac {\mu }{\sqrt {\lambda }}}+\sigma (x)\right),} and the original Lagrangian density as: L = 1 2 ( ∂ μ π k ) ( ∂ μ π k ) + 1 2 ( ∂ μ σ ) ( ∂ μ σ ) − 1 2 ( 2 μ 2 ) σ 2 − λ μ σ 3 − λ μ π k π k σ − λ 2 π k π k σ 2 − λ 4 ( π k π k ) 2 , {\displaystyle {\mathcal {L}}={\frac {1}{2}}\left(\partial _{\mu }\pi ^{k}\right)\left(\partial ^{\mu }\pi ^{k}\right)+{\frac {1}{2}}\left(\partial _{\mu }\sigma \right)\left(\partial ^{\mu }\sigma \right)-{\frac {1}{2}}\left(2\mu ^{2}\right)\sigma ^{2}-{\sqrt {\lambda }}\mu \sigma ^{3}-{\sqrt {\lambda }}\mu \pi ^{k}\pi ^{k}\sigma -{\frac {\lambda }{2}}\pi ^{k}\pi ^{k}\sigma ^{2}-{\frac {\lambda }{4}}\left(\pi ^{k}\pi ^{k}\right)^{2},} where k = 1, ..., N − 1. The original O(N) global symmetry is no longer manifest, leaving only the subgroup O(N − 1). The larger symmetry before spontaneous symmetry breaking is said to be "hidden" or spontaneously broken.: 349–350  Goldstone's theorem states that under spontaneous symmetry breaking, every broken continuous global symmetry leads to a massless field called the Goldstone boson. 
In the above example, O(N) has N(N − 1)/2 continuous symmetries (the dimension of its Lie algebra), while O(N − 1) has (N − 1)(N − 2)/2. The number of broken symmetries is their difference, N − 1, which corresponds to the N − 1 massless fields πk.: 351  On the other hand, when a gauge (as opposed to global) symmetry is spontaneously broken, the resulting Goldstone boson is "eaten" by the corresponding gauge boson by becoming an additional degree of freedom for the gauge boson. The Goldstone boson equivalence theorem states that at high energy, the amplitude for emission or absorption of a longitudinally polarized massive gauge boson becomes equal to the amplitude for emission or absorption of the Goldstone boson that was eaten by the gauge boson.: 743–744  In the QFT of ferromagnetism, spontaneous symmetry breaking can explain the alignment of magnetic dipoles at low temperatures.: 199  In the Standard Model of elementary particles, the W and Z bosons, which would otherwise be massless as a result of gauge symmetry, acquire mass through spontaneous symmetry breaking of the Higgs boson, a process called the Higgs mechanism.: 690  ==== Supersymmetry ==== All experimentally known symmetries in nature relate bosons to bosons and fermions to fermions. Theorists have hypothesized the existence of a type of symmetry, called supersymmetry, that relates bosons and fermions.: 795 : 443  The Standard Model obeys Poincaré symmetry, whose generators are the spacetime translations Pμ and the Lorentz transformations Jμν.: 58–60  In addition to these generators, supersymmetry in (3+1)-dimensions includes additional generators Qα, called supercharges, which themselves transform as Weyl fermions.: 795 : 444  The symmetry group generated by all these generators is known as the super-Poincaré group. In general there can be more than one set of supersymmetry generators, QαI, I = 1, ..., N, which generate the corresponding N = 1 supersymmetry, N = 2 supersymmetry, and so on.: 795 : 450  Supersymmetry can also be constructed in other dimensions, most notably in (1+1) dimensions for its application in superstring theory. The Lagrangian of a supersymmetric theory must be invariant under the action of the super-Poincaré group.: 448  Examples of such theories include: Minimal Supersymmetric Standard Model (MSSM), N = 4 supersymmetric Yang–Mills theory,: 450  and superstring theory. In a supersymmetric theory, every fermion has a bosonic superpartner and vice versa.: 444  If supersymmetry is promoted to a local symmetry, then the resultant gauge theory is an extension of general relativity called supergravity. Supersymmetry is a potential solution to many current problems in physics. For example, the hierarchy problem of the Standard Model—why the mass of the Higgs boson is not radiatively corrected (under renormalization) to a very high scale such as the grand unified scale or the Planck scale—can be resolved by relating the Higgs field and its super-partner, the Higgsino. Radiative corrections due to Higgs boson loops in Feynman diagrams are cancelled by corresponding Higgsino loops. Supersymmetry also offers answers to the grand unification of all gauge coupling constants in the Standard Model as well as the nature of dark matter.: 796–797  Nevertheless, experiments have yet to provide evidence for the existence of supersymmetric particles. 
If supersymmetry were a true symmetry of nature, then it must be a broken symmetry, and the energy of symmetry breaking must be higher than those achievable by present-day experiments.: 797 : 443  ==== Other spacetimes ==== The ϕ4 theory, QED, QCD, as well as the whole Standard Model all assume a (3+1)-dimensional Minkowski space (3 spatial and 1 time dimensions) as the background on which the quantum fields are defined. However, QFT a priori imposes no restriction on the number of dimensions nor the geometry of spacetime. In condensed matter physics, QFT is used to describe (2+1)-dimensional electron gases. In high-energy physics, string theory is a type of (1+1)-dimensional QFT,: 452  while Kaluza–Klein theory uses gravity in extra dimensions to produce gauge theories in lower dimensions.: 428–429  In Minkowski space, the flat metric ημν is used to raise and lower spacetime indices in the Lagrangian, e.g. A μ A μ = η μ ν A μ A ν , ∂ μ ϕ ∂ μ ϕ = η μ ν ∂ μ ϕ ∂ ν ϕ , {\displaystyle A_{\mu }A^{\mu }=\eta _{\mu \nu }A^{\mu }A^{\nu },\quad \partial _{\mu }\phi \partial ^{\mu }\phi =\eta ^{\mu \nu }\partial _{\mu }\phi \partial _{\nu }\phi ,} where ημν is the inverse of ημν satisfying ημρηρν = δμν. For QFTs in curved spacetime on the other hand, a general metric (such as the Schwarzschild metric describing a black hole) is used: A μ A μ = g μ ν A μ A ν , ∂ μ ϕ ∂ μ ϕ = g μ ν ∂ μ ϕ ∂ ν ϕ , {\displaystyle A_{\mu }A^{\mu }=g_{\mu \nu }A^{\mu }A^{\nu },\quad \partial _{\mu }\phi \partial ^{\mu }\phi =g^{\mu \nu }\partial _{\mu }\phi \partial _{\nu }\phi ,} where gμν is the inverse of gμν. For a real scalar field, the Lagrangian density in a general spacetime background is L = | g | ( 1 2 g μ ν ∇ μ ϕ ∇ ν ϕ − 1 2 m 2 ϕ 2 ) , {\displaystyle {\mathcal {L}}={\sqrt {|g|}}\left({\frac {1}{2}}g^{\mu \nu }\nabla _{\mu }\phi \nabla _{\nu }\phi -{\frac {1}{2}}m^{2}\phi ^{2}\right),} where g = det(gμν), and ∇μ denotes the covariant derivative. The Lagrangian of a QFT, hence its calculational results and physical predictions, depends on the geometry of the spacetime background. ==== Topological quantum field theory ==== The correlation functions and physical predictions of a QFT depend on the spacetime metric gμν. For a special class of QFTs called topological quantum field theories (TQFTs), all correlation functions are independent of continuous changes in the spacetime metric.: 36  QFTs in curved spacetime generally change according to the geometry (local structure) of the spacetime background, while TQFTs are invariant under spacetime diffeomorphisms but are sensitive to the topology (global structure) of spacetime. This means that all calculational results of TQFTs are topological invariants of the underlying spacetime. Chern–Simons theory is an example of TQFT and has been used to construct models of quantum gravity. Applications of TQFT include the fractional quantum Hall effect and topological quantum computers.: 1–5  The world line trajectory of fractionalized particles (known as anyons) can form a link configuration in the spacetime, which relates the braiding statistics of anyons in physics to the link invariants in mathematics. Topological quantum field theories (TQFTs) applicable to the frontier research of topological quantum matters include Chern-Simons-Witten gauge theories in 2+1 spacetime dimensions, other new exotic TQFTs in 3+1 spacetime dimensions and beyond. 
=== Perturbative and non-perturbative methods === Using perturbation theory, the total effect of a small interaction term can be approximated order by order by a series expansion in the number of virtual particles participating in the interaction. Every term in the expansion may be understood as one possible way for (physical) particles to interact with each other via virtual particles, expressed visually using a Feynman diagram. The electromagnetic force between two electrons in QED is represented (to first order in perturbation theory) by the propagation of a virtual photon. In a similar manner, the W and Z bosons carry the weak interaction, while gluons carry the strong interaction. The interpretation of an interaction as a sum of intermediate states involving the exchange of various virtual particles only makes sense in the framework of perturbation theory. In contrast, non-perturbative methods in QFT treat the interacting Lagrangian as a whole without any series expansion. Instead of particles that carry interactions, these methods have spawned such concepts as 't Hooft–Polyakov monopole, domain wall, flux tube, and instanton. Examples of QFTs that are completely solvable non-perturbatively include minimal models of conformal field theory and the Thirring model. == Mathematical rigor == In spite of its overwhelming success in particle physics and condensed matter physics, QFT itself lacks a formal mathematical foundation. For example, according to Haag's theorem, there does not exist a well-defined interaction picture for QFT, which implies that perturbation theory of QFT, which underlies the entire Feynman diagram method, is fundamentally ill-defined. However, perturbative quantum field theory, which only requires that quantities be computable as a formal power series without any convergence requirements, can be given a rigorous mathematical treatment. In particular, Kevin Costello's monograph Renormalization and Effective Field Theory provides a rigorous formulation of perturbative renormalization that combines both the effective-field theory approaches of Kadanoff, Wilson, and Polchinski, together with the Batalin-Vilkovisky approach to quantizing gauge theories. Furthermore, perturbative path-integral methods, typically understood as formal computational methods inspired from finite-dimensional integration theory, can be given a sound mathematical interpretation from their finite-dimensional analogues. Since the 1950s, theoretical physicists and mathematicians have attempted to organize all QFTs into a set of axioms, in order to establish the existence of concrete models of relativistic QFT in a mathematically rigorous way and to study their properties. This line of study is called constructive quantum field theory, a subfield of mathematical physics,: 2  which has led to such results as CPT theorem, spin–statistics theorem, and Goldstone's theorem, and also to mathematically rigorous constructions of many interacting QFTs in two and three spacetime dimensions, e.g. two-dimensional scalar field theories with arbitrary polynomial interactions, the three-dimensional scalar field theories with a quartic interaction, etc. Compared to ordinary QFT, topological quantum field theory and conformal field theory are better supported mathematically — both can be classified in the framework of representations of cobordisms. 
Algebraic quantum field theory is another approach to the axiomatization of QFT, in which the fundamental objects are local operators and the algebraic relations between them. Axiomatic systems following this approach include Wightman axioms and Haag–Kastler axioms.: 2–3  One way to construct theories satisfying Wightman axioms is to use Osterwalder–Schrader axioms, which give the necessary and sufficient conditions for a real time theory to be obtained from an imaginary time theory by analytic continuation (Wick rotation).: 10  Yang–Mills existence and mass gap, one of the Millennium Prize Problems, concerns the well-defined existence of Yang–Mills theories as set out by the above axioms. The full problem statement is as follows. Prove that for any compact simple gauge group G, a non-trivial quantum Yang–Mills theory exists on R 4 {\displaystyle \mathbb {R} ^{4}} and has a mass gap Δ > 0. Existence includes establishing axiomatic properties at least as strong as those cited in Streater & Wightman (1964), Osterwalder & Schrader (1973) and Osterwalder & Schrader (1975). == See also == == References == Bibliography Streater, R.; Wightman, A. (1964). PCT, Spin and Statistics and all That. W. A. Benjamin. Osterwalder, K.; Schrader, R. (1973). "Axioms for Euclidean Green's functions". Communications in Mathematical Physics. 31 (2): 83–112. Bibcode:1973CMaPh..31...83O. doi:10.1007/BF01645738. S2CID 189829853. Osterwalder, K.; Schrader, R. (1975). "Axioms for Euclidean Green's functions II". Communications in Mathematical Physics. 42 (3): 281–305. Bibcode:1975CMaPh..42..281O. doi:10.1007/BF01608978. S2CID 119389461. == Further reading == General readers Pais, A. (1994) [1986]. Inward Bound: Of Matter and Forces in the Physical World (reprint ed.). Oxford, New York, Toronto: Oxford University Press. ISBN 978-0198519973. Schweber, S. S. (1994). QED and the Men Who Made It: Dyson, Feynman, Schwinger, and Tomonaga. Princeton University Press. ISBN 9780691033273. Feynman, R.P. (2001) [1964]. The Character of Physical Law. MIT Press. ISBN 978-0-262-56003-0. Feynman, R.P. (2006) [1985]. QED: The Strange Theory of Light and Matter. Princeton University Press. ISBN 978-0-691-12575-6. Gribbin, J. (1998). Q is for Quantum: Particle Physics from A to Z. Weidenfeld & Nicolson. ISBN 978-0-297-81752-9. Carroll, Sean (2024). The Biggest Ideas in the Universe : quanta and fields. Dutton. ISBN 978-0-593-18660-2. Introductory text Pierre van Baal (2016). A Course in Field Theory. CRC Press. ISBN 9780429073601. McMahon, D. (2008). Quantum Field Theory. McGraw-Hill. ISBN 978-0-07-154382-8. Bogolyubov, N.; Shirkov, D. (1982). Quantum Fields. Benjamin Cummings. ISBN 978-0-8053-0983-6. Frampton, P.H. (2000). Gauge Field Theories. Frontiers in Physics (2nd ed.). Wiley.; Frampton, Paul H. (22 September 2008). 2008, 3rd edition. John Wiley & Sons. ISBN 978-3527408351. Greiner, W.; Müller, B. (2000). Gauge Theory of Weak Interactions. Springer. ISBN 978-3-540-67672-0. Itzykson, C.; Zuber, J.-B. (1980). Quantum Field Theory. McGraw-Hill. ISBN 978-0-07-032071-0. Kane, G.L. (1987). Modern Elementary Particle Physics. Perseus Group. ISBN 978-0-201-11749-3. Kleinert, H.; Schulte-Frohlinde, Verena (2001). Critical Properties of φ4-Theories. World Scientific. ISBN 978-981-02-4658-7. Kleinert, H. (2008). Multivalued Fields in Condensed Matter, Electrodynamics, and Gravitation (PDF). World Scientific. ISBN 978-981-279-170-2. Lancaster, Tom; Blundell, Stephen (2014). Quantum field theory for the gifted amateur. 
Oxford: Oxford University Press. ISBN 978-0-19-969933-9. OCLC 859651399. Loudon, R. (1983). The Quantum Theory of Light. Oxford University Press. ISBN 978-0-19-851155-7. Mandl, F.; Shaw, G. (1993). Quantum Field Theory. John Wiley & Sons. ISBN 978-0-471-94186-6. Ryder, L.H. (1985). Quantum Field Theory. Cambridge University Press. ISBN 978-0-521-33859-2. Schwartz, M.D. (2014). Quantum Field Theory and the Standard Model. Cambridge University Press. ISBN 978-1107034730. Archived from the original on 2018-03-22. Retrieved 2020-05-13. Ynduráin, F.J. (1996). Relativistic Quantum Mechanics and Introduction to Field Theory (1st ed.). Springer. Bibcode:1996rqmi.book.....Y. doi:10.1007/978-3-642-61057-8. ISBN 978-3-540-60453-2. Greiner, W.; Reinhardt, J. (1996). Field Quantization. Springer. ISBN 978-3-540-59179-5. Peskin, M.; Schroeder, D. (1995). An Introduction to Quantum Field Theory. Westview Press. ISBN 978-0-201-50397-5. Scharf, Günter (2014) [1989]. Finite Quantum Electrodynamics: The Causal Approach (third ed.). Dover Publications. ISBN 978-0486492735. Srednicki, M. (2007). Quantum Field Theory. Cambridge University Press. ISBN 978-0-521-86449-7. Tong, David (2015). "Lectures on Quantum Field Theory". Retrieved 2016-02-09. Williams, A.G. (2022). Introduction to Quantum Field Theory: Classical Mechanics to Gauge Field Theories. Cambridge University Press. ISBN 978-1108470902. Zee, Anthony (2010). Quantum Field Theory in a Nutshell (2nd ed.). Princeton University Press. ISBN 978-0691140346. Advanced texts Heitler, W. (1953). The Quantum Theory of Radiation. Dover Publications, Inc. ISBN 0-486-64558-4. Umezawa, H. (1956). Quantum Field Theory. North Holland Publishing. Barton, G. (1963). Introduction to Advanced Field Theory. Interscience Publishers. Brown, Lowell S. (1994). Quantum Field Theory. Cambridge University Press. ISBN 978-0-521-46946-3. Bogoliubov, N.; Logunov, A.A.; Oksak, A.I.; Todorov, I.T. (1990). General Principles of Quantum Field Theory. Kluwer Academic Publishers. ISBN 978-0-7923-0540-8. Weinberg, S. (1995). The Quantum Theory of Fields. Vol. 1. Cambridge University Press. ISBN 978-0521550017. == External links == Media related to Quantum field theory at Wikimedia Commons "Quantum field theory", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Stanford Encyclopedia of Philosophy: "Quantum Field Theory", by Meinard Kuhlmann. Siegel, Warren, 2005. Fields. arXiv:hep-th/9912205. Quantum Field Theory by P. J. Mulders
Wikipedia/Relativistic_quantum_field_theory
In physics, specifically relativistic quantum mechanics (RQM) and its applications to particle physics, relativistic wave equations predict the behavior of particles at high energies and velocities comparable to the speed of light. In the context of quantum field theory (QFT), the equations determine the dynamics of quantum fields. The solutions to the equations, universally denoted as ψ or Ψ (Greek psi), are referred to as "wave functions" in the context of RQM, and "fields" in the context of QFT. The equations themselves are called "wave equations" or "field equations", because they have the mathematical form of a wave equation or are generated from a Lagrangian density and the field-theoretic Euler–Lagrange equations (see classical field theory for background). In the Schrödinger picture, the wave function or field is the solution to the Schrödinger equation, i ℏ ∂ ∂ t ψ = H ^ ψ , {\displaystyle i\hbar {\frac {\partial }{\partial t}}\psi ={\hat {H}}\psi ,} one of the postulates of quantum mechanics. All relativistic wave equations can be constructed by specifying various forms of the Hamiltonian operator Ĥ describing the quantum system. Alternatively, Feynman's path integral formulation uses a Lagrangian rather than a Hamiltonian operator. More generally – the modern formalism behind relativistic wave equations is Lorentz group theory, wherein the spin of the particle has a correspondence with the representations of the Lorentz group. == History == === Early 1920s: Classical and quantum mechanics === The failure of classical mechanics applied to molecular, atomic, and nuclear systems and smaller induced the need for a new mechanics: quantum mechanics. The mathematical formulation was led by De Broglie, Bohr, Schrödinger, Pauli, and Heisenberg, and others, around the mid-1920s, and at that time was analogous to that of classical mechanics. The Schrödinger equation and the Heisenberg picture resemble the classical equations of motion in the limit of large quantum numbers and as the reduced Planck constant ħ, the quantum of action, tends to zero. This is the correspondence principle. At this point, special relativity was not fully combined with quantum mechanics, so the Schrödinger and Heisenberg formulations, as originally proposed, could not be used in situations where the particles travel near the speed of light, or when the number of each type of particle changes (this happens in real particle interactions; the numerous forms of particle decays, annihilation, matter creation, pair production, and so on). === Late 1920s: Relativistic quantum mechanics of spin-0 and spin-1/2 particles === A description of quantum mechanical systems which could account for relativistic effects was sought for by many theoretical physicists from the late 1920s to the mid-1940s. The first basis for relativistic quantum mechanics, i.e. special relativity applied with quantum mechanics together, was found by all those who discovered what is frequently called the Klein–Gordon equation: by inserting the energy operator and momentum operator into the relativistic energy–momentum relation: The solutions to (1) are scalar fields. The KG equation is undesirable due to its prediction of negative energies and probabilities, as a result of the quadratic nature of (2) – inevitable in a relativistic theory. 
This equation was initially proposed by Schrödinger, and he discarded it for such reasons, only to realize a few months later that its non-relativistic limit (what is now called the Schrödinger equation) was still of importance. Nevertheless, (1) is applicable to spin-0 bosons. Neither the non-relativistic nor relativistic equations found by Schrödinger could predict the fine structure in the Hydrogen spectral series. The mysterious underlying property was spin. The first two-dimensional spin matrices (better known as the Pauli matrices) were introduced by Pauli in the Pauli equation; the Schrödinger equation with a non-relativistic Hamiltonian including an extra term for particles in magnetic fields, but this was phenomenological. Weyl found a relativistic equation in terms of the Pauli matrices; the Weyl equation, for massless spin-1/2 fermions. The problem was resolved by Dirac in the late 1920s, when he furthered the application of equation (2) to the electron – by various manipulations he factorized the equation into the form and one of these factors is the Dirac equation (see below), upon inserting the energy and momentum operators. For the first time, this introduced new four-dimensional spin matrices α and β in a relativistic wave equation, and explained the fine structure of hydrogen. The solutions to (3A) are multi-component spinor fields, and each component satisfies (1). A remarkable result of spinor solutions is that half of the components describe a particle while the other half describe an antiparticle; in this case the electron and positron. The Dirac equation is now known to apply for all massive spin-1/2 fermions. In the non-relativistic limit, the Pauli equation is recovered, while the massless case results in the Weyl equation. Although a landmark in quantum theory, the Dirac equation is only true for spin-1/2 fermions, and still predicts negative energy solutions, which caused controversy at the time (in particular – not all physicists were comfortable with the "Dirac sea" of negative energy states). === 1930s–1960s: Relativistic quantum mechanics of higher-spin particles === The natural problem became clear: to generalize the Dirac equation to particles with any spin; both fermions and bosons, and in the same equations their antiparticles (possible because of the spinor formalism introduced by Dirac in his equation, and then-recent developments in spinor calculus by van der Waerden in 1929), and ideally with positive energy solutions. This was introduced and solved by Majorana in 1932, by a deviated approach to Dirac. Majorana considered one "root" of (3A): where ψ is a spinor field, now with infinitely many components, irreducible to a finite number of tensors or spinors, to remove the indeterminacy in sign. The matrices α and β are infinite-dimensional matrices, related to infinitesimal Lorentz transformations. He did not demand that each component of 3B satisfy equation (2); instead he regenerated the equation using a Lorentz-invariant action, via the principle of least action, and application of Lorentz group theory. Majorana produced other important contributions that were unpublished, including wave equations of various dimensions (5, 6, and 16). They were anticipated later (in a more involved way) by de Broglie (1934), and Duffin, Kemmer, and Petiau (around 1938–1939) see Duffin–Kemmer–Petiau algebra. 
The Dirac–Fierz–Pauli formalism was more sophisticated than Majorana's, as spinors were new mathematical tools in the early twentieth century, although Majorana's paper of 1932 was difficult to fully understand; it took Pauli and Wigner some time to understand it, around 1940. Dirac in 1936, and Fierz and Pauli in 1939, built equations from irreducible spinors A and B, symmetric in all indices, for a massive particle of spin n + 1/2 for integer n (see Van der Waerden notation for the meaning of the dotted indices): where p is the momentum as a covariant spinor operator. For n = 0, the equations reduce to the coupled Dirac equations, and A and B together transform as the original Dirac spinor. Eliminating either A or B shows that A and B each fulfill (1). The direct derivation of the Dirac–Pauli–Fierz equations using the Bargmann–Wigner operators is given by Isaev and Podoinitsyn. In 1941, Rarita and Schwinger focussed on spin-3/2 particles and derived the Rarita–Schwinger equation, including a Lagrangian to generate it, and later generalized the equations analogous to spin n + 1/2 for integer n. In 1945, Pauli suggested Majorana's 1932 paper to Bhabha, who returned to the general ideas introduced by Majorana in 1932. Bhabha and Lubanski proposed a completely general set of equations by replacing the mass terms in (3A) and (3B) by an arbitrary constant, subject to a set of conditions which the wave functions must obey. Finally, in the year 1948 (the same year as Feynman's path integral formulation was cast), Bargmann and Wigner formulated the general equation for massive particles which could have any spin, by considering the Dirac equation with a totally symmetric finite-component spinor, and using Lorentz group theory (as Majorana did): the Bargmann–Wigner equations. In the early 1960s, a reformulation of the Bargmann–Wigner equations was made by H. Joos and Steven Weinberg, the Joos–Weinberg equation. Various theorists at this time did further research in relativistic Hamiltonians for higher spin particles. === 1960s–present === The relativistic description of spin particles has been a difficult problem in quantum theory. It is still an area of the present-day research because the problem is only partially solved; including interactions in the equations is problematic, and paradoxical predictions (even from the Dirac equation) are still present. == Linear equations == The following equations have solutions which satisfy the superposition principle, that is, the wave functions are additive. Throughout, the standard conventions of tensor index notation and Feynman slash notation are used, including Greek indices which take the values 1, 2, 3 for the spatial components and 0 for the timelike component of the indexed quantities. The wave functions are denoted ψ, and ∂μ are the components of the four-gradient operator. In matrix equations, the Pauli matrices are denoted by σμ in which μ = 0, 1, 2, 3, where σ0 is the 2 × 2 identity matrix: σ 0 = ( 1 0 0 1 ) {\displaystyle \sigma ^{0}={\begin{pmatrix}1&0\\0&1\\\end{pmatrix}}} and the other matrices have their usual representations. The expression σ μ ∂ μ ≡ σ 0 ∂ 0 + σ 1 ∂ 1 + σ 2 ∂ 2 + σ 3 ∂ 3 {\displaystyle \sigma ^{\mu }\partial _{\mu }\equiv \sigma ^{0}\partial _{0}+\sigma ^{1}\partial _{1}+\sigma ^{2}\partial _{2}+\sigma ^{3}\partial _{3}} is a 2 × 2 matrix operator which acts on 2-component spinor fields. The gamma matrices are denoted by γμ, in which again μ = 0, 1, 2, 3, and there are a number of representations to select from. 
The matrix γ0 is not necessarily the 4 × 4 identity matrix. The expression i ℏ γ μ ∂ μ + m c ≡ i ℏ ( γ 0 ∂ 0 + γ 1 ∂ 1 + γ 2 ∂ 2 + γ 3 ∂ 3 ) + m c ( 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ) {\displaystyle i\hbar \gamma ^{\mu }\partial _{\mu }+mc\equiv i\hbar (\gamma ^{0}\partial _{0}+\gamma ^{1}\partial _{1}+\gamma ^{2}\partial _{2}+\gamma ^{3}\partial _{3})+mc{\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}}} is a 4 × 4 matrix operator which acts on 4-component spinor fields. Note that terms such as "mc" scalar multiply an identity matrix of the relevant dimension, the common sizes are 2 × 2 or 4 × 4, and are conventionally not written for simplicity. === Linear gauge fields === The Duffin–Kemmer–Petiau equation is an alternative equation for spin-0 and spin-1 particles: ( i ℏ β a ∂ a − m c ) ψ = 0 {\displaystyle (i\hbar \beta ^{a}\partial _{a}-mc)\psi =0} == Constructing RWEs == === Using 4-vectors and the energy–momentum relation === Start with the standard special relativity (SR) 4-vectors 4-position X μ = X = ( c t , x → ) {\displaystyle X^{\mu }=\mathbf {X} =(ct,{\vec {\mathbf {x} }})} 4-velocity U μ = U = γ ( c , u → ) {\displaystyle U^{\mu }=\mathbf {U} =\gamma (c,{\vec {\mathbf {u} }})} 4-momentum P μ = P = ( E c , p → ) {\displaystyle P^{\mu }=\mathbf {P} =\left({\frac {E}{c}},{\vec {\mathbf {p} }}\right)} 4-wavevector K μ = K = ( ω c , k → ) {\displaystyle K^{\mu }=\mathbf {K} =\left({\frac {\omega }{c}},{\vec {\mathbf {k} }}\right)} 4-gradient ∂ μ = ∂ = ( ∂ t c , − ∇ → ) {\displaystyle \partial ^{\mu }=\mathbf {\partial } =\left({\frac {\partial _{t}}{c}},-{\vec {\mathbf {\nabla } }}\right)} Note that each 4-vector is related to another by a Lorentz scalar: U = d d τ X {\displaystyle \mathbf {U} ={\frac {d}{d\tau }}\mathbf {X} } , where τ {\displaystyle \tau } is the proper time P = m 0 U {\displaystyle \mathbf {P} =m_{0}\mathbf {U} } , where m 0 {\displaystyle m_{0}} is the rest mass K = ( 1 / ℏ ) P {\displaystyle \mathbf {K} =(1/\hbar )\mathbf {P} } , which is the 4-vector version of the Planck–Einstein relation & the de Broglie matter wave relation ∂ = − i K {\displaystyle \mathbf {\partial } =-i\mathbf {K} } , which is the 4-gradient version of complex-valued plane waves Now, just apply the standard Lorentz scalar product rule to each one: U ⋅ U = ( c ) 2 {\displaystyle \mathbf {U} \cdot \mathbf {U} =(c)^{2}} P ⋅ P = ( m 0 c ) 2 {\displaystyle \mathbf {P} \cdot \mathbf {P} =(m_{0}c)^{2}} K ⋅ K = ( m 0 c ℏ ) 2 {\displaystyle \mathbf {K} \cdot \mathbf {K} =\left({\frac {m_{0}c}{\hbar }}\right)^{2}} ∂ ⋅ ∂ = ( − i m 0 c ℏ ) 2 = − ( m 0 c ℏ ) 2 {\displaystyle \mathbf {\partial } \cdot \mathbf {\partial } =\left({\frac {-im_{0}c}{\hbar }}\right)^{2}=-\left({\frac {m_{0}c}{\hbar }}\right)^{2}} The last equation is a fundamental quantum relation. When applied to a Lorentz scalar field ψ {\displaystyle \psi } , one gets the Klein–Gordon equation, the most basic of the quantum relativistic wave equations. 
[ ∂ ⋅ ∂ + ( m 0 c ℏ ) 2 ] ψ = 0 {\displaystyle \left[\mathbf {\partial } \cdot \mathbf {\partial } +\left({\frac {m_{0}c}{\hbar }}\right)^{2}\right]\psi =0} : in 4-vector format [ ∂ μ ∂ μ + ( m 0 c ℏ ) 2 ] ψ = 0 {\displaystyle \left[\partial _{\mu }\partial ^{\mu }+\left({\frac {m_{0}c}{\hbar }}\right)^{2}\right]\psi =0} : in tensor format [ ( ℏ ∂ μ + i m 0 c ) ( ℏ ∂ μ − i m 0 c ) ] ψ = 0 {\displaystyle \left[(\hbar \partial _{\mu }+im_{0}c)(\hbar \partial ^{\mu }-im_{0}c)\right]\psi =0} : in factored tensor format The Schrödinger equation is the low-velocity limiting case (v ≪ c) of the Klein–Gordon equation. When the relation is applied to a four-vector field A μ {\displaystyle A^{\mu }} instead of a Lorentz scalar field ψ {\displaystyle \psi } , then one gets the Proca equation (in Lorenz gauge): [ ∂ ⋅ ∂ + ( m 0 c ℏ ) 2 ] A μ = 0 {\displaystyle \left[\mathbf {\partial } \cdot \mathbf {\partial } +\left({\frac {m_{0}c}{\hbar }}\right)^{2}\right]A^{\mu }=0} If the rest mass term is set to zero (light-like particles), then this gives the free Maxwell equation (in Lorenz gauge) [ ∂ ⋅ ∂ ] A μ = 0 {\displaystyle [\mathbf {\partial } \cdot \mathbf {\partial } ]A^{\mu }=0} === Representations of the Lorentz group === Under a proper orthochronous Lorentz transformation x → Λx in Minkowski space, all one-particle quantum states ψjσ of spin j with spin z-component σ locally transform under some representation D of the Lorentz group: ψ ( x ) → D ( Λ ) ψ ( Λ − 1 x ) {\displaystyle \psi (x)\rightarrow D(\Lambda )\psi (\Lambda ^{-1}x)} where D(Λ) is some finite-dimensional representation, i.e. a matrix. Here ψ is thought of as a column vector containing components with the allowed values of σ. The quantum numbers j and σ as well as other labels, continuous or discrete, representing other quantum numbers are suppressed. One value of σ may occur more than once depending on the representation. Representations with several possible values for j are considered below. The irreducible representations are labeled by a pair of half-integers or integers (A, B). From these all other representations can be built up using a variety of standard methods, like taking tensor products and direct sums. In particular, space-time itself constitutes a 4-vector representation (⁠1/2⁠, ⁠1/2⁠) so that Λ ∈ D(1/2, 1/2). To put this into context; Dirac spinors transform under the (⁠1/2⁠, 0) ⊕ (0, ⁠1/2⁠) representation. In general, the (A, B) representation space has subspaces that under the subgroup of spatial rotations, SO(3), transform irreducibly like objects of spin j, where each allowed value: j = A + B , A + B − 1 , … , | A − B | , {\displaystyle j=A+B,A+B-1,\dots ,|A-B|,} occurs exactly once. In general, tensor products of irreducible representations are reducible; they decompose as direct sums of irreducible representations. The representations D(j, 0) and D(0, j) can each separately represent particles of spin j. A state or quantum field in such a representation would satisfy no field equation except the Klein–Gordon equation. == Non-linear equations == There are equations which have solutions that do not satisfy the superposition principle. 
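Before turning to the non-linear equations, the construction just described can be checked symbolically: a plane wave solves the Klein–Gordon equation exactly when the relativistic energy–momentum relation holds, i.e. when ω² = c²k² + (m₀c²/ħ)². A minimal SymPy sketch in one spatial dimension, with generic symbol names chosen only for illustration:

```python
import sympy as sp

t, x, omega, k, m0, c, hbar = sp.symbols('t x omega k m_0 c hbar', positive=True)

# Plane wave psi ~ exp(-i(omega*t - k*x))
psi = sp.exp(-sp.I * (omega * t - k * x))

# Klein-Gordon operator: (1/c^2) d^2/dt^2 - d^2/dx^2 + (m0*c/hbar)^2
kg_residual = sp.diff(psi, t, 2) / c**2 - sp.diff(psi, x, 2) + (m0 * c / hbar)**2 * psi

# Impose the dispersion relation omega^2 = c^2 k^2 + (m0 c^2 / hbar)^2,
# i.e. E^2 = (pc)^2 + (m0 c^2)^2 with E = hbar*omega and p = hbar*k.
print(sp.simplify(kg_residual.subs(omega**2, c**2 * k**2 + (m0 * c**2 / hbar)**2)))  # -> 0
```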
=== Nonlinear gauge fields === Yang–Mills equation: describes a non-abelian gauge field Yang–Mills–Higgs equations: describes a non-abelian gauge field coupled with a massive spin-0 particle === Spin 2 === Einstein field equations: describe interaction of matter with the gravitational field (massless spin-2 field): R μ ν − 1 2 g μ ν R + g μ ν Λ = 8 π G c 4 T μ ν {\displaystyle R_{\mu \nu }-{\frac {1}{2}}g_{\mu \nu }\,R+g_{\mu \nu }\Lambda ={\frac {8\pi G}{c^{4}}}T_{\mu \nu }} The solution is a metric tensor field, rather than a wave function. == See also == List of equations in nuclear and particle physics List of equations in quantum mechanics Lorentz transformation Mathematical descriptions of the electromagnetic field Quantization of the electromagnetic field Minimal coupling Scalar field theory Status of special relativity == References == == Further reading ==
Wikipedia/Relativistic_wave_equations
In mathematical physics, the Eckhaus equation – or the Kundu–Eckhaus equation – is a nonlinear partial differential equation within the nonlinear Schrödinger class: i ψ t + ψ x x + 2 ( | ψ | 2 ) x ψ + | ψ | 4 ψ = 0. {\displaystyle i\psi _{t}+\psi _{xx}+2\left(|\psi |^{2}\right)_{x}\,\psi +|\psi |^{4}\,\psi =0.} The equation was independently introduced by Wiktor Eckhaus and by Anjan Kundu to model the propagation of waves in dispersive media. == Linearization == The Eckhaus equation can be linearized to the linear Schrödinger equation: i φ t + φ x x = 0 , {\displaystyle i\varphi _{t}+\varphi _{xx}=0,} through the non-linear transformation: φ ( x , t ) = ψ ( x , t ) exp ⁡ ( ∫ − ∞ x | ψ ( x ′ , t ) | 2 d x ′ ) . {\displaystyle \varphi (x,t)=\psi (x,t)\,\exp \left(\int _{-\infty }^{x}|\psi (x^{\prime },t)|^{2}\;{\text{d}}x^{\prime }\right).} The inverse transformation is: ψ ( x , t ) = φ ( x , t ) ( 1 + 2 ∫ − ∞ x | φ ( x ′ , t ) | 2 d x ′ ) 1 / 2 . {\displaystyle \psi (x,t)={\frac {\varphi (x,t)}{\displaystyle \left(1+2\,\int _{-\infty }^{x}|\varphi (x^{\prime },t)|^{2}\;{\text{d}}x^{\prime }\right)^{1/2}}}.} This linearization also implies that the Eckhaus equation is integrable. == Notes == == References == Ablowitz, M.J.; Ahrens, C.D.; De Lillo, S. (2005), "On a "quasi" integrable discrete Eckhaus equation", Journal of Nonlinear Mathematical Physics, 12 (Supplement 1): 1–12, Bibcode:2005JNMP...12S...1A, doi:10.2991/jnmp.2005.12.s1.1, S2CID 59441129 Calogero, F.; De Lillo, S. (1987), "The Eckhaus PDE iψt + ψxx+ 2(|ψ|2)x ψ + |ψ|4 ψ = 0", Inverse Problems, 3 (4): 633–682, Bibcode:1987InvPr...3..633C, doi:10.1088/0266-5611/3/4/012, S2CID 250876392 Eckhaus, W. (1985), The long-time behaviour for perturbed wave-equations and related problems, Department of Mathematics, University of Utrecht, Preprint no. 404. Published in part in: Eckhaus, W. (1986), "The long-time behaviour for perturbed wave-equations and related problems", in Kröner, E.; Kirchgässner, K. (eds.), Trends in applications of pure mathematics to mechanics, Lecture Notes in Physics, vol. 249, Berlin: Springer, pp. 168–194, doi:10.1007/BFb0016391, ISBN 978-3-540-16467-8 Kundu, A. (1984), "Landau–Lifshitz and higher-order nonlinear systems gauge generated from nonlinear Schrödinger-type equations", Journal of Mathematical Physics, 25 (12): 3433–3438, Bibcode:1984JMP....25.3433K, doi:10.1063/1.526113 Taghizadeh, N.; Mirzazadeh, M.; Tascan, F. (2012), "The first-integral method applied to the Eckhaus equation", Applied Mathematics Letters, 25 (5): 798–802, doi:10.1016/j.aml.2011.10.021 Zwillinger, D. (1998), Handbook of differential equations (3rd ed.), Academic Press, ISBN 978-0-12-784396-4
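The two transformations above are mutually inverse, which can be verified numerically for any smooth, square-integrable profile. A minimal NumPy sketch, assuming a Gaussian test profile and approximating the integrals by a cumulative Riemann sum:

```python
import numpy as np

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 4) * np.exp(1j * 0.3 * x)   # arbitrary test profile

# Forward map: phi = psi * exp( int_{-inf}^{x} |psi|^2 dx' )
cum_psi = np.cumsum(np.abs(psi)**2) * dx
phi = psi * np.exp(cum_psi)

# Inverse map: psi = phi / (1 + 2 * int_{-inf}^{x} |phi|^2 dx')^(1/2)
cum_phi = np.cumsum(np.abs(phi)**2) * dx
psi_back = phi / np.sqrt(1 + 2 * cum_phi)

# Agreement is limited only by the accuracy of the discretized integral.
print(np.max(np.abs(psi_back - psi)))
```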
Wikipedia/Eckhaus_equation
Cross-entropy benchmarking (also referred to as XEB) is a quantum benchmarking protocol which can be used to demonstrate quantum supremacy. In XEB, a random quantum circuit is executed on a quantum computer multiple times in order to collect a set of k {\displaystyle k} samples in the form of bitstrings { x 1 , … , x k } {\displaystyle \{x_{1},\dots ,x_{k}\}} . The bitstrings are then used to calculate the cross-entropy benchmark fidelity ( F X E B {\displaystyle F_{\rm {XEB}}} ) via a classical computer, given by F X E B = 2 n ⟨ P ( x i ) ⟩ k − 1 = 2 n k ( ∑ i = 1 k | ⟨ 0 n | C | x i ⟩ | 2 ) − 1 {\displaystyle F_{\rm {XEB}}=2^{n}\langle P(x_{i})\rangle _{k}-1={\frac {2^{n}}{k}}\left(\sum _{i=1}^{k}|\langle 0^{n}|C|x_{i}\rangle |^{2}\right)-1} , where n {\displaystyle n} is the number of qubits in the circuit and P ( x i ) {\displaystyle P(x_{i})} is the probability of a bitstring x i {\displaystyle {x_{i}}} for an ideal quantum circuit C {\displaystyle C} . If F X E B = 1 {\displaystyle F_{XEB}=1} , the samples were collected from a noiseless quantum computer. If F X E B = 0 {\displaystyle F_{\rm {XEB}}=0} , then the samples could have been obtained via random guessing. This means that if a quantum computer did generate those samples, then the quantum computer is too noisy and thus has no chance of performing beyond-classical computations. Since it takes an exponential amount of resources to classically simulate a quantum circuit, there comes a point when the biggest supercomputer that runs the best classical algorithm for simulating quantum circuits can't compute the XEB. Crossing this point is known as achieving quantum supremacy; and after entering the quantum supremacy regime, XEB can only be estimated. The Sycamore processor was the first to demonstrate quantum supremacy via XEB. Instances of random circuits with n = 53 {\displaystyle n=53} and 20 cycles were run to obtain an XEB of 0.0024 {\displaystyle 0.0024} . Generating samples took 200 seconds on the quantum processor when it would have taken 10,000 years on Summit at the time of the experiment. Improvements in classical algorithms have shortened the runtime to about a week on Sunway TaihuLight thus collapsing Sycamore's claim to quantum supremacy. As of 2021, the latest demonstration of quantum supremacy by Zuchongzhi 2.1 with n = 60 {\displaystyle n=60} , 24 cycles and an XEB of 0.000366 {\displaystyle 0.000366} holds. It takes around 4 hours to generate samples on Zuchongzhi 2.1 when it would take 10,000 years on Sunway. == See also == Boson sampling == References ==
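Given the ideal output probabilities of a circuit from a classical simulation, the fidelity formula above is a one-line average over the measured bitstrings. A minimal Python sketch with an invented two-qubit distribution, purely for illustration:

```python
import numpy as np

def xeb_fidelity(ideal_probs, samples, n_qubits):
    """Linear cross-entropy benchmark fidelity F_XEB = 2^n <P(x_i)> - 1.

    ideal_probs : dict mapping bitstring -> ideal probability of that bitstring
                  for the circuit, obtained from a classical simulation.
    samples     : list of bitstrings measured on the quantum processor.
    """
    mean_p = np.mean([ideal_probs[x] for x in samples])
    return 2**n_qubits * mean_p - 1

# Invented ideal distribution for n = 2 qubits (illustration only).
ideal = {'00': 0.50, '01': 0.10, '10': 0.15, '11': 0.25}
print(xeb_fidelity(ideal, ['00', '00', '11', '01'], n_qubits=2))  # samples favour likely strings -> positive score
print(xeb_fidelity(ideal, ['00', '01', '10', '11'], n_qubits=2))  # uniform random guessing -> 0
```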
Wikipedia/Cross-entropy_benchmarking
In quantum electrodynamics, the vertex function describes the coupling between a photon and an electron beyond the leading order of perturbation theory. In particular, it is the one-particle irreducible correlation function involving the fermion ψ {\displaystyle \psi } , the antifermion ψ ¯ {\displaystyle {\bar {\psi }}} , and the vector potential A. == Definition == The vertex function Γ μ {\displaystyle \Gamma ^{\mu }} can be defined in terms of a functional derivative of the effective action Seff as Γ μ = − 1 e δ 3 S e f f δ ψ ¯ δ ψ δ A μ {\displaystyle \Gamma ^{\mu }=-{1 \over e}{\delta ^{3}S_{\mathrm {eff} } \over \delta {\bar {\psi }}\delta \psi \delta A_{\mu }}} The dominant (and classical) contribution to Γ μ {\displaystyle \Gamma ^{\mu }} is the gamma matrix γ μ {\displaystyle \gamma ^{\mu }} , which explains the choice of the letter. The vertex function is constrained by the symmetries of quantum electrodynamics — Lorentz invariance; gauge invariance or the transversality of the photon, as expressed by the Ward identity; and invariance under parity — to take the following form: Γ μ = γ μ F 1 ( q 2 ) + i σ μ ν q ν 2 m F 2 ( q 2 ) {\displaystyle \Gamma ^{\mu }=\gamma ^{\mu }F_{1}(q^{2})+{\frac {i\sigma ^{\mu \nu }q_{\nu }}{2m}}F_{2}(q^{2})} where σ μ ν = ( i / 2 ) [ γ μ , γ ν ] {\displaystyle \sigma ^{\mu \nu }=(i/2)[\gamma ^{\mu },\gamma ^{\nu }]} , q ν {\displaystyle q_{\nu }} is the incoming four-momentum of the external photon, and F1(q2) and F2(q2) are form factors that depend only on the momentum transfer q2. At tree level (or leading order), F1(q2) = 1 and F2(q2) = 0. Beyond leading order, the corrections to F1(0) are exactly canceled by the field strength renormalization. The form factor F2(0) corresponds to the anomalous magnetic moment a of the fermion, defined in terms of the Landé g-factor as: a = g − 2 2 = F 2 ( 0 ) {\displaystyle a={\frac {g-2}{2}}=F_{2}(0)} == See also == Nonoblique correction == References == Gross, F. (1993). Relativistic Quantum Mechanics and Field Theory (1st ed.). Wiley-VCH. ISBN 978-0471591139. Peskin, Michael E.; Schroeder, Daniel V. (1995). An Introduction to Quantum Field Theory. Reading: Addison-Wesley. ISBN 0-201-50397-2. Weinberg, S. (2002), Foundations, The Quantum Theory of Fields, vol. I, Cambridge University Press, ISBN 0-521-55001-7 == External links == Media related to Vertex function at Wikimedia Commons
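The last relation is simple enough to spell out in code: the g-factor follows from the form factor as g = 2(1 + F2(0)). In the sketch below the nonzero value is only an illustrative order of magnitude for a loop-induced F2(0), not a quoted measurement:

```python
def g_factor(F2_at_zero):
    """Lande g-factor from the form factor: a = (g - 2)/2 = F2(0)."""
    return 2 * (1 + F2_at_zero)

print(g_factor(0.0))      # tree level (F1 = 1, F2 = 0): g = 2 exactly
print(g_factor(0.00116))  # illustrative loop-level value -> g slightly above 2
```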
Wikipedia/Vertex_function
Two-photon physics, also called gamma–gamma physics, is a branch of particle physics that describes the interactions between two photons. Normally, beams of light pass through each other unperturbed. Inside an optical material, and if the intensity of the beams is high enough, the beams may affect each other through a variety of non-linear optical effects. In pure vacuum, some weak scattering of light by light exists as well. Also, above some threshold of the center-of-mass energy of the two-photon system, matter can be created. == Astronomy == === Cosmological/intergalactic gamma rays === Photon–photon interactions limit the spectrum of observed gamma-ray photons at moderate cosmological distances to a photon energy below around 20 GeV, that is, to a wavelength greater than approximately 6.2×10−17 m. This limit reaches up to around 20 TeV at merely intergalactic distances. An analogy would be light traveling through a fog: at near distances a light source is more clearly visible than at long distances due to the scattering of light by fog particles. Similarly, the further a gamma-ray travels through the universe, the more likely it is to be scattered by an interaction with a low energy photon from the extragalactic background light. At those energies and distances, very high energy gamma-ray photons have a significant probability of a photon-photon interaction with a low energy background photon from the extragalactic background light, resulting either in the creation of particle-antiparticle pairs via direct pair production or (less often) in photon-photon scattering events that lower the incident photon energies. This renders the universe effectively opaque to very high energy photons at intergalactic to cosmological distances. == Experiments == Two-photon physics can be studied with high-energy particle accelerators, where the accelerated particles are not the photons themselves but charged particles that will radiate photons. The most significant studies so far were performed at the Large Electron–Positron Collider (LEP) at CERN. If the transverse momentum transfer and thus the deflection is large, one or both electrons can be detected; this is called tagging. The other particles that are created in the interaction are tracked by large detectors to reconstruct the physics of the interaction. Frequently, photon-photon interactions are studied via ultraperipheral collisions (UPCs) of heavy ions, such as gold or lead. These are collisions in which the colliding nuclei do not touch each other; i.e., the impact parameter b {\displaystyle b} is larger than the sum of the radii of the nuclei. The strong interaction between the quarks composing the nuclei is thus greatly suppressed, making the weaker electromagnetic γ γ {\displaystyle \gamma \gamma } interaction much more visible. In UPCs, because the ions are heavily charged, it is possible to have two independent interactions between a single ion pair, such as production of two electron-positron pairs. UPCs are studied with the STARlight simulation code. Light-by-light scattering, as predicted by quantum electrodynamics, can be studied using the strong electromagnetic fields of the hadrons collided at the LHC; it was first seen in 2016 by the ATLAS collaboration and was then confirmed by the CMS collaboration, including at high two-photon energies. The best previous constraint on the elastic photon–photon scattering cross section was set by PVLAS, which reported an upper limit far above the level predicted by the Standard Model.
Observation of a cross section larger than that predicted by the Standard Model could signify new physics such as axions, the search of which is the primary goal of PVLAS and several similar experiments. == Processes == From quantum electrodynamics it can be found that photons cannot couple directly to each other, nor jointly to a fermionic field in a single vertex (cf. the Landau-Yang theorem): they carry no charge, and no two-fermion–two-boson vertex exists because of the requirements of renormalizability. They can, however, interact through higher-order processes, or couple directly to each other in a vertex that also involves two W bosons: a photon can, within the bounds of the uncertainty principle, fluctuate into a virtual charged fermion–antifermion pair, to either of which the other photon can couple. This fermion pair can consist of leptons or quarks. Thus, two-photon physics experiments can be used as ways to study the photon structure, or, somewhat metaphorically, what is "inside" the photon. There are three interaction processes: Direct or pointlike: The photon couples directly to a quark inside the target photon. If a lepton–antilepton pair is created, this process involves only quantum electrodynamics (QED), but if a quark–antiquark pair is created, it involves both QED and perturbative quantum chromodynamics (QCD). The intrinsic quark content of the photon is described by the photon structure function, experimentally analyzed in deep-inelastic electron–photon scattering. Single resolved: The quark pair of the target photon forms a vector meson. The probing photon couples to a constituent of this meson. Double resolved: Both target and probe photon have formed a vector meson. This results in an interaction between two hadrons. For the latter two cases, the scale of the interaction is such that the strong coupling constant is large. This is called vector meson dominance (VMD) and has to be modelled in non-perturbative QCD. == See also == Channelling radiation has been considered as a method to generate polarized high energy photon beams for gamma–gamma colliders. Matter creation Pair production Delbrück scattering Breit–Wheeler process == References == == External links == Lauber, J. A., 1997, A small tutorial in gamma–gamma Physics Two-photon physics at LEP Two-photon physics at CESR
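The opacity of the universe to very-high-energy gamma rays discussed in the Astronomy section is, at bottom, a kinematic threshold: two photons can only produce an electron–positron pair if their invariant mass reaches 2mec². A rough back-of-the-envelope sketch for head-on collisions (ignoring the angular average and the energy-dependent cross section):

```python
# Head-on threshold: s = 4 * E_gamma * eps >= (2 * m_e c^2)^2,
# so the minimum target-photon energy is eps_min = (m_e c^2)^2 / E_gamma.
m_e_c2 = 0.511e6  # electron rest energy in eV

for E_gamma in (20e9, 20e12):  # the 20 GeV and 20 TeV scales quoted above
    eps_min = m_e_c2**2 / E_gamma
    print(f"E_gamma = {E_gamma:.0e} eV -> minimum background-photon energy ~ {eps_min:.3g} eV")
```

The estimate puts the absorbing partners of 20 GeV photons in the ultraviolet and those of multi-TeV photons in the far infrared, consistent with the qualitative picture of absorption on the extragalactic background light given above.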
Wikipedia/Two-photon_physics
Cavity Quantum Electrodynamics (cavity QED) is the study of the interaction between light confined in a reflective cavity and atoms or other particles, under conditions where the quantum nature of photons is significant. It could in principle be used to construct a quantum computer. The case of a single 2-level atom in the cavity is mathematically described by the Jaynes–Cummings model, and undergoes vacuum Rabi oscillations | e ⟩ | n − 1 ⟩ ↔ | g ⟩ | n ⟩ {\displaystyle |e\rangle |n-1\rangle \leftrightarrow |g\rangle |n\rangle } , that is between an excited atom and n − 1 {\displaystyle n-1} photons, and a ground state atom and n {\displaystyle n} photons. If the cavity is in resonance with the atomic transition, a half-cycle of oscillation starting with no photons coherently swaps the atom qubit's state onto the cavity field's, ( α | g ⟩ + β | e ⟩ ) | 0 ⟩ ↔ | g ⟩ ( α | 0 ⟩ + β | 1 ⟩ ) {\displaystyle (\alpha |g\rangle +\beta |e\rangle )|0\rangle \leftrightarrow |g\rangle (\alpha |0\rangle +\beta |1\rangle )} , and can be repeated to swap it back again; this could be used as a single photon source (starting with an excited atom), or as an interface between an atom or trapped ion quantum computer and optical quantum communication. Other interaction durations create entanglement between the atom and cavity field; for example, a quarter-cycle on resonance starting from | e ⟩ | 0 ⟩ {\displaystyle |e\rangle |0\rangle } gives the maximally entangled state (a Bell state) ( | e ⟩ | 0 ⟩ + | g ⟩ | 1 ⟩ ) / 2 {\displaystyle (|e\rangle |0\rangle +|g\rangle |1\rangle )/{\sqrt {2}}} . This can in principle be used as a quantum computer, mathematically equivalent to a trapped ion quantum computer with cavity photons replacing phonons. == Nobel Prize in Physics == The 2012 Nobel Prize for Physics was awarded to Serge Haroche and David Wineland for their work on controlling quantum systems. Haroche shares half of the prize for developing a new field called cavity quantum electrodynamics (CQED) – whereby the properties of an atom are controlled by placing it in an optical or microwave cavity. Haroche focused on microwave experiments and turned the technique on its head – using CQED to control the properties of individual photons. In a series of ground-breaking experiments, Haroche used CQED to realize Schrödinger's famous cat experiment in which a system is in a superposition of two very different quantum states until a measurement is made on the system. Such states are extremely fragile, and the techniques developed to create and measure CQED states are now being applied to the development of quantum computers. == See also == Circuit quantum electrodynamics Superconducting radio frequency Dicke model == References == Herbert Walther; Benjamin T H Varcoe; Berthold-Georg Englert; Thomas Becker (2006). "Cavity quantum electrodynamics". Rep. Prog. Phys. 69 (5): 1325–1382. Bibcode:2006RPPh...69.1325W. doi:10.1088/0034-4885/69/5/R02. S2CID 122420445. Microwave wavelengths, atoms passing through cavity R Miller; T E Northup; K M Birnbaum; A Boca; A D Boozer; H J Kimble (2005). "Trapped atoms in cavity QED: coupling quantized light and matter". J. Phys. B: At. Mol. Opt. Phys. 38 (9): S551 – S565. Bibcode:2005JPhB...38S.551M. doi:10.1088/0953-4075/38/9/007. S2CID 1114899. Optical wavelengths, atoms trapped
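The swap and entangling operations described above can be illustrated in a few lines by propagating the resonant Jaynes–Cummings model in its single-excitation subspace (rotating frame, ħ = 1, with an arbitrary coupling g). This is a sketch of the idealized model, not of any particular cavity experiment:

```python
import numpy as np
from scipy.linalg import expm

# Basis: |e,0> (excited atom, no photon) and |g,1> (ground-state atom, one photon).
# On resonance the interaction Hamiltonian in this subspace is H = g * sigma_x.
g = 1.0
H = g * np.array([[0.0, 1.0],
                  [1.0, 0.0]])
psi0 = np.array([1.0, 0.0], dtype=complex)     # start in |e,0>

for gt in (0.0, np.pi / 4, np.pi / 2, np.pi):  # quarter, half and full population cycle
    psi_t = expm(-1j * H * gt / g) @ psi0
    print(f"g*t = {gt:.2f}  P(|e,0>) = {abs(psi_t[0])**2:.3f}  P(|g,1>) = {abs(psi_t[1])**2:.3f}")
```

In this convention the excitation is fully swapped onto the cavity field at g·t = π/2, the populations are equal (a maximally entangled state, up to a phase) at g·t = π/4, and the excitation returns to the atom at g·t = π.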
Wikipedia/Cavity_quantum_electrodynamics
A stationary state is a quantum state with all observables independent of time. It is an eigenvector of the energy operator (instead of a quantum superposition of different energies). It is also called energy eigenvector, energy eigenstate, energy eigenfunction, or energy eigenket. It is very similar to the concept of atomic orbital and molecular orbital in chemistry, with some slight differences explained below. == Introduction == A stationary state is called stationary because the system remains in the same state as time elapses, in every observable way. For a single-particle Hamiltonian, this means that the particle has a constant probability distribution for its position, its velocity, its spin, etc. (This is true assuming the particle's environment is also static, i.e. the Hamiltonian is unchanging in time.) The wavefunction itself is not stationary: It continually changes its overall complex phase factor, so as to form a standing wave. The oscillation frequency of the standing wave, multiplied by the Planck constant, is the energy of the state according to the Planck–Einstein relation. Stationary states are quantum states that are solutions to the time-independent Schrödinger equation: H ^ | Ψ ⟩ = E Ψ | Ψ ⟩ , {\displaystyle {\hat {H}}|\Psi \rangle =E_{\Psi }|\Psi \rangle ,} where H ^ {\displaystyle {\hat {H}}} is the Hamiltonian operator, | Ψ ⟩ {\displaystyle |\Psi \rangle } is the stationary state, and E Ψ {\displaystyle E_{\Psi }} is its energy. This is an eigenvalue equation: H ^ {\displaystyle {\hat {H}}} is a linear operator on a vector space, | Ψ ⟩ {\displaystyle |\Psi \rangle } is an eigenvector of H ^ {\displaystyle {\hat {H}}} , and E Ψ {\displaystyle E_{\Psi }} is its eigenvalue. If a stationary state | Ψ ⟩ {\displaystyle |\Psi \rangle } is plugged into the time-dependent Schrödinger equation, the result is i ℏ ∂ ∂ t | Ψ ⟩ = E Ψ | Ψ ⟩ . {\displaystyle i\hbar {\frac {\partial }{\partial t}}|\Psi \rangle =E_{\Psi }|\Psi \rangle .} Assuming that H ^ {\displaystyle {\hat {H}}} is time-independent (unchanging in time), this equation holds for any time t. Therefore, this is a differential equation describing how | Ψ ⟩ {\displaystyle |\Psi \rangle } varies in time. Its solution is | Ψ ( t ) ⟩ = e − i E Ψ t / ℏ | Ψ ( 0 ) ⟩ . {\displaystyle |\Psi (t)\rangle =e^{-iE_{\Psi }t/\hbar }|\Psi (0)\rangle .} Therefore, a stationary state is a standing wave that oscillates with an overall complex phase factor, and its oscillation angular frequency is equal to its energy divided by ℏ {\displaystyle \hbar } . == Stationary state properties == As shown above, a stationary state is not mathematically constant: | Ψ ( t ) ⟩ = e − i E Ψ t / ℏ | Ψ ( 0 ) ⟩ . {\displaystyle |\Psi (t)\rangle =e^{-iE_{\Psi }t/\hbar }|\Psi (0)\rangle .} However, all observable properties of the state are in fact constant in time. For example, if | Ψ ( t ) ⟩ {\displaystyle |\Psi (t)\rangle } represents a simple one-dimensional single-particle wavefunction Ψ ( x , t ) {\displaystyle \Psi (x,t)} , the probability that the particle is at location x is | Ψ ( x , t ) | 2 = | e − i E Ψ t / ℏ Ψ ( x , 0 ) | 2 = | e − i E Ψ t / ℏ | 2 | Ψ ( x , 0 ) | 2 = | Ψ ( x , 0 ) | 2 , {\displaystyle |\Psi (x,t)|^{2}=\left|e^{-iE_{\Psi }t/\hbar }\Psi (x,0)\right|^{2}=\left|e^{-iE_{\Psi }t/\hbar }\right|^{2}\left|\Psi (x,0)\right|^{2}=\left|\Psi (x,0)\right|^{2},} which is independent of the time t. The Heisenberg picture is an alternative mathematical formulation of quantum mechanics where stationary states are truly mathematically constant in time. As mentioned above, these equations assume that the Hamiltonian is time-independent.
This means simply that stationary states are only stationary when the rest of the system is fixed and stationary as well. For example, a 1s electron in a hydrogen atom is in a stationary state, but if the hydrogen atom reacts with another atom, then the electron will of course be disturbed. == Spontaneous decay == Spontaneous decay complicates the question of stationary states. For example, according to simple (nonrelativistic) quantum mechanics, the hydrogen atom has many stationary states: 1s, 2s, 2p, and so on, are all stationary states. But in reality, only the ground state 1s is truly "stationary": An electron in a higher energy level will spontaneously emit one or more photons to decay into the ground state. This seems to contradict the idea that stationary states should have unchanging properties. The explanation is that the Hamiltonian used in nonrelativistic quantum mechanics is only an approximation to the Hamiltonian from quantum field theory. The higher-energy electron states (2s, 2p, 3s, etc.) are stationary states according to the approximate Hamiltonian, but not stationary according to the true Hamiltonian, because of vacuum fluctuations. On the other hand, the 1s state is truly a stationary state, according to both the approximate and the true Hamiltonian. == Comparison to "orbital" in chemistry == An orbital is a stationary state (or approximation thereof) of a one-electron atom or molecule; more specifically, an atomic orbital for an electron in an atom, or a molecular orbital for an electron in a molecule. For a molecule that contains only a single electron (e.g. atomic hydrogen or H2+), an orbital is exactly the same as a total stationary state of the molecule. However, for a many-electron molecule, an orbital is completely different from a total stationary state, which is a many-particle state requiring a more complicated description (such as a Slater determinant). In particular, in a many-electron molecule, an orbital is not the total stationary state of the molecule, but rather the stationary state of a single electron within the molecule. This concept of an orbital is only meaningful under the approximation that, if the instantaneous electron–electron repulsion terms in the Hamiltonian are ignored as a simplifying assumption, the total eigenvector of a many-electron molecule can be decomposed into separate contributions from individual-electron stationary states (orbitals), each of which is obtained under the one-electron approximation. (Luckily, chemists and physicists can often (but not always) use this "single-electron approximation".) In this sense, in a many-electron system, an orbital can be considered as the stationary state of an individual electron in the system. In chemistry, calculations of molecular orbitals typically also assume the Born–Oppenheimer approximation. == See also == Transition of state Quantum number Quantum mechanic vacuum or vacuum state Virtual particle Steady state == References == == Further reading == Stationary states, Alan Holden, Oxford University Press, 1971, ISBN 0-19-851121-3
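The contrast drawn above, constant observables for an energy eigenstate versus time-dependent observables for a superposition of different energies, is easy to see numerically with a finite-dimensional stand-in for the Hamiltonian. A small NumPy/SciPy sketch (random Hermitian matrix, ħ = 1, all values illustrative):

```python
import numpy as np
from scipy.linalg import eigh, expm

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                     # a random Hermitian "Hamiltonian"
_, states = eigh(H)

eigenstate = states[:, 0]                                    # a stationary state
superposition = (states[:, 0] + states[:, 1]) / np.sqrt(2)   # not stationary

for t in (0.0, 1.0, 2.0):
    U = expm(-1j * H * t)                    # time-evolution operator exp(-iHt)
    print(f"t = {t}:",
          np.round(np.abs(U @ eigenstate)**2, 4),      # probabilities stay fixed
          np.round(np.abs(U @ superposition)**2, 4))   # probabilities oscillate
```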
Wikipedia/Energy_eigenstate
Set theory is the branch of mathematical logic that studies sets, which can be informally described as collections of objects. Although objects of any kind can be collected into a set, set theory – as a branch of mathematics – is mostly concerned with those that are relevant to mathematics as a whole. The modern study of set theory was initiated by the German mathematicians Richard Dedekind and Georg Cantor in the 1870s. In particular, Georg Cantor is commonly considered the founder of set theory. The non-formalized systems investigated during this early stage go under the name of naive set theory. After the discovery of paradoxes within naive set theory (such as Russell's paradox, Cantor's paradox and the Burali-Forti paradox), various axiomatic systems were proposed in the early twentieth century, of which Zermelo–Fraenkel set theory (with or without the axiom of choice) is still the best-known and most studied. Set theory is commonly employed as a foundational system for the whole of mathematics, particularly in the form of Zermelo–Fraenkel set theory with the axiom of choice. Besides its foundational role, set theory also provides the framework to develop a mathematical theory of infinity, and has various applications in computer science (such as in the theory of relational algebra), philosophy, formal semantics, and evolutionary dynamics. Its foundational appeal, together with its paradoxes, and its implications for the concept of infinity and its multiple applications have made set theory an area of major interest for logicians and philosophers of mathematics. Contemporary research into set theory covers a vast array of topics, ranging from the structure of the real number line to the study of the consistency of large cardinals. == History == === Early history === The basic notion of grouping objects has existed since at least the emergence of numbers, and the notion of treating sets as their own objects has existed since at least the Tree of Porphyry, 3rd-century AD. The simplicity and ubiquity of sets makes it hard to determine the origin of sets as now used in mathematics, however, Bernard Bolzano's Paradoxes of the Infinite (Paradoxien des Unendlichen, 1851) is generally considered the first rigorous introduction of sets to mathematics. In his work, he (among other things) expanded on Galileo's paradox, and introduced one-to-one correspondence of infinite sets, for example between the intervals [ 0 , 5 ] {\displaystyle [0,5]} and [ 0 , 12 ] {\displaystyle [0,12]} by the relation 5 y = 12 x {\displaystyle 5y=12x} . However, he resisted saying these sets were equinumerous, and his work is generally considered to have been uninfluential in mathematics of his time. Before mathematical set theory, basic concepts of infinity were considered to be solidly in the domain of philosophy (see: Infinity (philosophy) and Infinity § History). Since the 5th century BC, beginning with Greek philosopher Zeno of Elea in the West (and early Indian mathematicians in the East), mathematicians had struggled with the concept of infinity. With the development of calculus in the late 17th century, philosophers began to generally distinguish between actual and potential infinity, wherein mathematics was only considered in the latter. Carl Friedrich Gauss famously stated: "Infinity is nothing more than a figure of speech which helps us talk about limits. The notion of a completed infinity doesn't belong in mathematics." Development of mathematical set theory was motivated by several mathematicians. 
Bernhard Riemann's lecture On the Hypotheses which lie at the Foundations of Geometry (1854) proposed new ideas about topology, and about basing mathematics (especially geometry) in terms of sets or manifolds in the sense of a class (which he called Mannigfaltigkeit), in what is now called point-set topology. The lecture was published by Richard Dedekind in 1868, along with Riemann's paper on trigonometric series (which presented the Riemann integral). The latter was the starting point of a movement in real analysis for the study of “seriously” discontinuous functions. A young Georg Cantor entered into this area, which led him to the study of point-sets. Around 1871, influenced by Riemann, Dedekind began working with sets in his publications, which dealt very clearly and precisely with equivalence relations, partitions of sets, and homomorphisms. Thus, many of the usual set-theoretic procedures of twentieth-century mathematics go back to his work. However, he did not publish a formal explanation of his set theory until 1888. === Naive set theory === Set theory, as understood by modern mathematicians, is generally considered to be founded by a single paper in 1874 by Georg Cantor titled On a Property of the Collection of All Real Algebraic Numbers. In his paper, he developed the notion of cardinality, comparing the sizes of two sets by setting them in one-to-one correspondence. His "revolutionary discovery" was that the set of all real numbers is uncountable, that is, one cannot put all real numbers in a list. This theorem is proved using Cantor's first uncountability proof, which differs from the more familiar proof using his diagonal argument. Cantor introduced fundamental constructions in set theory, such as the power set of a set A, which is the set of all possible subsets of A. He later proved that the size of the power set of A is strictly larger than the size of A, even when A is an infinite set; this result soon became known as Cantor's theorem. Cantor developed a theory of transfinite numbers, called cardinals and ordinals, which extended the arithmetic of the natural numbers. His notation for the cardinal numbers was the Hebrew letter ℵ {\displaystyle \aleph } (ℵ, aleph) with a natural number subscript; for the ordinals he employed the Greek letter ω {\displaystyle \omega } (ω, omega). Set theory was beginning to become an essential ingredient of the new “modern” approach to mathematics. Originally, Cantor's theory of transfinite numbers was regarded as counter-intuitive – even shocking. This caused it to encounter resistance from mathematical contemporaries such as Leopold Kronecker and Henri Poincaré and later from Hermann Weyl and L. E. J. Brouwer, while Ludwig Wittgenstein raised philosophical objections (see: Controversy over Cantor's theory). Dedekind's algebraic style only began to find followers in the 1890s. Despite the controversy, Cantor's set theory gained remarkable ground around the turn of the 20th century with the work of several notable mathematicians and philosophers. Richard Dedekind, around the same time, began working with sets in his publications, and famously constructed the real numbers using Dedekind cuts. He also worked with Giuseppe Peano in developing the Peano axioms, which formalized natural-number arithmetic using set-theoretic ideas; this work also introduced the epsilon symbol for set membership. Possibly most prominently, Gottlob Frege began to develop his Foundations of Arithmetic.
In his work, Frege tried to ground all mathematics in terms of logical axioms using Cantor's cardinality. For example, the sentence "the number of horses in the barn is four" means that four objects fall under the concept horse in the barn. Frege attempted to explain our grasp of numbers through cardinality ('the number of...', or N x : F x {\displaystyle Nx:Fx} ), relying on Hume's principle. However, Frege's work was short-lived, as it was found by Bertrand Russell that his axioms lead to a contradiction; the culprit was Frege's Basic Law V (now known as the axiom schema of unrestricted comprehension). According to Basic Law V, for any sufficiently well-defined property, there is the set of all and only the objects that have that property. The contradiction, called Russell's paradox, is shown as follows: Let R be the set of all sets that are not members of themselves. (This set is sometimes called "the Russell set".) If R is not a member of itself, then its definition entails that it is a member of itself; yet, if it is a member of itself, then it is not a member of itself, since it is the set of all sets that are not members of themselves. The resulting contradiction is Russell's paradox. In symbols: Let R = { x ∣ x ∉ x } , then R ∈ R ⟺ R ∉ R {\displaystyle {\text{Let }}R=\{x\mid x\not \in x\}{\text{, then }}R\in R\iff R\not \in R} This came at a time of several other paradoxes or counter-intuitive results: for example, the unprovability of the parallel postulate, the existence of mathematical objects that cannot be computed or explicitly described, and the existence of theorems of arithmetic that cannot be proved with Peano arithmetic. The result was a foundational crisis of mathematics. == Basic concepts and notation == Set theory begins with a fundamental binary relation between an object o and a set A. If o is a member (or element) of A, the notation o ∈ A is used. A set is described by listing elements separated by commas, or by a characterizing property of its elements, within braces { }. Since sets are objects, the membership relation can relate sets as well, i.e., sets themselves can be members of other sets. A derived binary relation between two sets is the subset relation, also called set inclusion. If all the members of set A are also members of set B, then A is a subset of B, denoted A ⊆ B. For example, {1, 2} is a subset of {1, 2, 3}, and so is {2}, but {1, 4} is not. As implied by this definition, a set is a subset of itself. For cases where this possibility is unsuitable, or where it makes sense to exclude it, the term proper subset is defined, variously denoted A ⊂ B {\displaystyle A\subset B} , A ⊊ B {\displaystyle A\subsetneq B} , or A ⫋ B {\displaystyle A\subsetneqq B} (note however that the notation A ⊂ B {\displaystyle A\subset B} is sometimes used synonymously with A ⊆ B {\displaystyle A\subseteq B} ; that is, allowing the possibility that A and B are equal). We call A a proper subset of B if and only if A is a subset of B, but A is not equal to B. Also, 1, 2, and 3 are members (elements) of the set {1, 2, 3}, but are not subsets of it; and in turn, the subsets, such as {1}, are not members of the set {1, 2, 3}. More complicated relations can exist; for example, the set {1} is both a member and a proper subset of the set {1, {1}}. Just as arithmetic features binary operations on numbers, set theory features binary operations on sets.
The following is a partial list of them: Union of the sets A and B, denoted A ∪ B, is the set of all objects that are a member of A, or B, or both. For example, the union of {1, 2, 3} and {2, 3, 4} is the set {1, 2, 3, 4}. Intersection of the sets A and B, denoted A ∩ B, is the set of all objects that are members of both A and B. For example, the intersection of {1, 2, 3} and {2, 3, 4} is the set {2, 3}. Set difference of U and A, denoted U ∖ A, is the set of all members of U that are not members of A. The set difference {1, 2, 3} ∖ {2, 3, 4} is {1}, while conversely, the set difference {2, 3, 4} ∖ {1, 2, 3} is {4}. When A is a subset of U, the set difference U ∖ A is also called the complement of A in U. In this case, if the choice of U is clear from the context, the notation Ac is sometimes used instead of U ∖ A, particularly if U is a universal set as in the study of Venn diagrams. Symmetric difference of sets A and B, denoted A △ B or A ⊖ B, is the set of all objects that are a member of exactly one of A and B (elements which are in one of the sets, but not in both). For instance, for the sets {1, 2, 3} and {2, 3, 4}, the symmetric difference set is {1, 4}. It is the set difference of the union and the intersection, (A ∪ B) ∖ (A ∩ B) or (A ∖ B) ∪ (B ∖ A). Cartesian product of A and B, denoted A × B, is the set whose members are all possible ordered pairs (a, b), where a is a member of A and b is a member of B. For example, the Cartesian product of {1, 2} and {red, white} is {(1, red), (1, white), (2, red), (2, white)}. Some basic sets of central importance are the set of natural numbers, the set of real numbers and the empty set – the unique set containing no elements. The empty set is also occasionally called the null set, though this name is ambiguous and can lead to several interpretations. The empty set can be denoted with empty braces " { } {\displaystyle \{\}} " or the symbol " ∅ {\displaystyle \varnothing } " or " ∅ {\displaystyle \emptyset } ". The power set of a set A, denoted P ( A ) {\displaystyle {\mathcal {P}}(A)} , is the set whose members are all of the possible subsets of A. For example, the power set of {1, 2} is { {}, {1}, {2}, {1, 2} }. Notably, P ( A ) {\displaystyle {\mathcal {P}}(A)} contains both A and the empty set. == Ontology == A set is pure if all of its members are sets, all members of its members are sets, and so on. For example, the set containing only the empty set is a nonempty pure set. In modern set theory, it is common to restrict attention to the von Neumann universe of pure sets, and many systems of axiomatic set theory are designed to axiomatize the pure sets only. There are many technical advantages to this restriction, and little generality is lost, because essentially all mathematical concepts can be modeled by pure sets. Sets in the von Neumann universe are organized into a cumulative hierarchy, based on how deeply their members, members of members, etc. are nested. Each set in this hierarchy is assigned (by transfinite recursion) an ordinal number α {\displaystyle \alpha } , known as its rank. The rank of a pure set X {\displaystyle X} is defined to be the least ordinal that is strictly greater than the rank of any of its elements. For example, the empty set is assigned rank 0, while the set containing only the empty set is assigned rank 1. For each ordinal α {\displaystyle \alpha } , the set V α {\displaystyle V_{\alpha }} is defined to consist of all pure sets with rank less than α {\displaystyle \alpha } . 
The entire von Neumann universe is denoted V {\displaystyle V} . == Formalized set theory == Elementary set theory can be studied informally and intuitively, and so can be taught in primary schools using Venn diagrams. The intuitive approach tacitly assumes that a set may be formed from the class of all objects satisfying any particular defining condition. This assumption gives rise to paradoxes, the simplest and best known of which are Russell's paradox and the Burali-Forti paradox. Axiomatic set theory was originally devised to rid set theory of such paradoxes. The most widely studied systems of axiomatic set theory imply that all sets form a cumulative hierarchy. Such systems come in two flavors, those whose ontology consists of: Sets alone. This includes the most common axiomatic set theory, Zermelo–Fraenkel set theory with the axiom of choice (ZFC). Fragments of ZFC include: Zermelo set theory, which replaces the axiom schema of replacement with that of separation; General set theory, a small fragment of Zermelo set theory sufficient for the Peano axioms and finite sets; Kripke–Platek set theory, which omits the axioms of infinity, powerset, and choice, and weakens the axiom schemata of separation and replacement. Sets and proper classes. These include Von Neumann–Bernays–Gödel set theory, which has the same strength as ZFC for theorems about sets alone, and Morse–Kelley set theory and Tarski–Grothendieck set theory, both of which are stronger than ZFC. The above systems can be modified to allow urelements, objects that can be members of sets but that are not themselves sets and do not have any members. The New Foundations systems of NFU (allowing urelements) and NF (lacking them), associated with Willard Van Orman Quine, are not based on a cumulative hierarchy. NF and NFU include a "set of everything", relative to which every set has a complement. In these systems urelements matter, because NF, but not NFU, produces sets for which the axiom of choice does not hold. Despite NF's ontology not reflecting the traditional cumulative hierarchy and violating well-foundedness, Thomas Forster has argued that it does reflect an iterative conception of set. Systems of constructive set theory, such as CST, CZF, and IZF, embed their set axioms in intuitionistic instead of classical logic. Yet other systems accept classical logic but feature a nonstandard membership relation. These include rough set theory and fuzzy set theory, in which the value of an atomic formula embodying the membership relation is not simply True or False. The Boolean-valued models of ZFC are a related subject. An enrichment of ZFC called internal set theory was proposed by Edward Nelson in 1977. == Applications == Many mathematical concepts can be defined precisely using only set theoretic concepts. For example, mathematical structures as diverse as graphs, manifolds, rings, vector spaces, and relational algebras can all be defined as sets satisfying various (axiomatic) properties. Equivalence and order relations are ubiquitous in mathematics, and the theory of mathematical relations can be described in set theory. Set theory is also a promising foundational system for much of mathematics. Since the publication of the first volume of Principia Mathematica, it has been claimed that most (or even all) mathematical theorems can be derived using an aptly designed set of axioms for set theory, augmented with many definitions, using first or second-order logic.
For example, properties of the natural and real numbers can be derived within set theory, as each of these number systems can be defined by representing their elements as sets of specific forms. Set theory as a foundation for mathematical analysis, topology, abstract algebra, and discrete mathematics is likewise uncontroversial; mathematicians accept (in principle) that theorems in these areas can be derived from the relevant definitions and the axioms of set theory. However, it remains the case that few full derivations of complex mathematical theorems from set theory have been formally verified, since such formal derivations are often much longer than the natural language proofs mathematicians commonly present. One verification project, Metamath, includes human-written, computer-verified derivations of more than 12,000 theorems starting from ZFC set theory, first-order logic and propositional logic. == Areas of study == Set theory is a major area of research in mathematics with many interrelated subfields: === Combinatorial set theory === Combinatorial set theory concerns extensions of finite combinatorics to infinite sets. This includes the study of cardinal arithmetic and the study of extensions of Ramsey's theorem such as the Erdős–Rado theorem. === Descriptive set theory === Descriptive set theory is the study of subsets of the real line and, more generally, subsets of Polish spaces. It begins with the study of pointclasses in the Borel hierarchy and extends to the study of more complex hierarchies such as the projective hierarchy and the Wadge hierarchy. Many properties of Borel sets can be established in ZFC, but proving these properties hold for more complicated sets requires additional axioms related to determinacy and large cardinals. The field of effective descriptive set theory is between set theory and recursion theory. It includes the study of lightface pointclasses, and is closely related to hyperarithmetical theory. In many cases, results of classical descriptive set theory have effective versions; in some cases, new results are obtained by proving the effective version first and then extending ("relativizing") it to make it more broadly applicable. A recent area of research concerns Borel equivalence relations and more complicated definable equivalence relations. This has important applications to the study of invariants in many fields of mathematics. === Fuzzy set theory === In set theory as Cantor defined it and Zermelo and Fraenkel axiomatized it, an object is either a member of a set or not. In fuzzy set theory this condition was relaxed by Lotfi A. Zadeh so an object has a degree of membership in a set, a number between 0 and 1. For example, the degree of membership of a person in the set of "tall people" is more flexible than a simple yes or no answer and can be a real number such as 0.75. === Inner model theory === An inner model of Zermelo–Fraenkel set theory (ZF) is a transitive class that includes all the ordinals and satisfies all the axioms of ZF. The canonical example is the constructible universe L developed by Gödel. One reason that the study of inner models is of interest is that it can be used to prove consistency results. For example, it can be shown that regardless of whether a model V of ZF satisfies the continuum hypothesis or the axiom of choice, the inner model L constructed inside the original model will satisfy both the generalized continuum hypothesis and the axiom of choice.
Thus the assumption that ZF is consistent (has at least one model) implies that ZF together with these two principles is consistent. The study of inner models is common in the study of determinacy and large cardinals, especially when considering axioms such as the axiom of determinacy that contradict the axiom of choice. Even if a fixed model of set theory satisfies the axiom of choice, it is possible for an inner model to fail to satisfy the axiom of choice. For example, the existence of sufficiently large cardinals implies that there is an inner model satisfying the axiom of determinacy (and thus not satisfying the axiom of choice). === Large cardinals === A large cardinal is a cardinal number with an extra property. Many such properties are studied, including inaccessible cardinals, measurable cardinals, and many more. These properties typically imply the cardinal number must be very large, with the existence of a cardinal with the specified property unprovable in Zermelo–Fraenkel set theory. === Determinacy === Determinacy refers to the fact that, under appropriate assumptions, certain two-player games of perfect information are determined from the start in the sense that one player must have a winning strategy. The existence of these strategies has important consequences in descriptive set theory, as the assumption that a broader class of games is determined often implies that a broader class of sets will have a topological property. The axiom of determinacy (AD) is an important object of study; although incompatible with the axiom of choice, AD implies that all subsets of the real line are well behaved (in particular, measurable and with the perfect set property). AD can be used to prove that the Wadge degrees have an elegant structure. === Forcing === Paul Cohen invented the method of forcing while searching for a model of ZFC in which the continuum hypothesis fails, or a model of ZF in which the axiom of choice fails. Forcing adjoins to some given model of set theory additional sets in order to create a larger model with properties determined (i.e. "forced") by the construction and the original model. For example, Cohen's construction adjoins additional subsets of the natural numbers without changing any of the cardinal numbers of the original model. Forcing is also one of two methods for proving relative consistency by finitistic methods, the other method being Boolean-valued models. === Cardinal invariants === A cardinal invariant is a property of the real line measured by a cardinal number. For example, a well-studied invariant is the smallest cardinality of a collection of meagre sets of reals whose union is the entire real line. These are invariants in the sense that any two isomorphic models of set theory must give the same cardinal for each invariant. Many cardinal invariants have been studied, and the relationships between them are often complex and related to axioms of set theory. === Set-theoretic topology === Set-theoretic topology studies questions of general topology that are set-theoretic in nature or that require advanced methods of set theory for their solution. Many of these theorems are independent of ZFC, requiring stronger axioms for their proof. A famous problem is the normal Moore space question, a question in general topology that was the subject of intense research. The answer to the normal Moore space question was eventually proved to be independent of ZFC. 
== Controversy == From set theory's inception, some mathematicians have objected to it as a foundation for mathematics. The most common objection to set theory, one Kronecker voiced in set theory's earliest years, starts from the constructivist view that mathematics is loosely related to computation. If this view is granted, then the treatment of infinite sets, both in naive and in axiomatic set theory, introduces into mathematics methods and objects that are not computable even in principle. The feasibility of constructivism as a substitute foundation for mathematics was greatly increased by Errett Bishop's influential book Foundations of Constructive Analysis. A different objection put forth by Henri Poincaré is that defining sets using the axiom schemas of specification and replacement, as well as the axiom of power set, introduces impredicativity, a type of circularity, into the definitions of mathematical objects. The scope of predicatively founded mathematics, while less than that of the commonly accepted Zermelo–Fraenkel theory, is much greater than that of constructive mathematics, to the point that Solomon Feferman has said that "all of scientifically applicable analysis can be developed [using predicative methods]". Ludwig Wittgenstein condemned set theory philosophically for its connotations of mathematical platonism. He wrote that "set theory is wrong", since it builds on the "nonsense" of fictitious symbolism, has "pernicious idioms", and that it is nonsensical to talk about "all numbers". Wittgenstein identified mathematics with algorithmic human deduction; the need for a secure foundation for mathematics seemed, to him, nonsensical. Moreover, since human effort is necessarily finite, Wittgenstein's philosophy required an ontological commitment to radical constructivism and finitism. Meta-mathematical statements – which, for Wittgenstein, included any statement quantifying over infinite domains, and thus almost all modern set theory – are not mathematics. Few modern philosophers have adopted Wittgenstein's views after a spectacular blunder in Remarks on the Foundations of Mathematics: Wittgenstein attempted to refute Gödel's incompleteness theorems after having only read the abstract. As reviewers Kreisel, Bernays, Dummett, and Goodstein all pointed out, many of his critiques did not apply to the paper in full. Only recently have philosophers such as Crispin Wright begun to rehabilitate Wittgenstein's arguments. Category theorists have proposed topos theory as an alternative to traditional axiomatic set theory. Topos theory can interpret various alternatives to that theory, such as constructivism, finite set theory, and computable set theory. Topoi also give a natural setting for forcing and discussions of the independence of choice from ZF, as well as providing the framework for pointless topology and Stone spaces. An active area of research is univalent foundations and, related to it, homotopy type theory. Within homotopy type theory, a set may be regarded as a homotopy 0-type, with universal properties of sets arising from the inductive and recursive properties of higher inductive types. Principles such as the axiom of choice and the law of the excluded middle can be formulated in a manner corresponding to the classical formulation in set theory or perhaps in a spectrum of distinct ways unique to type theory. Some of these principles may be proven to be a consequence of other principles.
The variety of formulations of these axiomatic principles allows for a detailed analysis of the formulations required in order to derive various mathematical results. == Mathematical education == As set theory gained popularity as a foundation for modern mathematics, there has been support for the idea of introducing the basics of naive set theory early in mathematics education. In the US in the 1960s, the New Math experiment aimed to teach basic set theory, among other abstract concepts, to primary school students but was met with much criticism. The math syllabus in European schools followed this trend and currently includes the subject at different levels in all grades. Venn diagrams are widely employed to explain basic set-theoretic relationships to primary school students (even though John Venn originally devised them as part of a procedure to assess the validity of inferences in term logic). Set theory is used to introduce students to logical operators (NOT, AND, OR), and semantic or rule description (technically intensional definition) of sets (e.g. "months starting with the letter A"), which may be useful when learning computer programming, since Boolean logic is used in various programming languages. Likewise, sets and other collection-like objects, such as multisets and lists, are common datatypes in computer science and programming. In addition to that, certain sets are commonly used in mathematical teaching, such as the sets N {\displaystyle \mathbb {N} } of natural numbers, Z {\displaystyle \mathbb {Z} } of integers, R {\displaystyle \mathbb {R} } of real numbers, etc. These are commonly used when defining a mathematical function as a relation from one set (the domain) to another set (the range). == See also == Glossary of set theory Class (set theory) List of set theory topics Relational model – borrows from set theory Venn diagram Elementary Theory of the Category of Sets Structural set theory == Notes == == Citations == == References == Devlin, Keith (1993), The Joy of Sets: Fundamentals of Contemporary Set Theory, Undergraduate Texts in Mathematics (2nd ed.), Springer Verlag, doi:10.1007/978-1-4612-0903-4, ISBN 0-387-94094-4 Ferreirós, Jose (2001), Labyrinth of Thought: A History of Set Theory and Its Role in Modern Mathematics, Berlin: Springer, ISBN 978-3-7643-5749-8 Monk, J. Donald (1969), Introduction to Set Theory, McGraw-Hill Book Company, ISBN 978-0-898-74006-6 Potter, Michael (2004), Set Theory and Its Philosophy: A Critical Introduction, Oxford University Press, ISBN 978-0-191-55643-2 Smullyan, Raymond M.; Fitting, Melvin (2010), Set Theory and the Continuum Problem, Dover Publications, ISBN 978-0-486-47484-7 Tiles, Mary (2004), The Philosophy of Set Theory: An Historical Introduction to Cantor's Paradise, Dover Publications, ISBN 978-0-486-43520-6 Dauben, Joseph W. (1977), "Georg Cantor and Pope Leo XIII: Mathematics, Theology, and the Infinite", Journal of the History of Ideas, 38 (1): 85–108, doi:10.2307/2708842, JSTOR 2708842 Dauben, Joseph W. (1979), Georg Cantor: His Mathematics and Philosophy of the Infinite, Boston: Harvard University Press, ISBN 978-0-691-02447-9 == External links == Daniel Cunningham, Set Theory article in the Internet Encyclopedia of Philosophy. Jose Ferreiros, "The Early Development of Set Theory" article in the Stanford Encyclopedia of Philosophy. Foreman, Matthew, Akihiro Kanamori, eds. Handbook of Set Theory. 3 vols., 2010. Each chapter surveys some aspect of contemporary research in set theory.
Does not cover established elementary set theory, on which see Devlin (1993). "Axiomatic set theory", Encyclopedia of Mathematics, EMS Press, 2001 [1994] "Set theory", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Schoenflies, Arthur (1898). Mengenlehre in Klein's encyclopedia. Rudin, Walter B. (April 6, 1990), "Set Theory: An Offspring of Analysis", Marden Lecture in Mathematics, University of Wisconsin-Milwaukee, archived from the original on 2021-10-31 – via YouTube
Wikipedia/Set_Theory
The transferable belief model (TBM) is an elaboration on the Dempster–Shafer theory (DST), which is a mathematical model used to evaluate the probability that a given proposition is true from other propositions that are assigned probabilities. It was developed by Philippe Smets, who proposed his approach as a response to Zadeh’s example against Dempster's rule of combination. In contrast to the original DST, the TBM propagates the open-world assumption that relaxes the assumption that all possible outcomes are known. Under the open-world assumption Dempster's rule of combination is adapted such that there is no normalization. The underlying idea is that the probability mass pertaining to the empty set is taken to indicate an unexpected outcome, e.g. the belief in a hypothesis outside the frame of discernment. This adaptation violates the probabilistic character of the original DST and also Bayesian inference. Therefore, the authors substituted notation such as probability masses and probability update with terms such as degrees of belief and transfer, giving rise to the name of the method: the transferable belief model. == Zadeh’s example in TBM context == Lotfi Zadeh describes an information fusion problem. A patient has an illness that can be caused by three different factors A, B or C. Doctor 1 says that the patient's illness is very likely to be caused by A (very likely, meaning probability p = 0.95), but B is also possible but not likely (p = 0.05). Doctor 2 says that the cause is very likely C (p = 0.95), but B is also possible but not likely (p = 0.05). How is one to make one's own opinion from this? Bayesian updating the first opinion with the second (or the other way round) implies certainty that the cause is B. Dempster's rule of combination leads to the same result. This can be seen as paradoxical, since although the two doctors point at different causes, A and C, they both agree that B is not likely. (For this reason the standard Bayesian approach is to adopt Cromwell's rule and avoid the use of 0 or 1 as probabilities.) == Formal definition == The TBM describes beliefs at two levels: a credal level where beliefs are entertained and quantified by belief functions, a pignistic level where beliefs can be used to make decisions and are quantified by probability functions. === Credal level === According to the DST, a probability mass function m {\displaystyle m} is defined such that: m : 2 X → [ 0 , 1 ] {\displaystyle m:2^{X}\rightarrow [0,1]\,\!} with ∑ A ∈ 2 X m ( A ) = 1 {\displaystyle \sum _{A\in 2^{X}}m(A)=1\,\!} where the power set 2 X {\displaystyle 2^{X}} contains all possible subsets of the frame of discernment X {\displaystyle X} . In contrast to the DST the mass m {\displaystyle m} allocated to the empty set ∅ {\displaystyle \emptyset } is not required to be zero, and hence generally 0 ≤ m ( ∅ ) ≤ 1.0 {\displaystyle 0\leq m(\emptyset )\leq 1.0} holds true. The underlying idea is that the frame of discernment is not necessarily exhaustive, and thus belief allocated to a proposition A ∈ 2 X {\displaystyle A\in 2^{X}} , is in fact allocated to A ∈ 2 X ∪ e {\displaystyle A\in 2^{X}\cup {e}} where e {\displaystyle {e}} is the set of unknown outcomes. Consequently, the combination rule underlying the TBM corresponds to Dempster's rule of combination, except the normalization that grants m ( ∅ ) = 0 {\displaystyle m(\emptyset )=0} .
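To make this concrete, here is a minimal Python sketch applying both the classical normalized rule and the unnormalized, open-world combination to Zadeh's two-doctor example from above. The encoding of mass functions as dictionaries keyed by frozensets is purely illustrative, not Smets's own software; the explicit combination formula follows in the next paragraph.

```python
from itertools import product

# Doctor 1: very likely A (0.95), possibly B (0.05).
# Doctor 2: very likely C (0.95), possibly B (0.05).
m1 = {frozenset({"A"}): 0.95, frozenset({"B"}): 0.05}
m2 = {frozenset({"C"}): 0.95, frozenset({"B"}): 0.05}

def conjunctive(m1, m2):
    """Unnormalized (open-world) conjunctive combination: mass may land on the empty set."""
    out = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        out[a & b] = out.get(a & b, 0.0) + wa * wb
    return out

def dempster(m1, m2):
    """Classical Dempster's rule: discard the conflict mass and renormalize."""
    joint = conjunctive(m1, m2)
    conflict = joint.pop(frozenset(), 0.0)
    return {s: w / (1.0 - conflict) for s, w in joint.items()}

print(conjunctive(m1, m2))  # ≈ {frozenset(): 0.9975, frozenset({'B'}): 0.0025}
print(dempster(m1, m2))     # ≈ {frozenset({'B'}): 1.0}  -- certainty in B after normalization
```

Under the open-world reading, the large mass left on the empty set signals the conflict between the two doctors (or an unanticipated cause outside {A, B, C}) instead of being renormalized away.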
Hence, in the TBM any two independent functions m 1 {\displaystyle m_{1}} and m 2 {\displaystyle m_{2}} are combined to a single function m 1 , 2 {\displaystyle m_{1,2}} by: m 1 , 2 ( A ) = ( m 1 ⊗ m 2 ) ( A ) = ∑ B ∩ C = A m 1 ( B ) m 2 ( C ) {\displaystyle m_{1,2}(A)=(m_{1}\otimes m_{2})(A)=\sum _{B\cap C=A}m_{1}(B)m_{2}(C)\,\!} where A , B , C ∈ 2 X ≠ ∅ . {\displaystyle A,B,C\in 2^{X}\neq \emptyset .\,\!} In the TBM the degree of belief in a hypothesis H ∈ 2 X ≠ ∅ {\displaystyle H\in 2^{X}\neq \emptyset } is defined by a function: bel : 2 X → [ 0 , 1 ] {\displaystyle \operatorname {bel} :2^{X}\rightarrow [0,1]\,\!} with bel ⁡ ( H ) = ∑ ∅ ≠ A ⊆ H m ( A ) {\displaystyle \operatorname {bel} (H)=\sum _{\emptyset \neq A\subseteq H}m(A)} bel ⁡ ( ∅ ) = 0. {\displaystyle \operatorname {bel} (\emptyset )=0.\,\!} === Pignistic level === When a decision must be made the credal beliefs are transferred to pignistic probabilities by: P Bet ( x ) = ∑ x ∈ A ⊆ X m ( A ) | A | {\displaystyle P_{\text{Bet}}(x)=\sum _{x\in A\subseteq X}{\frac {m(A)}{|A|}}\,\!} where x ∈ X {\displaystyle x\in X} denote the atoms (also denoted as singletons) and | A | {\displaystyle |A|} the number of atoms x {\displaystyle x} that appear in A {\displaystyle A} . Hence, probability masses m ( A ) {\displaystyle m(A)} are equally distributed among the atoms of A. This strategy corresponds to the principle of insufficient reason (also denoted as principle of maximum entropy) according to which an unknown distribution most probably corresponds to a uniform distribution. In the TBM pignistic probability functions are described by functions P Bet {\displaystyle P_{\text{Bet}}} . Such a function satisfies the probability axioms: P Bet : X → [ 0 , 1 ] {\displaystyle P_{\text{Bet}}:X\rightarrow [0,1]\,\!} with ∑ x ∈ X P Bet ( x ) = 1 {\displaystyle \sum _{x\in X}P_{\text{Bet}}(x)=1\,\!} P Bet ( ∅ ) = 0 {\displaystyle P_{\text{Bet}}(\emptyset )=0\,\!} Philip Smets introduced them as pignistic to stress the fact that those probability functions are based on incomplete data, whose only purpose is a forced decision, e.g. to place a bet. This is in contrast to the credal beliefs described above, whose purpose is representing the actual belief. == Open world example == When tossing a coin one usually assumes that Head or Tail will occur, so that Pr ( Head ) + Pr ( Tail ) = 1 {\displaystyle \Pr({\text{Head}})+\Pr({\text{Tail}})=1} . The open-world assumption is that the coin can be stolen in mid-air, disappear, break apart or otherwise fall sideways so that neither Head nor Tail occurs, so that the power set of {Head,Tail} is considered and there is a decomposition of the overall probability (i.e. 1) of the following form: Pr ( ∅ ) + Pr ( Head ) + Pr ( Tail ) + Pr ( Head,Tail ) = 1. {\displaystyle \Pr(\emptyset )+\Pr({\text{Head}})+\Pr({\text{Tail}})+\Pr({\text{Head,Tail}})=1.} == See also == Dempster–Shafer theory == Notes == == References == Smets Ph. (1988) "Belief function". In: Non Standard Logics for Automated Reasoning, ed. Smets Ph., Mamdani A, Dubois D. and Prade H. Academic Press, London Ph, Smets (1990). "The combination of evidence in the transferable belief model". IEEE Transactions on Pattern Analysis and Machine Intelligence. 12 (5): 447–458. CiteSeerX 10.1.1.377.5969. doi:10.1109/34.55104. Smets Ph. (1993) "An axiomatic justification for the use of belief function to quantify beliefs", IJCAI'93 (Inter. Joint Conf. on AI), Chambery, 598–603 Smets, Ph.; Kennes, R. (1994). "The transferable belief model". 
Artificial Intelligence. 66 (2): 191–234. doi:10.1016/0004-3702(94)90026-4. Smets Ph. and Kruse R. (1995) "The transferable belief model for belief representation" In: Smets and Motro A. (eds.) Uncertainty Management in Information Systems: from Needs to solutions. Kluwer, Boston Haenni, R. (2006). "Uncover Dempster's Rule Where It Is Hidden" in: Proceedings of the 9th International Conference on Information Fusion (FUSION 2006), Florence, Italy, 2006. Ramasso, E., Rombaut, M., Pellerin D. (2007) "Forward-Backward-Viterbi procedures in the Transferable Belief Model for state sequence analysis using belief functions", ECSQARU, Hammamet : Tunisie (2007). Touil, K.; Zribi, M.; Benjelloun, M. (2007). "Application of transferable belief model to navigation system". Integrated Computer-Aided Engineering. 14 (1): 93–105. doi:10.3233/ICA-2007-14108. Dempster, A.P. (2007). "The Dempster–Shafer calculus for statisticians". International Journal of Approximate Reasoning. 48 (2): 365–377. doi:10.1016/j.ijar.2007.03.004. == External links == The Transferable Belief Model Publications on TBM Software for TBM in Matlab
Wikipedia/Transferable_belief_model
Linear belief functions are an extension of the Dempster–Shafer theory of belief functions to the case when variables of interest are continuous. Examples of such variables include financial asset prices, portfolio performance, and other antecedent and consequent variables. The theory was originally proposed by Arthur P. Dempster in the context of Kalman Filters and later was elaborated, refined, and applied to knowledge representation in artificial intelligence and decision making in finance and accounting by Liping Liu. == Concept == A linear belief function intends to represent our belief regarding the location of the true value as follows: We are certain that the truth is on a so-called certainty hyperplane but we do not know its exact location; along some dimensions of the certainty hyperplane, we believe the true value could be anywhere from –∞ to +∞ and the probability of being at a particular location is described by a normal distribution; along other dimensions, our knowledge is vacuous, i.e., the true value is somewhere from –∞ to +∞ but the associated probability is unknown. A belief function in general is defined by a mass function over a class of focal elements, which may have nonempty intersections. A linear belief function is a special type of belief function in the sense that its focal elements are exclusive, parallel sub-hyperplanes over the certainty hyperplane and its mass function is a normal distribution across the sub-hyperplanes. Based on the above geometrical description, Shafer and Liu propose two mathematical representations of a LBF: a wide-sense inner product and a linear functional in the variable space, and as their duals over a hyperplane in the sample space. Monney proposes still another structure called Gaussian hints. Although these representations are mathematically neat, they tend to be unsuitable for knowledge representation in expert systems. == Knowledge representation == A linear belief function can represent both logical and probabilistic knowledge for three types of variables: deterministic such as an observable or controllable, random whose distribution is normal, and vacuous on which no knowledge bears. Logical knowledge is represented by linear equations, or geometrically, a certainty hyperplane. Probabilistic knowledge is represented by a normal distribution across all parallel focal elements. In general, assume X is a vector of multiple normal variables with mean μ and covariance Σ. Then, the multivariate normal distribution can be equivalently represented as a moment matrix: M ( X ) = ( μ Σ ) . {\displaystyle M(X)=\left({\begin{array}{*{20}c}\mu \\\Sigma \end{array}}\right).} If the distribution is non-degenerate, i.e., Σ has a full rank and its inverse exists, the moment matrix can be fully swept: M ( X → ) = ( μ Σ − 1 − Σ − 1 ) {\displaystyle M({\vec {X}})=\left({\begin{array}{*{20}c}\mu \Sigma ^{-1}\\-\Sigma ^{-1}\end{array}}\right)} Except for normalization constant, the above equation completely determines the normal density function for X. Therefore, M ( X → ) {\displaystyle M({\vec {X}})} represents the probability distribution of X in the potential form. These two simple matrices allow us to represent three special cases of linear belief functions. First, for an ordinary normal probability distribution M(X) represents it. Second, suppose one makes a direct observation on X and obtains a value μ. In this case, since there is no uncertainty, both variance and covariance vanish, i.e., Σ = 0. 
Thus, a direct observation can be represented as: M ( X ) = ( μ 0 ) {\displaystyle M(X)=\left({\begin{array}{*{20}c}\mu \\0\end{array}}\right)} Third, suppose one is completely ignorant about X. This is a very thorny case in Bayesian statistics since the density function does not exist. By using the fully swept moment matrix, we represent the vacuous linear belief functions as a zero matrix in the swept form follows: M ( X → ) = [ 0 0 ] {\displaystyle M({\vec {X}})=\left[{\begin{array}{*{20}c}0\\0\end{array}}\right]} One way to understand the representation is to imagine complete ignorance as the limiting case when the variance of X approaches to ∞, where one can show that Σ−1 = 0 and hence M ( X → ) {\displaystyle M({\vec {X}})} vanishes. However, the above equation is not the same as an improper prior or normal distribution with infinite variance. In fact, it does not correspond to any unique probability distribution. For this reason, a better way is to understand the vacuous linear belief functions as the neutral element for combination (see later). To represent the remaining three special cases, we need the concept of partial sweeping. Unlike a full sweeping, a partial sweeping is a transformation on a subset of variables. Suppose X and Y are two vectors of normal variables with the joint moment matrix: M ( X , Y ) = [ μ 1 Σ 11 Σ 21 μ 2 Σ 12 Σ 22 ] {\displaystyle M(X,Y)=\left[{\begin{array}{*{20}c}{\begin{array}{*{20}c}\mu _{1}\\\Sigma _{11}\\\Sigma _{21}\end{array}}&{\begin{array}{*{20}c}\mu _{2}\\\Sigma _{12}\\\Sigma _{22}\end{array}}\end{array}}\right]} Then M(X, Y) may be partially swept. For example, we can define the partial sweeping on X as follows: M ( X → , Y ) = [ μ 1 ( Σ 11 ) − 1 − ( Σ 11 ) − 1 Σ 21 ( Σ 11 ) − 1 μ 2 − μ 1 ( Σ 11 ) − 1 Σ 12 ( Σ 11 ) − 1 Σ 12 Σ 22 − Σ 21 ( Σ 11 ) − 1 Σ 12 ] {\displaystyle M({\vec {X}},Y)=\left[{\begin{array}{*{20}c}{\begin{array}{*{20}c}\mu _{1}(\Sigma _{11})^{-1}\\-(\Sigma _{11})^{-1}\\\Sigma _{21}(\Sigma _{11})^{-1}\end{array}}&{\begin{array}{*{20}c}\mu _{2}-\mu _{1}(\Sigma _{11})^{-1}\Sigma _{12}\\(\Sigma _{11})^{-1}\Sigma _{12}\\\Sigma _{22}-\Sigma _{21}(\Sigma _{11})^{-1}\Sigma _{12}\end{array}}\end{array}}\right]} If X is one-dimensional, a partial sweeping replaces the variance of X by its negative inverse and multiplies the inverse with other elements. If X is multidimensional, the operation involves the inverse of the covariance matrix of X and other multiplications. A swept matrix obtained from a partial sweeping on a subset of variables can be equivalently obtained by a sequence of partial sweepings on each individual variable in the subset and the order of the sequence does not matter. Similarly, a fully swept matrix is the result of partial sweepings on all variables. We can make two observations. First, after the partial sweeping on X, the mean vector and covariance matrix of X are respectively μ 1 ( Σ 11 ) − 1 {\displaystyle \mu _{1}(\Sigma _{11})^{-1}} and − ( Σ 11 ) − 1 {\displaystyle -(\Sigma _{11})^{-1}} , which are the same as that of a full sweeping of the marginal moment matrix of X. Thus, the elements corresponding to X in the above partial sweeping equation represent the marginal distribution of X in potential form. 
Second, according to statistics, μ 2 − μ 1 ( Σ 11 ) − 1 Σ 12 {\displaystyle \mu _{2}-\mu _{1}(\Sigma _{11})^{-1}\Sigma _{12}} is the conditional mean of Y given X = 0; Σ 22 − Σ 21 ( Σ 11 ) − 1 Σ 12 {\displaystyle \Sigma _{22}-\Sigma _{21}(\Sigma _{11})^{-1}\Sigma _{12}} is the conditional covariance matrix of Y given X = 0; and ( Σ 11 ) − 1 Σ 12 {\displaystyle (\Sigma _{11})^{-1}\Sigma _{12}} is the slope of the regression model of Y on X. Therefore, the elements corresponding to Y indices and the intersection of X and Y in M ( X → , Y ) {\displaystyle M({\vec {X}},Y)} represents the conditional distribution of Y given X = 0. These semantics render the partial sweeping operation a useful method for manipulating multivariate normal distributions. They also form the basis of the moment matrix representations for the three remaining important cases of linear belief functions, including proper belief functions, linear equations, and linear regression models. === Proper linear belief functions === For variables X and Y, assume there exists a piece of evidence justifying a normal distribution for variables Y while bearing no opinions for variables X. Also, assume that X and Y are not perfectly linearly related, i.e., their correlation is less than 1. This case involves a mix of an ordinary normal distribution for Y and a vacuous belief function for X. Thus, we represent it using a partially swept matrix as follows: M ( X → , Y ) = [ 0 0 0 μ 2 0 Σ 22 ] {\displaystyle M({\vec {X}},Y)=\left[{\begin{array}{*{20}c}{\begin{array}{*{20}c}0\\0\\0\end{array}}&{\begin{array}{*{20}c}\mu _{2}\\0\\\Sigma _{22}\\\end{array}}\end{array}}\right]} This is how we could understand the representation. Since we are ignorant on X, we use its swept form and set μ 1 ( Σ 11 ) − 1 = 0 {\displaystyle \mu _{1}(\Sigma _{11})^{-1}=0} and − ( Σ 11 ) − 1 = 0 {\displaystyle -(\Sigma _{11})^{-1}=0} . Since the correlation between X and Y is less than 1, the regression coefficient of X on Y approaches to 0 when the variance of X approaches to ∞. Therefore, ( Σ 11 ) − 1 Σ 12 = 0 {\displaystyle (\Sigma _{11})^{-1}\Sigma _{12}=0} . Similarly, one can prove that μ 1 ( Σ 11 ) − 1 Σ 12 = 0 {\displaystyle \mu _{1}(\Sigma _{11})^{-1}\Sigma _{12}=0} and Σ 21 ( Σ 11 ) − 1 Σ 12 = 0 {\displaystyle \Sigma _{21}(\Sigma _{11})^{-1}\Sigma _{12}=0} . === Linear equations === Suppose X and Y are two row vectors, and Y = XA + b, where A and b are the coefficient matrices. We represent the equation using a partially swept matrix as follows: M ( X → , Y ) = [ 0 0 A T b A 0 ] {\displaystyle M({\vec {X}},Y)=\left[{\begin{array}{*{20}c}{\begin{array}{*{20}c}0\\0\\A^{T}\end{array}}&{\begin{array}{*{20}c}b\\A\\0\end{array}}\end{array}}\right]} We can understand the representation based on the fact that a linear equation contains two pieces of knowledge: (1) complete ignorance about all variables; and (2) a degenerate conditional distribution of dependent variables given independent variables. Since X is an independent vector in the equation, we are completely ignorant about it. Thus, μ 1 ( Σ 11 ) − 1 = 0 {\displaystyle \mu _{1}(\Sigma _{11})^{-1}=0} and − ( Σ 11 ) − 1 = 0 {\displaystyle -(\Sigma _{11})^{-1}=0} . Given X = 0, Y is completely determined to be b. Thus, the conditional mean of Y is b and the conditional variance is 0. Also, the regression coefficient matrix is A. 
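The sweeping operations used in these representations are easy to express with ordinary matrix arithmetic. Below is a minimal NumPy sketch (block and function names are ours) that works block by block rather than building the partially swept matrix literally; it follows the full and partial sweeping formulas given above, with means stored as row vectors.

```python
import numpy as np

def full_sweep(mu, sigma):
    """Fully sweep M(X) = (mu; Sigma) into (mu @ Sigma^-1; -Sigma^-1)."""
    s_inv = np.linalg.inv(sigma)
    return mu @ s_inv, -s_inv

def partial_sweep(mu1, mu2, s11, s12, s21, s22):
    """Partially sweep the joint moment matrix M(X, Y) on X, block by block."""
    s11_inv = np.linalg.inv(s11)
    return {
        "mu1": mu1 @ s11_inv,               # marginal of X in potential form
        "s11": -s11_inv,
        "s21": s21 @ s11_inv,
        "mu2": mu2 - mu1 @ s11_inv @ s12,   # conditional mean of Y given X = 0
        "s12": s11_inv @ s12,               # regression slope of Y on X
        "s22": s22 - s21 @ s11_inv @ s12,   # conditional covariance of Y given X = 0
    }

print(full_sweep(np.array([[1.0]]), np.array([[4.0]])))  # (array([[0.25]]), array([[-0.25]]))

# One-dimensional X and Y with means 1 and 2, variances 4 and 3, covariance 2:
blocks = partial_sweep(np.array([[1.0]]), np.array([[2.0]]),
                       np.array([[4.0]]), np.array([[2.0]]),
                       np.array([[2.0]]), np.array([[3.0]]))
print(blocks["mu2"], blocks["s22"])   # [[1.5]] [[2.]] : mean and variance of Y given X = 0
```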
Note that the knowledge to be represented in linear equations is very close to that in a proper linear belief function, except that the former assumes a perfect correlation between X and Y while the latter does not. This observation is interesting; it characterizes the difference between partial ignorance and linear equations in one parameter: correlation. === Linear regression models === A linear regression model is a more general and interesting case than previous ones. Suppose X and Y are two vectors and Y = XA + b + E, where A and b are the appropriate coefficient matrices and E is an independent white-noise term satisfying E ~ N(0, Σ). We represent the model as the following partially swept matrix: M ( X → , Y ) = [ 0 0 A T b A Σ ] {\displaystyle M({\vec {X}},Y)=\left[{\begin{array}{*{20}c}{\begin{array}{*{20}c}0\\0\\A^{T}\end{array}}&{\begin{array}{*{20}c}b\\A\\\Sigma \end{array}}\end{array}}\right]} This linear regression model may be considered as the combination of two pieces of knowledge (see later): one is specified by the linear equation involving three variables X, Y, and E, and the other is a simple normal distribution of E, i.e., E ~ N(0, Σ). Alternatively, one may consider it similar to a linear equation, except that, given X = 0, Y is not completely determined to be b. Instead, the conditional mean of Y is b while the conditional variance is Σ. Note that, in this alternative interpretation, a linear regression model forms a basic building block for knowledge representation and is encoded as one moment matrix. Besides, the noise term E does not appear in the representation, which makes the representation more efficient. From representing the six special cases, we see a clear advantage of the moment matrix representation, i.e., it allows a unified representation for seemingly diverse types of knowledge, including linear equations, joint and conditional distributions, and ignorance. The unification is significant not only for knowledge representation in artificial intelligence but also for statistical analysis and engineering computation. For example, the representation treats the typical logical and probabilistic components in statistics — observations, distributions, improper priors (for Bayesian statistics), and linear equation models — not as separate concepts, but as manifestations of a single concept. It allows one to see the inner connections between these concepts or manifestations and to interplay them for computational purposes. == Knowledge operations == There are two basic operations for making inferences in expert systems using linear belief functions: combination and marginalization. Combination corresponds to the integration of knowledge whereas marginalization corresponds to the coarsening of knowledge. Making an inference involves combining relevant knowledge into a full body of knowledge and then projecting the full body of knowledge to a partial domain, in which an inference question is to be answered. === Marginalization === Marginalization projects a linear belief function into one with fewer variables. Expressed as a moment matrix, it is simply the restriction of a nonswept moment matrix to a submatrix corresponding to the remaining variables.
For example, for the joint distribution M(X, Y), its marginal to Y is: M ↓ Y ( X , Y ) = [ μ 2 Σ 22 ] {\displaystyle M^{\downarrow Y}(X,Y)=\left[{\begin{array}{*{20}c}\mu _{2}\\\Sigma _{22}\end{array}}\right]} When removing a variable, it is important that the variable has not been swept on in the corresponding moment matrix, i.e., it does not have an arrow sign above the variable. For example, projecting the matrix M ( X → , Y ) {\displaystyle M({\vec {X}},Y)} to Y produces: M ↓ Y ( X → , Y ) = [ μ 2 − μ 1 ( Σ 11 ) − 1 Σ 12 Σ 22 − Σ 21 ( Σ 11 ) − 1 Σ 12 ] {\displaystyle M^{\downarrow Y}({\vec {X}},Y)=\left[{\begin{array}{*{20}c}\mu _{2}-\mu _{1}(\Sigma _{11})^{-1}\Sigma _{12}\\\Sigma _{22}-\Sigma _{21}(\Sigma _{11})^{-1}\Sigma _{12}\end{array}}\right]} which is not the same linear belief function of Y. However, it is easy to see that removing any or all variables in Y from the partially swept matrix will still produce the correct result — a matrix representing the same function for the remaining variables. To remove a variable that has been already swept on, we have to reverse the sweeping using partial or full reverse sweepings. Assume M ( X → ) {\displaystyle M({\vec {X}})} is a fully swept moment matrix, M ( X → ) = ( μ ¯ Σ ¯ ) {\displaystyle M({\vec {X}})=\left({\begin{array}{*{20}c}{\bar {\mu }}\\{\bar {\Sigma }}\\\end{array}}\right)} Then a full reverse sweeping of M ( X → ) {\displaystyle M({\vec {X}})} will recover the moment matrix M(X) as follows: M ( X ) = ( − μ ¯ Σ ¯ − 1 − Σ ¯ − 1 ) {\displaystyle M(X)=\left({\begin{array}{*{20}c}{-{\bar {\mu }}{\bar {\Sigma }}^{-1}}\\{-{\bar {\Sigma }}^{-1}}\\\end{array}}\right)} If a moment matrix is in a partially swept form, say M ( X → , Y ) = [ μ ¯ 1 Σ ¯ 11 Σ ¯ 21 μ ¯ 2 Σ ¯ 12 Σ ¯ 22 ] {\displaystyle M({\vec {X}},Y)=\left[{\begin{array}{*{20}c}{\begin{array}{*{20}c}{{\bar {\mu }}_{1}}\\{{\bar {\Sigma }}_{11}}\\{{\bar {\Sigma }}_{21}}\\\end{array}}&{\begin{array}{*{20}c}{{\bar {\mu }}_{2}}\\{{\bar {\Sigma }}_{12}}\\{{\bar {\Sigma }}_{22}}\\\end{array}}\\\end{array}}\right]} its partially reverse sweeping on X is defined as follows: M ( X , Y ) = [ − μ ¯ 1 ( Σ ¯ 11 ) − 1 − ( Σ ¯ 11 ) − 1 − Σ ¯ 21 ( Σ ¯ 11 ) − 1 μ ¯ 2 − μ ¯ 1 ( Σ ¯ 11 ) − 1 Σ ¯ 12 − ( Σ ¯ 11 ) − 1 Σ ¯ 12 Σ ¯ 22 − Σ ¯ 21 ( Σ ¯ 11 ) − 1 Σ ¯ 12 ] {\displaystyle M(X,Y)=\left[{\begin{array}{*{20}c}{\begin{array}{*{20}c}{-{\bar {\mu }}_{1}({\bar {\Sigma }}_{11})^{-1}}\\{-({\bar {\Sigma }}_{11})^{-1}}\\{-{\bar {\Sigma }}_{21}({\bar {\Sigma }}_{11})^{-1}}\\\end{array}}&{\begin{array}{*{20}c}{{\bar {\mu }}_{2}-{\bar {\mu }}_{1}({\bar {\Sigma }}_{11})^{-1}{\bar {\Sigma }}_{12}}\\{-({\bar {\Sigma }}_{11})^{-1}{\bar {\Sigma }}_{12}}\\{{\bar {\Sigma }}_{22}-{\bar {\Sigma }}_{21}({\bar {\Sigma }}_{11})^{-1}{\bar {\Sigma }}_{12}}\\\end{array}}\\\end{array}}\right]} Reverse sweepings are similar to those of forward ones, except for a sign difference for some multiplications. However, forward and reverse sweepings are opposite operations. It can be easily shown that applying the fully reverse sweeping to M ( X → ) {\displaystyle M({\vec {X}})} will recover the initial moment matrix M(X). It can also be proved that applying a partial reverse sweeping on X to the matrix M ( X → , Y ) {\displaystyle M({\vec {X}},Y)} will recover the moment matrix M(X,Y). As a matter of fact, Liu proves that a moment matrix will be recovered through a reverse sweeping after a forward sweeping on the same set of variables. It can be also recovered through a forward sweeping after a reverse sweeping. 
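As a quick numerical check of this forward/reverse relationship, the following minimal NumPy sketch (function names are ours) verifies the round trip for a full sweeping; the partial case works the same way block by block.

```python
import numpy as np

def full_sweep(mu, sigma):
    """M(X) = (mu; Sigma)  ->  (mu @ Sigma^-1; -Sigma^-1)."""
    s_inv = np.linalg.inv(sigma)
    return mu @ s_inv, -s_inv

def full_reverse_sweep(mu_bar, sigma_bar):
    """Recover M(X) = (-mu_bar @ sigma_bar^-1; -sigma_bar^-1) from a fully swept matrix."""
    s_inv = np.linalg.inv(sigma_bar)
    return -mu_bar @ s_inv, -s_inv

mu = np.array([[1.0, 2.0]])
sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

mu_back, sigma_back = full_reverse_sweep(*full_sweep(mu, sigma))
print(np.allclose(mu_back, mu), np.allclose(sigma_back, sigma))   # True True
```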
Intuitively, a partial forward sweeping factorizes a joint into a marginal and a conditional, whereas a partial reverse sweeping multiplies them into a joint. === Combination === According to Dempster’s rule, the combination of belief functions may be expressed as the intersection of focal elements and the multiplication of probability density functions. Liping Liu applies the rule to linear belief functions in particular and obtains a formula of combination in terms of density functions. Later he proves a claim by Arthur P. Dempster and reexpresses the formula as the sum of two fully swept matrices. Mathematically, assume M 1 ( X → ) = ( μ ¯ 1 Σ ¯ 1 ) {\displaystyle M_{1}({\vec {X}})=\left({\begin{array}{*{20}c}{{\bar {\mu }}_{1}}\\{{\bar {\Sigma }}_{1}}\\\end{array}}\right)} and M 2 ( X → ) = ( μ ¯ 2 Σ ¯ 2 ) {\displaystyle M_{2}({\vec {X}})=\left({\begin{array}{*{20}c}{{\bar {\mu }}_{2}}\\{{\bar {\Sigma }}_{2}}\\\end{array}}\right)} are two LBFs for the same vector of variables X. Then their combination is a fully swept matrix: M ( X → ) = ( μ ¯ 1 + μ ¯ 2 Σ ¯ 1 + Σ ¯ 2 ) {\displaystyle M({\vec {X}})=\left({\begin{array}{*{20}c}{{\bar {\mu }}_{1}+{\bar {\mu }}_{2}}\\{{\bar {\Sigma }}_{1}+{\bar {\Sigma }}_{2}}\\\end{array}}\right)} The above equation is often used for multiplying two normal distributions. Here we use it to define the combination of two linear belief functions, which include normal distributions as a special case. Also, note that a vacuous linear belief function (a zero swept matrix) is the neutral element for combination. When applying the equation, we need to consider two special cases. First, if two matrices to be combined have different dimensions, then one or both matrices must be vacuously extended, i.e., assuming ignorance on the variables that are not present in each matrix. For example, if M1(X,Y) and M2(X,Z) are to be combined, we will first extend them into M 1 ( X , Y , Z → ) {\displaystyle M_{1}(X,Y,{\vec {Z}})} and M 2 ( X , Y → , Z ) {\displaystyle M_{2}(X,{\vec {Y}},Z)} respectively such that M 1 ( X , Y , Z → ) {\displaystyle M_{1}(X,Y,{\vec {Z}})} is ignorant about Z and M 2 ( X , Y → , Z ) {\displaystyle M_{2}(X,{\vec {Y}},Z)} is ignorant about Y. The vacuous extension was initially proposed by Kong for discrete belief functions. Second, if a variable has zero variance, it will not permit a sweeping operation. In this case, we can pretend the variance is an extremely small number, say ε, and perform the desired sweeping and combination. We can then apply a reverse sweeping to the combined matrix on the same variable and let ε approach 0. Since zero variance means complete certainty about a variable, this ε-procedure will make the ε terms vanish in the final result. In general, to combine two linear belief functions, their moment matrices must be fully swept. However, one may combine a fully swept matrix with a partially swept one directly if the variables of the former matrix have all been swept on in the latter. We can use the linear regression model — Y = XA + b + E — to illustrate the property. As we mentioned, the regression model may be considered as the combination of two pieces of knowledge: one is specified by the linear equation involving three variables X, Y, and E, and the other is a simple normal distribution of E, i.e., E ~ N(0, Σ).
Let M 1 ( X → , E → , Y ) = [ 0 0 b 0 0 A 0 0 I A T I 0 ] {\displaystyle M_{1}({\vec {X}},{\vec {\rm {E}}},Y)=\left[{\begin{array}{*{20}c}0&0&b\\0&0&A\\0&0&I\\{A^{T}}&I&0\\\end{array}}\right]} and M 2 ( E → ) = [ 0 − Σ − 1 ] {\displaystyle M_{2}({\vec {\rm {E}}})=\left[{\begin{array}{*{20}c}0\\{-\Sigma ^{-1}}\\\end{array}}\right]} be their moment matrices respectively. Then the two matrices can be combined directly without sweeping M 1 ( X → , E → , Y ) {\displaystyle M_{1}({\vec {X}},{\vec {\rm {E}}},Y)} on Y first. The result of the combination is a partially swept matrix as follows: M ( X → , E → , Y ) = [ 0 0 b 0 0 A 0 − Σ − 1 I A T I 0 ] {\displaystyle M({\vec {X}},{\vec {\rm {E}}},Y)=\left[{\begin{array}{*{20}c}0&0&b\\0&0&A\\0&{-\Sigma ^{-1}}&I\\{A^{T}}&I&0\\\end{array}}\right]} If we apply a reverse sweeping on E and then remove E from the matrix, we will obtain the same representation of the regression model. == Applications == We may use an audit problem to illustrate the three types of variables as follows. Suppose we want to audit the ending balance of accounts receivable (E). As we saw earlier, E is equal to the beginning balance (B) plus the sales (S) for the period minus the cash receipts (C) on the sales plus a residual (R) that represents insignificant sales returns and cash discounts. Thus, we can represent the logical relation as a linear equation: E = B + S − C + R {\displaystyle E=B+S-C+R} Furthermore, if the auditor believes E and B are 100 thousand dollars on the average with a standard deviation of 5 and a covariance of 15, we can represent the belief as a multivariate normal distribution. If historical data indicate that the residual R is zero on the average with a standard deviation of 0.5 thousand dollars, we can summarize the historical data by the normal distribution R ~ N(0, 0.5²). If there is a direct observation on cash receipts, we can represent the evidence as an equation, say C = 50 (thousand dollars). If the auditor knows nothing about the beginning balance of accounts receivable, we can represent his or her ignorance by a vacuous LBF. Finally, if historical data suggests that, given cash receipts C, the sales S is on the average 8C + 4 and has a standard deviation of 4 thousand dollars, we can represent the knowledge as a linear regression model S ~ N(4 + 8C, 16). == References ==
Wikipedia/Linear_belief_function
In computer science, a rough set, first described by Polish computer scientist Zdzisław I. Pawlak, is a formal approximation of a crisp set (i.e., conventional set) in terms of a pair of sets which give the lower and the upper approximation of the original set. In the standard version of rough set theory described in Pawlak (1991), the lower- and upper-approximation sets are crisp sets, but in other variations, the approximating sets may be fuzzy sets. == Definitions == The following section contains an overview of the basic framework of rough set theory, as originally proposed by Zdzisław I. Pawlak, along with some of the key definitions. More formal properties and boundaries of rough sets can be found in Pawlak (1991) and cited references. The initial and basic theory of rough sets is sometimes referred to as "Pawlak Rough Sets" or "classical rough sets", as a means to distinguish it from more recent extensions and generalizations. === Information system framework === Let I = ( U , A ) {\displaystyle I=(\mathbb {U} ,\mathbb {A} )} be an information system (attribute–value system), where U {\displaystyle \mathbb {U} } is a non-empty, finite set of objects (the universe) and A {\displaystyle \mathbb {A} } is a non-empty, finite set of attributes such that I : U → V a {\displaystyle I:\mathbb {U} \rightarrow V_{a}} for every a ∈ A {\displaystyle a\in \mathbb {A} } . V a {\displaystyle V_{a}} is the set of values that attribute a {\displaystyle a} may take. The information table assigns a value a ( x ) {\displaystyle a(x)} from V a {\displaystyle V_{a}} to each attribute a {\displaystyle a} and object x {\displaystyle x} in the universe U {\displaystyle \mathbb {U} } . With any P ⊆ A {\displaystyle P\subseteq \mathbb {A} } there is an associated equivalence relation I N D ( P ) {\displaystyle \mathrm {IND} (P)} : I N D ( P ) = { ( x , y ) ∈ U 2 ∣ ∀ a ∈ P , a ( x ) = a ( y ) } {\displaystyle \mathrm {IND} (P)=\left\{(x,y)\in \mathbb {U} ^{2}\mid \forall a\in P,a(x)=a(y)\right\}} The relation I N D ( P ) {\displaystyle \mathrm {IND} (P)} is called a P {\displaystyle P} -indiscernibility relation. The partition of U {\displaystyle \mathbb {U} } is a family of all equivalence classes of I N D ( P ) {\displaystyle \mathrm {IND} (P)} and is denoted by U / I N D ( P ) {\displaystyle \mathbb {U} /\mathrm {IND} (P)} (or U / P {\displaystyle \mathbb {U} /P} ). If ( x , y ) ∈ I N D ( P ) {\displaystyle (x,y)\in \mathrm {IND} (P)} , then x {\displaystyle x} and y {\displaystyle y} are indiscernible (or indistinguishable) by attributes from P {\displaystyle P} . The equivalence classes of the P {\displaystyle P} -indiscernibility relation are denoted [ x ] P {\displaystyle [x]_{P}} . 
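A minimal Python sketch of the indiscernibility partition U/IND(P); the four objects and two attributes below are made up purely for illustration and are not from the article.

```python
from collections import defaultdict

# Hypothetical attribute-value table: object -> {attribute: value}
table = {
    "x1": {"colour": "red",  "size": "small"},
    "x2": {"colour": "red",  "size": "small"},
    "x3": {"colour": "red",  "size": "large"},
    "x4": {"colour": "blue", "size": "large"},
}

def ind_partition(table, attrs):
    """Equivalence classes of IND(P): objects agreeing on every attribute in attrs."""
    classes = defaultdict(set)
    for obj, row in table.items():
        classes[tuple(row[a] for a in attrs)].add(obj)
    return list(classes.values())

print(ind_partition(table, ["colour", "size"]))  # [{'x1', 'x2'}, {'x3'}, {'x4'}]
print(ind_partition(table, ["colour"]))          # [{'x1', 'x2', 'x3'}, {'x4'}]
```

Coarsening the attribute subset can only merge classes, which is exactly the effect described for the example in the next section.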
=== Example: equivalence-class structure === For example, consider the following information table: When the full set of attributes P = { P 1 , P 2 , P 3 , P 4 , P 5 } {\displaystyle P=\{P_{1},P_{2},P_{3},P_{4},P_{5}\}} is considered, we see that we have the following seven equivalence classes: { { O 1 , O 2 } { O 3 , O 7 , O 10 } { O 4 } { O 5 } { O 6 } { O 8 } { O 9 } {\displaystyle {\begin{cases}\{O_{1},O_{2}\}\\\{O_{3},O_{7},O_{10}\}\\\{O_{4}\}\\\{O_{5}\}\\\{O_{6}\}\\\{O_{8}\}\\\{O_{9}\}\end{cases}}} Thus, the two objects within the first equivalence class, { O 1 , O 2 } {\displaystyle \{O_{1},O_{2}\}} , cannot be distinguished from each other based on the available attributes, and the three objects within the second equivalence class, { O 3 , O 7 , O 10 } {\displaystyle \{O_{3},O_{7},O_{10}\}} , cannot be distinguished from one another based on the available attributes. The remaining five objects are each discernible from all other objects. It is apparent that different attribute subset selections will in general lead to different indiscernibility classes. For example, if attribute P = { P 1 } {\displaystyle P=\{P_{1}\}} alone is selected, we obtain the following, much coarser, equivalence-class structure: { { O 1 , O 2 } { O 3 , O 5 , O 7 , O 9 , O 10 } { O 4 , O 6 , O 8 } {\displaystyle {\begin{cases}\{O_{1},O_{2}\}\\\{O_{3},O_{5},O_{7},O_{9},O_{10}\}\\\{O_{4},O_{6},O_{8}\}\end{cases}}} === Definition of a rough set === Let X ⊆ U {\displaystyle X\subseteq \mathbb {U} } be a target set that we wish to represent using attribute subset P {\displaystyle P} ; that is, we are told that an arbitrary set of objects X {\displaystyle X} comprises a single class, and we wish to express this class (i.e., this subset) using the equivalence classes induced by attribute subset P {\displaystyle P} . In general, X {\displaystyle X} cannot be expressed exactly, because the set may include and exclude objects which are indistinguishable on the basis of attributes P {\displaystyle P} . For example, consider the target set X = { O 1 , O 2 , O 3 , O 4 } {\displaystyle X=\{O_{1},O_{2},O_{3},O_{4}\}} , and let attribute subset P = { P 1 , P 2 , P 3 , P 4 , P 5 } {\displaystyle P=\{P_{1},P_{2},P_{3},P_{4},P_{5}\}} , the full available set of features. The set X {\displaystyle X} cannot be expressed exactly, because in [ x ] P , {\displaystyle [x]_{P},} , objects { O 3 , O 7 , O 10 } {\displaystyle \{O_{3},O_{7},O_{10}\}} are indiscernible. Thus, there is no way to represent any set X {\displaystyle X} which includes O 3 {\displaystyle O_{3}} but excludes objects O 7 {\displaystyle O_{7}} and O 10 {\displaystyle O_{10}} . 
However, the target set X {\displaystyle X} can be approximated using only the information contained within P {\displaystyle P} by constructing the P {\displaystyle P} -lower and P {\displaystyle P} -upper approximations of X {\displaystyle X} : P _ X = { x ∣ [ x ] P ⊆ X } {\displaystyle {\underline {P}}X=\{x\mid [x]_{P}\subseteq X\}} P ¯ X = { x ∣ [ x ] P ∩ X ≠ ∅ } {\displaystyle {\overline {P}}X=\{x\mid [x]_{P}\cap X\neq \emptyset \}} ==== Lower approximation and positive region ==== The P {\displaystyle P} -lower approximation, or positive region, is the union of all equivalence classes in [ x ] P {\displaystyle [x]_{P}} which are contained by (i.e., are subsets of) the target set – in the example, P _ X = { O 1 , O 2 } ∪ { O 4 } {\displaystyle {\underline {P}}X=\{O_{1},O_{2}\}\cup \{O_{4}\}} , the union of the two equivalence classes in [ x ] P {\displaystyle [x]_{P}} which are contained in the target set. The lower approximation is the complete set of objects in U / P {\displaystyle \mathbb {U} /P} that can be positively (i.e., unambiguously) classified as belonging to target set X {\displaystyle X} . ==== Upper approximation and negative region ==== The P {\displaystyle P} -upper approximation is the union of all equivalence classes in [ x ] P {\displaystyle [x]_{P}} which have non-empty intersection with the target set – in the example, P ¯ X = { O 1 , O 2 } ∪ { O 4 } ∪ { O 3 , O 7 , O 10 } {\displaystyle {\overline {P}}X=\{O_{1},O_{2}\}\cup \{O_{4}\}\cup \{O_{3},O_{7},O_{10}\}} , the union of the three equivalence classes in [ x ] P {\displaystyle [x]_{P}} that have non-empty intersection with the target set. The upper approximation is the complete set of objects in U / P {\displaystyle \mathbb {U} /P} that cannot be positively (i.e., unambiguously) classified as belonging to the complement ( X ¯ {\displaystyle {\overline {X}}} ) of the target set X {\displaystyle X} . In other words, the upper approximation is the complete set of objects that are possibly members of the target set X {\displaystyle X} . The set U − P ¯ X {\displaystyle \mathbb {U} -{\overline {P}}X} therefore represents the negative region, containing the set of objects that can be definitely ruled out as members of the target set. ==== Boundary region ==== The boundary region, given by set difference P ¯ X − P _ X {\displaystyle {\overline {P}}X-{\underline {P}}X} , consists of those objects that can neither be ruled in nor ruled out as members of the target set X {\displaystyle X} . In summary, the lower approximation of a target set is a conservative approximation consisting of only those objects which can positively be identified as members of the set. (These objects have no indiscernible "clones" which are excluded by the target set.) The upper approximation is a liberal approximation which includes all objects that might be members of the target set. (Some objects in the upper approximation may not be members of the target set.) From the perspective of U / P {\displaystyle \mathbb {U} /P} , the lower approximation contains objects that are members of the target set with certainty (probability = 1), while the upper approximation contains objects that are members of the target set with non-zero probability (probability > 0).
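Given the partition U/IND(P), both approximations reduce to a single pass over the equivalence classes. The sketch below reuses the hypothetical partition helper and toy table introduced earlier; the target set is likewise an arbitrary illustration, not the article's example.

```python
def approximations(table, attrs, target):
    """Return the P-lower and P-upper approximations of a target set of objects."""
    lower, upper = set(), set()
    for eq_class in partition(table, attrs):
        if eq_class <= target:   # class entirely inside X: positive region
            lower |= eq_class
        if eq_class & target:    # class meets X: possible members of X
            upper |= eq_class
    return lower, upper

universe = set(TABLE)
X = {"O1", "O2", "O3"}                        # illustrative target set
lower, upper = approximations(TABLE, {"P1", "P2", "P3"}, X)
boundary = upper - lower                      # can be neither ruled in nor ruled out
negative = universe - upper                   # definitely not members of X
```

With the toy table above, O3 and O5 are indiscernible, so O3 drags O5 into the upper approximation and both objects end up in the boundary region, while O4 falls in the negative region.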
==== The rough set ==== The tuple ⟨ P _ X , P ¯ X ⟩ {\displaystyle \langle {\underline {P}}X,{\overline {P}}X\rangle } composed of the lower and upper approximation is called a rough set; thus, a rough set is composed of two crisp sets, one representing a lower boundary of the target set X {\displaystyle X} , and the other representing an upper boundary of the target set X {\displaystyle X} . The accuracy of the rough-set representation of the set X {\displaystyle X} can be given by the following: α P ( X ) = | P _ X | | P ¯ X | {\displaystyle \alpha _{P}(X)={\frac {\left|{\underline {P}}X\right|}{\left|{\overline {P}}X\right|}}} That is, the accuracy of the rough set representation of X {\displaystyle X} , α P ( X ) {\displaystyle \alpha _{P}(X)} , 0 ≤ α P ( X ) ≤ 1 {\displaystyle 0\leq \alpha _{P}(X)\leq 1} , is the ratio of the number of objects which can positively be placed in X {\displaystyle X} to the number of objects that can possibly be placed in X {\displaystyle X} – this provides a measure of how closely the rough set is approximating the target set. Clearly, when the upper and lower approximations are equal (i.e., boundary region empty), then α P ( X ) = 1 {\displaystyle \alpha _{P}(X)=1} , and the approximation is perfect; at the other extreme, whenever the lower approximation is empty, the accuracy is zero (regardless of the size of the upper approximation). ==== Objective analysis ==== Rough set theory is one of many methods that can be employed to analyse uncertain (including vague) systems, although less common than more traditional methods of probability, statistics, entropy and Dempster–Shafer theory. However a key difference, and a unique strength, of using classical rough set theory is that it provides an objective form of analysis. Unlike other methods, as those given above, classical rough set analysis requires no additional information, external parameters, models, functions, grades or subjective interpretations to determine set membership – instead it only uses the information presented within the given data. More recent adaptations of rough set theory, such as dominance-based, decision-theoretic and fuzzy rough sets, have introduced more subjectivity to the analysis. === Definability === In general, the upper and lower approximations are not equal; in such cases, we say that target set X {\displaystyle X} is undefinable or roughly definable on attribute set P {\displaystyle P} . When the upper and lower approximations are equal (i.e., the boundary is empty), P ¯ X = P _ X {\displaystyle {\overline {P}}X={\underline {P}}X} , then the target set X {\displaystyle X} is definable on attribute set P {\displaystyle P} . We can distinguish the following special cases of undefinability: Set X {\displaystyle X} is internally undefinable if P _ X = ∅ {\displaystyle {\underline {P}}X=\emptyset } and P ¯ X ≠ U {\displaystyle {\overline {P}}X\neq \mathbb {U} } . This means that on attribute set P {\displaystyle P} , there are no objects which we can be certain belong to target set X {\displaystyle X} , but there are objects which we can definitively exclude from set X {\displaystyle X} . Set X {\displaystyle X} is externally undefinable if P _ X ≠ ∅ {\displaystyle {\underline {P}}X\neq \emptyset } and P ¯ X = U {\displaystyle {\overline {P}}X=\mathbb {U} } . 
This means that on attribute set P {\displaystyle P} , there are objects which we can be certain belong to target set X {\displaystyle X} , but there are no objects which we can definitively exclude from set X {\displaystyle X} . Set X {\displaystyle X} is totally undefinable if P _ X = ∅ {\displaystyle {\underline {P}}X=\emptyset } and P ¯ X = U {\displaystyle {\overline {P}}X=\mathbb {U} } . This means that on attribute set P {\displaystyle P} , there are no objects which we can be certain belong to target set X {\displaystyle X} , and there are no objects which we can definitively exclude from set X {\displaystyle X} . Thus, on attribute set P {\displaystyle P} , we cannot decide whether any object is, or is not, a member of X {\displaystyle X} . === Reduct and core === An interesting question is whether there are attributes in the information system (attribute–value table) which are more important to the knowledge represented in the equivalence class structure than other attributes. Often, we wonder whether there is a subset of attributes which can, by itself, fully characterize the knowledge in the database; such an attribute set is called a reduct. Formally, a reduct is a subset of attributes R E D ⊆ P {\displaystyle \mathrm {RED} \subseteq P} such that [ x ] R E D {\displaystyle [x]_{\mathrm {RED} }} = [ x ] P {\displaystyle [x]_{P}} , that is, the equivalence classes induced by the reduced attribute set R E D {\displaystyle \mathrm {RED} } are the same as the equivalence class structure induced by the full attribute set P {\displaystyle P} . the attribute set R E D {\displaystyle \mathrm {RED} } is minimal, in the sense that [ x ] ( R E D − { a } ) ≠ [ x ] P {\displaystyle [x]_{(\mathrm {RED} -\{a\})}\neq [x]_{P}} for any attribute a ∈ R E D {\displaystyle a\in \mathrm {RED} } ; in other words, no attribute can be removed from set R E D {\displaystyle \mathrm {RED} } without changing the equivalence classes [ x ] P {\displaystyle [x]_{P}} . A reduct can be thought of as a sufficient set of features – sufficient, that is, to represent the category structure. In the example table above, attribute set { P 3 , P 4 , P 5 } {\displaystyle \{P_{3},P_{4},P_{5}\}} is a reduct – the information system projected on just these attributes possesses the same equivalence class structure as that expressed by the full attribute set: { { O 1 , O 2 } { O 3 , O 7 , O 10 } { O 4 } { O 5 } { O 6 } { O 8 } { O 9 } {\displaystyle {\begin{cases}\{O_{1},O_{2}\}\\\{O_{3},O_{7},O_{10}\}\\\{O_{4}\}\\\{O_{5}\}\\\{O_{6}\}\\\{O_{8}\}\\\{O_{9}\}\end{cases}}} Attribute set { P 3 , P 4 , P 5 } {\displaystyle \{P_{3},P_{4},P_{5}\}} is a reduct because eliminating any of these attributes causes a collapse of the equivalence-class structure, with the result that [ x ] R E D ≠ [ x ] P {\displaystyle [x]_{\mathrm {RED} }\neq [x]_{P}} . The reduct of an information system is not unique: there may be many subsets of attributes which preserve the equivalence-class structure (i.e., the knowledge) expressed in the information system. In the example information system above, another reduct is { P 1 , P 2 , P 5 } {\displaystyle \{P_{1},P_{2},P_{5}\}} , producing the same equivalence-class structure as [ x ] P {\displaystyle [x]_{P}} . The set of attributes which is common to all reducts is called the core: the core is the set of attributes which is possessed by every reduct, and therefore consists of attributes which cannot be removed from the information system without causing collapse of the equivalence-class structure. 
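For small information systems, reducts can be found by brute force: test every attribute subset for whether it preserves the full partition and is minimal under removal of any single attribute. The sketch below is illustrative only (the exhaustive search is exponential in the number of attributes, so practical systems use dedicated algorithms); it reuses the hypothetical partition helper and toy table from above, and the core is obtained as the intersection of all reducts.

```python
from itertools import combinations

def same_partition(table, attrs_a, attrs_b):
    """True if two attribute subsets induce the same equivalence-class structure."""
    freeze = lambda part: {frozenset(c) for c in part}
    return freeze(partition(table, attrs_a)) == freeze(partition(table, attrs_b))

def reducts(table):
    """Enumerate all reducts of the toy information system by exhaustive search."""
    all_attrs = set(next(iter(table.values())))
    found = []
    for k in range(1, len(all_attrs) + 1):
        for combo in combinations(sorted(all_attrs), k):
            subset = set(combo)
            if not same_partition(table, subset, all_attrs):
                continue  # does not preserve the equivalence classes
            # minimality: dropping any single attribute must break the partition
            if all(not same_partition(table, subset - {a}, all_attrs) for a in subset):
                found.append(subset)
    return found

all_reducts = reducts(TABLE)
core = set.intersection(*all_reducts) if all_reducts else set()
```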
The core may be thought of as the set of necessary attributes – necessary, that is, for the category structure to be represented. In the example, the only such attribute is { P 5 } {\displaystyle \{P_{5}\}} ; any one of the other attributes can be removed singly without damaging the equivalence-class structure, and hence these are all dispensable. However, removing { P 5 } {\displaystyle \{P_{5}\}} by itself does change the equivalence-class structure, and thus { P 5 } {\displaystyle \{P_{5}\}} is the indispensable attribute of this information system, and hence the core. It is possible for the core to be empty, which means that there is no indispensable attribute: any single attribute in such an information system can be deleted without altering the equivalence-class structure. In such cases, there is no essential or necessary attribute which is required for the class structure to be represented. === Attribute dependency === One of the most important aspects of database analysis or data acquisition is the discovery of attribute dependencies; that is, we wish to discover which variables are strongly related to which other variables. Generally, it is these strong relationships that will warrant further investigation, and that will ultimately be of use in predictive modeling. In rough set theory, the notion of dependency is defined very simply. Let us take two (disjoint) sets of attributes, set P {\displaystyle P} and set Q {\displaystyle Q} , and inquire what degree of dependency obtains between them. Each attribute set induces an (indiscernibility) equivalence class structure, the equivalence classes induced by P {\displaystyle P} given by [ x ] P {\displaystyle [x]_{P}} , and the equivalence classes induced by Q {\displaystyle Q} given by [ x ] Q {\displaystyle [x]_{Q}} . Let [ x ] Q = { Q 1 , Q 2 , Q 3 , … , Q N } {\displaystyle [x]_{Q}=\{Q_{1},Q_{2},Q_{3},\dots ,Q_{N}\}} , where Q i {\displaystyle Q_{i}} is a given equivalence class from the equivalence-class structure induced by attribute set Q {\displaystyle Q} . Then, the dependency of attribute set Q {\displaystyle Q} on attribute set P {\displaystyle P} , γ P ( Q ) {\displaystyle \gamma _{P}(Q)} , is given by γ P ( Q ) = ∑ i = 1 N | P _ Q i | | U | ≤ 1 {\displaystyle \gamma _{P}(Q)={\frac {\sum _{i=1}^{N}\left|{\underline {P}}Q_{i}\right|}{\left|\mathbb {U} \right|}}\leq 1} That is, for each equivalence class Q i {\displaystyle Q_{i}} in [ x ] Q {\displaystyle [x]_{Q}} , we add up the size of its lower approximation by the attributes in P {\displaystyle P} , i.e., P _ Q i {\displaystyle {\underline {P}}Q_{i}} . This approximation (as above, for arbitrary set X {\displaystyle X} ) is the number of objects which on attribute set P {\displaystyle P} can be positively identified as belonging to target set Q i {\displaystyle Q_{i}} . Added across all equivalence classes in [ x ] Q {\displaystyle [x]_{Q}} , the numerator above represents the total number of objects which – based on attribute set P {\displaystyle P} – can be positively categorized according to the classification induced by attributes Q {\displaystyle Q} . The dependency ratio therefore expresses the proportion (within the entire universe) of such classifiable objects. The dependency γ P ( Q ) {\displaystyle \gamma _{P}(Q)} "can be interpreted as a proportion of such objects in the information system for which it suffices to know the values of attributes in P {\displaystyle P} to determine the values of attributes in Q {\displaystyle Q} ". 
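Computationally, γ_P(Q) follows directly from the two partitions: sum the sizes of the P-lower approximations of the Q-classes and divide by |U|. The short sketch below stays with the same hypothetical helpers and toy table as before, and simply treats one of the toy attributes as the "decision" attribute Q for illustration.

```python
def dependency(table, p_attrs, q_attrs):
    """gamma_P(Q): fraction of objects whose Q-class is determined by the P-attributes."""
    positive = 0
    for q_class in partition(table, q_attrs):
        lower, _ = approximations(table, p_attrs, q_class)
        positive += len(lower)
    return positive / len(table)

# Treat P3 of the toy table as the decision attribute Q and ask how well
# it is determined by P1 and P2 alone.
gamma = dependency(TABLE, {"P1", "P2"}, {"P3"})
# gamma == 1.0 would mean a functional (deterministic) dependency;
# gamma == 0.0 would mean no object can be classified from P1 and P2 alone.
```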
Another, intuitive, way to consider dependency is to take the partition induced by Q {\displaystyle Q} as the target class C {\displaystyle C} , and consider P {\displaystyle P} as the attribute set we wish to use in order to "re-construct" the target class C {\displaystyle C} . If P {\displaystyle P} can completely reconstruct C {\displaystyle C} , then Q {\displaystyle Q} depends totally upon P {\displaystyle P} ; if P {\displaystyle P} results in a poor and perhaps a random reconstruction of C {\displaystyle C} , then Q {\displaystyle Q} does not depend upon P {\displaystyle P} at all. Thus, this measure of dependency expresses the degree of functional (i.e., deterministic) dependency of attribute set Q {\displaystyle Q} on attribute set P {\displaystyle P} ; it is not symmetric. The relationship of this notion of attribute dependency to more traditional information-theoretic (i.e., entropic) notions of attribute dependence has been discussed in a number of sources, e.g. Pawlak, Wong, & Ziarko (1988), Yao & Yao (2002), Wong, Ziarko, & Ye (1986), and Quafafou & Boussouf (2000). == Rule extraction == The category representations discussed above are all extensional in nature; that is, a category or complex class is simply the sum of all its members. To represent a category is, then, just to be able to list or identify all the objects belonging to that category. However, extensional category representations have very limited practical use, because they provide no insight for deciding whether novel (never-before-seen) objects are members of the category. What is generally desired is an intensional description of the category, a representation of the category based on a set of rules that describe the scope of the category. The choice of such rules is not unique, and therein lies the issue of inductive bias. See Version space and Model selection for more about this issue. There are a few rule-extraction methods. We will start from a rule-extraction procedure based on Ziarko & Shan (1995). === Decision matrices === Let us say that we wish to find the minimal set of consistent rules (logical implications) that characterize our sample system. For a set of condition attributes P = { P 1 , P 2 , P 3 , … , P n } {\displaystyle {\mathcal {P}}=\{P_{1},P_{2},P_{3},\dots ,P_{n}\}} and a decision attribute Q , Q ∉ P {\displaystyle Q,Q\notin {\mathcal {P}}} , these rules should have the form P i a P j b … P k c → Q d {\displaystyle P_{i}^{a}P_{j}^{b}\dots P_{k}^{c}\to Q^{d}} , or, spelled out, ( P i = a ) ∧ ( P j = b ) ∧ ⋯ ∧ ( P k = c ) → ( Q = d ) {\displaystyle (P_{i}=a)\land (P_{j}=b)\land \dots \land (P_{k}=c)\to (Q=d)} where { a , b , c , … } {\displaystyle \{a,b,c,\dots \}} are legitimate values from the domains of their respective attributes. This is a form typical of association rules, and the number of items in U {\displaystyle \mathbb {U} } which match the condition/antecedent is called the support for the rule. The method for extracting such rules given in Ziarko & Shan (1995) is to form a decision matrix corresponding to each individual value d {\displaystyle d} of decision attribute Q {\displaystyle Q} . Informally, the decision matrix for value d {\displaystyle d} of decision attribute Q {\displaystyle Q} lists all attribute–value pairs that differ between objects having Q = d {\displaystyle Q=d} and Q ≠ d {\displaystyle Q\neq d} . This is best explained by example (which also avoids a lot of notation).
Consider the table above, and let P 4 {\displaystyle P_{4}} be the decision variable (i.e., the variable on the right side of the implications) and let { P 1 , P 2 , P 3 } {\displaystyle \{P_{1},P_{2},P_{3}\}} be the condition variables (on the left side of the implication). We note that the decision variable P 4 {\displaystyle P_{4}} takes on two different values, namely { 1 , 2 } {\displaystyle \{1,2\}} . We treat each case separately. First, we look at the case P 4 = 1 {\displaystyle P_{4}=1} , and we divide up U {\displaystyle \mathbb {U} } into objects that have P 4 = 1 {\displaystyle P_{4}=1} and those that have P 4 ≠ 1 {\displaystyle P_{4}\neq 1} . (Note that objects with P 4 ≠ 1 {\displaystyle P_{4}\neq 1} in this case are simply the objects that have P 4 = 2 {\displaystyle P_{4}=2} , but in general, P 4 ≠ 1 {\displaystyle P_{4}\neq 1} would include all objects having any value for P 4 {\displaystyle P_{4}} other than P 4 = 1 {\displaystyle P_{4}=1} , and there may be several such classes of objects (for example, those having P 4 = 2 , 3 , 4 , e t c . {\displaystyle P_{4}=2,3,4,etc.} ).) In this case, the objects having P 4 = 1 {\displaystyle P_{4}=1} are { O 1 , O 2 , O 3 , O 7 , O 10 } {\displaystyle \{O_{1},O_{2},O_{3},O_{7},O_{10}\}} while the objects which have P 4 ≠ 1 {\displaystyle P_{4}\neq 1} are { O 4 , O 5 , O 6 , O 8 , O 9 } {\displaystyle \{O_{4},O_{5},O_{6},O_{8},O_{9}\}} . The decision matrix for P 4 = 1 {\displaystyle P_{4}=1} lists all the differences between the objects having P 4 = 1 {\displaystyle P_{4}=1} and those having P 4 ≠ 1 {\displaystyle P_{4}\neq 1} ; that is, the decision matrix lists all the differences between { O 1 , O 2 , O 3 , O 7 , O 10 } {\displaystyle \{O_{1},O_{2},O_{3},O_{7},O_{10}\}} and { O 4 , O 5 , O 6 , O 8 , O 9 } {\displaystyle \{O_{4},O_{5},O_{6},O_{8},O_{9}\}} . We put the "positive" objects ( P 4 = 1 {\displaystyle P_{4}=1} ) as the rows, and the "negative" objects P 4 ≠ 1 {\displaystyle P_{4}\neq 1} as the columns. To read this decision matrix, look, for example, at the intersection of row O 3 {\displaystyle O_{3}} and column O 6 {\displaystyle O_{6}} , showing P 1 2 , P 3 0 {\displaystyle P_{1}^{2},P_{3}^{0}} in the cell. This means that with regard to decision value P 4 = 1 {\displaystyle P_{4}=1} , object O 3 {\displaystyle O_{3}} differs from object O 6 {\displaystyle O_{6}} on attributes P 1 {\displaystyle P_{1}} and P 3 {\displaystyle P_{3}} , and the particular values on these attributes for the positive object O 3 {\displaystyle O_{3}} are P 1 = 2 {\displaystyle P_{1}=2} and P 3 = 0 {\displaystyle P_{3}=0} . This tells us that the correct classification of O 3 {\displaystyle O_{3}} as belonging to decision class P 4 = 1 {\displaystyle P_{4}=1} rests on attributes P 1 {\displaystyle P_{1}} and P 3 {\displaystyle P_{3}} ; although one or the other might be dispensable, we know that at least one of these attributes is indispensable. Next, from each decision matrix we form a set of Boolean expressions, one expression for each row of the matrix. The items within each cell are aggregated disjunctively, and the individuals cells are then aggregated conjunctively. 
Thus, for the above table we have the following five Boolean expressions: { ( P 1 1 ∨ P 2 2 ∨ P 3 0 ) ∧ ( P 1 1 ∨ P 2 2 ) ∧ ( P 1 1 ∨ P 2 2 ∨ P 3 0 ) ∧ ( P 1 1 ∨ P 2 2 ∨ P 3 0 ) ∧ ( P 1 1 ∨ P 2 2 ) ( P 1 1 ∨ P 2 2 ∨ P 3 0 ) ∧ ( P 1 1 ∨ P 2 2 ) ∧ ( P 1 1 ∨ P 2 2 ∨ P 3 0 ) ∧ ( P 1 1 ∨ P 2 2 ∨ P 3 0 ) ∧ ( P 1 1 ∨ P 2 2 ) ( P 1 2 ∨ P 3 0 ) ∧ ( P 2 0 ) ∧ ( P 1 2 ∨ P 3 0 ) ∧ ( P 1 2 ∨ P 2 0 ∨ P 3 0 ) ∧ ( P 2 0 ) ( P 1 2 ∨ P 3 0 ) ∧ ( P 2 0 ) ∧ ( P 1 2 ∨ P 3 0 ) ∧ ( P 1 2 ∨ P 2 0 ∨ P 3 0 ) ∧ ( P 2 0 ) ( P 1 2 ∨ P 3 0 ) ∧ ( P 2 0 ) ∧ ( P 1 2 ∨ P 3 0 ) ∧ ( P 1 2 ∨ P 2 0 ∨ P 3 0 ) ∧ ( P 2 0 ) {\displaystyle {\begin{cases}(P_{1}^{1}\lor P_{2}^{2}\lor P_{3}^{0})\land (P_{1}^{1}\lor P_{2}^{2})\land (P_{1}^{1}\lor P_{2}^{2}\lor P_{3}^{0})\land (P_{1}^{1}\lor P_{2}^{2}\lor P_{3}^{0})\land (P_{1}^{1}\lor P_{2}^{2})\\(P_{1}^{1}\lor P_{2}^{2}\lor P_{3}^{0})\land (P_{1}^{1}\lor P_{2}^{2})\land (P_{1}^{1}\lor P_{2}^{2}\lor P_{3}^{0})\land (P_{1}^{1}\lor P_{2}^{2}\lor P_{3}^{0})\land (P_{1}^{1}\lor P_{2}^{2})\\(P_{1}^{2}\lor P_{3}^{0})\land (P_{2}^{0})\land (P_{1}^{2}\lor P_{3}^{0})\land (P_{1}^{2}\lor P_{2}^{0}\lor P_{3}^{0})\land (P_{2}^{0})\\(P_{1}^{2}\lor P_{3}^{0})\land (P_{2}^{0})\land (P_{1}^{2}\lor P_{3}^{0})\land (P_{1}^{2}\lor P_{2}^{0}\lor P_{3}^{0})\land (P_{2}^{0})\\(P_{1}^{2}\lor P_{3}^{0})\land (P_{2}^{0})\land (P_{1}^{2}\lor P_{3}^{0})\land (P_{1}^{2}\lor P_{2}^{0}\lor P_{3}^{0})\land (P_{2}^{0})\end{cases}}} Each statement here is essentially a highly specific (probably too specific) rule governing the membership in class P 4 = 1 {\displaystyle P_{4}=1} of the corresponding object. For example, the last statement, corresponding to object O 10 {\displaystyle O_{10}} , states that all the following must be satisfied: Either P 1 {\displaystyle P_{1}} must have value 2, or P 3 {\displaystyle P_{3}} must have value 0, or both. P 2 {\displaystyle P_{2}} must have value 0. Either P 1 {\displaystyle P_{1}} must have value 2, or P 3 {\displaystyle P_{3}} must have value 0, or both. Either P 1 {\displaystyle P_{1}} must have value 2, or P 2 {\displaystyle P_{2}} must have value 0, or P 3 {\displaystyle P_{3}} must have value 0, or any combination thereof. P 2 {\displaystyle P_{2}} must have value 0. It is clear that there is a large amount of redundancy here, and the next step is to simplify using traditional Boolean algebra. The statement ( P 1 1 ∨ P 2 2 ∨ P 3 0 ) ∧ ( P 1 1 ∨ P 2 2 ) ∧ ( P 1 1 ∨ P 2 2 ∨ P 3 0 ) ∧ ( P 1 1 ∨ P 2 2 ∨ P 3 0 ) ∧ ( P 1 1 ∨ P 2 2 ) {\displaystyle (P_{1}^{1}\lor P_{2}^{2}\lor P_{3}^{0})\land (P_{1}^{1}\lor P_{2}^{2})\land (P_{1}^{1}\lor P_{2}^{2}\lor P_{3}^{0})\land (P_{1}^{1}\lor P_{2}^{2}\lor P_{3}^{0})\land (P_{1}^{1}\lor P_{2}^{2})} corresponding to objects { O 1 , O 2 } {\displaystyle \{O_{1},O_{2}\}} simplifies to P 1 1 ∨ P 2 2 {\displaystyle P_{1}^{1}\lor P_{2}^{2}} , which yields the implication ( P 1 = 1 ) ∨ ( P 2 = 2 ) → ( P 4 = 1 ) {\displaystyle (P_{1}=1)\lor (P_{2}=2)\to (P_{4}=1)} Likewise, the statement ( P 1 2 ∨ P 3 0 ) ∧ ( P 2 0 ) ∧ ( P 1 2 ∨ P 3 0 ) ∧ ( P 1 2 ∨ P 2 0 ∨ P 3 0 ) ∧ ( P 2 0 ) {\displaystyle (P_{1}^{2}\lor P_{3}^{0})\land (P_{2}^{0})\land (P_{1}^{2}\lor P_{3}^{0})\land (P_{1}^{2}\lor P_{2}^{0}\lor P_{3}^{0})\land (P_{2}^{0})} corresponding to objects { O 3 , O 7 , O 10 } {\displaystyle \{O_{3},O_{7},O_{10}\}} simplifies to P 1 2 P 2 0 ∨ P 3 0 P 2 0 {\displaystyle P_{1}^{2}P_{2}^{0}\lor P_{3}^{0}P_{2}^{0}} . 
This gives us the implication ( P 1 = 2 ∧ P 2 = 0 ) ∨ ( P 3 = 0 ∧ P 2 = 0 ) → ( P 4 = 1 ) {\displaystyle (P_{1}=2\land P_{2}=0)\lor (P_{3}=0\land P_{2}=0)\to (P_{4}=1)} The above implications can also be written as the following rule set: { ( P 1 = 1 ) → ( P 4 = 1 ) ( P 2 = 2 ) → ( P 4 = 1 ) ( P 1 = 2 ) ∧ ( P 2 = 0 ) → ( P 4 = 1 ) ( P 3 = 0 ) ∧ ( P 2 = 0 ) → ( P 4 = 1 ) {\displaystyle {\begin{cases}(P_{1}=1)\to (P_{4}=1)\\(P_{2}=2)\to (P_{4}=1)\\(P_{1}=2)\land (P_{2}=0)\to (P_{4}=1)\\(P_{3}=0)\land (P_{2}=0)\to (P_{4}=1)\end{cases}}} It can be noted that each of the first two rules has a support of 1 (i.e., the antecedent matches two objects), while each of the last two rules has a support of 2. To finish writing the rule set for this knowledge system, the same procedure as above (starting with writing a new decision matrix) should be followed for the case of P 4 = 2 {\displaystyle P_{4}=2} , thus yielding a new set of implications for that decision value (i.e., a set of implications with P 4 = 2 {\displaystyle P_{4}=2} as the consequent). In general, the procedure will be repeated for each possible value of the decision variable. === LERS rule induction system === The data system LERS (Learning from Examples based on Rough Sets) may induce rules from inconsistent data, i.e., data with conflicting objects. Two objects are conflicting when they are characterized by the same values of all attributes, but they belong to different concepts (classes). LERS uses rough set theory to compute lower and upper approximations for concepts involved in conflicts with other concepts. Rules induced from the lower approximation of the concept certainly describe the concept, hence such rules are called certain. On the other hand, rules induced from the upper approximation of the concept describe the concept possibly, so these rules are called possible. For rule induction LERS uses three algorithms: LEM1, LEM2, and IRIM. The LEM2 algorithm of LERS is frequently used for rule induction and is used not only in LERS but also in other systems, e.g., in RSES. LEM2 explores the search space of attribute–value pairs. Its input data set is a lower or upper approximation of a concept, so its input data set is always consistent. In general, LEM2 computes a local covering and then converts it into a rule set. We will quote a few definitions to describe the LEM2 algorithm. The LEM2 algorithm is based on an idea of an attribute–value pair block. Let X {\displaystyle X} be a nonempty lower or upper approximation of a concept represented by a decision-value pair ( d , w ) {\displaystyle (d,w)} . Set X {\displaystyle X} depends on a set T {\displaystyle T} of attribute–value pairs t = ( a , v ) {\displaystyle t=(a,v)} if and only if ∅ ≠ [ T ] = ⋂ t ∈ T [ t ] ⊆ X . {\displaystyle \emptyset \neq [T]=\bigcap _{t\in T}[t]\subseteq X.} Set T {\displaystyle T} is a minimal complex of X {\displaystyle X} if and only if X {\displaystyle X} depends on T {\displaystyle T} and no proper subset S {\displaystyle S} of T {\displaystyle T} exists such that X {\displaystyle X} depends on S {\displaystyle S} . Let T {\displaystyle \mathbb {T} } be a nonempty collection of nonempty sets of attribute–value pairs. 
Then T {\displaystyle \mathbb {T} } is a local covering of X {\displaystyle X} if and only if the following three conditions are satisfied: each member T {\displaystyle T} of T {\displaystyle \mathbb {T} } is a minimal complex of X {\displaystyle X} , ⋃ t ∈ T [ T ] = X , {\displaystyle \bigcup _{t\in \mathbb {T} }[T]=X,} T {\displaystyle \mathbb {T} } is minimal, i.e., T {\displaystyle \mathbb {T} } has the smallest possible number of members. For our sample information system, LEM2 will induce the following rules: { ( P 1 , 1 ) → ( P 4 , 1 ) ( P 5 , 0 ) → ( P 4 , 1 ) ( P 1 , 0 ) → ( P 4 , 2 ) ( P 2 , 1 ) → ( P 4 , 2 ) {\displaystyle {\begin{cases}(P_{1},1)\to (P_{4},1)\\(P_{5},0)\to (P_{4},1)\\(P_{1},0)\to (P_{4},2)\\(P_{2},1)\to (P_{4},2)\end{cases}}} Other rule-learning methods can be found, e.g., in Pawlak (1991), Stefanowski (1998), Bazan et al. (2004), etc. == Incomplete data == Rough set theory is useful for rule induction from incomplete data sets. Using this approach we can distinguish between three types of missing attribute values: lost values (the values that were recorded but currently are unavailable), attribute-concept values (these missing attribute values may be replaced by any attribute value limited to the same concept), and "do not care" conditions (the original values were irrelevant). A concept (class) is a set of all objects classified (or diagnosed) the same way. Two special data sets with missing attribute values were extensively studied: in the first case, all missing attribute values were lost, in the second case, all missing attribute values were "do not care" conditions. In the attribute-concept value interpretation of a missing attribute value, the missing attribute value may be replaced by any value of the attribute domain restricted to the concept to which the object with the missing attribute value belongs. For example, if the value of the attribute Temperature is missing for a patient, this patient is sick with flu, and all remaining patients sick with flu have values high or very-high for Temperature, then under the attribute-concept value interpretation we will replace the missing attribute value with high and very-high. Additionally, the characteristic relation (see, e.g., Grzymala-Busse & Grzymala-Busse (2007)) makes it possible to process data sets with all three kinds of missing attribute values at the same time: lost, "do not care" conditions, and attribute-concept values. == Applications == Rough set methods can be applied as a component of hybrid solutions in machine learning and data mining. They have been found to be particularly useful for rule induction and feature selection (semantics-preserving dimensionality reduction). Rough set-based data analysis methods have been successfully applied in bioinformatics, economics and finance, medicine, multimedia, web and text mining, signal and image processing, software engineering, robotics, and engineering (e.g. power systems and control engineering). Recently, the three regions of rough sets have been interpreted as regions of acceptance, rejection and deferment. This leads to a three-way decision-making approach, a model which can potentially lead to interesting future applications. == History == The idea of the rough set was proposed by Pawlak (1981) as a new mathematical tool to deal with vague concepts. Comer, Grzymala-Busse, Iwinski, Nieminen, Novotny, Pawlak, Obtulowicz, and Pomykala have studied algebraic properties of rough sets.
Different algebraic semantics have been developed by P. Pagliani, I. Duntsch, M. K. Chakraborty, M. Banerjee and A. Mani; these have been extended to more generalized rough sets by D. Cattaneo and A. Mani, in particular. Rough sets can be used to represent ambiguity, vagueness and general uncertainty. == Extensions and generalizations == Since the development of rough sets, extensions and generalizations have continued to evolve. Initial developments focused on the relationship - both similarities and difference - with fuzzy sets. While some literature contends these concepts are different, other literature considers that rough sets are a generalization of fuzzy sets - as represented through either fuzzy rough sets or rough fuzzy sets. Pawlak (1995) considered that fuzzy and rough sets should be treated as being complementary to each other, addressing different aspects of uncertainty and vagueness. Three notable extensions of classical rough sets are: Dominance-based rough set approach (DRSA) is an extension of rough set theory for multi-criteria decision analysis (MCDA), introduced by Greco, Matarazzo and Słowiński (2001). The main change in this extension of classical rough sets is the substitution of the indiscernibility relation by a dominance relation, which permits the formalism to deal with inconsistencies typical in consideration of criteria and preference-ordered decision classes. Decision-theoretic rough sets (DTRS) is a probabilistic extension of rough set theory introduced by Yao, Wong, and Lingras (1990). It utilizes a Bayesian decision procedure for minimum risk decision making. Elements are included into the lower and upper approximations based on whether their conditional probability is above thresholds α {\displaystyle \textstyle \alpha } and β {\displaystyle \textstyle \beta } . These upper and lower thresholds determine region inclusion for elements. This model is unique and powerful since the thresholds themselves are calculated from a set of six loss functions representing classification risks. Game-theoretic rough sets (GTRS) is a game theory-based extension of rough set that was introduced by Herbert and Yao (2011). It utilizes a game-theoretic environment to optimize certain criteria of rough sets based classification or decision making in order to obtain effective region sizes. === Rough membership === Rough sets can be also defined, as a generalisation, by employing a rough membership function instead of objective approximation. The rough membership function expresses a conditional probability that x {\displaystyle x} belongs to X {\displaystyle X} given R {\displaystyle \textstyle \mathbb {R} } . This can be interpreted as a degree that x {\displaystyle x} belongs to X {\displaystyle X} in terms of information about x {\displaystyle x} expressed by R {\displaystyle \textstyle \mathbb {R} } . Rough membership primarily differs from the fuzzy membership in that the membership of union and intersection of sets cannot, in general, be computed from their constituent membership as is the case of fuzzy sets. In this, rough membership is a generalization of fuzzy membership. Furthermore, the rough membership function is grounded more in probability than the conventionally held concepts of the fuzzy membership function. === Other generalizations === Several generalizations of rough sets have been introduced, studied and applied to solving problems. 
Here are some of these generalizations: Rough multisets Fuzzy rough sets extend the rough set concept through the use of fuzzy equivalence classes Alpha rough set theory (α-RST) - a generalization of rough set theory that allows approximation using of fuzzy concepts Intuitionistic fuzzy rough sets Generalized rough fuzzy sets Rough intuitionistic fuzzy sets Soft rough fuzzy sets and soft fuzzy rough sets Composite rough sets == See also == Algebraic semantics Alternative set theory Analog computer Description logic Fuzzy logic Fuzzy set theory Granular computing Near sets Rough fuzzy hybridization Type-2 fuzzy sets and systems Decision-theoretic rough sets Version space Dominance-based rough set approach == References == == Further reading == Gianpiero Cattaneo and Davide Ciucci, "Heyting Wajsberg Algebras as an Abstract Environment Linking Fuzzy and Rough Sets" in J.J. Alpigini et al. (Eds.): RSCTC 2002, LNAI 2475, pp. 77–84, 2002. doi:10.1007/3-540-45813-1_10 Pawlak, Zdzisław (1982). "Rough sets". International Journal of Parallel Programming. 11 (5): 341–356. doi:10.1007/BF01001956. S2CID 9240608. Pawlak, Zdzisław Rough Sets Research Report PAS 431, Institute of Computer Science, Polish Academy of Sciences (1981) Dubois, D.; Prade, H. (1990). "Rough fuzzy sets and fuzzy rough sets". International Journal of General Systems. 17 (2–3): 191–209. doi:10.1080/03081079008935107. Slezak, Dominik; Wroblewski, Jakub; Eastwood, Victoria; Synak, Piotr (2008). "Brighthouse: an analytic data warehouse for ad-hoc queries" (PDF). Proceedings of the VLDB Endowment. 1 (2): 1337–1345. doi:10.14778/1454159.1454174. Ziarko, Wojciech (1998). "Rough sets as a methodology for data mining". Rough Sets in Knowledge Discovery 1: Methodology and Applications. Heidelberg: Physica-Verlag. pp. 554–576. Pawlak, Zdzisław (1999). "Decision rules, Bayes' rule and rough sets". New Direction in Rough Sets, Data Mining, and Granular-soft Computing. Lecture Notes in Computer Science. Vol. 1711. pp. 1–9. doi:10.1007/978-3-540-48061-7_1. ISBN 978-3-540-66645-5. Pawlak, Zdzisław (1981). Rough relations, reports. Vol. 435(3):205–218}. Institute of Computer Science. Orlowska, E. (1987). "Reasoning about vague concepts". Bulletin of the Polish Academy of Sciences. 35: 643–652. Polkowski, L. (2002). "Rough sets: Mathematical foundations". Advances in Soft Computing. Skowron, A. (1996). "Rough sets and vague concepts". Fundamenta Informaticae: 417–431. Zhang J., Wong J-S, Pan Y, Li T. (2015). A parallel matrix-based method for computing approximations in incomplete information systems, IEEE Transactions on Knowledge and Data Engineering, 27(2): 326-339 Burgin M. (1990). Theory of Named Sets as a Foundational Basis for Mathematics, In Structures in mathematical theories: Reports of the San Sebastian international symposium, September 25–29, 1990 (http://www.blogg.org/blog-30140-date-2005-10-26.html) Burgin, M. (2004). Unified Foundations of Mathematics, Preprint Mathematics LO/0403186, p39. (electronic edition: https://arxiv.org/ftp/math/papers/0403/0403186.pdf) Burgin, M. (2011), Theory of Named Sets, Mathematics Research Developments, Nova Science Pub Inc, ISBN 978-1-61122-788-8 Chen H., Li T., Luo C., Horng S-J., Wang G. (2015). A decision-theoretic rough set approach for dynamic data mining. IEEE Transactions on Fuzzy Systems, 23(6): 1958-1970 Chen H., Li T., Luo C., Horng S-J., Wang G. (2014). 
A rough set-based method for updating decision rules on attribute values' coarsening and refining, IEEE Transactions on Knowledge and Data Engineering, 26(12): 2886-2899 Chen H., Li T., Ruan D., Lin J., Hu C, (2013) A rough-set based incremental approach for updating approximations under dynamic maintenance environments. IEEE Transactions on Knowledge and Data Engineering, 25(2): 274-284 == External links == The International Rough Set Society Rough set tutorial Rough Sets: A Quick Tutorial Rough Set Exploration System Rough Sets in Data Warehousing
Wikipedia/Rough_set_theory
Micrographia: or Some Physiological Descriptions of Minute Bodies Made by Magnifying Glasses. With Observations and Inquiries Thereupon is a historically significant book by Robert Hooke about his observations through various lenses. It was the first book to include illustrations of insects and plants as seen through microscopes. Published in January 1665, the first major publication of the Royal Society, it became the first scientific best-seller, inspiring a wide public interest in the new science of microscopy. The book originated the biological term cell. == Observations == Hooke most famously describes a fly's eye and a plant cell (where he coined that term because plant cells, which are walled, reminded him of the cells of a monastery). Known for its spectacular copperplate of the miniature world, particularly its fold-out plates of insects, the text itself reinforces the tremendous power of the new microscope. The plates of insects fold out to be larger than the large folio itself, the engraving of the louse in particular folding out to four times the size of the book. Although the book is best known for demonstrating the power of the microscope, Micrographia also describes distant planetary bodies, the wave theory of light, the organic origin of fossils, and other philosophical and scientific interests of its author. Hooke also selected several objects of human origin; among these objects were the jagged edge of a honed razor and the point of a needle, seeming blunt under the microscope. His goal may well have been to contrast the flawed products of mankind with the perfection of nature (and hence, in the spirit of the times, of biblical creation). Gallery == Reception == Published under the aegis of the Royal Society, the popularity of the book helped further the society's image and mission of being England's leading scientific organization. Micrographia's illustrations of the miniature world captured the public's imagination in a radically new way; Samuel Pepys called it "the most ingenious book that ever I read in my life". == Methods == In 2007, Janice Neri, a professor of art history and visual culture, studied Hooke's artistic influences and processes with the help of some newly rediscovered notes and drawings that appear to show some of his work leading up to Micrographia. She observes, "Hooke's use of the term "schema" to identify his plates indicates that he approached his images in a diagrammatic manner and implies the study or visual dissection of the objects portrayed." Identifying Hooke's schema as 'organization tools,' she emphasizes: Hooke built up his images from numerous observations made from multiple vantage points, under varying lighting conditions, and with lenses of differing powers. Similarly his specimens required a great deal of manipulation and preparation in order to make them visible through the microscope. Additionally: "Hooke often enclosed the objects he presented within a round frame, thus offering viewers an evocation of the experience of looking through the lens of a microscope." == Bibliography == Robert Hooke. Micrographia: or, Some physiological descriptions of minute bodies made by magnifying glasses. London: J. Martyn and J. Allestry, 1665. (first edition). == References == == External links == Engraved copperplate illustrations from a first edition of Micrographia: or Some physiological descriptions of minute bodies made by magnifying glasses. 
With observations and inquiries thereupon (all images freely available for download in a variety of formats from the Science History Institute's Digital Collections) Project Gutenberg Micrographia text Turning the Pages - virtual copy of the book from the National Library of Medicine Micrographia - full digital facsimile at Linda Hall Library Transcribing the Hooke Folio Archived 23 October 2011 at the Wayback Machine Micrographia at the Internet Archive Micrographia public domain audiobook at LibriVox
Wikipedia/Micrographia
In physics (specifically, the kinetic theory of gases), the Einstein relation is a previously unexpected connection revealed independently by William Sutherland in 1904, Albert Einstein in 1905, and by Marian Smoluchowski in 1906 in their works on Brownian motion. The more general form of the equation in the classical case is D = μ k B T , {\displaystyle D=\mu \,k_{\text{B}}T,} where D is the diffusion coefficient; μ is the "mobility", or the ratio of the particle's terminal drift velocity to an applied force, μ = vd/F; kB is the Boltzmann constant; T is the absolute temperature. This equation is an early example of a fluctuation-dissipation relation. Note that the equation above describes the classical case and should be modified when quantum effects are relevant. Two frequently used important special forms of the relation are: Einstein–Smoluchowski equation, for diffusion of charged particles: D = μ q k B T q {\displaystyle D={\frac {\mu _{q}\,k_{\text{B}}T}{q}}} Stokes–Einstein–Sutherland equation, for diffusion of spherical particles through a liquid with low Reynolds number: D = k B T 6 π η r {\displaystyle D={\frac {k_{\text{B}}T}{6\pi \,\eta \,r}}} Here q is the electrical charge of a particle; μq is the electrical mobility of the charged particle; η is the dynamic viscosity; r is the Stokes radius of the spherical particle. == Special cases == === Electrical mobility equation (classical case) === For a particle with electrical charge q, its electrical mobility μq is related to its generalized mobility μ by the equation μ = μq/q. The parameter μq is the ratio of the particle's terminal drift velocity to an applied electric field. Hence, the equation in the case of a charged particle is given as D = μ q k B T q , {\displaystyle D={\frac {\mu _{q}\,k_{\text{B}}T}{q}},} where D {\displaystyle D} is the diffusion coefficient ( m 2 s − 1 {\displaystyle \mathrm {m^{2}s^{-1}} } ). μ q {\displaystyle \mu _{q}} is the electrical mobility ( m 2 V − 1 s − 1 {\displaystyle \mathrm {m^{2}V^{-1}s^{-1}} } ). q {\displaystyle q} is the electric charge of particle (C, coulombs) T {\displaystyle T} is the electron temperature or ion temperature in plasma (K). If the temperature is given in volts, which is more common for plasma: D = μ q T Z , {\displaystyle D={\frac {\mu _{q}\,T}{Z}},} where Z {\displaystyle Z} is the charge number of particle (unitless) T {\displaystyle T} is electron temperature or ion temperature in plasma (V). === Electrical mobility equation (quantum case) === For the case of Fermi gas or a Fermi liquid, relevant for the electron mobility in normal metals like in the free electron model, Einstein relation should be modified: D = μ q E F q , {\displaystyle D={\frac {\mu _{q}\,E_{\mathrm {F} }}{q}},} where E F {\displaystyle E_{\mathrm {F} }} is Fermi energy. === Stokes–Einstein–Sutherland equation === In the limit of low Reynolds number, the mobility μ is the inverse of the drag coefficient ζ {\displaystyle \zeta } . A damping constant γ = ζ / m {\displaystyle \gamma =\zeta /m} is frequently used for the inverse momentum relaxation time (time needed for the inertia momentum to become negligible compared to the random momenta) of the diffusive object. For spherical particles of radius r, Stokes' law gives ζ = 6 π η r , {\displaystyle \zeta =6\pi \,\eta \,r,} where η {\displaystyle \eta } is the viscosity of the medium. Thus the Einstein–Smoluchowski relation results into the Stokes–Einstein–Sutherland relation D = k B T 6 π η r . 
{\displaystyle D={\frac {k_{\text{B}}T}{6\pi \,\eta \,r}}.} This has been applied for many years to estimating the self-diffusion coefficient in liquids, and a version consistent with isomorph theory has been confirmed by computer simulations of the Lennard-Jones system. In the case of rotational diffusion, the friction is ζ r = 8 π η r 3 {\displaystyle \zeta _{\text{r}}=8\pi \eta r^{3}} , and the rotational diffusion constant D r {\displaystyle D_{\text{r}}} is D r = k B T 8 π η r 3 . {\displaystyle D_{\text{r}}={\frac {k_{\text{B}}T}{8\pi \,\eta \,r^{3}}}.} This is sometimes referred to as the Stokes–Einstein–Debye relation. === Semiconductor === In a semiconductor with an arbitrary density of states, i.e. a relation of the form p = p ( φ ) {\displaystyle p=p(\varphi )} between the density of holes or electrons p {\displaystyle p} and the corresponding quasi Fermi level (or electrochemical potential) φ {\displaystyle \varphi } , the Einstein relation is D = μ q p q d p d φ , {\displaystyle D={\frac {\mu _{q}p}{q{\frac {dp}{d\varphi }}}},} where μ q {\displaystyle \mu _{q}} is the electrical mobility (see § Proof of the general case for a proof of this relation). As an example, assuming a parabolic dispersion relation for the density of states and Maxwell–Boltzmann statistics, which are often used to describe inorganic semiconductor materials, one can compute (see density of states): p ( φ ) = N 0 e q φ k B T , {\displaystyle p(\varphi )=N_{0}e^{\frac {q\varphi }{k_{\text{B}}T}},} where N 0 {\displaystyle N_{0}} is the total density of available energy states, which gives the simplified relation: D = μ q k B T q . {\displaystyle D=\mu _{q}{\frac {k_{\text{B}}T}{q}}.} === Nernst–Einstein equation === By replacing the diffusivities in the expressions of electric ionic mobilities of the cations and anions from the expressions of the equivalent conductivity of an electrolyte, the Nernst–Einstein equation is derived: Λ e = z i 2 F 2 R T ( D + + D − ) . {\displaystyle \Lambda _{e}={\frac {z_{i}^{2}F^{2}}{RT}}(D_{+}+D_{-}).} where R is the gas constant. == Proof of the general case == The proof of the Einstein relation can be found in many references, for example see the work of Ryogo Kubo. Suppose some fixed, external potential energy U {\displaystyle U} generates a conservative force F ( x ) = − ∇ U ( x ) {\displaystyle F(\mathbf {x} )=-\nabla U(\mathbf {x} )} (for example, an electric force) on a particle located at a given position x {\displaystyle \mathbf {x} } . We assume that the particle would respond by moving with velocity v ( x ) = μ ( x ) F ( x ) {\displaystyle v(\mathbf {x} )=\mu (\mathbf {x} )F(\mathbf {x} )} (see Drag (physics)). Now assume that there are a large number of such particles, with local concentration ρ ( x ) {\displaystyle \rho (\mathbf {x} )} as a function of the position. After some time, equilibrium will be established: particles will pile up around the areas with lowest potential energy U {\displaystyle U} , but still will be spread out to some extent because of diffusion. At equilibrium, there is no net flow of particles: the tendency of particles to get pulled towards lower U {\displaystyle U} , called the drift current, perfectly balances the tendency of particles to spread out due to diffusion, called the diffusion current (see drift-diffusion equation).
The net flux of particles due to the drift current is J d r i f t ( x ) = μ ( x ) F ( x ) ρ ( x ) = − ρ ( x ) μ ( x ) ∇ U ( x ) , {\displaystyle \mathbf {J} _{\mathrm {drift} }(\mathbf {x} )=\mu (\mathbf {x} )F(\mathbf {x} )\rho (\mathbf {x} )=-\rho (\mathbf {x} )\mu (\mathbf {x} )\nabla U(\mathbf {x} ),} i.e., the number of particles flowing past a given position equals the particle concentration times the average velocity. The flow of particles due to the diffusion current is, by Fick's law, J d i f f u s i o n ( x ) = − D ( x ) ∇ ρ ( x ) , {\displaystyle \mathbf {J} _{\mathrm {diffusion} }(\mathbf {x} )=-D(\mathbf {x} )\nabla \rho (\mathbf {x} ),} where the minus sign means that particles flow from higher to lower concentration. Now consider the equilibrium condition. First, there is no net flow, i.e. J d r i f t + J d i f f u s i o n = 0 {\displaystyle \mathbf {J} _{\mathrm {drift} }+\mathbf {J} _{\mathrm {diffusion} }=0} . Second, for non-interacting point particles, the equilibrium density ρ {\displaystyle \rho } is solely a function of the local potential energy U {\displaystyle U} , i.e. if two locations have the same U {\displaystyle U} then they will also have the same ρ {\displaystyle \rho } (e.g. see Maxwell-Boltzmann statistics as discussed below.) That means, applying the chain rule, ∇ ρ = d ρ d U ∇ U . {\displaystyle \nabla \rho ={\frac {\mathrm {d} \rho }{\mathrm {d} U}}\nabla U.} Therefore, at equilibrium: 0 = J d r i f t + J d i f f u s i o n = − μ ρ ∇ U − D ∇ ρ = ( − μ ρ − D d ρ d U ) ∇ U . {\displaystyle 0=\mathbf {J} _{\mathrm {drift} }+\mathbf {J} _{\mathrm {diffusion} }=-\mu \rho \nabla U-D\nabla \rho =\left(-\mu \rho -D{\frac {\mathrm {d} \rho }{\mathrm {d} U}}\right)\nabla U.} As this expression holds at every position x {\displaystyle \mathbf {x} } , it implies the general form of the Einstein relation: D = − μ ρ d ρ d U . {\displaystyle D=-\mu {\frac {\rho }{\frac {\mathrm {d} \rho }{\mathrm {d} U}}}.} The relation between ρ {\displaystyle \rho } and U {\displaystyle U} for classical particles can be modeled through Maxwell-Boltzmann statistics ρ ( x ) = A e − U ( x ) k B T , {\displaystyle \rho (\mathbf {x} )=Ae^{-{\frac {U(\mathbf {x} )}{k_{\text{B}}T}}},} where A {\displaystyle A} is a constant related to the total number of particles. Therefore d ρ d U = − 1 k B T ρ . {\displaystyle {\frac {\mathrm {d} \rho }{\mathrm {d} U}}=-{\frac {1}{k_{\text{B}}T}}\rho .} Under this assumption, plugging this equation into the general Einstein relation gives: D = − μ ρ d ρ d U = μ k B T , {\displaystyle D=-\mu {\frac {\rho }{\frac {\mathrm {d} \rho }{\mathrm {d} U}}}=\mu k_{\text{B}}T,} which corresponds to the classical Einstein relation. == See also == Smoluchowski factor Conductivity (electrolytic) Stokes radius Ion transport number == References == == External links == Einstein relation calculators ion diffusivity
Wikipedia/Einstein_relation_(kinetic_theory)
The Vicsek model is a mathematical model used to describe active matter. One motivation of the study of active matter by physicists is the rich phenomenology associated with this field. Collective motion and swarming are among the most studied phenomena. Within the huge number of models that have been developed to capture such behavior from a microscopic description, the most famous is the model introduced by Tamás Vicsek et al. in 1995. Physicists have a great interest in this model as it is minimal and describes a kind of universality. It consists of point-like self-propelled particles that evolve at constant speed and align their velocity with that of their neighbours in the presence of noise. Such a model shows collective motion at high particle density or low noise on the alignment. == Model (mathematical description) == As this model aims at being minimal, it assumes that flocking is due to the combination of any kind of self-propulsion and of effective alignment. Since the speed of each particle is constant, the net momentum of the system is not conserved during collisions. An individual i {\displaystyle i} is described by its position r i ( t ) {\displaystyle \mathbf {r} _{i}(t)} and the angle defining the direction of its velocity Θ i ( t ) {\displaystyle \Theta _{i}(t)} at time t {\displaystyle t} . The discrete time evolution of one particle is set by two equations: At each time step Δ t {\displaystyle \Delta t} , each agent aligns with its neighbours within a given distance r {\displaystyle r} with an uncertainty due to a noise η i ( t ) {\displaystyle \eta _{i}(t)} : Θ i ( t + Δ t ) = ⟨ Θ j ⟩ | r i − r j | < r + η i ( t ) {\displaystyle \Theta _{i}(t+\Delta t)=\langle \Theta _{j}\rangle _{|r_{i}-r_{j}|<r}+\eta _{i}(t)} The particle then moves at constant speed v {\displaystyle v} in the new direction: r i ( t + Δ t ) = r i ( t ) + v Δ t ( cos Θ i ( t ) sin Θ i ( t ) ) {\displaystyle \mathbf {r} _{i}(t+\Delta t)=\mathbf {r} _{i}(t)+v\Delta t{\begin{pmatrix}\cos \Theta _{i}(t)\\\sin \Theta _{i}(t)\end{pmatrix}}} In these equations, ⟨ Θ j ⟩ | r i − r j | < r {\displaystyle \langle \Theta _{j}\rangle _{|r_{i}-r_{j}|<r}} denotes the average direction of the velocities of particles (including particle i {\displaystyle i} ) within a circle of radius r {\displaystyle r} surrounding particle i {\displaystyle i} . The average normalized velocity acts as the order parameter for this system, and is given by v a = 1 N v | ∑ i = 1 N v i | {\displaystyle v_{a}={\frac {1}{Nv}}|\sum _{i=1}^{N}v_{i}|} . The whole model is controlled by three parameters: the density of particles, the amplitude of the noise on the alignment and the ratio of the travel distance v Δ t {\displaystyle v\Delta t} to the interaction range r {\displaystyle r} . From these two simple iteration rules, various continuous theories have been elaborated, such as the Toner–Tu theory, which describes the system at the hydrodynamic level. An Enskog-like kinetic theory, which is valid at arbitrary particle density, has been developed. This theory quantitatively describes the formation of steep density waves, also called invasion waves, near the transition to collective motion. == Phenomenology == This model shows a phase transition from disordered motion to large-scale ordered motion. At large noise or low density, particles are on average not aligned, and they can be described as a disordered gas. At low noise and large density, particles are globally aligned and move in the same direction (collective motion).
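The two update rules translate almost line for line into a time-stepping loop. The sketch below is a minimal illustration rather than a reference implementation: the periodic square box, the parameter values, and the choice of noise uniform in [−η/2, η/2] are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

N, L = 300, 10.0           # number of particles and side of a periodic box (assumed)
v, r, dt = 0.3, 1.0, 1.0   # speed, interaction radius, time step (illustrative values)
eta = 0.2                  # noise amplitude: each angle gets a kick in [-eta/2, eta/2]

pos = rng.uniform(0.0, L, size=(N, 2))
theta = rng.uniform(-np.pi, np.pi, size=N)

def step(pos, theta):
    # pairwise separations with minimum-image (periodic) boundary conditions
    delta = pos[:, None, :] - pos[None, :, :]
    delta -= L * np.round(delta / L)
    neighbours = (delta ** 2).sum(axis=-1) < r ** 2   # includes the particle itself
    # alignment: circular mean of the neighbours' directions, plus noise
    mean_sin = (neighbours * np.sin(theta)[None, :]).sum(axis=1)
    mean_cos = (neighbours * np.cos(theta)[None, :]).sum(axis=1)
    new_theta = np.arctan2(mean_sin, mean_cos) + eta * (rng.random(N) - 0.5)
    # streaming at constant speed in the new direction
    new_pos = (pos + v * dt * np.column_stack((np.cos(new_theta), np.sin(new_theta)))) % L
    return new_pos, new_theta

for _ in range(500):
    pos, theta = step(pos, theta)

# order parameter: average normalised velocity, ~0 when disordered, ~1 when aligned
v_a = np.hypot(np.cos(theta).sum(), np.sin(theta).sum()) / N
```

With parameters in the low-noise, high-density regime, repeated application of step drives the particles toward the globally aligned, collectively moving state described above.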
This state is interpreted as an ordered liquid. The transition between these two phases is not continuous, indeed the phase diagram of the system exhibits a first order phase transition with a microphase separation. In the co-existence region, finite-size liquid bands emerge in a gas environment and move along their transverse direction. Recently, a new phase has been discovered: a polar ordered Cross sea phase of density waves with inherently selected crossing angle. This spontaneous organization of particles epitomizes collective motion. == Extensions == Since its appearance in 1995 this model has been very popular within the physics community; many scientists have worked on and extended it. For example, one can extract several universality classes from simple symmetry arguments concerning the motion of the particles and their alignment. Moreover, in real systems, many parameters can be included in order to give a more realistic description, for example attraction and repulsion between agents (finite-size particles), chemotaxis (biological systems), memory, non-identical particles, the surrounding liquid. A simpler theory, the Active Ising model, has been developed to facilitate the analysis of the Vicsek model. == References ==
Wikipedia/Vicsek_model
Chapman–Enskog theory provides a framework in which equations of hydrodynamics for a gas can be derived from the Boltzmann equation. The technique justifies the otherwise phenomenological constitutive relations appearing in hydrodynamical descriptions such as the Navier–Stokes equations. In doing so, expressions for various transport coefficients such as thermal conductivity and viscosity are obtained in terms of molecular parameters. Thus, Chapman–Enskog theory constitutes an important step in the passage from a microscopic, particle-based description to a continuum hydrodynamical one. The theory is named for Sydney Chapman and David Enskog, who introduced it independently in 1916 and 1917. == Description == The starting point of Chapman–Enskog theory is the Boltzmann equation for the 1-particle distribution function f ( r , v , t ) {\displaystyle f(\mathbf {r} ,\mathbf {v} ,t)} : ∂ f ∂ t + v ⋅ ∂ f ∂ r + F m ⋅ ∂ f ∂ v = C ^ f , {\displaystyle {\frac {\partial f}{\partial t}}+\mathbf {v} \cdot {\frac {\partial f}{\partial \mathbf {r} }}+{\frac {\mathbf {F} }{m}}\cdot {\frac {\partial f}{\partial \mathbf {v} }}={\hat {C}}f,} where C ^ {\displaystyle {\hat {C}}} is a nonlinear integral operator which models the evolution of f {\displaystyle f} under interparticle collisions. This nonlinearity makes solving the full Boltzmann equation difficult, and motivates the development of approximate techniques such as the one provided by Chapman–Enskog theory. Given this starting point, the various assumptions underlying the Boltzmann equation carry over to Chapman–Enskog theory as well. The most basic of these requires a separation of scale between the collision duration τ c {\displaystyle \tau _{\mathrm {c} }} and the mean free time between collisions τ f {\displaystyle \tau _{\mathrm {f} }} : τ c ≪ τ f {\displaystyle \tau _{\mathrm {c} }\ll \tau _{\mathrm {f} }} . This condition ensures that collisions are well-defined events in space and time, and holds if the dimensionless parameter γ ≡ r c 3 n {\displaystyle \gamma \equiv r_{\mathrm {c} }^{3}n} is small, where r c {\displaystyle r_{\mathrm {c} }} is the range of interparticle interactions and n {\displaystyle n} is the number density. In addition to this assumption, Chapman–Enskog theory also requires that τ f {\displaystyle \tau _{\mathrm {f} }} is much smaller than any extrinsic timescales τ ext {\displaystyle \tau _{\text{ext}}} . These are the timescales associated with the terms on the left hand side of the Boltzmann equation, which describe variations of the gas state over macroscopic lengths. Typically, their values are determined by initial/boundary conditions and/or external fields. This separation of scales implies that the collisional term on the right hand side of the Boltzmann equation is much larger than the streaming terms on the left hand side. Thus, an approximate solution can be found from C ^ f = 0. {\displaystyle {\hat {C}}f=0.} It can be shown that the solution to this equation is a Gaussian: f = n ( r , t ) ( m 2 π k B T ( r , t ) ) 3 / 2 exp ⁡ [ − m | v − v 0 ( r , t ) | 2 2 k B T ( r , t ) ] , {\displaystyle f=n(\mathbf {r} ,t)\left({\frac {m}{2\pi k_{\text{B}}T(\mathbf {r} ,t)}}\right)^{3/2}\exp \left[-{\frac {m{\left|\mathbf {v} -\mathbf {v} _{0}(\mathbf {r} ,t)\right|}^{2}}{2k_{\text{B}}T(\mathbf {r} ,t)}}\right],} where m {\displaystyle m} is the molecule mass and k B {\displaystyle k_{\text{B}}} is the Boltzmann constant. A gas is said to be in local equilibrium if it satisfies this equation. 
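As a concrete check on the local-equilibrium solution, the sketch below evaluates the Maxwellian above for given hydrodynamic fields and verifies numerically that it integrates to the local density n. All numerical values (an argon-like molecular mass, the density, flow velocity, temperature, and the velocity grid) are illustrative assumptions, not taken from the text.

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant in J/K

def local_maxwellian(vel, n, v0, T, m):
    """Local-equilibrium distribution f(r, v, t) for given fields n, v0 and T.

    vel is an array of shape (..., 3) of molecular velocities; the return value
    has the leading shape of vel with the last axis contracted.
    """
    prefactor = n * (m / (2.0 * np.pi * k_B * T)) ** 1.5
    w2 = ((vel - v0) ** 2).sum(axis=-1)
    return prefactor * np.exp(-m * w2 / (2.0 * k_B * T))

# Illustrative local fields: argon-like mass, moderate density, a mean flow along x.
m, n, T = 6.6e-26, 1.0e25, 300.0
v0 = np.array([100.0, 0.0, 0.0])

# Riemann-sum check that the distribution integrates to n over velocity space.
vth = np.sqrt(k_B * T / m)
axis = np.linspace(-6.0 * vth, 6.0 * vth, 81)
vx, vy, vz = np.meshgrid(axis + v0[0], axis + v0[1], axis + v0[2], indexing="ij")
grid = np.stack([vx, vy, vz], axis=-1)
dv = (axis[1] - axis[0]) ** 3
print(local_maxwellian(grid, n, v0, T, m).sum() * dv / n)  # close to 1.0
```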
The assumption of local equilibrium leads directly to the Euler equations, which describe fluids without dissipation, i.e. with thermal conductivity and viscosity equal to 0 {\displaystyle 0} . The primary goal of Chapman–Enskog theory is to systematically obtain generalizations of the Euler equations which incorporate dissipation. This is achieved by expressing deviations from local equilibrium as a perturbative series in Knudsen number Kn {\displaystyle {\text{Kn}}} , which is small if τ f ≪ τ ext {\displaystyle \tau _{\mathrm {f} }\ll \tau _{\text{ext}}} . Conceptually, the resulting hydrodynamic equations describe the dynamical interplay between free streaming and interparticle collisions. The latter tend to drive the gas towards local equilibrium, while the former acts across spatial inhomogeneities to drive the gas away from local equilibrium. When the Knudsen number is of the order of 1 or greater, the gas in the system being considered cannot be described as a fluid. To first order in Kn {\displaystyle {\text{Kn}}} one obtains the Navier–Stokes equations. Second and third orders give rise, respectively, to the Burnett equations and super-Burnett equations. == Mathematical formulation == Since the Knudsen number does not appear explicitly in the Boltzmann equation, but rather implicitly in terms of the distribution function and boundary conditions, a dummy variable ε {\displaystyle \varepsilon } is introduced to keep track of the appropriate orders in the Chapman–Enskog expansion: ∂ f ∂ t + v ⋅ ∂ f ∂ r + F m ⋅ ∂ f ∂ v = 1 ε C ^ f . {\displaystyle {\frac {\partial f}{\partial t}}+\mathbf {v\cdot } {\frac {\partial f}{\partial \mathbf {r} }}+{\frac {\mathbf {F} }{m}}\cdot {\frac {\partial f}{\partial \mathbf {v} }}={\frac {1}{\varepsilon }}{\hat {C}}f.} Small ε {\displaystyle \varepsilon } implies the collisional term C ^ f {\displaystyle {\hat {C}}f} dominates the streaming term v ⋅ ∂ f ∂ r + F m ⋅ ∂ f ∂ v {\displaystyle \mathbf {v\cdot } {\frac {\partial f}{\partial \mathbf {r} }}+{\frac {\mathbf {F} }{m}}\cdot {\frac {\partial f}{\partial \mathbf {v} }}} , which is the same as saying the Knudsen number is small. Thus, the appropriate form for the Chapman–Enskog expansion is f = f ( 0 ) + ε f ( 1 ) + ε 2 f ( 2 ) + ⋯ . {\displaystyle f=f^{(0)}+\varepsilon f^{(1)}+\varepsilon ^{2}f^{(2)}+\cdots \ .} Solutions that can be formally expanded in this way are known as normal solutions to the Boltzmann equation. This class of solutions excludes non-perturbative contributions (such as e − 1 / ε {\displaystyle e^{-1/\varepsilon }} ), which appear in boundary layers or near internal shock layers. Thus, Chapman–Enskog theory is restricted to situations in which such solutions are negligible. Substituting this expansion and equating orders of ε {\displaystyle \varepsilon } leads to the hierarchy J ( f ( 0 ) , f ( 0 ) ) = 0 2 J ( f ( 0 ) , f ( n ) ) = ( ∂ ∂ t + v ⋅ ∂ ∂ r + F m ⋅ ∂ ∂ v ) f ( n − 1 ) − ∑ m = 1 n − 1 J ( f ( n ) , f ( n − m ) ) , n > 0 , {\displaystyle {\begin{aligned}J(f^{(0)},f^{(0)})&=0\\2J(f^{(0)},f^{(n)})&=\left({\frac {\partial }{\partial t}}+\mathbf {v\cdot } {\frac {\partial }{\partial \mathbf {r} }}+{\frac {\mathbf {F} }{m}}\cdot {\frac {\partial }{\partial \mathbf {v} }}\right)f^{(n-1)}-\sum _{m=1}^{n-1}J(f^{(n)},f^{(n-m)}),\qquad n>0,\end{aligned}}} where J {\displaystyle J} is an integral operator, linear in both its arguments, which satisfies J ( f , g ) = J ( g , f ) {\displaystyle J(f,g)=J(g,f)} and J ( f , f ) = C ^ f {\displaystyle J(f,f)={\hat {C}}f} . 
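Because the Knudsen number enters only implicitly, it is often estimated separately to decide whether the expansion applies at all. The sketch below uses the standard hard-sphere mean free path, l = 1/(√2 π d² n), which is quoted here as an assumption (it is a textbook result rather than something derived at this point), and compares it with an extrinsic length scale; the diameter, density and channel widths are illustrative.

```python
import math

def mean_free_path(n, d):
    """Hard-sphere mean free path l = 1 / (sqrt(2) * pi * d**2 * n),
    with number density n [1/m^3] and molecular diameter d [m]."""
    return 1.0 / (math.sqrt(2.0) * math.pi * d ** 2 * n)

def knudsen_number(n, d, L):
    """Kn = mean free path / macroscopic length scale L."""
    return mean_free_path(n, d) / L

# Illustrative values: a dilute gas at n ~ 2.5e25 m^-3, d ~ 3.7e-10 m,
# flowing in a channel of width 1 mm versus 1 micron.
for L in (1e-3, 1e-6):
    print(L, knudsen_number(n=2.5e25, d=3.7e-10, L=L))
# Kn is of order 1e-4 for the millimetre channel, so the Navier-Stokes
# description applies; for the micron channel Kn is no longer negligible
# and higher-order or non-hydrodynamic corrections start to matter.
```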
The solution to the first equation is a Gaussian: f ( 0 ) = n ′ ( r , t ) ( m 2 π k B T ′ ( r , t ) ) 3 / 2 exp ⁡ [ − m | v − v 0 ′ ( r , t ) | 2 2 k B T ′ ( r , t ) ] . {\displaystyle f^{(0)}=n'(\mathbf {r} ,t)\left({\frac {m}{2\pi k_{\text{B}}T'(\mathbf {r} ,t)}}\right)^{3/2}\exp \left[-{\frac {m\left|\mathbf {v} -\mathbf {v} '_{0}(\mathbf {r} ,t)\right|^{2}}{2k_{\text{B}}T'(\mathbf {r} ,t)}}\right].} for some functions n ′ ( r , t ) {\displaystyle n'(\mathbf {r} ,t)} , v 0 ′ ( r , t ) {\displaystyle \mathbf {v} '_{0}(\mathbf {r} ,t)} , and T ′ ( r , t ) {\displaystyle T'(\mathbf {r} ,t)} . The expression for f ( 0 ) {\displaystyle f^{(0)}} suggests a connection between these functions and the physical hydrodynamic fields defined as moments of f ( r , v , t ) {\displaystyle f(\mathbf {r} ,\mathbf {v} ,t)} : n ( r , t ) = ∫ f ( r , v , t ) d v n ( r , t ) v 0 ( r , t ) = ∫ v f ( r , v , t ) d v n ( r , t ) T ( r , t ) = ∫ m 3 k B v 2 f ( r , v , t ) d v . {\displaystyle {\begin{aligned}n(\mathbf {r} ,t)&=\int f(\mathbf {r} ,\mathbf {v} ,t)\,d\mathbf {v} \\n(\mathbf {r} ,t)\mathbf {v} _{0}(\mathbf {r} ,t)&=\int \mathbf {v} f(\mathbf {r} ,\mathbf {v} ,t)\,d\mathbf {v} \\n(\mathbf {r} ,t)T(\mathbf {r} ,t)&=\int {\frac {m}{3k_{\text{B}}}}v^{2}f(\mathbf {r} ,\mathbf {v} ,t)\,d\mathbf {v} .\end{aligned}}} From a purely mathematical point of view, however, the two sets of functions are not necessarily the same for ε > 0 {\displaystyle \varepsilon >0} (for ε = 0 {\displaystyle \varepsilon =0} they are equal by definition). Indeed, proceeding systematically in the hierarchy, one finds that similarly to f ( 0 ) {\displaystyle f^{(0)}} , each f ( n ) {\displaystyle f^{(n)}} also contains arbitrary functions of r {\displaystyle \mathbf {r} } and t {\displaystyle t} whose relation to the physical hydrodynamic fields is a priori unknown. One of the key simplifying assumptions of Chapman–Enskog theory is to assume that these otherwise arbitrary functions can be written in terms of the exact hydrodynamic fields and their spatial gradients. In other words, the space and time dependence of f {\displaystyle f} enters only implicitly through the hydrodynamic fields. This statement is physically plausible because small Knudsen numbers correspond to the hydrodynamic regime, in which the state of the gas is determined solely by the hydrodynamic fields. In the case of f ( 0 ) {\displaystyle f^{(0)}} , the functions n ′ ( r , t ) {\displaystyle n'(\mathbf {r} ,t)} , v 0 ′ ( r , t ) {\displaystyle \mathbf {v} '_{0}(\mathbf {r} ,t)} , and T ′ ( r , t ) {\displaystyle T'(\mathbf {r} ,t)} are assumed exactly equal to the physical hydrodynamic fields. While these assumptions are physically plausible, there is the question of whether solutions which satisfy these properties actually exist. More precisely, one must show that solutions exist satisfying ∫ ∑ n = 1 ∞ ε n f ( n ) d v = 0 = ∫ ∑ n = 1 ∞ ε n f ( n ) v 2 d v ∫ ∑ n = 1 ∞ ε n f ( n ) v i d v = 0 , i ∈ { x , y , z } . {\displaystyle {\begin{aligned}\int \sum _{n=1}^{\infty }\varepsilon ^{n}f^{(n)}\,d\mathbf {v} =0=\int \sum _{n=1}^{\infty }\varepsilon ^{n}f^{(n)}\mathbf {v} ^{2}\,d\mathbf {v} \\[1ex]\int \sum _{n=1}^{\infty }\varepsilon ^{n}f^{(n)}v_{i}\,d\mathbf {v} =0,\qquad i\in \{x,y,z\}.\end{aligned}}} Moreover, even if such solutions exist, there remains the additional question of whether they span the complete set of normal solutions to the Boltzmann equation, i.e. 
do not represent an artificial restriction of the original expansion in ε {\displaystyle \varepsilon } . One of the key technical achievements of Chapman–Enskog theory is to answer both of these questions in the positive. Thus, at least at the formal level, there is no loss of generality in the Chapman–Enskog approach. With these formal considerations established, one can proceed to calculate f ( 1 ) {\displaystyle f^{(1)}} . The result is f ( 1 ) = [ − 1 n ( 2 k B T m ) 1 / 2 A ( v ) ⋅ ∇ ln ⁡ T − 2 n B ( v ) : ∇ v 0 ] f ( 0 ) , {\displaystyle f^{(1)}=\left[-{\frac {1}{n}}\left({\frac {2k_{\text{B}}T}{m}}\right)^{1/2}\mathbf {A} (\mathbf {v} )\cdot \nabla \ln T-{\frac {2}{n}}\mathbb {B(\mathbf {v} )\colon \nabla } \mathbf {v} _{0}\right]f^{(0)},} where A ( v ) {\displaystyle \mathbf {A} (\mathbf {v} )} is a vector and B ( v ) {\displaystyle \mathbb {B} (\mathbf {v} )} a tensor, each a solution of a linear inhomogeneous integral equation that can be solved explicitly by a polynomial expansion. Here, the colon denotes the double dot product, T : T ′ = ∑ i , j T i j T j i ′ {\textstyle \mathbb {T} :\mathbb {T'} =\sum _{i,j}T_{ij}T'_{ji}} for tensors T {\displaystyle \mathbb {T} } , T ′ {\displaystyle \mathbb {T'} } . == Predictions == To first order in the Knudsen number, the heat flux q = m 2 ∫ f ( r , v , t ) v 2 v d v {\textstyle \mathbf {q} ={\frac {m}{2}}\int f(\mathbf {r} ,\mathbf {v} ,t)\,v^{2}\mathbf {v} \,d\mathbf {v} } is found to obey Fourier's law of heat conduction, q = − λ ∇ T , {\displaystyle \mathbf {q} =-\lambda \nabla T,} and the momentum-flux tensor σ = m ∫ ( v − v 0 ) ( v − v 0 ) T f ( r , v , t ) d v {\textstyle \mathbf {\sigma } =m\int (\mathbf {v} -\mathbf {v} _{0})(\mathbf {v} -\mathbf {v} _{0})^{\mathsf {T}}f(\mathbf {r} ,\mathbf {v} ,t)\,d\mathbf {v} } is that of a Newtonian fluid, σ = p I − μ ( ∇ v 0 + ∇ v 0 T ) + 2 3 μ ( ∇ ⋅ v 0 ) I , {\displaystyle \mathbf {\sigma } =p\mathbb {I} -\mu \left(\nabla \mathbf {v_{0}} +\nabla \mathbf {v_{0}} ^{T}\right)+{\frac {2}{3}}\mu (\nabla \cdot \mathbf {v_{0}} )\mathbb {I} ,} with I {\displaystyle \mathbb {I} } the identity tensor. Here, λ {\displaystyle \lambda } and μ {\displaystyle \mu } are the thermal conductivity and viscosity. They can be calculated explicitly in terms of molecular parameters by solving a linear integral equation; the table below summarizes the results for a few important molecular models ( m {\displaystyle m} is the molecule mass and k B {\displaystyle k_{\text{B}}} is the Boltzmann constant). With these results, it is straightforward to obtain the Navier–Stokes equations. Taking velocity moments of the Boltzmann equation leads to the exact balance equations for the hydrodynamic fields n ( r , t ) {\displaystyle n(\mathbf {r} ,t)} , v 0 ( r , t ) {\displaystyle \mathbf {v} _{0}(\mathbf {r} ,t)} , and T ( r , t ) {\displaystyle T(\mathbf {r} ,t)} : ∂ n ∂ t + ∇ ⋅ ( n v 0 ) = 0 ∂ v 0 ∂ t + v 0 ⋅ ∇ v 0 − F m + 1 n ∇ ⋅ σ = 0 ∂ T ∂ t + v 0 ⋅ ∇ T + 2 3 k B n ( σ : ∇ v 0 + ∇ ⋅ q ) = 0. 
{\displaystyle {\begin{aligned}{\frac {\partial n}{\partial t}}+\nabla \cdot \left(n\mathbf {v} _{0}\right)&=0\\{\frac {\partial \mathbf {v} _{0}}{\partial t}}+\mathbf {v} _{0}\cdot \nabla \mathbf {v} _{0}-{\frac {\mathbf {F} }{m}}+{\frac {1}{n}}\nabla \cdot \mathbf {\sigma } &=0\\{\frac {\partial T}{\partial t}}+\mathbf {v} _{0}\cdot \nabla T+{\frac {2}{3k_{\text{B}}n}}\left(\mathbf {\sigma :} \nabla \mathbf {v} _{0}+\nabla \cdot \mathbf {q} \right)&=0.\end{aligned}}} As in the previous section the colon denotes the double dot product, T : T ′ = ∑ i , j T i j T j i ′ {\textstyle \mathbb {T} :\mathbb {T'} =\sum _{i,j}T_{ij}T'_{ji}} . Substituting the Chapman–Enskog expressions for q {\displaystyle \mathbf {q} } and σ {\displaystyle \sigma } , one arrives at the Navier–Stokes equations. === Comparison with experiment === An important prediction of Chapman–Enskog theory is that viscosity, μ {\displaystyle \mu } , is independent of density (this can be seen for each molecular model in table 1, but is actually model-independent). This counterintuitive result traces back to James Clerk Maxwell, who inferred it in 1860 on the basis of more elementary kinetic arguments. It is well-verified experimentally for gases at ordinary densities. On the other hand, the theory predicts that μ {\displaystyle \mu } does depend on temperature. For rigid elastic spheres, the predicted scaling is μ ∝ T 1 / 2 {\displaystyle \mu \propto T^{1/2}} , while other models typically show greater variation with temperature. For instance, for molecules repelling each other with force ∝ r − ν {\displaystyle \propto r^{-\nu }} the predicted scaling is μ ∝ T s {\displaystyle \mu \propto T^{s}} , where s = 1 / 2 + 2 / ( ν − 1 ) {\displaystyle s=1/2+2/(\nu -1)} . Taking s = 0.668 {\displaystyle s=0.668} , corresponding to ν ≈ 12.9 {\displaystyle \nu \approx 12.9} , shows reasonable agreement with the experimentally observed scaling for helium. For more complex gases the agreement is not as good, most likely due to the neglect of attractive forces. Indeed, the Lennard-Jones model, which does incorporate attractions, can be brought into closer agreement with experiment (albeit at the cost of a more opaque T {\displaystyle T} dependence; see the Lennard-Jones entry in table 1). For better agreement with experimental data than that which has been obtained using the Lennard-Jones model, the more flexible Mie potential has been used, the added flexibility of this potential allows for accurate prediction of the transport properties of mixtures of a variety of spherically symmetric molecules. Chapman–Enskog theory also predicts a simple relation between thermal conductivity, λ {\displaystyle \lambda } , and viscosity, μ {\displaystyle \mu } , in the form λ = f μ c v {\displaystyle \lambda =f\mu c_{v}} , where c v {\displaystyle c_{v}} is the specific heat at constant volume and f {\displaystyle f} is a purely numerical factor. For spherically symmetric molecules, its value is predicted to be very close to 2.5 {\displaystyle 2.5} in a slightly model-dependent way. For instance, rigid elastic spheres have f ≈ 2.522 {\displaystyle f\approx 2.522} , and molecules with repulsive force ∝ r − 13 {\displaystyle \propto r^{-13}} have f ≈ 2.511 {\displaystyle f\approx 2.511} (the latter deviation is ignored in table 1). The special case of Maxwell molecules (repulsive force ∝ r − 5 {\displaystyle \propto r^{-5}} ) has f = 2.5 {\displaystyle f=2.5} exactly. 
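These scaling predictions are easy to check numerically. The sketch below evaluates the temperature exponent s for a power-law repulsion, the rigid-sphere viscosity in its commonly quoted first Chapman–Enskog approximation (the explicit prefactor is not reproduced in the text above, so treat it as an assumption), and the λ = f μ c_v relation for a monatomic gas; the argon-like mass and diameter are illustrative.

```python
import math

k_B = 1.380649e-23  # J/K

def viscosity_exponent(nu):
    """Temperature exponent s in mu ~ T**s for molecules repelling as r**(-nu)."""
    return 0.5 + 2.0 / (nu - 1.0)

def hard_sphere_viscosity(T, m, d):
    """First Chapman-Enskog approximation for rigid elastic spheres,
    mu = (5/16) * sqrt(pi*m*k_B*T) / (pi*d**2)  [Pa s].
    (Commonly quoted result, used here as an assumption since the article's
    table of molecular models is not reproduced.)"""
    return (5.0 / 16.0) * math.sqrt(math.pi * m * k_B * T) / (math.pi * d ** 2)

def conductivity_from_viscosity(mu, m, f=2.5):
    """lambda = f * mu * c_v with c_v = (3/2) k_B / m for a monatomic gas."""
    c_v = 1.5 * k_B / m
    return f * mu * c_v

# Illustrative check for an argon-like hard sphere (m ~ 6.63e-26 kg, d ~ 3.4e-10 m):
mu_300 = hard_sphere_viscosity(300.0, 6.63e-26, 3.4e-10)
mu_600 = hard_sphere_viscosity(600.0, 6.63e-26, 3.4e-10)
print(mu_600 / mu_300)              # = sqrt(2), i.e. mu ~ T**0.5, density-independent
print(viscosity_exponent(12.9))     # ~0.668, the exponent quoted for helium
print(conductivity_from_viscosity(mu_300, 6.63e-26))  # W/(m K)
```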
Since λ {\displaystyle \lambda } , μ {\displaystyle \mu } , and c v {\displaystyle c_{v}} can be measured directly in experiments, a simple experimental test of Chapman–Enskog theory is to measure f {\displaystyle f} for the spherically symmetric noble gases. Table 2 shows that there is reasonable agreement between theory and experiment. == Extensions == The basic principles of Chapman–Enskog theory can be extended to more diverse physical models, including gas mixtures and molecules with internal degrees of freedom. In the high-density regime, the theory can be adapted to account for collisional transport of momentum and energy, i.e. transport over a molecular diameter during a collision, rather than over a mean free path (in between collisions). Including this mechanism predicts a density dependence of the viscosity at high enough density, which is also observed experimentally. Obtaining the corrections used to account for transport during a collision for soft molecules (i.e. Lennard-Jones or Mie molecules) is in general non-trivial, but success has been achieved in applying Barker-Henderson perturbation theory to accurately describe these effects up to the critical density of various fluid mixtures. One can also carry out the theory to higher order in the Knudsen number. In particular, the second-order contribution f ( 2 ) {\displaystyle f^{(2)}} has been calculated by Burnett. In general circumstances, however, these higher-order corrections may not give reliable improvements to the first-order theory, because the Chapman–Enskog expansion does not always converge. (On the other hand, the expansion is thought to be at least asymptotic to solutions of the Boltzmann equation, in which case truncating at low order still gives accurate results.) Even if the higher-order corrections do afford improvement in a given system, the interpretation of the corresponding hydrodynamical equations is still debated. === Revised Enskog theory === The extension of Chapman–Enskog theory to multicomponent mixtures at elevated densities, in particular densities at which the covolume of the mixture is non-negligible, was carried out in a series of works by E. G. D. Cohen and others, and was named Revised Enskog theory (RET). The successful derivation of RET followed several earlier attempts at the same, which had given results that were shown to be inconsistent with irreversible thermodynamics.
The starting point for developing the RET is a modified form of the Boltzmann Equation for the s {\displaystyle s} -particle velocity distribution function, ( ∂ ∂ t + v i ⋅ ∂ ∂ r + F i m i ⋅ ∂ ∂ v i ) f i = ∑ j S i j ( f i , f j ) {\displaystyle \left({\frac {\partial }{\partial t}}+\mathbf {v} _{i}\cdot {\frac {\partial }{\partial \mathbf {r} }}+{\frac {\mathbf {F} _{i}}{m_{i}}}\cdot {\frac {\partial }{\partial \mathbf {v} _{i}}}\right)f_{i}=\sum _{j}S_{ij}(f_{i},f_{j})} where v i ( r , t ) {\displaystyle \mathbf {v} _{i}(\mathbf {r} ,t)} is the velocity of particles of species i {\displaystyle i} , at position r {\displaystyle \mathbf {r} } and time t {\displaystyle t} , m i {\displaystyle m_{i}} is the particle mass, F i {\displaystyle \mathbf {F} _{i}} is the external force, and S i j ( f i , f j ) = ∭ [ g i j ( σ i j k ) f i ′ ( r ) f j ′ ( r + σ i j k ) − g i j ( − σ i j k ) f i ( r ) f j ( r − σ i j k ) ] d τ {\displaystyle S_{ij}(f_{i},f_{j})=\iiint \left[g_{ij}(\sigma _{ij}\mathbf {k} )\,f_{i}'(\mathbf {r} )\,f_{j}'(\mathbf {r} +\sigma _{ij}\mathbf {k} )-g_{ij}(-\sigma _{ij}\mathbf {k} )\,f_{i}(\mathbf {r} )\,f_{j}(\mathbf {r} -\sigma _{ij}\mathbf {k} )\right]d\tau } The difference in this equation from classical Chapman–Enskog theory lies in the streaming operator S i j {\displaystyle S_{ij}} , within which the velocity distribution of the two particles are evaluated at different points in space, separated by σ i j k {\displaystyle \sigma _{ij}\mathbf {k} } , where k {\displaystyle \mathbf {k} } is the unit vector along the line connecting the two particles centre of mass. Another significant difference comes from the introduction of the factors g i j {\displaystyle g_{ij}} , which represent the enhanced probability of collisions due to excluded volume. The classical Chapman–Enskog equations are recovered by setting σ i j = 0 {\displaystyle \sigma _{ij}=0} and g i j ( σ i j k ) = 1 {\displaystyle g_{ij}(\sigma _{ij}\mathbf {k} )=1} . A point of significance for the success of the RET is the choice of the factors g i j {\displaystyle g_{ij}} , which is interpreted as the pair distribution function evaluated at the contact distance σ i j {\displaystyle \sigma _{ij}} . An important factor to note here is that in order to obtain results in agreement with irreversible thermodynamics, the g i j {\displaystyle g_{ij}} must be treated as functionals of the density fields, rather than as functions of the local density. ==== Results from Revised Enskog theory ==== One of the first results obtained from RET that deviates from the results from the classical Chapman–Enskog theory is the Equation of State. While from classical Chapman–Enskog theory the ideal gas law is recovered, RET developed for rigid elastic spheres yields the pressure equation p n k T = 1 + 2 π n 3 ∑ i ∑ j x i x j σ i j 3 g i j , {\displaystyle {\frac {p}{nkT}}=1+{\frac {2\pi n}{3}}\sum _{i}\sum _{j}x_{i}x_{j}\sigma _{ij}^{3}g_{ij},} which is consistent with the Carnahan-Starling Equation of State, and reduces to the ideal gas law in the limit of infinite dilution (i.e. when n ∑ i , j x i x j σ i j 3 ≪ 1 {\textstyle n\sum _{i,j}x_{i}x_{j}\sigma _{ij}^{3}\ll 1} ) For the transport coefficients: viscosity, thermal conductivity, diffusion and thermal diffusion, RET provides expressions that exactly reduce to those obtained from classical Chapman–Enskog theory in the limit of infinite dilution. 
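To illustrate the pressure equation for the single-component hard-sphere case, the sketch below closes it with the Carnahan–Starling contact value g(σ) = (1 − η/2)/(1 − η)³, where η = πnσ³/6 is the packing fraction; with this choice the RET pressure reduces exactly to the Carnahan–Starling equation of state, as stated above. The specific contact-value expression is an assumption introduced here for illustration, and the numerical inputs are illustrative.

```python
import math

def cs_contact_value(eta):
    """Carnahan-Starling contact value g(sigma) for a pure hard-sphere fluid,
    with packing fraction eta = pi*n*sigma**3/6 (assumed closure, see lead-in)."""
    return (1.0 - 0.5 * eta) / (1.0 - eta) ** 3

def ret_compressibility_hard_spheres(n, sigma):
    """p/(n*k*T) = 1 + (2*pi*n/3) * sigma**3 * g(sigma) for a single component."""
    eta = math.pi * n * sigma ** 3 / 6.0
    return 1.0 + (2.0 * math.pi * n / 3.0) * sigma ** 3 * cs_contact_value(eta)

def carnahan_starling(eta):
    """Carnahan-Starling compressibility factor, for comparison."""
    return (1.0 + eta + eta ** 2 - eta ** 3) / (1.0 - eta) ** 3

# The two expressions agree, and the dilute limit recovers the ideal gas law (Z -> 1).
sigma = 3.4e-10                      # m, illustrative hard-sphere diameter
for n in (1e25, 1e27, 5e27):         # number densities, 1/m^3
    eta = math.pi * n * sigma ** 3 / 6.0
    print(ret_compressibility_hard_spheres(n, sigma), carnahan_starling(eta))
```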
However, RET predicts a density dependence of the thermal conductivity, which can be expressed as λ = ( 1 + n α λ ) λ 0 + n 2 T 1 / 2 λ σ {\displaystyle \lambda =(1+n\alpha _{\lambda })\lambda _{0}+n^{2}T^{1/2}\lambda _{\sigma }} where α λ {\displaystyle \alpha _{\lambda }} and λ σ {\displaystyle \lambda _{\sigma }} are relatively weak functions of the composition, temperature and density, and λ 0 {\displaystyle \lambda _{0}} is the thermal conductivity obtained from classical Chapman–Enskog theory. Similarly, the expression obtained for viscosity can be written as μ = ( 1 + n T α μ ) μ 0 + n 2 T 1 / 2 μ σ {\displaystyle \mu =(1+nT\alpha _{\mu })\mu _{0}+n^{2}T^{1/2}\mu _{\sigma }} with α μ {\displaystyle \alpha _{\mu }} and μ σ {\displaystyle \mu _{\sigma }} weak functions of composition, temperature and density, and μ 0 {\displaystyle \mu _{0}} the value obtained from classical Chapman–Enskog theory. For diffusion coefficients and thermal diffusion coefficients the picture is somewhat more complex. However, one of the major advantages of RET over classical Chapman–Enskog theory is that the dependence of diffusion coefficients on the thermodynamic factors, i.e. the derivatives of the chemical potentials with respect to composition, is predicted. In addition, RET does not predict a strict dependence of D ∼ 1 n , D T ∼ 1 n {\displaystyle D\sim {\frac {1}{n}},\quad D_{T}\sim {\frac {1}{n}}} for all densities, but rather predicts that the coefficients will decrease more slowly with density at high densities, which is in good agreement with experiments. These modified density dependencies also lead RET to predict a density dependence of the Soret coefficient, S T = D T D , ( ∂ S T ∂ n ) T ≠ 0 , {\displaystyle S_{T}={\frac {D_{T}}{D}},\quad \left({\frac {\partial S_{T}}{\partial n}}\right)_{T}\neq 0,} while classical Chapman–Enskog theory predicts that the Soret coefficient, like the viscosity and thermal conductivity, is independent of density. ==== Applications ==== While Revised Enskog theory provides many advantages over classical Chapman–Enskog theory, this comes at the price of being significantly more difficult to apply in practice. While classical Chapman–Enskog theory can be applied to arbitrarily complex spherical potentials, given sufficiently accurate and fast integration routines to evaluate the required collision integrals, Revised Enskog Theory, in addition to this, requires knowledge of the contact value of the pair distribution function. For mixtures of hard spheres, this value can be computed without large difficulties, but for more complex intermolecular potentials it is generally non-trivial to obtain. However, some success has been achieved at estimating the contact value of the pair distribution function for Mie fluids (which consists of particles interacting through a generalised Lennard-Jones potential) and using these estimates to predict the transport properties of dense gas mixtures and supercritical fluids. Applying RET to particles interacting through realistic potentials also exposes one to the issue of determining a reasonable "contact diameter" for the soft particles. While these are unambiguously defined for hard spheres, there is still no generally agreed upon value that one should use for the contact diameter of soft particles. == See also == Transport phenomena Kinetic theory of gases Boltzmann equation Navier–Stokes equations Fourier's law Newtonian fluid == Notes == == References == The classic monograph on the topic: Chapman, Sydney; Cowling, T.G. 
(1970), The Mathematical Theory of Non-Uniform Gases (3rd ed.), Cambridge University Press Contains a technical introduction to normal solutions of the Boltzmann equation: Grad, Harold (1958), "Principles of the Kinetic Theory of Gases", in Flügge, S. (ed.), Encyclopedia of Physics, vol. XII, Springer-Verlag, pp. 205–294
Wikipedia/Revised_Enskog_theory
The shear viscosity (or viscosity, for short) of a fluid is a material property that describes the friction between internal neighboring fluid surfaces (or sheets) flowing with different fluid velocities. This friction is the effect of (linear) momentum exchange caused by molecules with sufficient energy to move (or "jump") between these fluid sheets due to fluctuations in their motion. The viscosity is not a material constant, but a material property that depends on temperature, pressure, fluid mixture composition, and local velocity variations. This functional relationship is described by a mathematical viscosity model called a constitutive equation, which is usually far more complex than the defining equation of shear viscosity. One such complicating feature is the relation between the viscosity model for a pure fluid and the model for a fluid mixture, a relation described by so-called mixing rules. When scientists and engineers use new arguments or theories to develop a new viscosity model, rather than improving an established one, the result may be the first model in a new class of models. This article displays one or two representative models for each of the following classes of viscosity models: Elementary kinetic theory and simple empirical models - viscosity for dilute gas with nearly spherical molecules Power series - simplest approach after dilute gas Equation of state analogy between PVT and T η {\displaystyle \eta } P Corresponding state model - scaling a variable with its value at the critical point Friction force theory - internal sliding surface analogy to a sliding box on an inclined surface Multi- and one-parameter version of friction force theory Transition state analogy - molecular energy needed to squeeze into a vacancy analogous to molecules locking into each other in a chemical reaction Free volume theory - molecular energy needed to jump into a vacant position in the neighboring surface Significant structure theory - based on Eyring's concept of liquid as a blend of solid-like and gas-like behavior / features Selected contributions from these development directions are displayed in the following sections. This means that some known research and development directions are not included. For example, the group contribution method applied to a shear viscosity model is not displayed. Even though it is an important method, it is regarded as a method for parameterizing a selected viscosity model rather than as a viscosity model in itself. The microscopic or molecular origin of fluids means that transport coefficients like viscosity can be calculated from time correlations which are valid for both gases and liquids, but these are computationally intensive calculations. Another approach is the Boltzmann equation, which describes the statistical behaviour of a thermodynamic system not in a state of equilibrium. It can be used to determine how physical quantities such as heat energy and momentum change when a fluid is in transport, but this requires computationally intensive simulations. From Boltzmann's equation one may also derive analytical mathematical models for properties characteristic of fluids such as viscosity, thermal conductivity, and electrical conductivity (by treating the charge carriers in a material as a gas). See also convection–diffusion equation. The mathematics becomes so complicated for polar and non-spherical molecules that it is very difficult to obtain practical models for viscosity.
The purely theoretical approach will therefore be left out for the rest of this article, except for some visits related to dilute gas and significant structure theory. == Use, definition and dependence == The classic Navier-Stokes equation is the balance equation for momentum density for an isotropic, compressional and viscous fluid that is used in fluid mechanics in general and fluid dynamics in particular: ρ [ ∂ u ∂ t + u ⋅ ∇ u ] = − ∇ P + ∇ [ ζ ( ∇ ⋅ u ) ] + ∇ ⋅ [ η ( ∇ u + ( ∇ u ) T − 2 3 ( ∇ ⋅ u ) I ) ] + ρ g {\displaystyle \rho \left[{\frac {\partial \mathbf {u} }{\partial t}}+\mathbf {u} \cdot \nabla \mathbf {u} \right]=-\nabla P+\nabla [\zeta (\nabla \cdot \mathbf {u} )]+\nabla \cdot \left[\eta \left(\nabla \mathbf {u} +\left(\nabla \mathbf {u} \right)^{T}-{\frac {2}{3}}(\nabla \cdot \mathbf {u} )\mathbf {I} \right)\right]+\rho \mathbf {g} } On the right hand side is (the divergence of) the total stress tensor σ {\displaystyle {\boldsymbol {\sigma }}} which consists of a pressure tensor ( − P I ) {\displaystyle \left(-P\mathbf {I} \right)} and a dissipative (or viscous or deviatoric) stress tensor τ d {\displaystyle {\boldsymbol {\tau }}_{d}} . The dissipative stress consists of a compression stress tensor τ c {\displaystyle {\boldsymbol {\tau }}_{c}} (term no. 2) and a shear stress tensor τ s {\displaystyle {\boldsymbol {\tau }}_{s}} (term no. 3). The rightmost term ρ g {\displaystyle \rho \mathbf {g} } is the gravitational force which is the body force contribution, and ρ {\displaystyle \rho } is the mass density, and u {\displaystyle \mathbf {u} } is the fluid velocity. σ = − P I + τ d = − P I + τ c + τ s {\displaystyle {\boldsymbol {\sigma }}=-P\mathbf {I} +{\boldsymbol {\tau }}_{d}=-P\mathbf {I} +{\boldsymbol {\tau }}_{c}+{\boldsymbol {\tau }}_{s}} For fluids, the spatial or Eularian form of the governing equations is preferred to the material or Lagrangian form, and the concept of velocity gradient is preferred to the equivalent concept of strain rate tensor. Stokes assumptions for a wide class of fluids therefore says that for an isotropic fluid the compression and shear stresses are proportional to their velocity gradients, C {\displaystyle \mathbf {C} } and S 0 {\displaystyle \mathbf {S} _{0}} respectively, and named this class of fluids for Newtonian fluids. The classic defining equation for volume viscosity ζ {\displaystyle \zeta } and shear viscosity η {\displaystyle \eta } are respectively: τ c = 3 ζ C {\displaystyle {\boldsymbol {\tau }}_{c}=3\zeta \mathbf {C} } τ s = 2 η S 0 {\displaystyle {\boldsymbol {\tau }}_{s}=2\eta \mathbf {S} _{0}} The classic compression velocity "gradient" is a diagonal tensor that describes a compressing (alt. expanding) flow or attenuating sound waves: C = 1 3 ( ∇ ⋅ u ) I {\displaystyle \mathbf {C} ={\frac {1}{3}}\left(\nabla \!\cdot \!\mathbf {u} \right)\mathbf {I} } The classic Cauchy shear velocity gradient, is a symmetric and traceless tensor that describes a pure shear flow (where pure means excluding normal outflow which in mathematical terms means a traceless matrix) around e.g. a wing, propeller, ship hull or in e.g. 
a river, pipe or vein with or without bends and boundary skin: S 0 = S − 1 3 ( ∇ ⋅ u ) I {\displaystyle \mathbf {S} _{0}=\mathbf {S} -{\frac {1}{3}}\left(\nabla \!\cdot \!\mathbf {u} \right)\mathbf {I} } where the symmetric gradient matrix with non-zero trace is S = 1 2 [ ∇ u + ( ∇ u ) T ] {\displaystyle \mathbf {S} ={\frac {1}{2}}\left[\nabla \mathbf {u} +\left(\nabla \mathbf {u} \right)^{\mathrm {T} }\right]} How much the volume viscosity contributes to the flow characteristics in e.g. a choked flow such as a convergent-divergent nozzle or valve flow is not well known, but the shear viscosity is by far the most utilized viscosity coefficient. The volume viscosity will now be abandoned, and the rest of the article will focus on the shear viscosity. Another application of shear viscosity models is Darcy's law for multiphase flow. u a = − η a − 1 K r a ⋅ K ⋅ ( ∇ P a − ρ a g ) {\displaystyle \mathbf {u} _{a}=-\eta _{a}^{-1}\mathbf {K} _{ra}\cdot \mathbf {K} \cdot \left(\nabla P_{a}-\rho _{a}\mathbf {g} \right)} where a = water, oil, gas and K {\displaystyle \mathbf {K} } and K r a {\displaystyle \mathbf {K} _{ra}} are absolute and relative permeability, respectively. These three (vector) equations model the flow of water, oil and natural gas in subsurface oil and gas reservoirs in porous rocks. Although the pressure changes are large, the fluid phases will flow slowly through the reservoir due to the flow restriction caused by the porous rock. The above definition is based on a shear-driven fluid motion that in its most general form is modelled by a shear stress tensor and a velocity gradient tensor. The fluid dynamics of a shear flow is, however, very well illustrated by the simple Couette flow. In this experimental layout, the shear stress τ s {\displaystyle {\boldsymbol {\tau }}_{s}} and the shear velocity gradient S 0 {\displaystyle \mathbf {S} _{0}} (where now S 0 = S {\displaystyle \mathbf {S} _{0}=\mathbf {S} } ) take the simple form: τ = η S where τ = F A and S = d u d y = u m a x y m a x {\displaystyle \tau =\eta S\quad {\text{where}}\quad \tau ={\frac {F}{A}}\quad {\text{and}}\quad S={du_{} \over dy}={u_{max} \over y_{max}}} Inserting these simplifications gives a defining equation that can be used to interpret experimental measurements: F A = η d u d y = η u m a x y m a x {\displaystyle {\frac {F}{A}}=\eta {du_{} \over dy}=\eta {u_{max} \over y_{max}}} where A {\displaystyle A} is the area of the moving plate and the stagnant plate, and y {\displaystyle y} is the spatial coordinate normal to the plates. In this experimental setup, a value for the force F {\displaystyle F} is first selected. Then a maximum velocity u m a x {\displaystyle u_{max}} is measured, and finally both values are entered in the equation to calculate viscosity. This gives one value for the viscosity of the selected fluid. If another value of the force is selected, another maximum velocity will be measured. This will result in another viscosity value if the fluid is a non-Newtonian fluid such as paint, but it will give the same viscosity value for a Newtonian fluid such as water, petroleum oil or gas. If another parameter like temperature, T {\displaystyle T} , is changed, and the experiment is repeated with the same force, a new value for viscosity will be calculated, for both non-Newtonian and Newtonian fluids. The great majority of material properties vary as a function of temperature, and this holds for viscosity as well. The viscosity is also a function of pressure and, of course, the material itself.
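The defining equation above translates directly into a calculation: given the measured force per plate area and the measured maximum velocity over the gap width, the viscosity follows as η = (F/A)·y_max/u_max. The sketch below is a minimal illustration of that bookkeeping; the numerical values are illustrative, not measured data.

```python
def couette_viscosity(force, area, u_max, y_max):
    """Shear viscosity from a simple Couette experiment:
    F/A = eta * u_max / y_max  =>  eta = (F/A) * y_max / u_max  [Pa s]."""
    shear_stress = force / area          # tau = F/A, Pa
    shear_rate = u_max / y_max           # S = du/dy, 1/s
    return shear_stress / shear_rate

# Illustrative numbers: 0.1 N dragging a 0.05 m^2 plate at 2 m/s over a 1 mm gap.
eta = couette_viscosity(force=0.1, area=0.05, u_max=2.0, y_max=1e-3)
print(eta)   # 0.001 Pa s, roughly water-like
# For a Newtonian fluid, repeating with a different force returns the same eta;
# for a non-Newtonian fluid (e.g. paint) the returned value changes with the force.
```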
For a fluid mixture, this means that the shear viscosity will also vary according to the fluid composition. To map the viscosity as a function of all these variables requires a large sequence of experiments that generate an even larger set of numbers called measured data, observed data or observations. Prior to, or at the same time as, the experiments, a material property model (or material model for short) is proposed to describe or explain the observations. This mathematical model is called the constitutive equation for shear viscosity. It is usually an explicit function that contains some empirical parameters that are adjusted to match the observations as well as the mathematical function is capable of. For a Newtonian fluid, the constitutive equation for shear viscosity is generally a function of temperature, pressure and fluid composition: η = f ( T , P , w ) where w = x , y , z , 1 p u r e f l u i d {\displaystyle \eta =f(T,P,\mathbf {w} )\quad {\text{where}}\quad \mathbf {w} =\mathbf {x} ,\mathbf {y} ,\mathbf {z} ,1_{purefluid}} where x {\displaystyle \mathbf {x} } is the liquid phase composition with molfraction x i {\displaystyle x_{i}} for fluid component i, and y {\displaystyle \mathbf {y} } and z {\displaystyle \mathbf {z} } are the gas phase and total fluid compositions, respectively. For a non-Newtonian fluid (in the sense of a generalized Newtonian fluid), the constitutive equation for shear viscosity is also a function of the shear velocity gradient: η = f ( T , P , w , S 0 ) where w = x , y , z , 1 p u r e f l u i d {\displaystyle \eta =f(T,P,\mathbf {w} ,\mathbf {S} _{0})\quad {\text{where}}\quad \mathbf {w} =\mathbf {x} ,\mathbf {y} ,\mathbf {z} ,1_{purefluid}} The presence of the velocity gradient in the functional relationship for non-Newtonian fluids indicates that viscosity is generally not an equation of state, so the term constitutive equation will in general be used for viscosity equations (or functions). The free variables in the two equations above also indicate that specific constitutive equations for shear viscosity will be quite different from the simple defining equation for shear viscosity that is shown further up. The rest of this article will show that this is certainly true. Non-Newtonian fluids will therefore be abandoned, and the rest of this article will focus on Newtonian fluids. == Dilute gas limit and scaled variables == === Elementary kinetic theory === In textbooks on elementary kinetic theory one can find results for dilute gas modeling that have widespread use. Derivation of the kinetic model for shear viscosity usually starts by considering a Couette flow where two parallel plates are separated by a gas layer. This non-equilibrium flow is superimposed on a Maxwell–Boltzmann equilibrium distribution of molecular motions. Let σ {\displaystyle \sigma } be the collision cross section of one molecule colliding with another. The number density C {\displaystyle C} is defined as the number of molecules per (extensive) volume C = N / V {\displaystyle C=N/V} .
The collision cross section per volume or collision cross section density is C σ {\displaystyle C\sigma } , and it is related to the mean free path l {\displaystyle l} by l = 1 2 C σ {\displaystyle l={\frac {1}{{\sqrt {2}}C\sigma }}} Combining the kinetic equations for molecular motion with the defining equation of shear viscosity gives the well known equation for shear viscosity for dilute gases: η 0 = 2 3 π ⋅ m k B T σ = 2 3 π ⋅ M R T σ N A {\displaystyle \eta _{0}={\frac {2}{3{\sqrt {\pi }}}}\cdot {\frac {\sqrt {mk_{B}T}}{\sigma }}={\frac {2}{3{\sqrt {\pi }}}}\cdot {\frac {\sqrt {MRT}}{\sigma N_{A}}}} where k B ⋅ N A = R and M = m ⋅ N A {\displaystyle k_{B}\cdot N_{A}=R\quad {\text{and}}\quad M=m\cdot N_{A}} where k B {\displaystyle k_{B}} is the Boltzmann constant, N A {\displaystyle N_{A}} is the Avogadro constant, R {\displaystyle R} is the gas constant, M {\displaystyle M} is the molar mass and m {\displaystyle m} is the molecular mass. The equation above presupposes that the gas density is low (i.e. the pressure is low), hence the subscript zero in the variable η 0 {\displaystyle \eta _{0}} . This implies that the kinetic translational energy dominates over rotational and vibrational molecule energies. The viscosity equation displayed above further presupposes that there is only one type of gas molecules, and that the gas molecules are perfect elastic hard core particles of spherical shape. This assumption of particles being like billiard balls with radius r {\displaystyle r} , implies that the collision cross section of one molecule can be estimated by σ = π ( 2 r ) 2 = π d 2 for monomolecular gases and monoparticle beam experiments {\displaystyle \sigma =\pi \left(2r_{}\right)^{2}=\pi d^{2}\qquad \qquad \qquad \,\quad {\text{for monomolecular gases and monoparticle beam experiments }}} σ i j = π ( r i + r j ) 2 = π 4 ( d i + d j ) 2 for binary collision in gas mixtures and dissimilar bullet / target particles {\displaystyle \sigma _{ij}=\pi \left(r_{i}+r_{j}\right)^{2}={\frac {\pi }{4}}\left(d_{i}+d_{j}\right)^{2}\quad {\text{for binary collision in gas mixtures and dissimilar bullet / target particles}}} But molecules are not hard particles. For a reasonably spherical molecule the interaction potential is more like the Lennard-Jones potential or even more like the Morse potential. Both have a negative part that attracts the other molecule from distances much longer than the hard core radius, and thus models the van der Waals forces. The positive part models the repulsive forces as the electron clouds of the two molecules overlap. The radius for zero interaction potential is therefore appropriate for estimating (or defining) the collision cross section in kinetic gas theory, and the r-parameter (conf. r , r i {\displaystyle r,r_{i}} ) is therefore called kinetic radius. The d-parameter (where d = 2 r , d i = 2 r i {\displaystyle d=2r,d_{i}=2r_{i}} ) is called kinetic diameter. 
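The elementary hard-sphere result above can be evaluated directly. The sketch below computes η0 = (2/(3√π))·√(m k_B T)/σ with σ = πd², assuming SI units; the nitrogen-like mass and kinetic diameter are illustrative, and the function name is not from any particular library.

```python
import math

k_B = 1.380649e-23   # J/K

def dilute_gas_viscosity(T, m, d):
    """Elementary kinetic theory estimate
    eta_0 = (2 / (3*sqrt(pi))) * sqrt(m*k_B*T) / sigma, with sigma = pi*d**2,
    for a dilute gas of hard spheres of mass m [kg] and kinetic diameter d [m]."""
    sigma = math.pi * d ** 2                       # collision cross section, m^2
    return 2.0 / (3.0 * math.sqrt(math.pi)) * math.sqrt(m * k_B * T) / sigma

# Illustrative: nitrogen-like molecule, m ~ 4.65e-26 kg, d ~ 3.7e-10 m, at 300 K.
print(dilute_gas_viscosity(300.0, 4.65e-26, 3.7e-10))   # Pa s, of order 1e-5
```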
The macroscopic collision cross section σ ⋅ N A {\displaystyle \sigma \cdot N_{A}} is often associated with the critical molar volume V c {\displaystyle V_{c}} , and often without further proof or supporting arguments, by σ N A ∝ V c 2 / 3 or σ N A = 2 3 π ⋅ K r v − 1 V c 2 / 3 {\displaystyle \sigma N_{A}\propto V_{c}^{2/3}\quad {\text{or}}\quad \sigma N_{A}={\frac {2}{3{\sqrt {\pi }}}}\cdot K_{rv}^{-1}V_{c}^{2/3}} where K r v {\displaystyle K_{rv}} is molecular shape parameter that is taken as an empirical tuning parameter, and the pure numerical part is included in order to make the final viscosity formula more suitably for practical use. Inserting this interpretation of σ N A {\displaystyle \sigma N_{A}} , and use of reduced temperature T r {\displaystyle T_{r}} , gives η 0 = T r K r v D r v where T r = T / T c and {\displaystyle \eta _{0}={\sqrt {T_{r}}}K_{rv}D_{rv}\quad {\text{where}}\quad T_{r}=T/T_{c}\quad {\text{and}}} D r v = ( M R T c ) 1 / 2 V c − 2 / 3 = R 1 / 2 D v {\displaystyle D_{rv}=\left(MRT_{c}\right)^{1/2}V_{c}^{-2/3}=R^{1/2}D_{v}} which implies that the empirical parameter K r v {\displaystyle K_{rv}} is dimensionless, and that D r v {\displaystyle D_{rv}} and η 0 {\displaystyle \eta _{0}} have the same units. The parameter D r v {\displaystyle D_{rv}} is a scaling parameter that involves the gas constant R {\displaystyle R} and the critical molar volume V c {\displaystyle V_{c}} , and it used to scale the viscosity. In this article the viscosity scaling parameter will frequently be denoted by D x y z {\displaystyle D_{xyz}} which involve one or more of the parameters R {\displaystyle R} , V c {\displaystyle V_{c}} , P c {\displaystyle P_{c}} in addition to critical temperature T c {\displaystyle T_{c}} and molar mass M {\displaystyle M} . Incomplete scaling parameters, such as the parameter D v {\displaystyle D_{v}} above where the gas constant R {\displaystyle R} is absorbed into the empirical constant, will often be encountered in practice. In this case the viscosity equation becomes η 0 = T r K v D v {\displaystyle \eta _{0}={\sqrt {T_{r}}}K_{v}D_{v}} where the empirical parameter K v {\displaystyle K_{v}} is not dimensionless, and a proposed viscosity model for dense fluid will not be dimensionless if D v {\displaystyle D_{v}} is the common scaling factor. Notice that η 0 = T r K r v D r v = T r K v D v ⟹ K v = R 1 / 2 K r v {\displaystyle \eta _{0}={\sqrt {T_{r}}}K_{rv}D_{rv}={\sqrt {T_{r}}}K_{v}D_{v}\implies K_{v}=R^{1/2}K_{rv}} Inserting the critical temperature in the equation for dilute viscosity gives η 0 c = K r v D r v = K v D v {\displaystyle \eta _{0c}=K_{rv}D_{rv}=K_{v}D_{v}} The default values of the parameters K r v {\displaystyle K_{rv}} and K v {\displaystyle K_{v}} should be fairly universal values, although K v {\displaystyle K_{v}} depends on the unit system. However, the critical molar volume in the scaling parameters D r v {\displaystyle D_{rv}} and D v {\displaystyle D_{v}} is not easily accessible from experimental measurements, and that is a significant disadvantage. The general equation of state for a real gas is usually written as P V = Z R T ⟹ P c V c = Z c R T c {\displaystyle PV=ZRT\implies P_{c}V_{c}=Z_{c}RT_{c}} where the critical compressibility factor Z c {\displaystyle Z_{c}} , which reflects the volumetric deviation of the real gases from the ideal gas, is also not easily accessible from laboratory experiments. However, critical pressure and critical temperature are more accessible from measurements. 
It should be added that critical viscosity is also not readily available from experiments. Uyehara and Watson (1944) proposed to absorb a universal average value of Z c {\displaystyle Z_{c}} (and the gas constant R {\displaystyle R} ) into a default value of the tuning parameter K p {\displaystyle K_{p}} as a practical solution to the difficulties of getting experimental values for V c {\displaystyle V_{c}} and/or Z c {\displaystyle Z_{c}} . The viscosity model for a dilute gas is then η 0 = T r K p D p where T r = T / T c and {\displaystyle \eta _{0}={\sqrt {T_{r}}}K_{p}D_{p}\quad {\text{where}}\quad T_{r}=T/T_{c}\quad {\text{and}}} D p = T c − 1 / 6 P c 2 / 3 M 1 / 2 {\displaystyle D_{p}=T_{c}^{-1/6}P_{c}^{2/3}M^{1/2}} By inserting the critical temperature in the formula above, the critical viscosity is calculated as η 0 c = K p D p {\displaystyle \eta _{0c}=K_{p}D_{p}} Based on an average critical compressibility factor of Z ¯ c = 0.275 {\displaystyle {\bar {Z}}_{c}=0.275} and measured critical viscosity values of 60 different molecule types, Uyehara and Watson (1944) determined an average value of K p {\displaystyle K_{p}} to be K ¯ p = 7.7 ⋅ 1.01325 2 / 3 ≈ 7.77 for [ η 0 ] = μ P and [ P c ] = b a r {\displaystyle {\bar {K}}_{p}=7.7\cdot 1.01325^{2/3}\approx 7.77\quad {\text{for}}\quad \left[\eta _{0}\right]=\mu P\quad {\text{and}}\quad \left[P_{c}\right]=bar} Cubic equations of state (EOS) are very popular equations that are sufficiently accurate for most industrial computations of both vapor-liquid equilibrium and molar volume. Their weakest points are perhaps the molar volume in the liquid region and in the critical region. Accepting the cubic EOS, the molar hard core volume b {\displaystyle b} can be calculated from the turning point constraint at the critical point. This gives b = Ω b R T c P c which is similar to V c = Z ¯ c R T c P c {\displaystyle b=\Omega _{b}{\frac {RT_{c}}{P_{c}}}\quad {\text{which is similar to}}\quad V_{c}={\bar {Z}}_{c}{\frac {RT_{c}}{P_{c}}}} where the constant Ω b {\displaystyle \Omega _{b}} is a universal constant that is specific to the selected variant of the cubic EOS. This says that using D p {\displaystyle D_{p}} , and disregarding fluid component variations of Z c {\displaystyle Z_{c}} , is in practice equivalent to saying that the macroscopic collision cross section is proportional to the hard core molar volume rather than the critical molar volume. In a fluid mixture like a petroleum gas or oil there are lots of molecule types, and within this mixture there are families of molecule types (i.e. groups of fluid components). The simplest group is the n-alkanes, which are long chains of CH2-elements. The more CH2-elements, or carbon atoms, the longer the molecule. Critical viscosity and critical thermodynamic properties of n-alkanes therefore show a trend, or functional behaviour, when plotted against molecular mass or number of carbon atoms in the molecule (i.e. carbon number). Parameters in equations for properties like viscosity usually also show such trend behaviour. This means that η 0 c j = K p j D p j ≠ K ¯ p D p j for many or most fluid components j {\displaystyle \eta _{0cj}=K_{pj}D_{pj}\neq {\bar {K}}_{p}D_{pj}\quad {\text{for many or most fluid components j }}} This says that the scaling parameter D p {\displaystyle D_{p}} alone is not a true or complete scaling factor unless all fluid components have a fairly similar (and preferably spherical) shape.
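The pressure-based scaling parameter and the Uyehara–Watson default value can be coded in a few lines. The sketch below follows the unit convention stated above (Pc in bar, T in K, M in g/mol, viscosity in μP); the methane-like critical constants are illustrative, and since Kp is in general a component-specific tuning parameter, using the universal average value only gives a rough estimate.

```python
def dp_scaling(Tc, Pc, M):
    """Pressure-based viscosity scaling parameter D_p = Tc**(-1/6) * Pc**(2/3) * M**(1/2),
    with Tc in K, Pc in bar and M in g/mol; D_p then carries the microPoise unit."""
    return Tc ** (-1.0 / 6.0) * Pc ** (2.0 / 3.0) * M ** 0.5

def critical_viscosity_uw(Tc, Pc, M, Kp=7.77):
    """Uyehara-Watson estimate eta_0c = Kp * D_p [microPoise], using their average Kp."""
    return Kp * dp_scaling(Tc, Pc, M)

def dilute_viscosity(T, Tc, Pc, M, Kp=7.77):
    """Dilute-gas model of this section, eta_0 = sqrt(T/Tc) * Kp * D_p [microPoise];
    Kp is in general a component-specific tuning parameter."""
    return (T / Tc) ** 0.5 * Kp * dp_scaling(Tc, Pc, M)

# Illustrative, methane-like critical constants: Tc ~ 190.6 K, Pc ~ 46.0 bar, M = 16.04 g/mol.
print(critical_viscosity_uw(Tc=190.6, Pc=46.0, M=16.04))    # microPoise
print(dilute_viscosity(T=300.0, Tc=190.6, Pc=46.0, M=16.04))
```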
The most important result of this kinetic derivation is perhaps not the viscosity formula, but the semi-empirical parameter D p {\displaystyle D_{p}} that is used extensively throughout the industry and applied science communities as a scaling factor for (shear) viscosity. The literature often reports the reciprocal parameter and denotes it as ξ {\displaystyle \xi } . The dilute gas viscosity contribution to the total viscosity of a fluid will only be important when predicting the viscosity of vapors at low pressures or the viscosity of dense fluids at high temperatures. The viscosity model for dilute gas that is shown above is widely used throughout the industry and applied science communities. Therefore, many researchers do not specify a dilute gas viscosity model when they propose a total viscosity model, but leave it to the user to select and include the dilute gas contribution. Some researchers do not include a separate dilute gas model term, but propose an overall gas viscosity model that covers the entire pressure and temperature range they investigated. In this section our central macroscopic variables and parameters and their units are temperature T {\displaystyle T} [K], pressure P {\displaystyle P} [bar], molar mass M {\displaystyle M} [g/mol], and low density (low pressure or dilute) gas viscosity η 0 {\displaystyle \eta _{0}} [μP]. It is, however, common in the industry to use another unit for liquid and high density gas viscosity, η {\displaystyle \eta } [cP]. === Kinetic theory === From Boltzmann's equation Chapman and Enskog derived a viscosity model for a dilute gas. η 0 × 10 6 = 2.6693 M T σ 2 Ω ( T ∗ ) where T ∗ = k B T / ε {\displaystyle \eta _{0}\times 10^{6}=2.6693{\frac {\sqrt {MT}}{\sigma ^{2}\Omega \left(T^{*}\right)}}\quad {\text{where}}\quad T^{*}=k_{B}T/\varepsilon } where ε {\displaystyle \varepsilon } is (the absolute value of) the energy-depth of the potential well (see e.g. Lennard-Jones interaction potential). The term Ω ( T ∗ ) {\displaystyle \Omega (T^{*})} is called the collision integral, and it occurs as a general function of temperature that the user must specify, which is not a simple task. This illustrates the situation for the molecular or statistical approach: the (analytical) mathematics gets incredibly complex for polar and non-spherical molecules, making it very difficult to achieve practical models for viscosity based on a statistical approach. The purely statistical approach will therefore be left out in the rest of this article. === Empirical correlation === Zéberg-Mikkelsen (2001) proposed empirical models for the gas viscosity of fairly spherical molecules that are displayed in the section on Friction Force theory and its models for dilute gases and simple light gases. These simple empirical correlations illustrate that empirical methods compete with the statistical approach with respect to gas viscosity models for simple fluids (simple molecules). === Kinetic theory with empirical extension === The gas viscosity model of Chung et alios (1988) is a combination of the Chapman–Enskog (1964) kinetic theory of viscosity for dilute gases and the empirical expression of Neufeld et alios (1972) for the reduced collision integral, but expanded empirically to handle polyatomic, polar and hydrogen bonding fluids over a wide temperature range.
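A minimal sketch of the Chapman–Enskog dilute-gas expression above, with the collision integral Ω(T*) supplied by the Neufeld et al. (1972) correlation mentioned above in connection with the Chung model. The Neufeld coefficients and the unit convention used here (σ in Å, M in g/mol, result in Pa·s) are commonly quoted values and should be treated as assumptions, since the article does not list them; the Lennard-Jones parameters in the example are likewise illustrative.

```python
import math

def omega_neufeld(T_star):
    """Reduced collision integral Omega(T*) for the Lennard-Jones potential,
    Neufeld et al. (1972) correlation (coefficients quoted from the literature,
    treated as an assumption here)."""
    return (1.16145 / T_star ** 0.14874
            + 0.52487 * math.exp(-0.77320 * T_star)
            + 2.16178 * math.exp(-2.43787 * T_star))

def eta0_chapman_enskog(T, M, sigma, eps_over_k):
    """Chapman-Enskog dilute-gas viscosity,
    eta_0 = 2.6693e-6 * sqrt(M*T) / (sigma**2 * Omega(T*))  [Pa s],
    with M in g/mol, sigma in Angstrom and eps_over_k = epsilon/k_B in K."""
    T_star = T / eps_over_k
    return 2.6693e-6 * math.sqrt(M * T) / (sigma ** 2 * omega_neufeld(T_star))

# Illustrative Lennard-Jones parameters for a nitrogen-like gas:
# sigma ~ 3.7 A, eps/k ~ 95 K (values assumed for the example).
print(eta0_chapman_enskog(T=300.0, M=28.0, sigma=3.7, eps_over_k=95.0))  # about 1.7e-5 Pa s
```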
This viscosity model illustrates a successful combination of kinetic theory and empiricism, and it is displayed in the section on Significant structure theory and its model for the gas-like contribution to the total fluid viscosity. === Trend functions and scaling === In the section with models based on elementary kinetic theory, several variants of scaling the viscosity equation were discussed, and they are displayed below for fluid component i, as a service to the reader. η 0 i = T r i K r v i D r v i where D r v i = M i R T c i ⋅ V c i − 2 / 3 {\displaystyle \eta _{0i}={\sqrt {T_{ri}}}K_{rvi}D_{rvi}\quad {\text{where}}\quad D_{rvi}={\sqrt {M_{i}RT_{ci}}}\cdot V_{ci}^{-2/3}} η 0 i = T r i K v i D v i where D v i = M i T c i ⋅ V c i − 2 / 3 {\displaystyle \eta _{0i}={\sqrt {T_{ri}}}K_{vi}D_{vi}\ \ \,\quad {\text{where}}\quad D_{vi}\ \ ={\sqrt {M_{i}T_{ci}}}\cdot V_{ci}^{-2/3}} η 0 i = T r i K p i D p i where D p i = M i 1 / 2 P c i 2 / 3 ⋅ T c i − 1 / 6 {\displaystyle \eta _{0i}={\sqrt {T_{ri}}}K_{pi}D_{pi}\ \ \,\quad {\text{where}}\quad D_{pi}\ =M_{i}^{1/2}P_{ci}^{2/3}\cdot T_{ci}^{-1/6}} Zéberg-Mikkelsen (2001) proposed an empirical correlation for the V c i {\displaystyle V_{ci}} parameter for n-alkanes, which is V c i − 1 = A + B ⋅ P c i R T c i ⟺ V c i = R T c i A R T c i + B P c i {\displaystyle V_{ci}^{-1}=A+B\cdot {\frac {P_{ci}}{RT_{ci}}}\iff V_{ci}={\frac {RT_{ci}}{ART_{ci}+BP_{ci}}}} A = 0.000235751 m o l / c m 3 and B = 3.42770 {\displaystyle A=0.000235751\ mol/cm^{3}\quad {\text{and}}\quad B=3.42770} The critical molar volume of component i, V c i {\displaystyle V_{ci}} , is related to the critical mole density ρ n c i {\displaystyle \rho _{nci}} and critical mole concentration c c i {\displaystyle c_{ci}} by the equation V c i − 1 = ρ n c i = c c i {\displaystyle V_{ci}^{-1}=\rho _{nci}=c_{ci}} . From the above equation for V c i − 1 {\displaystyle V_{ci}^{-1}} it follows that Z c i = P c i A R T c i + B P c i ⟺ Z c i R T c i P c i V c i = 1 {\displaystyle Z_{ci}={\frac {P_{ci}}{ART_{ci}+BP_{ci}}}\iff {\frac {Z_{ci}RT_{ci}}{P_{ci}V_{ci}}}=1} where Z c i {\displaystyle Z_{ci}} is the compressibility factor for component i, which is often used as an alternative to V c i {\displaystyle V_{ci}} . By establishing a trend function for the parameter V c i {\displaystyle V_{ci}} for a homologous series, group or family of molecules, parameter values for unknown fluid components in the homologous group can be found by interpolation and extrapolation, and parameter values can easily be re-generated when needed later. The use of trend functions for parameters of homologous groups of molecules has greatly enhanced the usefulness of viscosity equations (and thermodynamic EOSs) for fluid mixtures such as petroleum gas and oil.
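The n-alkane correlation for the critical molar volume given above is straightforward to evaluate. The sketch below uses R = 83.14 cm³·bar/(mol·K) so that the result comes out in cm³/mol, consistently with the stated unit equations; the methane-like inputs are illustrative.

```python
R_BAR = 83.14  # gas constant in cm^3 bar / (mol K)

# Zeberg-Mikkelsen (2001) n-alkane correlation constants, as given above.
A = 0.000235751   # mol/cm^3
B = 3.42770       # dimensionless

def vc_n_alkane(Tc, Pc):
    """Critical molar volume estimate V_c = R*Tc / (A*R*Tc + B*Pc) in cm^3/mol,
    with Tc in K and Pc in bar."""
    return R_BAR * Tc / (A * R_BAR * Tc + B * Pc)

def zc_n_alkane(Tc, Pc):
    """Corresponding critical compressibility factor Z_c = Pc*Vc/(R*Tc)."""
    return Pc * vc_n_alkane(Tc, Pc) / (R_BAR * Tc)

# Illustrative, methane-like critical constants (Tc ~ 190.6 K, Pc ~ 46.0 bar):
print(vc_n_alkane(190.6, 46.0))   # cm^3/mol
print(zc_n_alkane(190.6, 46.0))   # dimensionless
```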
Uyehara and Watson (1944) proposed a correlation for critical viscosity (for fluid component i) for n-alkanes using their average parameter K ¯ p {\displaystyle {\bar {K}}_{p}} and the classical pressure dominated scaling parameter D p i {\displaystyle D_{pi}} : η c i = K ¯ p D p i {\displaystyle \eta _{ci}={\bar {K}}_{p}D_{pi}} K ¯ p = 7.7 ⋅ 1.01325 2 / 3 ≈ 7.77 for [ η 0 ] = μ P and [ P c ] = b a r {\displaystyle \ \ {\bar {K}}_{p}\,=7.7\cdot 1.01325^{2/3}\approx 7.77\quad {\text{for}}\quad \left[\eta _{0}\right]=\mu P\quad {\text{and}}\quad \left[P_{c}\right]=bar} Zéberg-Mikkelsen (2001) proposed an empirical correlation for critical viscosity ηci parameter for n-alkanes, which is η c i = C ⋅ P c i M i D {\displaystyle \eta _{ci}=C\cdot P_{ci}M_{i}^{D}} C = 0.597556 μ P / b a r ⋅ ( g / m o l ) − D and D = 0.601652 {\displaystyle \ C=0.597556\ \mu P/bar\cdot (g/mol)^{-D}\quad {\text{and}}\quad D=0.601652} The unit equations for the two constitutive equations above by Zéberg-Mikkelsen (2001) are [ P c ] = b a r and [ V c ] = [ R T c / P c ] = c m 3 / m o l and [ T ] = K and [ Z c ] = 1 and [ η c ] = μ P {\displaystyle [P_{c}]=bar\quad {\text{and}}\quad [V_{c}]=[RT_{c}/P_{c}]=cm^{3}/mol\quad {\text{and}}\quad [T]=K\quad {\text{and}}\quad [Z_{c}]=1\quad {\text{and}}\quad [\eta _{c}]=\mu P} Inserting the critical temperature in the three viscosity equations from elementary kinetic theory gives three parameter equations. η c i = K r v i D r v i = K v i D v i = K p i D p i or {\displaystyle \eta _{ci}=K_{rvi}D_{rvi}=K_{vi}D_{vi}=K_{pi}D_{pi}\quad {\text{or}}\quad } K r v i = η c i D r v i and K v i = η c i D v i and K p i = η c i D p i {\displaystyle K_{rvi}={\frac {\eta _{ci}}{D_{rvi}}}\quad {\text{and}}\quad K_{vi}={\frac {\eta _{ci}}{D_{vi}}}\quad {\text{and}}\quad K_{pi}={\frac {\eta _{ci}}{D_{pi}}}} The three viscosity equations now coalesce to a single viscosity equation η 0 i = T r i η c i = T η c i T c i {\displaystyle \eta _{0i}={\sqrt {T_{ri}}}\eta _{ci}={\sqrt {T}}{\frac {\eta _{ci}}{\sqrt {T_{ci}}}}} because a nondimensional scaling is used for the entire viscosity equation. The standard nondimensionality reasoning goes like this: Creating nondimensional variables (with subscript D) by scaling gives η D i = η 0 i η c i and T D i = T T c i = T r i ⟹ η D i η c i = T D i K p i D p i {\displaystyle \eta _{Di}={\frac {\eta _{0i}}{\eta _{ci}}}\quad {\text{and}}\quad T_{Di}={\frac {T}{T_{ci}}}=T_{ri}\implies \eta _{Di}\eta _{ci}={\sqrt {T_{Di}}}K_{pi}D_{pi}} Claiming nondimensionality gives K p i D p i η c i = 1 ⟺ K p i = η c i D p i ⟹ η D i = T D i {\displaystyle {\frac {K_{pi}D_{pi}}{\eta _{ci}}}=1\iff K_{pi}={\frac {\eta _{ci}}{D_{pi}}}\implies \eta _{Di}={\sqrt {T_{Di}}}} The collision cross section and the critical molar volume which are both difficult to access experimentally, are avoided or circumvented. On the other hand, the critical viscosity has appeared as a new parameter, and critical viscosity is just as difficult to access experimentally as the other two parameters. Fortunately, the best viscosity equations have become so accurate that they justify calculation in the critical point, especially if the equation is matched to surrounding experimental data points. 
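Collecting the pieces above, the sketch below evaluates the Zéberg-Mikkelsen critical-viscosity correlation η_ci = C·Pci·Mi^D and then the coalesced dilute-gas expression η_0i = √(T/Tci)·η_ci. Units follow the stated unit equations (Pc in bar, M in g/mol, η in μP); the methane-like inputs are illustrative, and no claim of agreement with measured dilute-gas data is made here.

```python
# Zeberg-Mikkelsen (2001) critical viscosity correlation for n-alkanes, as given above.
C = 0.597556   # microPoise / (bar * (g/mol)**D)
D = 0.601652   # dimensionless exponent

def eta_critical(Pc, M):
    """Critical viscosity estimate eta_ci = C * Pci * Mi**D in microPoise,
    with Pc in bar and M in g/mol."""
    return C * Pc * M ** D

def eta_dilute(T, Tc, Pc, M):
    """Coalesced dilute-gas expression eta_0i = sqrt(T/Tci) * eta_ci."""
    return (T / Tc) ** 0.5 * eta_critical(Pc, M)

# Illustrative, methane-like inputs (Tc ~ 190.6 K, Pc ~ 46.0 bar, M = 16.04 g/mol):
print(eta_critical(46.0, 16.04))              # microPoise
print(eta_dilute(300.0, 190.6, 46.0, 16.04))  # microPoise
```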
== Classic mixing rules == === Classic mixing rules for gas === Wilke (1950) derived a mixing rule based on kinetic gas theory η g m i x = ∑ i = 1 N η g i 1 + 1 y i ∑ j = 1 , j ≠ i N y j φ i j {\displaystyle \eta _{gmix}=\sum _{i=1}^{N}{\frac {\eta _{gi}}{1+{\frac {1}{y_{i}}}\sum _{j=1,j\neq i}^{N}y_{j}\varphi _{ij}}}} φ i j = [ 1 + η 0 i η 0 j 2 ⋅ M j M i 4 ] 2 4 2 2 1 + M i M j 2 {\displaystyle \varphi _{ij}={\frac {\left[1+{\sqrt[{2}]{\frac {\eta _{0i}}{\eta _{0j}}}}\cdot {\sqrt[{4}]{\frac {M_{j}}{M_{i}}}}\right]^{2}}{{\frac {4}{\sqrt[{2}]{2}}}{\sqrt[{2}]{1+{\frac {M_{i}}{M_{j}}}}}}}} The Wilke mixing rule is capable of describing the correct viscosity behavior of gas mixtures showing a nonlinear and non-monotonical behavior, or showing a characteristic bump shape, when the viscosity is plotted versus mass density at critical temperature, for mixtures containing molecules of very different sizes. Due to its complexity, it has not gained widespread use. Instead, the slightly simpler mixing rule proposed by Herning and Zipperer (1936), is found to be suitable for gases of hydrocarbon mixtures. === Classic mixing rules for liquid === The classic Arrhenius (1887). mixing rule for liquid mixtures is ln ⁡ η l m i x = ∑ i = 1 N x i ln ⁡ η l i {\displaystyle \ln \eta _{lmix}=\sum _{i=1}^{N}x_{i}\ln \eta _{li}} where η l m i x {\displaystyle \eta _{lmix}} is the viscosity of the liquid mixture, η l i {\displaystyle \eta _{li}} is the viscosity (equation) for fluid component i when flowing as a pure fluid, and x i {\displaystyle x_{i}} is the molfraction of component i in the liquid mixture. The Grunberg-Nissan (1949) mixing rule extends the Arrhenius rule to ln ⁡ η l m i x = ∑ i = 1 N x i ln ⁡ η l i + ∑ i = 1 N ∑ j = 1 N x i x j d i j {\displaystyle \ln \eta _{lmix}=\sum _{i=1}^{N}x_{i}\ln \eta _{li}+\sum _{i=1}^{N}\sum _{j=1}^{N}x_{i}x_{j}d_{ij}} where d i j {\displaystyle d_{ij}} are empiric binary interaction coefficients that are special for the Grunberg-Nissan theory. Binary interaction coefficients are widely used in cubic EOS where they often are used as tuning parameters, especially if component j is an uncertain component (i.e. have uncertain parameter values). Katti-Chaudhri (1964) mixing rule is ln ⁡ ( η l m i x V l m i x ) = ∑ i = 1 N x i ln ⁡ ( η l i V l i ) {\displaystyle \ln \left(\eta _{lmix}V_{lmix}\right)=\sum _{i=1}^{N}x_{i}\ln \left(\eta _{li}V_{li}\right)} where V l i {\displaystyle V_{li}} is the partial molar volume of component i, and V l m i x {\displaystyle V_{lmix}} is the molar volume of the liquid phase and comes from the vapor-liquid equilibrium (VLE) calculation or the EOS for single phase liquid. A modification of the Katti-Chaudhri mixing rule is ln ⁡ ( η l m i x V ) = ∑ i = 1 N z i ln ⁡ ( η l i V l i ) + Δ G E R T {\displaystyle \ln \left(\eta _{lmix}V\right)=\sum _{i=1}^{N}z_{i}\ln \left(\eta _{li}V_{li}\right)+{\frac {\Delta G^{E}}{RT}}} Δ G E = ∑ i = 1 N ∑ j = 1 N z i z j E i j {\displaystyle \Delta G^{E}=\sum _{i=1}^{N}\sum _{j=1}^{N}z_{i}z_{j}E_{ij}} where G E {\displaystyle G^{E}} is the excess activation energy of the viscous flow, and E i j {\displaystyle E_{ij}} is the energy that is characteristic of intermolecular interactions between component i and component j, and therefore is responsible for the excess energy of activation for viscous flow. This mixing rule is theoretically justified by Eyring's representation of the viscosity of a pure fluid according to Glasstone et alios (1941). 
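Both the Wilke gas-mixture rule and the classic Arrhenius liquid rule lend themselves to a few lines of code. The sketch below implements them as written above; the pure-component viscosities, molar masses and compositions in the example calls are illustrative placeholders, not recommended values.

```python
import math

def wilke_gas_viscosity(y, eta, M):
    """Wilke (1950) mixing rule for the viscosity of a gas mixture.
    y: mole fractions, eta: pure-component dilute-gas viscosities, M: molar masses."""
    n = len(y)
    eta_mix = 0.0
    for i in range(n):
        s = 0.0
        for j in range(n):
            if j == i:
                continue
            phi_ij = (1.0 + math.sqrt(eta[i] / eta[j]) * (M[j] / M[i]) ** 0.25) ** 2 \
                     / ((4.0 / math.sqrt(2.0)) * math.sqrt(1.0 + M[i] / M[j]))
            s += y[j] * phi_ij
        eta_mix += eta[i] / (1.0 + s / y[i])
    return eta_mix

def arrhenius_liquid_viscosity(x, eta):
    """Arrhenius (1887) log-linear mixing rule for a liquid mixture."""
    return math.exp(sum(xi * math.log(ei) for xi, ei in zip(x, eta)))

# Illustrative binary gas mixture (viscosities in microPoise, molar masses in g/mol):
print(wilke_gas_viscosity(y=[0.6, 0.4], eta=[112.0, 179.0], M=[16.04, 28.01]))
# Illustrative binary liquid mixture (viscosities in cP):
print(arrhenius_liquid_viscosity(x=[0.5, 0.5], eta=[0.3, 1.0]))
```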
The quantity η l i V l i {\displaystyle \eta _{li}V_{li}} has been obtained from the time-correlation expression for shear viscosity by Zwanzig (1965). == Power series == Very often one simply selects a known correlation for the dilute gas viscosity η 0 {\displaystyle \eta _{0}} , and subtracts this contribution from the total viscosity which is measured in the laboratory. This gives a residual viscosity term, often denoted Δ η {\displaystyle \Delta \eta } , which represents the contribution of the dense fluid, η d f {\displaystyle \eta _{df}} . η d f = η − η 0 ⟺ η = η 0 + η d f {\displaystyle \eta _{df}=\eta -\eta _{0}\quad \iff \quad \eta =\eta _{0}+\eta _{df}} The dense fluid viscosity is thus defined as the viscosity in excess of the dilute gas viscosity. This technique is often used in developing mathematical models for both purely empirical correlations and models with a theoretical support. The dilute gas viscosity contribution becomes important when the zero density limit (i.e. zero pressure limit) is approached. It is also very common to scale the dense fluid viscosity by the critical viscosity, or by an estimate of the critical viscosity, which is a characteristic point far into the dense fluid region. The simplest model of the dense fluid viscosity is a (truncated) power series of reduced mole density or pressure. Jossi et al. (1962) presented such a model based on reduced mole density, but its most widespread form is the version proposed by Lohrenz et al. (1964) which is displayed below. [ η d f D p + 10 − 4 ] 1 / 4 = L B C {\displaystyle \left[{\frac {\eta _{df}}{D_{p}}}+10^{-4}\right]^{1/4}=LBC_{}} The LBC-function is then expanded in a (truncated) power series with empirical coefficients as displayed below. L B C = L B C ( ρ n r ) = ∑ i = 1 5 a i ρ n r i − 1 {\displaystyle LBC_{}=LBC_{}\left(\rho _{nr}\right)=\sum _{i=1}^{5}a_{i}\rho _{nr}^{i-1}} The final viscosity equation is thus η = η 0 − 10 − 4 D p + D p L 4 {\displaystyle \eta =\eta _{0}-10^{-4}D_{p}+D_{p}L_{}^{4}} η 0 = η 0 ( T ) {\displaystyle \eta _{0}=\eta _{0}\left(T\right)} D p = T c − 1 / 6 P c 2 / 3 M n 1 / 2 {\displaystyle D_{p}=T_{c}^{-1/6}P_{c}^{2/3}M_{n}^{1/2}} Local nomenclature list: === Mixture === η m i x = η 0 m i x − 10 − 4 D p m i x + D p m i x L m i x 4 {\displaystyle \eta _{mix}=\eta _{0mix}-10^{-4}D_{pmix}+D_{pmix}L_{mix}^{4}} L B C m i x = L B C m i x ( c r m i x ) = ∑ i = 1 5 a i c r m i x i − 1 {\displaystyle LBC_{mix}=LBC_{mix}\left(c_{rmix}\right)=\sum _{i=1}^{5}a_{i}c_{rmix}^{i-1}} D p m i x = T c m i x − 1 / 6 P c m i x 2 / 3 M m i x 1 / 2 {\displaystyle D_{pmix}=T_{cmix}^{-1/6}P_{cmix}^{2/3}M_{mix}^{1/2}} η 0 m i x = η 0 m i x ( T ) {\displaystyle \eta _{0mix}=\eta _{0mix}\left(T\right)} The formula for η 0 {\displaystyle \eta _{0}} that was chosen by LBC, is displayed in the section called Dilute gas contribution. === Mixing rules === T c m i x = ∑ i z i T c i {\displaystyle T_{cmix}=\sum _{i}z_{i}T_{ci}} M m i x = M n = ∑ i z i M i {\displaystyle M_{mix}=M_{n}=\sum _{i}z_{i}M_{i}} P c m i x = ∑ i z i P c i {\displaystyle P_{cmix}=\sum _{i}z_{i}P_{ci}} ρ n c m i x − 1 = V c m i x = ∑ i z i V c i + z C 7 + ⋅ V c C 7 + i < C 7 + {\displaystyle \rho _{ncmix}^{-1}=V_{cmix}=\sum _{i}z_{i}V_{ci}+z_{C7+}\cdot V_{cC7+}\quad i<C7+} The subscript C7+ refers to the collection of hydrocarbon molecules in a reservoir fluid with oil and/or gas that have 7 or more carbon atoms in the molecule. 
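Before turning to the C7+ characterization, the core of the LBC recipe described above reduces to a few lines once the dilute gas viscosity, the scaling parameter Dp and the reduced mole density are available. The sketch below is a minimal illustration; the five polynomial coefficients are the values commonly quoted for the Lohrenz-Bray-Clark correlation and should be verified against the original paper, and the unit system of the original correlation (viscosity in cP, with Pc in atm in Dp) is assumed.

```python
def lbc_viscosity(eta_0, D_p, rho_nr,
                  a=(0.1023, 0.023364, 0.058533, -0.040758, 0.0093324)):
    """
    Lohrenz-Bray-Clark residual viscosity sketch.
    eta_0  : dilute gas viscosity of the fluid (mixture)
    D_p    : Tc^(-1/6) * Pc^(2/3) * M^(1/2)
    rho_nr : reduced mole density = V_c / V
    a      : the five LBC coefficients (commonly quoted values; verify against the source)
    """
    lbc = sum(ai * rho_nr ** i for i, ai in enumerate(a))   # sum_{i=1..5} a_i * rho_nr^(i-1)
    eta_df = D_p * (lbc ** 4 - 1.0e-4)                       # residual (dense fluid) contribution
    return eta_0 + eta_df
```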
The critical volume of C7+ fraction has unit ft3/lb mole, and it is calculated by V c C 7 + = 21.573 + 0.015122 ⋅ M C 7 + − 27.656 ⋅ S G C 7 + + 0.070615 ⋅ M C 7 + S G C 7 + {\displaystyle V_{cC7+}=21.573+0.015122\cdot M_{C7+}-27.656\cdot SG_{C7+}+0.070615\cdot M_{C7+}SG_{C7+}} where S G C 7 + {\displaystyle SG_{C7+}} is the specific gravity of the C7+ fraction. T c i for i ≥ C 7 + or T c C 7 + is taken from EOS characterization {\displaystyle T_{ci}\quad {\text{for}}\quad i\geq C7+\quad {\text{or}}\quad T_{cC7+}\quad {\text{is taken from EOS characterization}}} M i for i ≥ C 7 + or M C 7 + is taken from EOS characterization {\displaystyle M_{i}\quad {\text{for}}\quad i\geq C7+\quad {\text{or}}\quad M_{C7+}\quad {\text{is taken from EOS characterization}}} P c i for i ≥ C 7 + or P c C 7 + is taken from EOS characterization {\displaystyle P_{ci}\quad {\text{for}}\quad i\geq C7+\quad {\text{or}}\quad P_{cC7+}\quad {\text{is taken from EOS characterization}}} The molar mass M i {\displaystyle M_{i}} (or molecular mass) is normally not included in the EOS formula, but it usually enters the characterization of the EOS parameters. === EOS === From the equation of state the molar volume of the reservoir fluid (mixture) is calculated. V m i x = V m i x ( T , P ) for 1 mole fluid {\displaystyle V_{mix}=V_{mix}(T,P)\quad {\text{for 1 mole fluid}}} The molar volume V {\displaystyle V} is converted to mole density ρ n {\displaystyle \rho _{n}} (also called mole concentration and denoted c {\displaystyle c} ), and then scaled to be reduced mole density ρ n r {\displaystyle \rho _{nr}} . ρ n m i x = 1 / V m i x a n d ρ n c m i x = 1 / V c m i x a n d ρ n r m i x = V c m i x / V m i x = ρ n m i x / ρ n c m i x {\displaystyle \rho _{nmix}=1/V_{mix}\quad and\quad \rho _{ncmix}=1/V_{cmix}\quad and\quad \rho _{nrmix}=V_{cmix}/V_{mix}=\rho _{nmix}/\rho _{ncmix}} === Dilute gas contribution === The correlation for dilute gas viscosity of a mixture is taken from Herning and Zipperer (1936) and is η 0 m i x ( T ) = ∑ i z i η 0 i ( T r i ) M i 1 / 2 ∑ j z j M j 1 / 2 i , j < C 7 + {\displaystyle \eta _{0mix}\left(T\right)={\frac {\sum _{i}z_{i}\eta _{0i}\left(T_{ri}\right)M_{i}^{1/2}}{\sum _{j}z_{j}M_{j}^{1/2}}}\quad i,j<C7+} The correlation for dilute gas viscosity of the individual components is taken from Stiel and Thodos (1961) and is η 0 i ( T r i ) = { 34 × 10 − 5 ⋅ D p i T r i 0.94 if T r i ⩽ 1.5 17.78 × 10 − 5 ⋅ D p i ( 4.58 ⋅ T r i − 1.67 ) 5 / 8 if T r i > 1.5 {\displaystyle \eta _{0i}\left(T_{ri}\right)={\begin{cases}34\times 10^{-5}\cdot D_{pi}T_{ri}^{0.94}&{\text{if}}\quad T_{ri}\leqslant 1.5\\17.78\times 10^{-5}\cdot D_{pi}\left(4.58\cdot T_{ri}-1.67\right)^{5/8}&{\text{if}}\quad T_{ri}>1.5\end{cases}}} where D p i = T c i − 1 / 6 P c i 2 / 3 M i 1 / 2 i < C 7 + {\displaystyle D_{pi}=T_{ci}^{-1/6}P_{ci}^{2/3}M_{i}^{1/2}\quad i<C7+} T r i = T T c i i < C 7 + {\displaystyle T_{ri}={\frac {T}{T_{ci}}}\quad i<C7+} == Corresponding state principle == The principle of corresponding states (CS principle or CSP) was first formulated by van der Waals, and it says that two fluids (subscript a and z) of a group (e.g. fluids of non-polar molecules) have approximately the same reduced molar volume (or reduced compressibility factor) when compared at the same reduced temperature and reduced pressure. 
In mathematical terms this is V a ( P r a , T r a ) V c a = V z ( P r z , T r z ) V c z ⟺ V a ( P a , T a ) = V c a V c z ⋅ V z ( P z = P a P c z P c a , T z = T a T c z T c a ) {\displaystyle {\frac {V_{a}\left(P_{ra},T_{ra}\right)}{V_{ca}}}={\frac {V_{z}\left(P_{rz},T_{rz}\right)}{V_{cz}}}\iff V_{a}\left(P_{a},T_{a}\right)={\frac {V_{ca}}{V_{cz}}}\cdot V_{z}\left(P_{z}={\frac {P_{a}P_{cz}}{P_{ca}}},T_{z}={\frac {T_{a}T_{cz}}{T_{ca}}}\right)} When the common CS principle above is applied to viscosity, it reads η ( P , T ) = η c η c z ⋅ η z ( P z , T z ) ≈ K p D p K p z D p z ⋅ η z ( P z , T z ) {\displaystyle \eta \left(P,T\right)={\frac {\eta _{c}}{\eta _{cz}}}\cdot \eta _{z}\left(P_{z},T_{z}\right)\approx {\frac {K_{p}D_{p}}{K_{pz}D_{pz}}}\cdot \eta _{z}\left(P_{z},T_{z}\right)} Note that the CS principle was originally formulated for equilibrium states, but it is now applied on a transport property - viscosity, and this tells us that another CS formula may be needed for viscosity. In order to increase the calculation speed for viscosity calculations based on CS theory, which is important in e.g. compositional reservoir simulations, while keeping the accuracy of the CS method, Pedersen et al. (1984, 1987, 1989) proposed a CS method that uses a simple (or conventional) CS formula when calculating the reduced mass density that is used in the rotational coupling constants (displayed in the sections below), and a more complex CS formula, involving the rotational coupling constants, elsewhere. === Mixture === The simple corresponding state principle is extended by including a rotational coupling coefficient α {\displaystyle \alpha } as suggested by Tham and Gubbins (1970). The reference fluid is methane, and it is given the subscript z. η m i x ( P , T ) = ( T c m i x T c z ) − 1 / 6 ⋅ ( P c m i x P c z ) 2 / 3 ⋅ ( M m i x M z ) 1 / 2 ⋅ α c m i x α c z ⋅ η z ( P z , T z ) {\displaystyle \eta _{mix}\left(P,T\right)=\left({\frac {T_{cmix}}{T_{cz}}}\right)^{-1/6}\cdot \left({\frac {P_{cmix}}{P_{cz}}}\right)^{2/3}\cdot \left({\frac {M_{mix}}{M_{z}}}\right)^{1/2}\cdot {\frac {\alpha _{cmix}}{\alpha _{cz}}}\cdot \eta _{z}\left(P_{z},T_{z}\right)} P z = P ⋅ P c z α z P c m i x α m i x {\displaystyle P_{z}={\frac {P\cdot P_{cz}\alpha _{z}}{P_{cmix}\alpha _{mix}}}} T z = T ⋅ T c z α z T c m i x α m i x {\displaystyle T_{z}={\frac {T\cdot T_{cz}\alpha _{z}}{T_{cmix}\alpha _{mix}}}} === Mixing rules === The interaction terms for critical temperature and critical volume are T c i j = ( T c i T c j ) 1 / 2 {\displaystyle T_{cij}=\left(T_{ci}T_{cj}\right)^{1/2}} V c i j = 1 8 ( V c i 1 / 3 + V c j 1 / 3 ) 3 {\displaystyle V_{cij}={\frac {1}{8}}\left(V_{ci}^{1/3}+V_{cj}^{1/3}\right)^{3}} The parameter V c i {\displaystyle V_{ci}} is usually uncertain or not available. One therefore wants to avoid this parameter. 
Replacing Z c i {\displaystyle Z_{ci}} with the generic average parameter Z ¯ c {\displaystyle {\bar {Z}}_{c}} for all components, gives V c i = R Z c i T c i / P c i = R ¯ z c T c i / P c i where R ¯ z c = R Z ¯ c {\displaystyle V_{ci}=RZ_{ci}T_{ci}/P_{ci}={\bar {R}}_{zc}T_{ci}/P_{ci}\quad {\text{where}}\quad {\bar {R}}_{zc}=R{\bar {Z}}_{c}} V c i j = 1 8 R z c ( ( T c i P c i ) 1 / 3 + ( T c j P c j ) 1 / 3 ) 3 {\displaystyle V_{cij}={\frac {1}{8}}R_{zc}\left(\left({\frac {T_{ci}}{P_{ci}}}\right)^{1/3}+\left({\frac {T_{cj}}{P_{cj}}}\right)^{1/3}\right)^{3}} T c m i x = ∑ i ∑ j z i z j V c i j T c i j ∑ i ∑ j z i z j V c i j {\displaystyle T_{cmix}={\frac {\sum _{i}\sum _{j}z_{i}z_{j}V_{cij}T_{cij}}{\sum _{i}\sum _{j}z_{i}z_{j}V_{cij}}}} The above expression for V c i j {\displaystyle V_{cij}} is now inserted into the equation for T c m i x {\displaystyle T_{cmix}} . This gives the following mixing rule T c m i x = ∑ i ∑ j z i z j ( ( T c i P c i ) 1 / 3 + ( T c j P c j ) 1 / 3 ) 3 ( T c i T c j ) 1 / 2 ∑ i ∑ j z i z j ( ( T c i P c i ) 1 / 3 + ( T c j P c j ) 1 / 3 ) 3 {\displaystyle T_{cmix}={\frac {\sum _{i}\sum _{j}z_{i}z_{j}\left(\left({\frac {T_{ci}}{P_{ci}}}\right)^{1/3}+\left({\frac {T_{cj}}{P_{cj}}}\right)^{1/3}\right)^{3}\left(T_{ci}T_{cj}\right)^{1/2}}{\sum _{i}\sum _{j}z_{i}z_{j}\left(\left({\frac {T_{ci}}{P_{ci}}}\right)^{1/3}+\left({\frac {T_{cj}}{P_{cj}}}\right)^{1/3}\right)^{3}}}} Mixing rule for the critical pressure of the mixture is established in a similar way. P c m i x = R z c T c m i x / V c m i x {\displaystyle P_{cmix}=R_{zc}T_{cmix}/V_{cmix}} V c m i x = ∑ i ∑ j z i z j V c i j {\displaystyle V_{cmix}=\sum _{i}\sum _{j}z_{i}z_{j}V_{cij}} P c m i x = 8 ∑ i ∑ j z i z j ( ( T c i P c i ) 1 / 3 + ( T c j P c j ) 1 / 3 ) 3 ( T c i T c j ) 1 / 2 ( ∑ i ∑ j z i z j ( ( T c i P c i ) 1 / 3 + ( T c j P c j ) 1 / 3 ) 3 ) 2 {\displaystyle P_{cmix}={\frac {8\sum _{i}\sum _{j}z_{i}z_{j}\left(\left({\frac {T_{ci}}{P_{ci}}}\right)^{1/3}+\left({\frac {T_{cj}}{P_{cj}}}\right)^{1/3}\right)^{3}\left(T_{ci}T_{cj}\right)^{1/2}}{\left(\sum _{i}\sum _{j}z_{i}z_{j}\left(\left({\frac {T_{ci}}{P_{ci}}}\right)^{1/3}+\left({\frac {T_{cj}}{P_{cj}}}\right)^{1/3}\right)^{3}\right)^{2}}}} The mixing rule for molecular weight is much simpler, but it is not entirely intuitive. It is an empirical combination of the more intuitive formulas with mass weighting M ¯ w {\displaystyle {\overline {M}}_{w}} and mole weighting M ¯ n {\displaystyle {\overline {M}}_{n}} . M m i x = 1.304 × 10 − 4 ( M ¯ w 2.303 − M ¯ n 2.303 ) + M ¯ n {\displaystyle M_{mix}=1.304\times 10^{-4}\left({\overline {M}}_{w}^{2.303}-{\overline {M}}_{n}^{2.303}\right)+{\overline {M}}_{n}} M ¯ w = ∑ i z i M i 2 ∑ j z j M j a n d M ¯ n = ∑ i z i M i {\displaystyle {\overline {M}}_{w}={\frac {\sum _{i}z_{i}M_{i}^{2}}{\sum _{j}z_{j}M_{j}}}\quad and\quad {\overline {M}}_{n}=\sum _{i}z_{i}M_{i}} The rotational coupling parameter for the mixture is α m i x = 1 + 7.378 × 10 − 3 ρ r z α 1.847 M m i x 0.5173 {\displaystyle \alpha _{mix}=1+7.378\times 10^{-3}\rho _{rz\alpha }^{1.847}M_{mix}^{0.5173}} === Reference fluid === The accuracy of the final viscosity of the CS method needs a very accurate density prediction of the reference fluid. The molar volume of the reference fluid methane is therefore calculated by a special EOS, and the Benedict-Webb-Rubin (1940) equation of state variant suggested by McCarty (1974), and abbreviated BWRM, is recommended by Pedersen et al. (1987) for this purpose. 
This means that the fluid mass density in a grid cell of the reservoir model may be calculated via e.g. a cubic EOS or from an input table of unspecified origin. In order to avoid iterative calculations, the reference (mass) density used in the rotational coupling parameters is therefore calculated using a simpler corresponding state principle which says that P z α = P ⋅ P c z P c m i x and T z α = T ⋅ T c z T c m i x ⇒ V z α = V ( T z α , P z α ) for 1 mole methane {\displaystyle P_{z\alpha }={\frac {P\cdot P_{cz}}{P_{cmix}}}\quad {\text{and}}\quad T_{z\alpha }={\frac {T\cdot T_{cz}}{T_{cmix}}}\quad \Rightarrow \quad V_{z\alpha }=V(T_{z\alpha },P_{z\alpha })\quad {\text{for 1 mole methane}}} The molar volume is used to calculate the mass concentration, which is called (mass) density, and then scaled to be the reduced density, which equals the reciprocal of the reduced molar volume because there is only one component (molecule type). In mathematical terms this is ρ z α = M z / V z α a n d ρ c z = M z / V c z ⇒ ρ r z α = ρ z α / ρ c z = V c z / V z α {\displaystyle \rho _{z\alpha }=M_{z}/V_{z\alpha }\quad and\quad \rho _{cz}=M_{z}/V_{cz}\quad \Rightarrow \quad \rho _{rz\alpha }=\rho _{z\alpha }/\rho _{cz}=V_{cz}/V_{z\alpha }} The formula for the rotational coupling parameter of the mixture is shown further up, and the rotational coupling parameter for the reference fluid (methane) is α z = 1 + 0.031 ρ r z α 1.847 {\displaystyle \alpha _{z}=1+0.031\rho _{rz\alpha }^{1.847}} The methane mass density used in the viscosity formulas is based on the extended corresponding state principle, shown at the beginning of this section on CS methods. Using the BWRM EOS, the molar volume of the reference fluid is calculated as V z = V ( T z , P z ) for 1 mole methane {\displaystyle V_{z}=V(T_{z},P_{z})\quad {\text{for 1 mole methane}}} Once again, the molar volume is used to calculate the mass concentration, or mass density, but the reference fluid is a single component fluid, and the reduced density is independent of the relative molar mass. In mathematical terms this is ρ z = M z / V z a n d ρ c z = M z / V c z ⇒ ρ r z = ρ z / ρ c z = V c z / V z {\displaystyle \rho _{z}=M_{z}/V_{z}\quad and\quad \rho _{cz}=M_{z}/V_{cz}\quad \Rightarrow \quad \rho _{rz}=\rho _{z}/\rho _{cz}=V_{cz}/V_{z}} The effect of a changing composition of e.g. the liquid phase is related to the scaling factors for viscosity, temperature and pressure, and that is the corresponding state principle. The reference viscosity correlation of Pedersen et al. (1987) is η z ( ρ z , T z ) = η 0 ( T z ) + η ^ 1 ( T z ) ρ z + F 1 Δ η ′ ( ρ z , T z ) + F 2 Δ η ″ ( ρ z , T z ) {\displaystyle \eta _{z}\left(\rho _{z},T_{z}\right)=\eta _{0}(T_{z})+{\hat {\eta }}_{1}(T_{z})\rho _{z}+F_{1}\Delta \eta '(\rho _{z},T_{z})+F_{2}\Delta \eta ''(\rho _{z},T_{z})} The formulas for η 0 ( T z ) {\displaystyle \eta _{0}(T_{z})} , η ^ 1 ( T z ) {\displaystyle {\hat {\eta }}_{1}(T_{z})} , Δ η ′ ( ρ z , T z ) {\displaystyle \Delta \eta '(\rho _{z},T_{z})} are taken from Hanley et al. (1975).
The dilute gas contribution is η 0 ( T z ) = ∑ i = 1 9 g i T z i − 3 4 {\displaystyle \eta _{0}\left(T_{z}\right)=\textstyle \sum _{i=1}^{9}g_{i}T_{z}^{\frac {i-3}{4}}} The temperature dependent factor of the first density contribution is η ^ 1 ( T z ) = h 1 − h 2 [ h 3 − l n ( T z h 4 ) ] 2 {\displaystyle {\hat {\eta }}_{1}\left(T_{z}\right)=h_{1}-h_{2}\left\lbrack h_{3}-ln\left({\frac {T_{z}}{h_{4}}}\right)\right\rbrack ^{2}} The dense fluid term is Δ η ′ ( ρ z , T z ) = e j 1 + j 4 / T z × [ e x p [ ρ z 0.1 ( j 2 + j 3 / T z 3 / 2 ) + θ r z ρ z 0.5 ( j 5 + j 6 / T z + j 7 / T z 2 ) ] − 1 ] {\displaystyle \Delta \eta '\left(\rho _{z},T_{z}\right)=e^{j_{1}+j_{4}/T_{z}}\times \lbrack exp{\lbrack \rho _{z}^{0.1}(j_{2}+j_{3}/T_{z}^{3/2})+\theta _{rz}\rho _{z}^{0.5}\left(j_{5}+j_{6}/T_{z}+j_{7}/T_{z}^{2}\right)\rbrack }-1\rbrack } where exponential function is written both as e x {\displaystyle e^{x}} and as e x p [ x ] {\displaystyle exp{\lbrack x\rbrack }} . The molar volume of the reference fluid methane, which is used to calculate the mass density in the viscosity formulas above, is calculated at a reduced temperature that is proportional to the reduced temperature of the mixture. Due to the high critical temperatures of heavier hydrocarbon molecules, the reduced temperature of heavier reservoir oils (i.e. mixtures) can give a transferred reduced methane temperature that is in the neighborhood of the freezing temperature of methane. This is illustrated using two fairly heavy hydrocarbon molecules, in the table below. The selected temperatures are a typical oil or gas reservoir temperature, the reference temperature of the International Standard Metric Conditions for Natural Gas (and similar fluids) and the freezing temperature of methane ( T f z {\displaystyle T_{fz}} ). Pedersen et al. (1987) added a fourth term, that is correcting the reference viscosity formula at low reduced temperatures. The temperature functions F 1 {\displaystyle F_{1}} and F 2 {\displaystyle F_{2}} are weight factors. Their correction term is Δ η ″ ( ρ z , T z ) = e k 1 + k 4 / T z × [ e x p [ ρ z 0.1 ( k 2 + k 3 / T z 3 / 2 ) + θ r z ρ z 0.5 ( k 5 + k 6 / T z + k 7 / T z 2 ) ] − 1 ] {\displaystyle \Delta \eta ''\left(\rho _{z},T_{z}\right)=e^{k_{1}+k_{4}/T_{z}}\times \lbrack exp{\lbrack \rho _{z}^{0.1}(k_{2}+k_{3}/T_{z}^{3/2})+\theta _{rz}\rho _{z}^{0.5}\left(k_{5}+k_{6}/T_{z}+k_{7}/T_{z}^{2}\right)\rbrack }-1\rbrack } θ r z = ( ρ z − ρ c z ) / ρ c z = ρ r z − 1 {\displaystyle \theta _{rz}=\left(\rho _{z}-\rho _{cz}\right)/\rho _{cz}=\rho _{rz}-1} F 1 = H T A N + 1 2 {\displaystyle F_{1}={\frac {HTAN+1}{2}}} F 2 = 1 − H T A N 2 {\displaystyle F_{2}={\frac {1-HTAN}{2}}} H T A N = t a n h ( Δ T z ) = e ( Δ T z ) − e ( − Δ T z ) e ( Δ T z ) + e ( − Δ T z ) {\displaystyle HTAN=tanh\left(\Delta T_{z}\right)={\frac {e^{\left(\Delta T_{z}\right)}-e^{\left(-\Delta T_{z}\right)}}{e^{\left(\Delta T_{z}\right)}+e^{\left(-\Delta T_{z}\right)}}}} Δ T z = T z − T f z {\displaystyle \Delta T_{z}=T_{z}-T_{fz}} == Equation of state analogy == Phillips (1912) plotted temperature T {\displaystyle T} versus viscosity η {\displaystyle \eta } for different isobars for propane, and observed a similarity between these isobaric curves and the classic isothermal curves of the P V T {\displaystyle PVT} surface. Later, Little and Kennedy (1968) developed the first viscosity model based on analogy between T η P {\displaystyle T\eta P} and P V T {\displaystyle PVT} using van der Waals EOS. 
Van der Waals EOS was the first cubic EOS, but the cubic EOS has over the years been improved and now make up a widely used class of EOS. Therefore, Guo et al. (1997) developed two new analogy models for viscosity based on PR EOS (Peng and Robinson 1976) and PRPT EOS (Patel and Teja 1982) respectively. The following year T.-M. Guo (1998) modified the PR based viscosity model slightly, and it is this version that will be presented below as a representative of EOS analogy models for viscosity. PR EOS is displayed on the next line. P = R T V − b e o s − a e o s V ( V + b e o s ) + b e o s ( V − b e o s ) {\displaystyle P={\frac {RT}{V-b_{eos}}}-{\frac {a_{eos}}{V(V+b_{eos})+b_{eos}(V-b_{eos})}}} The viscosity equation of Guo (1998) is displayed on the next line. T = r P η − d − a η ( η + b ) + b ( η − b ) {\displaystyle T={\frac {rP}{\eta -d}}-{\frac {a}{\eta \left(\eta +b\right)+b\left(\eta -b\right)}}} To prepare for the mixing rules, the viscosity equation is re-written for a single fluid component i. T = r i P η i − d i − a i η i ( η i + b i ) + b i ( η i − b i ) {\displaystyle T={\frac {r_{i}P}{\eta _{i}-d_{i}}}-{\frac {a_{i}}{\eta _{i}\left(\eta _{i}+b_{i}\right)+b_{i}\left(\eta _{i}-b_{i}\right)}}} Details of how the composite elements of the equation are related to basic parameters and variables, is displayed below. a i = 0.45724 r c i 2 P c i 2 T c i {\displaystyle a_{i}=0.45724{\frac {r_{ci}^{2}P_{ci}^{2}}{T_{ci}}}} b i = 0.07780 r c i P c i T c i {\displaystyle b_{i}=0.07780{\frac {r_{ci}P_{ci}}{T_{ci}}}} r i = r c i τ i ( T r i , P r i ) {\displaystyle r_{i}=r_{ci}\tau _{i}\left(T_{ri},P_{ri}\right)} d i = b i ϕ i ( T r i , P r i ) {\displaystyle d_{i}=b_{i}\phi _{i}\left(T_{ri},P_{ri}\right)} r c i = η c i T c i P c i Z c i {\displaystyle r_{ci}={\frac {\eta _{ci}T_{ci}}{P_{ci}Z_{ci}}}} η c i = K p D p i where K p = 7.7 ⋅ 10 4 and D p i = T c i − 1 / 6 M i 1 / 2 P c i 2 / 3 {\displaystyle \eta _{ci}=K_{p}D_{pi}\quad {\text{where}}\quad K_{p}=7.7\cdot 10^{4}\quad {\text{and}}\quad D_{pi}=T_{ci}^{-1/6}M_{i}^{1/2}P_{ci}^{2/3}} τ i = τ i ( T r i , P r i ) = ( 1 + Q 1 i ( T r i P r i − 1 ) ) − 2 {\displaystyle \tau _{i}=\tau _{i}\left(T_{ri},P_{ri}\right)=\left(1+Q_{1i}\left({\sqrt {T_{ri}P_{ri}}}-1\right)\right)^{-2}} ϕ i = ϕ i ( T r i , P r i ) = exp ⁡ [ Q 2 i ( T r i − 1 ) ] + Q 3 i ( P r i − 1 ) 2 {\displaystyle \phi _{i}=\phi _{i}\left(T_{ri},P_{ri}\right)=\exp \left[Q_{2i}\left({\sqrt {T_{ri}}}-1\right)\right]+Q_{3i}\left({\sqrt {P_{ri}}}-1\right)^{2}} Q 1 i = { 0.829599 + 0.350857 ω i − 0.747680 ω i 2 , if ω i < 0.3 0.956763 + 0.192829 ω i − 0.303189 ω i 2 , if ω i ≥ 0.3 {\displaystyle Q_{1i}={\begin{cases}0.829599+0.350857\,\omega _{i}-0.747680\,\omega _{i}^{2},&{\text{if }}&\omega _{i}<0.3\\0.956763+0.192829\,\omega _{i}-0.303189\,\omega _{i}^{2},&{\text{if }}&\omega _{i}\geq 0.3\end{cases}}} Q 2 i = { 1.94546 − 3.19777 ω i + 2.80193 ω i 2 , if ω i < 0.3 − 0.258789 − 37.1071 ω i + 20.5510 ω i 2 , if ω i ≥ 0.3 {\displaystyle Q_{2i}={\begin{cases}\;\;\;1.94546\;\,-3.19777\,\omega _{i}+2.80193\,\omega _{i}^{2},&\;{\text{if }}&\omega _{i}<0.3\\-0.258789-37.1071\,\omega _{i}+20.5510\,\omega _{i}^{2},&\;{\text{if }}&\omega _{i}\geq 0.3\end{cases}}} Q 3 i = { 0.299757 + 2.20855 ω i − 6.64959 ω i 2 , if ω i < 0.3 5.16307 − 12.8207 ω i + 11.0109 ω i 2 , if ω i ≥ 0.3 {\displaystyle Q_{3i}={\begin{cases}0.299757+2.20855\,\omega _{i}-6.64959\,\omega _{i}^{2},&&{\text{if }}&\omega _{i}<0.3\\5.16307\;\;-12.8207\,\omega _{i}+11.0109\,\omega _{i}^{2},&&{\text{if }}&\omega _{i}\geq 
0.3\end{cases}}} === Mixture === T = r m i x P η m i x − d m i x − a m i x η m i x ( η m i x + b m i x ) + b m i x ( η m i x − b m i x ) {\displaystyle T={\frac {r_{mix}P}{\eta _{mix}-d_{mix}}}-{\frac {a_{mix}}{\eta _{mix}\left(\eta _{mix}+b_{mix}\right)+b_{mix}\left(\eta _{mix}-b_{mix}\right)}}} === Mixing rules === a m i x = ∑ i = 1 z i a i {\displaystyle a_{mix}=\sum _{i=1}z_{i}a_{i}} b m i x = ∑ i = 1 z i b i {\displaystyle b_{mix}=\sum _{i=1}z_{i}b_{i}} d m i x = ∑ i = 1 ∑ i = 1 z i z i d i d i ( 1 − k i j ) {\displaystyle d_{mix}=\sum _{i=1}\sum _{i=1}z_{i}z_{i}{\sqrt {d_{i}d_{i}}}\left(1-k_{ij}\right)} r m i x = ∑ i = 1 z i r i {\displaystyle r_{mix}=\sum _{i=1}z_{i}r_{i}} == Friction force theory == === Multi-parameter friction force theory === The multi-parameter version of the friction force theory (short FF theory and FF model), also called friction theory (short F-theory), was developed by Quiñones-Cisneros et al. (2000, 2001a, 2001b and Z 2001, 2004, 2006), and its basic elements, using some well known cubic EOSs, are displayed below. It is a common modeling technique to accept a viscosity model for dilute gas ( η 0 {\displaystyle \eta _{0}} ), and then establish a model for the dense fluid viscosity η d f {\displaystyle \eta _{df}} . The FF theory states that for a fluid under shear motion, the shear stress τ {\displaystyle \tau } (i.e. the dragging force) acting between two moving layers can be separated into a term τ 0 {\displaystyle \tau _{0}} caused by dilute gas collisions, and a term τ d f {\displaystyle \tau _{df}} caused by friction in the dense fluid. η = η 0 + η d f and τ = τ 0 + τ d f {\displaystyle \eta _{}=\eta _{0}+\eta _{df}\quad {\text{and}}\quad \tau _{}=\tau _{0}+\tau _{df}} The dilute gas viscosity (i.e. the limiting viscosity behavior as the pressure, normal stress, goes to zero) and the dense fluid viscosity (the residual viscosity) can be calculated by τ 0 = η 0 d u d y and τ d f = η d f d u d y {\displaystyle \tau _{0}=\eta _{0}{\frac {du}{dy}}\quad {\text{and}}\quad \tau _{df}=\eta _{df}{\frac {du}{dy}}} where du/dy d u / d y {\displaystyle du/dy} is the local velocity gradient orthogonal to the direction of flow. Thus η 0 = τ 0 d u / d y and η d f = τ d f d u / d y {\displaystyle \eta _{0}={\frac {\tau _{0}}{du/dy}}\quad {\text{and}}\quad \eta _{df}={\frac {\tau _{df}}{du/dy}}} The basic idea of QZS (2000) is that internal surfaces in a Couette flow acts like (or is analogue to) mechanical slabs with friction forces acting on each surface as they slide past each other. According to the Amontons-Coulomb friction law in classical mechanics, the ratio between the kinetic friction force F {\displaystyle F} and the normal force N {\displaystyle N} is given by ζ = F N = A τ d f A σ = τ d f σ {\displaystyle \zeta ={\frac {F}{N}}={\frac {A\tau _{df}}{A\sigma }}={\frac {\tau _{df}}{\sigma }}} where ζ {\displaystyle \zeta } is known as the kinetic friction coefficient, A is the area of the internal flow surface, τ {\displaystyle \tau } is the shear stress and σ {\displaystyle \sigma } is the normal stress (or pressure P {\displaystyle P} ) between neighboring layers in the Couette flow. η d f = τ d f d u / d y = ζ σ d u / d y {\displaystyle \eta _{df}={\frac {\tau _{df}}{du/dy}}={\frac {\zeta \sigma }{du/dy}}} The FF theory of QZS says that when a fluid is brought to have shear motion, the attractive and repulsive intermolecular forces will contribute to amplify or diminish the mechanical properties of the fluid. 
The friction shear stress term τ d f {\displaystyle \tau _{df}} of the dense fluid can therefore be considered to consist of an attractive friction shear contribution τ d f a t t {\displaystyle \tau _{dfatt}} and a repulsive friction shear contribution τ d f r e p {\displaystyle \tau _{dfrep}} . Inserting this gives us η d f = τ d f r e p + τ d f a t t d u / d y = ζ P d u / d y {\displaystyle \eta _{df}={\frac {\tau _{dfrep}+\tau _{dfatt}}{du/dy}}={\frac {\zeta P}{du/dy}}} The well known cubic equation of states (SRK, PR and PRSV EOS), can be written in a general form as P = R T V − b − a V 2 + u b V + w b 2 {\displaystyle P={\frac {RT}{V-b}}-{\frac {a}{V^{2}+ubV+wb^{2}}}} The parameter pair (u,w)=(1,0) gives the SRK EOS, and (u,w)=(2,-1) gives both the PR EOS and the PRSV EOS because they differ only in the temperature and composition dependent parameter / function a. Input variables are, in our case, pressure (P), temperature (T) and for mixtures also fluid composition which can be single phase (or total) composition z = [ z 1 , ⋯ , z N ] {\displaystyle \mathbf {z} =\left[z_{1},\cdots ,z_{N}\right]} , vapor (gas) composition y = [ y 1 , ⋯ , y N ] {\displaystyle \mathbf {y} =\left[y_{1},\cdots ,y_{N}\right]} or liquid (in our example oil) composition x = [ x 1 , ⋯ , x N ] {\displaystyle \mathbf {x} =\left[x_{1},\cdots ,x_{N}\right]} . Output is the molar volume of the phase (V). Since the cubic EOS is not perfect, the molar volume is more uncertain than the pressure and temperature values. The EOS consists of two parts that are related to van der Waals forces, or interactions, that originates in the static electric fields of the colliding parts /spots of the two (or more) colliding molecules. The repulsive part of the EOS is usually modeled as a hard core behavior of molecules, hence the symbol (Ph), and the attractive part (Pa) is based on the attractive interaction between molecules (conf. van der Waals force). The EOS can therefore be written as P = P h − P a {\displaystyle P=P_{h}-P_{a}} Assume that the molar volume (V) is known from EOS calculations, and prior vapor-liquid equilibrium (VLE) calculations for mixtures. Then the two functions P h {\displaystyle P_{h}} and P a {\displaystyle P_{a}} can be utilized, and these functions are expected to be a more accurate and robust than the molar volume (V) itself. These functions are P h = P h ( V , T , w ) = R T V − b where w = x , y , z , 1 p u r e f l u i d {\displaystyle P_{h}=P_{h}(V,T,\mathbf {w} )={\frac {RT}{V-b}}\quad {\text{where}}\quad \mathbf {w} =\mathbf {x} ,\mathbf {y} ,\mathbf {z} ,1_{purefluid}} P a = P a ( V , T , w ) = a V 2 + u b V + w b 2 where w = x , y , z , 1 p u r e f l u i d {\displaystyle P_{a}=P_{a}(V,T,\mathbf {w} )={\frac {a}{V^{2}+ubV+wb^{2}}}\quad {\text{where}}\quad \mathbf {w} =\mathbf {x} ,\mathbf {y} ,\mathbf {z} ,1_{purefluid}} The friction theory therefore assumes that the residual attractive stress τ f a t t {\displaystyle \tau _{fatt}} and the residual repulsive stress τ f r e p {\displaystyle \tau _{frep}} are functions of the attractive pressure term P a {\displaystyle P_{a}} and the repulsive pressure term P h {\displaystyle P_{h}} , respectively. 
τ d f a t t = F ( T , P a , w ) and τ d f r e p = F ( T , P h , w ) and w = x , y , z , 1 p u r e f l u i d {\displaystyle \tau _{dfatt}=F(T,P_{a},\mathbf {w} )\quad {\text{and}}\quad \tau _{dfrep}=F(T,P_{h},\mathbf {w} )\quad {\text{and}}\quad \mathbf {w} =\mathbf {x} ,\mathbf {y} ,\mathbf {z} ,1_{purefluid}} The first attempt is, of course, to try a linear function in the pressure terms / functions. η d f = K a P a + K h P h {\displaystyle \eta _{df}=K_{a}P_{a}+K_{h}P_{h}} All K {\displaystyle K} coefficients are in general functions of temperature and composition, and they are called friction functions. In order to achieve high accuracy over a wide pressure and temperature ranges, it turned out that a second order term was needed even for non-polar molecules types such as hydrocarbon fluids in oil and gas reservoirs, in order to achieve a high accuracy at very high pressures. A test with a presumably difficult 3-component mixture of non-polar molecule types needed a third order power to achieve high accuracy at the most extreme super-critical pressures. η = η 0 + K a P a + K h P h + K h 2 P h 2 + K h 3 P h 3 {\displaystyle \eta =\eta _{0}+K_{a}P_{a}+K_{h}P_{h}+K_{h2}P_{h}^{2}+K_{h3}P_{h}^{3}} This article will concentrate on the second order version, but the third order term will be included whenever possible in order to show the total set of formulas. As an introduction to mixture notation, the above equation is repeated for component i in a mixture. η i = η 0 i + K a i P a i + K h i P h i + K h 2 i P h i 2 + K h 3 i P h i 3 {\displaystyle \eta _{i}=\eta _{0i}+K_{ai}P_{ai}+K_{hi}P_{hi}+K_{h2i}P_{hi}^{2}+K_{h3i}P_{hi}^{3}} The unit equations for the central variables in the multi-parameter FF-model is [ P c ] = b a r and [ T ] = K and [ η ] = μ P {\displaystyle [P_{c}]=bar\quad {\text{and}}\quad [T]=K\quad {\text{and}}\quad [\eta ]=\mu P} ==== Friction functions ==== Friction functions for fluid component i in the 5 parameter model for pure n-alkane molecules are presented below. K a i = B a 1 i exp ⁡ ( Γ i − 1 ) + B a 2 i [ exp ⁡ ( 2 Γ i − 2 ) − 1 ] {\displaystyle K_{ai}=B_{a1i}\exp \left(\Gamma _{i}-1\right)+B_{a2i}\left[\exp \left(2\Gamma _{i}-2\right)-1\right]} K h i = B h 1 i exp ⁡ ( Γ i − 1 ) + B h 2 i [ exp ⁡ ( 2 Γ i − 2 ) − 1 ] {\displaystyle K_{hi}=B_{h1i}\exp \left(\Gamma _{i}-1\right)+B_{h2i}\left[\exp \left(2\Gamma _{i}-2\right)-1\right]} K h 2 i = B h 22 i [ exp ⁡ ( 2 Γ i ) − 1 ] {\displaystyle K_{h2i}=B_{h22i}\left[\exp \left(2\Gamma _{i}\right)-1\right]} Γ i = T c i / T {\displaystyle \Gamma _{i}=T_{ci}/T} Friction functions for fluid component i in the 7- and 8-parameter models are presented below. K a i = B a 0 i + B a 1 i [ exp ⁡ ( Γ i − 1 ) − 1 ] + B a 2 i [ exp ⁡ ( 2 Γ i − 2 ) − 1 ] {\displaystyle K_{ai}=B_{a0i}+B_{a1i}\left[\exp \left(\Gamma _{i}-1\right)-1\right]+B_{a2i}\left[\exp \left(2\Gamma _{i}-2\right)-1\right]} K h i = B h 0 i + B h 1 i [ exp ⁡ ( Γ i − 1 ) − 1 ] + B h 2 i [ exp ⁡ ( 2 Γ i − 2 ) − 1 ] {\displaystyle K_{hi}=B_{h0i}+B_{h1i}\left[\exp \left(\Gamma _{i}-1\right)-1\right]+B_{h2i}\left[\exp \left(2\Gamma _{i}-2\right)-1\right]} K h 2 i = B h 22 i [ exp ⁡ ( 2 Γ i ) − 1 ] {\displaystyle K_{h2i}=B_{h22i}\left[\exp \left(2\Gamma _{i}\right)-1\right]} K h 3 i = B h 32 i [ exp ⁡ ( 2 Γ i ) − 1 ] ( Γ i − 1 ) 3 {\displaystyle K_{h3i}=B_{h32i}\left[\exp \left(2\Gamma _{i}\right)-1\right]\left(\Gamma _{i}-1\right)^{3}} Γ i = T c i / T {\displaystyle \Gamma _{i}=T_{ci}/T} The empirical constants in the friction functions are called friction constants. 
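A small sketch of the 7/8-parameter friction functions written above for component i, together with the resulting viscosity equation, is given below. The friction constants in the dictionary B are placeholders; the fitted values must be taken from the published tables referenced in the next paragraph, and the 5-parameter variant differs slightly (no constant terms B_a0, B_h0, and slightly different exponential factors).

```python
from math import exp

def friction_functions(T, Tc, B):
    """
    Temperature dependent friction functions for one component (7/8-parameter form).
    B is a dict of friction constants, e.g. B = {"a0":..., "a1":..., "a2":...,
    "h0":..., "h1":..., "h2":..., "h22":..., "h32":...}; the values are placeholders
    and must be replaced by the published friction constants.
    """
    G = Tc / T                              # Gamma_i = Tc_i / T
    e1 = exp(G - 1.0) - 1.0
    e2 = exp(2.0 * G - 2.0) - 1.0
    Ka = B["a0"] + B["a1"] * e1 + B["a2"] * e2
    Kh = B["h0"] + B["h1"] * e1 + B["h2"] * e2
    Kh2 = B["h22"] * (exp(2.0 * G) - 1.0)
    Kh3 = B.get("h32", 0.0) * (exp(2.0 * G) - 1.0) * (G - 1.0) ** 3   # only in the 8-parameter model
    return Ka, Kh, Kh2, Kh3

def ff_viscosity(eta_0, Pa, Ph, Ka, Kh, Kh2, Kh3=0.0):
    """eta = eta_0 + Ka*Pa + Kh*Ph + Kh2*Ph^2 + Kh3*Ph^3, with Pa, Ph from the cubic EOS split."""
    return eta_0 + Ka * Pa + Kh * Ph + Kh2 * Ph ** 2 + Kh3 * Ph ** 3
```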
Friction constants for some n-alkanes in the 5-parameter model using the SRK and PRSV EOS (and thus the PR EOS) are presented in tables below. Friction constants for some n-alkanes in the 7-parameter model using the PRSV EOS are also presented in a table below. The constant d 2 {\displaystyle d_{2}} for three fluid components is presented below in the last table of this series. ==== Mixture ==== P d y n = P = R T V e o s − b e o s − a e o s V e o s 2 + u b e o s V e o s + w b e o s 2 {\displaystyle P_{dyn}=P={\frac {RT}{V_{eos}-b_{eos}}}-{\frac {a_{eos}}{V_{eos}^{2}+ub_{eos}V_{eos}+wb_{eos}^{2}}}} In the single phase regions, the molar volume of the fluid mixture is determined by the input variables: pressure (P), temperature (T) and (total) fluid composition z {\displaystyle \mathbf {z} } . In the two-phase gas-liquid region, a vapor-liquid equilibrium (VLE) calculation splits the fluid into a vapor (gas) phase with composition y {\displaystyle \mathbf {y} } and phase mixture molfraction ng, and a liquid phase (in our example oil) with composition x {\displaystyle \mathbf {x} } and phase mixture molfraction no. For the liquid phase, the vapor phase and a single phase fluid, the relations to the VLE and EOS variables are P h m i x = P h e o s ( V e o s , T , w ) = R T V e o s − b e o s where w = x , y , z {\displaystyle P_{hmix}=P_{heos}\left(V_{eos},T,\mathbf {w} \right)={\frac {RT}{V_{eos}-b_{eos}}}\quad {\text{where}}\quad \mathbf {w} =\mathbf {x} ,\mathbf {y} ,\mathbf {z} } P a m i x = P a e o s ( V e o s , T , w ) = a e o s V e o s 2 + u b e o s V e o s + w b e o s 2 where w = x , y , z {\displaystyle P_{amix}=P_{aeos}\left(V_{eos},T,\mathbf {w} \right)={\frac {a_{eos}}{V_{eos}^{2}+ub_{eos}V_{eos}+wb_{eos}^{2}}}\quad {\text{where}}\quad \mathbf {w} =\mathbf {x} ,\mathbf {y} ,\mathbf {z} } In a compositional reservoir simulator the pressure is calculated dynamically for each grid cell and each timestep. This gives dynamic pressures for the vapor and the liquid (oil), or for a single phase fluid. Assuming zero capillary pressure between hydrocarbon liquid (oil) and gas, the simulator software code will give a single dynamic pressure P d y n {\displaystyle P_{dyn}} which applies to both the vapor mixture and the liquid (oil) mixture.
In this case the reservoir simulator software code may use P a m i x = P h m i x − P d y n and P h m i x = P h e o s ( V e o s , T , w ) = R T V e o s − b e o s where w = x , y , z {\displaystyle P_{amix}=P_{hmix}-P_{dyn}\quad {\text{and}}\quad P_{hmix}=P_{heos}(V_{eos},T,\mathbf {w} )={\frac {RT}{V_{eos}-b_{eos}}}\quad {\text{where}}\quad \mathbf {w} =\mathbf {x} ,\mathbf {y} ,\mathbf {z} } or P h m i x = P d y n + P a m i x and P a m i x = P a e o s ( V e o s , T , w ) = a e o s V e o s 2 + u b e o s V e o s + w b e o s 2 where w = x , y , z {\displaystyle P_{hmix}=P_{dyn}+P_{amix}\quad {\text{and}}\quad P_{amix}=P_{aeos}(V_{eos},T,\mathbf {w} )={\frac {a_{eos}}{V_{eos}^{2}+ub_{eos}V_{eos}+wb_{eos}^{2}}}\quad {\text{where}}\quad \mathbf {w} =\mathbf {x} ,\mathbf {y} ,\mathbf {z} } The friction model for viscosity of a mixture is η m i x = η 0 m i x + η d f m i x {\displaystyle \eta _{mix}=\eta _{0mix}+\eta _{dfmix}} η m i x = η 0 m i x + K a m i x P a m i x + K h m i x P h m i x + K h 2 m i x P h m i x 2 + K h 3 m i x P h m i x 3 {\displaystyle \eta _{mix}=\eta _{0mix}+K_{amix}P_{amix}+K_{hmix}P_{hmix}+K_{h2mix}P_{hmix}^{2}+K_{h3mix}P_{hmix}^{3}} The cubic power term is only needed when molecules with a fairly rigid 2-D structure are included in the mixture, or when the user requires very high accuracy at extremely high pressures. The standard model includes only the linear and quadratic terms in the pressure functions. ==== Mixing rules ==== ln ⁡ ( η 0 m i x ) = ∑ i = 1 N z i ln ⁡ ( η 0 i ) or η 0 m i x = ∏ i = 1 N η 0 i z i {\displaystyle \ln \left(\eta _{0mix}\right)=\sum _{i=1}^{N}z_{i}\ln(\eta _{0i})\quad {\text{or}}\quad \eta _{0mix}=\prod _{i=1}^{N}\eta _{0i}^{z_{i}}} K q m i x = ∑ i = 1 N W i K q i where q = a , h , h 2 {\displaystyle K_{qmix}=\sum _{i=1}^{N}W_{i}K_{qi}\quad {\text{where}}\quad q=a,h,h2} ln ⁡ ( K h 3 m i x ) = ∑ i = 1 N z i ln ⁡ ( K h 3 i ) or K h 3 m i x = ∏ i = 1 N K h 3 i z i {\displaystyle \ln \left(K_{h3mix}\right)=\sum _{i=1}^{N}z_{i}\ln \left(K_{h3i}\right)\quad {\text{or}}\quad K_{h3mix}=\prod _{i=1}^{N}K_{h3i}^{z_{i}}} where the empirical weight fraction is W i = z i M i ε ⋅ M M where M M = ∑ j = 1 N z j M j ε {\displaystyle W_{i}={\frac {z_{i}}{M_{i}^{\varepsilon }\cdot MM}}\quad {\text{where}}\quad MM=\sum _{j=1}^{N}{\frac {z_{j}}{M_{j}^{\varepsilon }}}} The recommended values for ε {\displaystyle \varepsilon } are ε = 0.15 {\displaystyle \quad \varepsilon =0.15\quad \;\;} which gave the best performance for the SRK EOS, and ε = 0.075 {\displaystyle \quad \varepsilon =0.075\quad } which gave the best performance for the PRSV EOS. These values were established from binary mixtures of n-alkanes using a 5-parameter viscosity model, and they seem to be used for the 7- and 8-parameter models also. The motivation for this weight parameter W i {\displaystyle W_{i}} , and thus the ε {\displaystyle \varepsilon } -parameter, is that in asymmetric mixtures like CH4 - C10H22, the lightest component tends to decrease the viscosity of the mixture more than linearly when plotted versus the molfraction of the light component (or the heavy component). The friction coefficients of some selected fluid components are presented in the tables below for the 5-, 7- and 8-parameter models. For convenience, critical viscosities are also included in the tables. === One-parameter friction force theory === The one-parameter version of the friction force theory (FF1 theory and FF1 model) was developed by Quiñones-Cisneros et al.
(2000, 2001a, 2001b and Z 2001, 2004), and its basic elements, using some well known cubic EOSs, are displayed below. The first step is to define the reduced dense fluid (or frictional) viscosity for a pure (i.e. single component) fluid by dividing by the critical viscosity. The same goes for the dilute gas viscosity. η d f r = η d f η c and η 0 r = η 0 η c {\displaystyle \eta _{dfr}={\frac {\eta _{df}}{\eta _{c}}}\quad {\text{and}}\quad \eta _{0r}={\frac {\eta _{0}}{\eta _{c}}}} The second step is to replace the attractive and repulsive pressure functions by reduced pressure functions. This will of course, affect the friction functions also. New friction functions are therefore introduced. They are called reduced friction functions, and they are of a more universal nature. The reduced frictional viscosity is η d f r = K a r ( P a P c ) + K h r ( P h P c ) + K h 2 r ( P h P c ) 2 {\displaystyle \eta _{dfr}=K_{ar}\left({\frac {P_{a}}{P_{c}}}\right)+K_{hr}\left({\frac {P_{h}}{P_{c}}}\right)+K_{h2r}\left({\frac {P_{h}}{P_{c}}}\right)^{2}} Returning to the unreduced frictional viscosity and rephrasinge the formula, gives η d f = η c K a r P c P a + η c K h r P c P h + η c K h 2 r P c 2 P h 2 {\displaystyle \eta _{df}={\frac {\eta _{c}K_{ar}}{P_{c}}}P_{a}+{\frac {\eta _{c}K_{hr}}{P_{c}}}P_{h}+{\frac {\eta _{c}K_{h2r}}{P_{c}^{2}}}P_{h}^{2}} Critical viscosity is seldom measured and attempts to predict it by formulas are few. For a pure fluid, or component i in a fluid mixture, a formula from kinetic theory is often used to estimate critical viscosity. η c i = K v i D v i where D v i = M i 1 / 2 T c i 1 / 2 V c i − 2 / 3 {\displaystyle \eta _{ci}=K_{vi}D_{vi}\quad {\text{where}}\quad D_{vi}=M_{i}^{1/2}T_{ci}^{1/2}V_{ci}^{-2/3}} where K v i {\displaystyle K_{vi}} is a constant, and critical molar volume Vci is assumed to be proportional to the collision cross section. The critical molar volume Vci is significantly more uncertain than the parameters Pci and Tci. To get rid of Vci, the critical compressibility factor Zci is often replaced by a universal average value. This gives η c i = K p D p i where D p i = M i 1 / 2 P c i 2 / 3 T c i − 1 / 6 {\displaystyle \eta _{ci}=K_{p}D_{pi}\quad {\text{where}}\quad D_{pi}=M_{i}^{1/2}P_{ci}^{2/3}T_{ci}^{-1/6}} where K p {\displaystyle K_{p}} is a constant. Based on an average critical compressibility factor of Zc = 0.275 and measured critical viscosity values of 60 different molecule types, Uyehara and Watson (1944) determined an average value of Kp to be K p = 7.7 ⋅ 1.01325 2 / 3 ≈ 7.77 {\displaystyle K_{p}=7.7\cdot 1.01325^{2/3}\approx 7.77} Zéberg-Mikkelsen (2001) proposed an empirical correlation for Vci, with parameters for n-alkanes, which is V c i − 1 = A + B ⋅ P c i R T c i ⟺ V c i = R T c i A R T c i + B P c i {\displaystyle V_{ci}^{-1}=A+B\cdot {\frac {P_{ci}}{RT_{ci}}}\iff V_{ci}={\frac {RT_{ci}}{ART_{ci}+BP_{ci}}}} where V c i − 1 = ρ n c i = c c i {\displaystyle V_{ci}^{-1}=\rho _{nci}=c_{ci}} . 
From the above equation and the definition of the compressibility factor it follows that Z c i = P c i A R T c i + B P c i ⟺ Z c i R T c i P c i V c i = 1 {\displaystyle Z_{ci}={\frac {P_{ci}}{ART_{ci}+BP_{ci}}}\iff {\frac {Z_{ci}RT_{ci}}{P_{ci}V_{ci}}}=1} Zéberg-Mikkelsen (2001) also proposed an empirical correlation for ηci, with parameters for n-alkanes, which is η c i = C ⋅ P c i M i D {\displaystyle \eta _{ci}=C\cdot P_{ci}M_{i}^{D}} The unit equations for the two constitutive equations above by Zéberg-Mikkelsen (2001) are [ P c ] = b a r and [ V c ] = [ R T c / P c ] = c m 3 / m o l and [ T ] = K and [ η c ] = μ P {\displaystyle [P_{c}]=bar\quad {\text{and}}\quad [V_{c}]=[RT_{c}/P_{c}]=cm^{3}/mol\quad {\text{and}}\quad [T]=K\quad {\text{and}}\quad [\eta _{c}]=\mu P} The next step is to split the formulas into formulas for well defined components (designated by subscript d) with respect critical viscosity and formulas for uncertain components (designated by subscript u) where critical viscosity is estimated using D p i {\displaystyle D_{pi}} and the universal constant K p {\displaystyle K_{p}} which will be treated as a tuning parameter for the current mixture. The dense fluid viscosity (for fluid component i in a mixture) is then written as η d f i = η d f d i + η d f u i = η d f d i + K p u F u i {\displaystyle \eta _{dfi}=\eta _{dfdi}+\eta _{dfui}=\eta _{dfdi}+K_{pu}F_{ui}} The formulas from friction theory is then related to well defined and uncertain fluid components. The result is η d f d i = η c i K a r i P c i P a i + η c i K h r i P c i P h i + η c i K h 2 r i P c i 2 P h i 2 for i = 1 , … , m {\displaystyle \eta _{dfdi}={\frac {\eta _{ci}K_{ari}}{P_{ci}}}P_{ai}+{\frac {\eta _{ci}K_{hri}}{P_{ci}}}P_{hi}+{\frac {\eta _{ci}K_{h2ri}}{P_{ci}^{2}}}P_{hi}^{2}\quad {\text{for}}\quad i=1,\ldots ,m} F u i = D p i K a r i P c i P a i + D p i K h r i P c i P h i + D p i K h 2 r i P c i 2 P h i 2 for i = m + 1 , … , N {\displaystyle F_{ui}={\frac {D_{pi}K_{ari}}{P_{ci}}}P_{ai}+{\frac {D_{pi}K_{hri}}{P_{ci}}}P_{hi}+{\frac {D_{pi}K_{h2ri}}{P_{ci}^{2}}}P_{hi}^{2}\quad {\text{for}}\quad i=m+1,\ldots ,N} D p i = M i 1 / 2 P c i 2 / 3 T c i − 1 / 6 {\displaystyle D_{pi}=M_{i}^{1/2}P_{ci}^{2/3}T_{ci}^{-1/6}} However, in order to obtain the characteristic critical viscosity of the heavy pseudocomponents, the following modification of the Uyehara and Watson (1944) expression for the critical viscosity can be used. The frictional (or residual) viscosity is then written as η c i = K p D p i where K p = 7.9483 {\displaystyle \eta _{ci}=K_{p}D_{pi}\quad {\text{where}}\quad K_{p}=7.9483} The unit equations are [ η ] = [ η c ] = μ P {\displaystyle \left[\eta \right]=\left[\eta _{c}\right]=\mu P} and [ P ] = [ P c ] = b a r {\displaystyle \left[P\right]=\left[P_{c}\right]=bar} and [ T ] = [ T c ] = K {\displaystyle \left[T\right]=\left[T_{c}\right]=K} . 
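A minimal sketch of the split described above: each well defined component contributes η_dfdi directly, while each uncertain component contributes K_pu·F_ui, with K_pu as the single tuning parameter of the mixture. The reduced friction coefficients Kar, Khr and Kh2r are assumed to have been evaluated already (their expressions follow in the next subsection), all names are illustrative, and the unit conventions are those stated above ([P] = bar, [η] = μP).

```python
def eta_dfd_i(eta_c, Pc, Pa, Ph, Kar, Khr, Kh2r):
    """Frictional (dense fluid) viscosity of a well defined component i, with known critical viscosity eta_c."""
    return eta_c * Kar / Pc * Pa + eta_c * Khr / Pc * Ph + eta_c * Kh2r / Pc ** 2 * Ph ** 2

def F_u_i(Tc, Pc, M, Pa, Ph, Kar, Khr, Kh2r):
    """Viscosity function of an uncertain component i; its frictional viscosity is K_pu * F_u_i."""
    Dp = M ** 0.5 * Pc ** (2.0 / 3.0) * Tc ** (-1.0 / 6.0)
    return Dp * Kar / Pc * Pa + Dp * Khr / Pc * Ph + Dp * Kh2r / Pc ** 2 * Ph ** 2

# For a single component i, the dense fluid viscosity is either eta_dfd_i(...) if the component
# is well defined, or K_pu * F_u_i(...) if it is uncertain; K_pu = 7.9483 is the modified
# Uyehara-Watson value quoted above, and it may be regressed against measured data.
```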
==== Reduced friction functions ==== K q r i = B q r c + B q r 00 ( Γ i − 1 ) + ∑ m = 1 2 ∑ n = 0 m B q r m n ψ i n [ exp ⁡ ( m Γ i − m ) − 1 ] where q = a , h {\displaystyle K_{qri}=B_{qrc}+B_{qr00}\left(\Gamma _{i}-1\right)+\sum _{m=1}^{2}\sum _{n=0}^{m}B_{qrmn}\psi _{i}^{n}\left[\exp(m\Gamma _{i}-m)-1\right]\quad {\text{where}}\quad q=a,h} K h 2 r i = B h 2 r c + B h 2 r 21 ψ i [ exp ⁡ ( 2 Γ i ) − 1 ] ( Γ i − 1 ) 2 {\displaystyle K_{h2ri}=B_{h2rc}+B_{h2r21}\psi _{i}\left[\exp(2\Gamma _{i})-1\right]\left(\Gamma _{i}-1\right)^{2}} ψ i = R T c i P c i and Γ i = T c i T {\displaystyle \psi _{i}={\frac {RT_{ci}}{P_{ci}}}\quad {\text{and}}\quad \Gamma _{i}={\frac {T_{ci}}{T}}} The unit equation of ψ i {\displaystyle \psi _{i}} is [ ψ i ] = c m 3 / m o l {\displaystyle \left[\psi _{i}\right]=cm^{3}/mol} . The 1-parameter model has been developed based on single component fluids in the series from methane to n-octadecane (CH4 to C18H38). The empirical parameters in the reduced friction functions above are treated as universal constants, and they are listed in the following table. For convenience, critical viscosities are included in the tables for the models with 5 and 7 parameters that were presented further up. ==== Mixture ==== The mixture viscosity is given by η m i x = η d m i x + η u m i x = η d m i x + K p u F u m i x {\displaystyle \eta _{mix}=\eta _{dmix}+\eta _{umix}=\eta _{dmix}+K_{pu}F_{umix}} The mixture viscosity of well defined components is given by η d m i x = η 0 d m i x + K a d m i x P a m i x + K h d m i x P h m i x + K h 2 d m i x P h m i x 2 + K h 3 d m i x P h m i x 3 {\displaystyle \eta _{dmix}=\eta _{0dmix}+K_{admix}P_{amix}+K_{hdmix}P_{hmix}+K_{h2dmix}P_{hmix}^{2}+K_{h3dmix}P_{hmix}^{3}} The mixture viscosity function of uncertain components is given by F u m i x = η 0 u m i x + K a u m i x P a m i x + K h u m i x P h m i x + K h 2 u m i x P h m i x 2 + K h 3 u m i x P h m i x 3 {\displaystyle F_{umix}=\eta _{0umix}+K_{aumix}P_{amix}+K_{humix}P_{hmix}+K_{h2umix}P_{hmix}^{2}+K_{h3umix}P_{hmix}^{3}} The mixture viscosity can be tuned to measured viscosity data by optimizing (regressing) the parameter K p u {\displaystyle K_{pu}} . The mixture friction coefficients are obtained from the mixing rules below, and P a {\displaystyle P_{a}} and P h {\displaystyle P_{h}} are the attractive and repulsive pressure terms of the mixture. ==== Mixing rules ==== The mixing rules for the well defined components are ln ⁡ ( η 0 d m i x ) = ∑ i = 1 m z i ln ⁡ ( η 0 i ) or η 0 m i x = ∏ i = 1 m η 0 i z i {\displaystyle \ln \left(\eta _{0dmix}\right)=\sum _{i=1}^{m}z_{i}\ln(\eta _{0i})\quad {\text{or}}\quad \eta _{0mix}=\prod _{i=1}^{m}\eta _{0i}^{z_{i}}} K q r d m i x = ∑ i = 1 m W i η c i K q r i P c i where q = a , h {\displaystyle K_{qrdmix}=\sum _{i=1}^{m}W_{i}{\frac {\eta _{ci}K_{qri}}{P_{ci}}}\quad {\text{where}}\quad q=a,h} K q p r d m i x = ∑ i = 1 m W i η c i K q r p i P c i p where q = a , h and p = 2 , 3 {\displaystyle K_{qprdmix}=\sum _{i=1}^{m}W_{i}{\frac {\eta _{ci}K_{qrpi}}{P_{ci}^{p}}}\quad {\text{where}}\quad q=a,h\quad {\text{and}}\quad p=2,3} QZS recommend dropping the dilute gas term for the uncertain fluid components, which are usually the heavier (hydrocarbon) components. The formula is kept here for consistency.
The mixing rules for the uncertain components are ln ⁡ ( η 0 u m i x ) = ∑ i = m + 1 N z i ln ⁡ ( η 0 i ) or η 0 m i x = ∏ i = m + 1 N η 0 i z i {\displaystyle \ln \left(\eta _{0umix}\right)=\sum _{i=m+1}^{N}z_{i}\ln(\eta _{0i})\quad {\text{or}}\quad \eta _{0mix}=\prod _{i=m+1}^{N}\eta _{0i}^{z_{i}}} K q r u m i x = ∑ i = m + 1 N W i D p i K q r i P c i where q = a , h {\displaystyle K_{qrumix}=\sum _{i=m+1}^{N}W_{i}{\frac {D_{pi}K_{qri}}{P_{ci}}}\quad {\text{where}}\quad q=a,h} K q p r u m i x = ∑ i = m + 1 N W i D p i K q p r i P c i p where q = a , h and p = 2 , 3 {\displaystyle K_{qprumix}=\sum _{i=m+1}^{N}W_{i}{\frac {D_{pi}K_{qpri}}{P_{ci}^{p}}}\quad {\text{where}}\quad q=a,h\quad {\text{and}}\quad p=2,3} ε = 0.30 when SRK, PR or PRSV EOS is used {\displaystyle \varepsilon =0.30\quad {\text{when SRK, PR or PRSV EOS is used}}} ==== Dilute gas limit ==== Zéberg-Mikkelsen (2001) proposed an empirical model for the dilute gas viscosity of fairly spherical molecules as follows η 0 = d g 1 T + d g 2 T d g 3 {\displaystyle \eta _{0}=d_{g1}{\sqrt {T}}+d_{g2}T^{d_{g3}}} or η 0 = D g 1 T r + D g 2 T r D g 3 {\displaystyle \eta _{0}=D_{g1}{\sqrt {T_{r}}}+D_{g2}T_{r}^{D_{g3}}} D g 1 = d g 1 ⋅ T c and D g 2 = d g 2 ⋅ T c d g 3 and D g 3 = d g 3 {\displaystyle D_{g1}=d_{g1}\cdot {\sqrt {T_{c}}}\quad {\text{and}}\quad D_{g2}=d_{g2}\cdot T_{c}^{d_{g3}}\quad {\text{and}}\quad D_{g3}=d_{g3}} The unit equations for viscosity and temperature are [ η 0 ] = μ P and [ T ] = K {\displaystyle \left[\eta _{0}\right]=\mu P\quad {\text{and}}\quad \left[T\right]=K} The second term is a correction term for high temperatures. Note that most d g 2 {\displaystyle d_{g2}} parameters are negative. ==== Light gases ==== Zéberg-Mikkelsen (2001) proposed an FF-model for light gas viscosity as follows η l g = η 0 + K a P a + K h P h + K h 2 P h 2 {\displaystyle \eta _{lg}=\eta _{0}+K_{a}P_{a}+K_{h}P_{h}+K_{h2}P_{h}^{2}} The friction functions for light gases are simple: K a = B a 0 {\displaystyle K_{a}=B_{a0}} K h = B h 0 {\displaystyle K_{h}=B_{h0}} K h 2 = B h 20 T r 2 {\displaystyle K_{h2}={\frac {B_{h20}}{T_{r}^{2}}}} The FF-model for light gas is valid at low, normal, critical and supercritical conditions for these gases. Although the FF-model for the viscosity of dilute gas is recommended, any accurate viscosity model for dilute gas can also be used with good results. The unit equations for viscosity and temperature are [ η l g ] = μ P and [ T ] = K {\displaystyle \left[\eta _{lg}\right]=\mu P\quad {\text{and}}\quad \left[T\right]=K} == Transition state analogy == This article started with equations for dilute gas viscosity based on elementary kinetic theory and hard core (kinetic) theory, and then proceeded to selected theories (and models) that aim at modeling the viscosity of dense gases, dense fluids and supercritical fluids. Many or most of these theories were based on a philosophy of how gases behave, with molecules flying around, colliding with other molecules and exchanging (linear) momentum, thus creating viscosity. When the fluid becomes a liquid, these models start to deviate from measurements, because a small error in the molar volume calculated from the EOS corresponds to a large change in pressure, and vice versa, and thus also in viscosity. The article has now come to the other end, where theories (or models) are based on a philosophy of how a liquid behaves and gives rise to viscosity.
Since molecules in a liquid are much closer to each other, one may wonder how often a molecule in one sliding fluid surface finds a free volume in the neighboring sliding surface that is big enough for the molecule to jump into. This may be rephrased as: when does a molecule have enough energy in its fluctuating movements to squeeze into a small open volume in the neighboring sliding surface, similar to a molecule that collides with another molecule and locks into it in a chemical reaction, thus creating a new compound, as modeled in the transition state theory (TS theory and TS model). === Free volume theory === The free volume theory (short FV theory and FV model) originates from Doolittle (1951), who proposed that viscosity is related to the free volume fraction f ν {\displaystyle f_{\nu }} in a way that is analogous to the Arrhenius equation. The viscosity model of Doolittle (1951) is η = A exp ⁡ [ B f ν ] where f ν = V − b b {\displaystyle \eta =A\exp \left[{\frac {B}{f_{\nu }}}\right]\quad {\text{where}}\quad f_{\nu }={\frac {V-b}{b}}} where V {\displaystyle V} is the molar volume and b {\displaystyle b} is the molar hard core volume. There was, however, little activity on the FV theory until Allal et al. (1996, 2001a) proposed a relation between the free volume fraction and parameters (and/or variables) at the molecular level of the fluid (also called the microstructure of the fluid). The 1996 model started a period of high research activity in which different models were put forward. The surviving model was presented by Allal et al. (2001b), and this model will be displayed below. The viscosity model is composed of a dilute gas contribution η 0 {\displaystyle \eta _{0}} (or η d g {\displaystyle \eta _{dg}} ) and a dense-fluid contribution η d f {\displaystyle \eta _{df}} (or dense-state contribution η d s {\displaystyle \eta _{ds}} or Δ η {\displaystyle \Delta \eta } ). η = η 0 + η d f {\displaystyle \eta =\eta _{0}+\eta _{df}} Allal et al. (2001b) showed that the dense-fluid contribution to viscosity can be related to the friction coefficient ζ {\displaystyle \zeta } of the sliding fluid surface, and Dullien (1963) showed that the self-diffusion coefficient D {\displaystyle D} is related to the friction coefficient of an internal fluid surface. These two relations are shown here: η d f = ρ N A L p 2 ζ M and D = k B T ζ {\displaystyle \eta _{df}={\frac {\rho N_{A}L_{p}^{2}\zeta }{M}}\quad {\text{and}}\quad D={\frac {k_{B}T}{\zeta }}} By eliminating the friction coefficient ζ {\displaystyle \zeta } , Boned et al. (2004) expressed the characteristic length L p {\displaystyle L_{p}} as L p 2 = D M η d f ρ N A k B T = D M η d f ρ R T {\displaystyle L_{p}^{2}={\frac {DM\eta _{df}}{\rho N_{A}k_{B}T}}={\frac {DM\eta _{df}}{\rho RT}}} The right hand side corresponds to the so-called Dullien invariant, which was derived by Dullien (1963, 1972). A result from this is that the characteristic length L p {\displaystyle L_{p}} is interpreted as the average momentum transfer distance to a molecule that will enter a free volume site and collide with a neighboring molecule.
The friction coefficient ζ {\displaystyle \zeta } is modeled by Allal et al. (2001b) as ζ = ζ 0 exp ⁡ [ B f ν ] and ζ 0 = E N A L d ( M 3 R T ) 1 / 2 {\displaystyle \zeta =\zeta _{0}\exp \left[{\frac {B}{f_{\nu }}}\right]\quad {\text{and}}\quad \zeta _{0}={\frac {E}{N_{A}L_{d}}}\left({\frac {M}{3RT}}\right)^{1/2}} The free volume fraction is now related to the energy E by f ν = ( R T E ) 3 / 2 and E = E 0 + P V and E 0 = α ρ {\displaystyle f_{\nu }=\left({\frac {RT}{E}}\right)^{3/2}\quad {\text{and}}\quad E=E_{0}+PV\quad {\text{and}}\quad E_{0}=\alpha \rho } where ρ = M V {\displaystyle {\text{where}}\quad \rho ={\frac {M}{V}}} where E {\displaystyle E} is the total energy a molecule must use in order to diffuse into a vacant volume, and P V {\displaystyle PV} is connected to the work (or energy) necessary to form or expand a vacant volume available for diffusion of a molecule. The energy E 0 {\displaystyle E_{0}} is the barrier energy that the molecule must overcome in order to diffuse, and it is modeled as proportional to the mass density in order to improve the match to measured viscosity data. Note that the sensitive term V − b {\displaystyle V-b} in the denominator of Doolittle's (1951) model has disappeared, making the viscosity model of Allal et al. (2001b) more robust with respect to liquid molar volumes calculated by an imperfect EOS. The pre-exponential factor A is now a function and becomes A = L c ρ ( α ρ + P V ) 3 M R T where L c = L p 2 L d {\displaystyle A={\frac {L_{c}\rho (\alpha \rho +PV)}{\sqrt {3MRT}}}\quad {\text{where}}\quad L_{c}={\frac {L_{p}^{2}}{L_{d}}}} The viscosity model proposed by Allal et al. (2001b) is thus η = η 0 + A exp ⁡ [ B ( α ρ + P V R T ) 3 / 2 ] {\displaystyle \eta =\eta _{0}+A\exp \left[B\left({\frac {\alpha \rho +PV}{RT}}\right)^{3/2}\right]} A digression is that the self-diffusion coefficient of Boned et al. (2004) becomes D = R T L d α ρ + P V 3 R T M exp ⁡ [ − B ( α ρ + P V R T ) 3 / 2 ] {\displaystyle D={\frac {RTL_{d}}{\alpha \rho +PV}}{\sqrt {\frac {3RT}{M}}}\exp \left[-B\left({\frac {\alpha \rho +PV}{RT}}\right)^{3/2}\right]} Local nomenclature list: ==== Mixture ==== The mixture viscosity is η m i x = η 0 m i x + η d f m i x {\displaystyle \eta _{mix}=\eta _{0mix}+\eta _{dfmix}} The dilute gas viscosity η 0 {\displaystyle \eta _{0}} is taken from Chung et al. (1988), which is displayed in the section on SS theory. The dense fluid contribution to viscosity in FV theory is η d f m i x = L c m i x ρ e o s ( α m i x ρ e o s + P V e o s ) 3 R T M m i x exp ⁡ [ B m i x ( α m i x ρ e o s + P V e o s R T ) 3 / 2 ] {\displaystyle \eta _{dfmix}={\frac {L_{cmix}\rho _{eos}(\alpha _{mix}\rho _{eos}+PV_{eos})}{\sqrt {3RTM_{mix}}}}\exp {\left[B_{mix}\left({\frac {\alpha _{mix}\rho _{eos}+PV_{eos}}{RT}}\right)^{3/2}\right]}} where α , B , L c {\displaystyle \alpha {\text{,}}\,B{\text{,}}\,L_{c}\,} are three characteristic parameters of the fluid with respect to viscosity calculations. For fluid mixtures, these three parameters are calculated using mixing rules. If the self-diffusion coefficient is included in the governing equations, probably via the diffusion equation, the use of four characteristic parameters (i.e. use of Lp and Ld instead of Lc) will give a consistent flow model, but flow studies that involve the diffusion equation belong to a small class of special studies. The unit for the viscosity is [Pa·s] when all other units are kept in SI units. ==== Mixing rules ==== At the end of the intensive research period, Allal et al.
==== Mixing rules ==== At the end of the intensive research period, Allal et al. (2001c) and Canet (2001) proposed two different sets of mixing rules, and according to Almasi (2015) there has been no agreement in the literature about which are the best mixing rules. Almasi (2015) therefore recommended the classic linear mole-weighted mixing rules, which are displayed below for a mixture of N fluid components. M m i x = M n = ∑ i = 1 N z i M i {\displaystyle M_{mix}=M_{n}=\sum _{i=1}^{N}z_{i}M_{i}} α m i x = ∑ i = 1 N z i α i {\displaystyle \alpha _{mix}=\sum _{i=1}^{N}z_{i}\alpha _{i}} B m i x = ∑ i = 1 N z i B i {\displaystyle B_{mix}=\sum _{i=1}^{N}z_{i}B_{i}} L c m i x = ∑ i = 1 N z i L c i {\displaystyle L_{cmix}=\sum _{i=1}^{N}z_{i}L_{ci}} The three characteristic viscosity parameters α i , B i , L c i {\displaystyle \,\alpha _{i},\,B_{i},L_{ci}\,} are usually established by optimizing the viscosity formula against measured viscosity data for pure fluids (i.e. single component fluids). ==== Trend functions ==== As noted above, the three characteristic viscosity parameters α , B , L c {\displaystyle \,\alpha _{},\,B_{},L_{c}\,} are established by optimizing the viscosity formula against measured viscosity data for pure fluids (i.e. single component fluids). Data for these parameters can then be stored in databases together with data for other chemical and physical material properties and information. This becomes more common as use of the equation becomes widespread. Hydrocarbon molecules form a large group of molecules with several subgroups, each of which contains molecules of the same basic structure but with different chain lengths. The alkanes are the simplest of these groups. A material property of molecules in such a group normally shows up as a function when plotted against another material property. A mathematical function is then selected based on physical/chemical knowledge, experience and intuition, and the empirical parameters (i.e. constants) in the function are determined by curve fitting. Such a function is called a trend or trend function, and the group of molecule types is called a homologous series. Llovell et al. (2013a, 2013b) proposed trend functions for the three FV parameters α , B , L c {\displaystyle \alpha ,\,B,\,L_{c}\,} for alkanes. Oliveira et al. (2014) proposed trend functions for the FV parameters for fatty acid methyl esters (FAME) and fatty acid ethyl esters (FAEE), both including compounds with up to three unsaturated bonds, which are displayed below. α = a 0 + a 1 M {\displaystyle \alpha =a_{0}+a_{1}M} B = b 0 + b 1 M + b 2 M 2 {\displaystyle B=b_{0}+b_{1}M+b_{2}M^{2}} L c = c 0 + c 1 M {\displaystyle L_{c}=c_{0}+c_{1}M} The molar mass M [g/mol] (or molecular mass/weight) associated with the parameters used in the curve-fitting process (where a i {\displaystyle a_{i}} , b i {\displaystyle b_{i}} , and c i {\displaystyle c_{i}} are empirical parameters) corresponds to carbon numbers in the range 8-24 for FAME and 8-20 for FAEE.
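To illustrate how the molar-mass trend functions and the linear mole-weighted mixing rules fit together in practice, here is a short Python sketch. The coefficient values and component data are invented placeholders; only the functional forms follow the equations above.
# Sketch of the trend functions alpha(M), B(M), Lc(M) and the linear
# mole-weighted mixing rules for the FV parameters. All coefficients and
# component data below are invented placeholders for illustration only.
def trend_parameters(M, a, b, c):
    """Return (alpha, B, Lc) from molar mass M [g/mol] and trend coefficients."""
    alpha = a[0] + a[1] * M
    B = b[0] + b[1] * M + b[2] * M ** 2
    Lc = c[0] + c[1] * M
    return alpha, B, Lc

def mole_weighted_mixture(z, M, alpha, B, Lc):
    """Linear mole-fraction-weighted values of M, alpha, B and Lc for a mixture."""
    mix = lambda prop: sum(zi * pi for zi, pi in zip(z, prop))
    return mix(M), mix(alpha), mix(B), mix(Lc)

# Hypothetical two-component example (mole fractions sum to 1):
z = [0.4, 0.6]
M = [100.0, 170.0]  # g/mol, placeholder molar masses
pars = [trend_parameters(m, a=(10.0, 0.5), b=(1.0e-3, 1.0e-5, 1.0e-8), c=(1.0e-10, 1.0e-12)) for m in M]
alpha, B, Lc = zip(*pars)
print(mole_weighted_mixture(z, M, alpha, B, Lc))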
=== Significant structure theory === Viscosity models based on significant structure theory, a designation originating from Eyring (short SS theory and SS model), have evolved in a development relay during the first two decades of the 2000s. It started with Macías-Salinas et al. (2003), continued with a significant contribution from Cruz-Reyes et al. (2005), and was followed by a third stage of development by Macías-Salinas et al. (2013), whose model is displayed here. The SS theories have three basic assumptions: A liquid behaves similarly to a solid in many respects, e.g. there is a sensitive relation between molar volume (or mass density) and pressure, and the positions of and distances between molecules resemble a quasi-lattice with "fluidized vacancies" of molecular size distributed randomly throughout the quasi-lattice. The vacancies are assumed to have molecular size and to move freely throughout the quasi-lattice structure. The fluid viscosity is calculated from two components, a gas-like and a solid-like contribution, and both contributions contain all molecule types occurring in the fluid phase. A molecule that jumps from one sliding surface to a vacant site in the neighboring surface is said to display gas-like behavior. A molecule that remains on its site in the sliding surface for some time is said to display solid-like behavior. Collisions between molecules from neighboring layers are equivalent to molecules jumping to vacant sites, and these events within viscosity modeling are analogous to chemical reactions between colliding molecules within TS theory. The fraction of gas-like molecules X g l {\displaystyle X_{gl}} and solid-like molecules X s l {\displaystyle X_{sl}} are X g l = ( V − V s ) / V and X s l = V s / V and V s ≈ b {\displaystyle X_{gl}=(V-V_{s})/V\quad {\text{and}}\quad X_{sl}=V_{s}/V\quad {\text{and}}\quad V_{s}\approx b} where V {\displaystyle V} is the molar volume of the phase in question, V s {\displaystyle V_{s}} is the molar volume of solid-like molecules and b {\displaystyle b} is the molar hard core volume. The viscosity of the fluid is a mixture of these two classes of molecules η = X g l η g l + X s l η s l {\displaystyle \eta =X_{gl}\eta _{gl}+X_{sl}\eta _{sl}} ==== Gas-like contribution ==== The gas-like viscosity contribution is taken from the viscosity model of Chung et al. (1984, 1988), which is based on the Chapman–Enskog (1964) kinetic theory of viscosity for dilute gases and the empirical expression of Neufeld et al. (1972) for the reduced collision integral, but expanded empirically to handle polyatomic, polar and hydrogen-bonding fluids over a wide temperature range. The viscosity model of Chung et al. (1988) is η g l = 40.785 M T ∗ V c 2 / 3 Ω ∗ ∗ F c with unit equation [ η g l ] = μ P {\displaystyle \eta _{gl}=40.785{\frac {\sqrt {MT^{*}}}{V_{c}^{2/3}\Omega ^{*}}}*F_{c}\quad {\text{with unit equation}}\quad [\eta _{gl}]=\mu P} Ω ∗ = 1.16145 ( T ∗ ) 0.14874 + 0.52487 e x p ( 0.7732 T ∗ ) + 2.16178 e x p ( 2.43787 T ∗ ) − 6.435 × 10 − 4 ( T ∗ ) 0.14874 ∗ s i n [ 18.0323 ( T ∗ ) 0.7683 − 7.27371 ] {\displaystyle \Omega ^{*}={\frac {1.16145}{(T^{*})^{0.14874}}}+{\frac {0.52487}{exp(0.7732T^{*})}}+{\frac {2.16178}{exp(2.43787T^{*})}}-6.435\times 10^{-4}(T^{*})^{0.14874}*sin\left[18.0323(T^{*})^{0.7683}-7.27371\right]} where T ∗ = 1.2593 ∗ T / T c and F c = 1 − 0.2756 ω + 0.059035 μ r 4 + κ {\displaystyle T^{*}=1.2593*T/T_{c}\quad {\text{and}}\quad F_{c}=1-0.2756\omega +0.059035\mu _{r}^{4}+\kappa } Local nomenclature list:
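Since the gas-like contribution is given by closed-form expressions, it can be transcribed directly into code. The Python sketch below simply encodes the formulas as displayed above (so any transcription issue in the displayed expressions carries over); the unit equation gives the result in microPoise, and the critical constants in the example call are approximate methane values included only for illustration.
# Direct transcription of the displayed Chung et al. (1988) gas-like viscosity
# expressions. Result in microPoise per the unit equation above; inputs assumed
# as M [g/mol], T and Tc [K], Vc [cm^3/mol]; omega, mu_r, kappa dimensionless.
import math

def omega_star(T_star):
    """Reduced collision integral Omega* as written in the text above."""
    return (1.16145 / T_star**0.14874
            + 0.52487 / math.exp(0.7732 * T_star)
            + 2.16178 / math.exp(2.43787 * T_star)
            - 6.435e-4 * T_star**0.14874 * math.sin(18.0323 * T_star**0.7683 - 7.27371))

def eta_gas_like(T, Tc, M, Vc, omega, mu_r, kappa):
    """Gas-like viscosity [microPoise] from the displayed correlation."""
    T_star = 1.2593 * T / Tc
    Fc = 1.0 - 0.2756 * omega + 0.059035 * mu_r**4 + kappa
    return 40.785 * math.sqrt(M * T_star) / (Vc**(2.0 / 3.0) * omega_star(T_star)) * Fc

# Illustrative call with rough methane-like constants (assumed values):
print(eta_gas_like(T=300.0, Tc=190.6, M=16.04, Vc=98.6, omega=0.011, mu_r=0.0, kappa=0.0))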
==== Solid-like contribution ==== In the 2000s, the development of the solid-like viscosity contribution started with Macías-Salinas et al. (2003), who used the Eyring equation from TS theory as an analogue of the solid-like viscosity contribution, and as a generalization of the first exponential liquid viscosity model proposed by Reynolds (1886). The Eyring equation models irreversible chemical reactions at constant pressure, and the equation therefore uses the Gibbs activation energy, Δ G ‡ {\displaystyle \Delta G^{\ddagger }} , to model the transition state energy that the system uses to move matter (i.e. separate molecules) from the initial state to the final state (i.e. the new compound). In Couette flow, the system moves matter from one sliding surface to another due to fluctuating internal energy, and probably also due to pressure and the pressure gradient. In addition, the pressure effect on viscosity is somewhat different for systems in a medium pressure range than for systems in a very high pressure range. Cruz-Reyes et al. (2005) use the Helmholtz energy (F = U − TS = G − PV) as the potential in the exponential function. This gives η s l = A ∗ e x p [ − Δ G ‡ − P V R T ] {\displaystyle \eta _{sl}=A*exp{\left[-{\frac {\Delta G^{\ddagger }-PV}{RT}}\right]}} Cruz-Reyes et al. (2005) state that the Gibbs activation energy is negatively proportional to the internal energy of vaporization (and thus calculated at a point on the freezing curve), but Macías-Salinas et al. (2013) change that to the residual internal energy, Δ U r {\displaystyle \Delta U^{r}} , at the general pressure and temperature of the system. One could alternatively use the grand potential ( Ω {\displaystyle \Omega } = U − TS − G = −PV, sometimes called Landau energy or potential) in the exponential function and argue that the Couette flow is not a homogeneous system, such that a term with the residual internal energy must be added. Both arguments give the proposed solid-like contribution, which is η s l = A ∗ e x p [ − α Δ U r − P V R T ] = A ∗ e x p [ − α Δ U r R T + Z ] {\displaystyle \eta _{sl}=A*exp\left[-{\frac {\alpha \Delta U^{r}-PV}{RT}}\right]=A*exp\left[-{\frac {\alpha \Delta U^{r}}{RT}}+Z\right]} The pre-exponential factor A {\displaystyle A} is taken as A = R T V − b ∗ 1 ν {\displaystyle A={\frac {RT}{V-b}}*{\frac {1}{\nu }}} The jumping frequency of a molecule that jumps from its initial position to a vacant site, ν {\displaystyle \nu } , is made dependent on the number of vacancies, X g l {\displaystyle X_{gl}} , and on pressure in order to extend the applicability of η s l {\displaystyle \eta _{sl}} to much wider ranges of temperature and pressure than a constant jumping frequency would allow. The final jumping frequency model is ν = X g l − 1 ∗ 10 12 ( ν 0 + ν 1 P ) = V V − b ∗ 10 12 ( ν 0 + ν 1 P ) {\displaystyle \nu =X_{gl}^{-1}*10^{12}\left(\nu _{0}+\nu _{1}P\right)={\frac {V}{V-b}}*10^{12}\left(\nu _{0}+\nu _{1}P\right)} A recurrent problem for viscosity models is the calculation of the liquid molar volume at a given pressure using an EOS that is not perfect. This calls for the introduction of some empirical parameters. The use of adjustable proportionality factors for both the residual internal energy and the Z-factor is a natural choice. The sensitivity of P versus V − b values for liquids makes it natural to introduce an empirical exponent (power) on the dimensionless Z-factor. The empirical power turns out to be very effective in the high-pressure (high Z-factor) region.
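Combining the pre-exponential factor and the jumping frequency above gives a compact expression for the solid-like term. The Python sketch below does exactly that, and it already includes the empirical factor exp(beta0 * Z**beta1) of the final model displayed next; SI units are assumed and every number in the example call is a placeholder, not a fitted parameter.
# Sketch of the solid-like viscosity term built from the pieces above:
#   eta_sl = RT/(V - b) * 1/nu * exp[-(alpha*dU_r)/(R*T)] * exp(beta0 * Z**beta1)
# with the jumping frequency nu = V/(V - b) * 1e12 * (nu0 + nu1*P).
import math

R = 8.314462618  # [J/(mol K)]

def eta_solid_like(T, P, V, b, dU_r, alpha, nu0, nu1, beta0, beta1):
    """Solid-like viscosity contribution [Pa s] (illustrative sketch)."""
    Z = P * V / (R * T)                          # compressibility factor
    nu = V / (V - b) * 1.0e12 * (nu0 + nu1 * P)  # jumping frequency [1/s]
    A = R * T / (V - b) / nu                     # pre-exponential factor
    return A * math.exp(-alpha * dU_r / (R * T)) * math.exp(beta0 * Z**beta1)

# Example call with made-up values for a generic compressed liquid:
print(eta_solid_like(T=300.0, P=1.0e7, V=1.1e-4, b=9.0e-5, dU_r=-2.0e4,
                     alpha=0.5, nu0=1.0, nu1=1.0e-9, beta0=0.1, beta1=1.0))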
The solid-like viscosity contribution proposed by Macías-Salinas et al. (2013) is then η s l = R T V ∗ 1 10 12 ( ν 0 + ν 1 P ) ∗ e x p [ − α Δ U r R T ] ∗ e x p [ β 0 Z β 1 ] {\displaystyle \eta _{sl}={\frac {RT}{V}}*{\frac {1}{10^{12}\left(\nu _{0}+\nu _{1}P\right)}}*exp\left[-\alpha {\frac {\Delta U^{r}}{RT}}\right]*exp\left[\beta _{0}Z^{\beta _{1}}\right]} Local nomenclature list: ==== Mixture ==== η m i x = V m i x − b m i x V m i x ∗ η g l m i x + b m i x V m i x ∗ η s l m i x {\displaystyle \eta _{mix}={\frac {V_{mix}-b_{mix}}{V_{mix}}}*\eta _{gl}^{mix}+{\frac {b_{mix}}{V_{mix}}}*\eta _{sl}^{mix}} η g l m i x = F ( T c m i x , M c m i x , V c m i x , ω m i x , μ r m i x ; T ) {\displaystyle \eta _{gl}^{mix}=F(T_{cmix},M_{cmix},V_{cmix},\omega _{mix},\mu _{rmix};T)} η s l m i x = F ( V m i x , Δ U m i x r , Z m i x ; P , T ) {\displaystyle \eta _{sl}^{mix}=F(V_{mix},\Delta U_{mix}^{r},Z_{mix};P,T)} In order to clarify the mathematical statements above, the solid-like contribution for a fluid mixture is displayed in more detail below. η s l m i x = R T V m i x ∗ 1 10 12 ( ν 0 + ν 1 P ) ∗ e x p [ − α Δ U m i x r R T ] ∗ e x p [ β 0 Z m i x β 1 ] {\displaystyle \eta _{sl}^{mix}={\frac {RT}{V_{mix}}}*{\frac {1}{10^{12}\left(\nu _{0}+\nu _{1}P\right)}}*exp\left[-\alpha {\frac {\Delta U_{mix}^{r}}{RT}}\right]*exp\left[\beta _{0}Z_{mix}^{\beta _{1}}\right]} ==== Mixing rules ==== The variables V m i x , Δ U m i x r , Z m i x {\displaystyle V_{mix},\Delta U_{mix}^{r},Z_{mix}} and all EOS parameters for a fluid mixture are taken from the EOS (cf. W below) and the mixing rules used by the EOS (cf. Q below). More details on this are displayed below. A fluid of n moles in the single-phase region, where the total fluid composition is z {\displaystyle \mathbf {z} } [mole fractions]: Q m i x = Q e o s ( z ) and W m i x = W e o s ( P , T , z ) {\displaystyle Q_{mix}=Q_{eos}(\mathbf {z} )\quad {\text{and}}\quad W_{mix}=W_{eos}(P,T,\mathbf {z} )} Gas phase of ng moles in the two-phase region, where the gas composition is y {\displaystyle \mathbf {y} } [mole fractions]: Q m i x = Q e o s ( y ) and W m i x = W e o s ( P , T , y ) {\displaystyle Q_{mix}=Q_{eos}(\mathbf {y} )\quad {\text{and}}\quad W_{mix}=W_{eos}(P,T,\mathbf {y} )} Liquid phase of nl moles in the two-phase region, where the liquid composition is x {\displaystyle \mathbf {x} } [mole fractions]: Q m i x = Q e o s ( x ) and W m i x = W e o s ( P , T , x ) {\displaystyle Q_{mix}=Q_{eos}(\mathbf {x} )\quad {\text{and}}\quad W_{mix}=W_{eos}(P,T,\mathbf {x} )} where n = n l + n g and n z i = n l x i + n g y i and i = 1 , … , N {\displaystyle n=n_{l}+n_{g}\quad {\text{and}}\quad nz_{i}=n_{l}x_{i}+n_{g}y_{i}\quad {\text{and}}\quad i=1,\ldots ,N} Q = T c , M , V c , ω , b and W = V , Δ U r , Z {\displaystyle Q=T_{c},M,V_{c},\omega ,b\quad {\text{and}}\quad W=V,\Delta U^{r},Z} Since nearly all input to this viscosity model is provided by the EOS and the equilibrium calculations, this SS model (or TS model) for viscosity should be very simple to use for fluid mixtures. The viscosity model also has some empirical parameters that can be used as tuning parameters to compensate for imperfect EOS models and to secure high accuracy for fluid mixtures as well. == See also == == References ==
Wikipedia/Viscosity_models_for_mixtures
In metaphysics, a universal is what particular things have in common, namely characteristics or qualities. In other words, universals are repeatable or recurrent entities that can be instantiated or exemplified by many particular things. For example, suppose there are two chairs in a room, each of which is green. These two chairs share the quality of "chairness", as well as "greenness" or the quality of being green; in other words, they share two "universals". There are three major kinds of qualities or characteristics: types or kinds (e.g. mammal), properties (e.g. short, strong), and relations (e.g. father of, next to). These are all different types of universals. Paradigmatically, universals are abstract (e.g. humanity), whereas particulars are concrete (e.g. the personhood of Socrates). However, universals are not necessarily abstract and particulars are not necessarily concrete. For example, one might hold that numbers are particular yet abstract objects. Likewise, some philosophers, such as D. M. Armstrong, consider universals to be concrete. Most do not consider classes to be universals, although some prominent philosophers do, such as John Bigelow. == Problem of universals == The problem of universals is an ancient problem in metaphysics on the existence of universals. The problem arises from attempts to account for the phenomenon of similarity or attribute agreement among things. For example, grass and Granny Smith apples are similar or agree in attribute, namely in having the attribute of greenness. The issue is how to account for this sort of agreement in attribute among things. There are many philosophical positions regarding universals. Taking "beauty" as an example, four positions are: Idealism: beauty is a property constructed in the mind, so it exists only in descriptions of things. Platonic extreme realism: beauty is a property that exists in an ideal form independently of any mind or thing. Aristotelian moderate realism or conceptualism: beauty is a property of things (fundamentum in re) that the mind abstracts from these beautiful things. Nominalism: there are no universals, only individuals. Taking a broader view, the main positions are generally considered classifiable as: extreme realism, nominalism (sometimes simply named "anti-realism" with regard to universals), moderate realism, and idealism. Extreme Realists posit the existence of independent, abstract universals to account for attribute agreement. Nominalists deny that universals exist, claiming that they are not necessary to explain attribute agreement. Conceptualists posit that universals exist only in the mind, or when conceptualized, denying the independent existence of universals, but accepting they have a fundamentum in re. Complications which arise include the implications of language use and the complexity of relating language to ontology. == Particular == A universal may have instances, known as its particulars. For example, the type dog (or doghood) is a universal, as are the property red (or redness) and the relation betweenness (or being between). Any particular dog, red thing, or object that is between other things is not a universal, however, but is an instance of a universal. That is, a universal type (doghood), property (redness), or relation (betweenness) inheres in a particular object (a specific dog, red thing, or object between other things). 
== Platonic realism == Platonic realism holds universals to be the referents of general terms, that is, the abstract, nonphysical, non-mental entities to which words such as "sameness", "circularity", and "beauty" refer. Particulars are the referents of proper names, such as "Phaedo", or of definite descriptions that identify single objects, such as the phrase "that person over there". Other metaphysical theories may use the terminology of universals to describe physical entities. Plato's examples of what we might today call universals included mathematical and geometrical ideas such as the circle and the natural numbers. Plato's views on universals did, however, vary across several different discussions. In some cases, Plato spoke as if the perfect circle functioned as the form or blueprint for all copies and for the definition of the word circle. In other discussions, Plato describes particulars as "participating" in the associated universal. Contemporary realists agree with the thesis that universals are multiply-exemplifiable entities. Examples include D. M. Armstrong, Nicholas Wolterstorff, Reinhardt Grossmann, and Michael Loux. == Nominalism == Nominalists hold that universals are not real mind-independent entities but either merely concepts (sometimes called "conceptualism") or merely names. Nominalists typically argue that properties are abstract particulars (like tropes) rather than universals. J. P. Moreland distinguishes between "extreme" and "moderate" nominalism. Examples of nominalists include Buddhist logicians and apoha theorists, the medieval philosophers Roscelin of Compiègne and William of Ockham, and the contemporary philosophers W. V. O. Quine, Wilfrid Sellars, D. C. Williams, and Keith Campbell. == Ness-ity-hood principle == The ness-ity-hood principle is used mainly by English-speaking philosophers to generate convenient, concise names for universals or properties. According to the ness-ity-hood principle, a name for any universal may be formed by taking the name of the predicate and adding the suffix "ness", "ity", or "hood". For example, the universal that is distinctive of left-handers may be formed by taking the predicate "left-handed" and adding "ness", which yields the name "left-handedness". The principle is most helpful in cases where there is no established or standard name for the universal in ordinary English usage: What is the name of the universal distinctive of chairs? "Chair" in English is used not only as a subject (as in "The chair is broken"), but also as a predicate (as in "That is a chair"). So to generate a name for the universal distinctive of chairs, take the predicate "chair" and add "ness", which yields "chairness". == See also == == Notes == == References == Feldman, Fred (2005). "The Open Question Argument: What It Isn't; and What It Is", Philosophical Issues 15, Normativity. Loux, Michael J. (1998). Metaphysics: A Contemporary Introduction, N.Y.: Routledge. Loux, Michael J. (2001). "The Problem of Universals" in Metaphysics: Contemporary Readings, Michael J. Loux (ed.), N.Y.: Routledge, pp. 3–13. MacLeod, M. & Rubenstein, E. (2006). "Universals", The Internet Encyclopedia of Philosophy, J. Fieser & B. Dowden (eds.). (link) Moreland, J. P. (2001). Universals, McGill-Queen's University Press/Acumen. Price, H. H. (1953). "Universals and Resemblance", Ch. 1 of Thinking and Experience, Hutchinson's University Library. Rodriguez-Pereyra, Gonzalo (2008). "Nominalism in Metaphysics", The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.).
(link) == Further reading == Aristotle, Categories (link) Aristotle, Metaphysics (link) Armstrong, D. M. (1989). Universals: An Opinionated Introduction, Westview Press. (link) Bolton, M. (1998). "Universals, Essences, and Abstract Entities", in D. Garber & M. Ayers (eds.), The Cambridge History of Seventeenth-Century Philosophy, Cambridge: Cambridge University Press, vol. I, pp. 178–211. Lewis, D. (1983). "New work for a theory of universals", Australasian Journal of Philosophy, Vol. 61, No. 4. Libera, Alain de (2005). Der Universalienstreit. Von Platon bis zum Ende des Mittelalters, München: Wilhelm Fink Verlag. Plato, Phaedo (link) Plato, Republic (esp. books V, VI, VII and X) (link) Plato, Parmenides (link) Plato, Sophist (link) Quine, W. V. O. (1961). "On What There Is", in From a Logical Point of View, 2nd ed., N.Y.: Harper and Row. Russell, Bertrand (1912). "The World of Universals", in The Problems of Philosophy, Oxford University Press. Russell, Bertrand (1912b). "On the Relation of Universals and Particulars" (link) Swoyer, Chris (2000). "Properties", The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.). (link) Williams, D. C. (1953). "On the Elements of Being", Review of Metaphysics, vol. 17. (link) == External links == Chrysippus – Stanford Encyclopedia of Philosophy Chrysippus – Internet Encyclopedia of Philosophy
Wikipedia/Universal_(metaphysics)