In special relativity, four-momentum (also called momentum–energy or momenergy) is the generalization of the classical three-dimensional momentum to four-dimensional spacetime. Momentum is a vector in three dimensions; similarly, four-momentum is a four-vector in spacetime. The contravariant four-momentum of a particle with relativistic energy E and three-momentum p = (px, py, pz) = γmv, where v is the particle's three-velocity and γ the Lorentz factor, is {\displaystyle p=\left(p^{0},p^{1},p^{2},p^{3}\right)=\left({\frac {E}{c}},p_{x},p_{y},p_{z}\right).} The quantity mv above is the ordinary non-relativistic momentum of the particle and m its rest mass. The four-momentum is useful in relativistic calculations because it is a Lorentz covariant vector, which makes it easy to keep track of how it transforms under Lorentz transformations.

== Minkowski norm ==

Calculating the Minkowski norm squared of the four-momentum gives a Lorentz invariant quantity equal (up to a sign and factors of the speed of light c) to the square of the particle's proper mass: {\displaystyle p\cdot p=\eta _{\mu \nu }p^{\mu }p^{\nu }=p_{\nu }p^{\nu }=-{E^{2} \over c^{2}}+|\mathbf {p} |^{2}=-m^{2}c^{2},} where p is the four-momentum vector of the particle, p ⋅ p the Minkowski inner product of the four-momentum with itself, pμ and pν the contravariant components of the four-momentum, pν the covariant form, E the energy of the particle, c the speed of light, |p| the magnitude of the three-momentum, m the invariant (rest) mass of the particle, and {\displaystyle \eta _{\mu \nu }={\begin{pmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}}} the metric tensor of special relativity, with metric signature chosen for definiteness to be (−1, 1, 1, 1). The negativity of the norm reflects that the momentum is a timelike four-vector for massive particles. The other choice of signature would flip signs in certain formulas (like the norm here). This choice is not important, but once made it must be kept throughout for consistency. The Minkowski norm is Lorentz invariant, meaning its value is not changed by Lorentz transformations/boosts into different frames of reference. More generally, for any two four-momenta p and q, the quantity p ⋅ q is invariant.

== Relation to four-velocity ==

For a massive particle, the four-momentum is given by the particle's invariant mass m multiplied by the particle's four-velocity, {\displaystyle p^{\mu }=mu^{\mu },} where the four-velocity u is {\displaystyle u=\left(u^{0},u^{1},u^{2},u^{3}\right)=\gamma _{v}\left(c,v_{x},v_{y},v_{z}\right),} {\displaystyle \gamma _{v}:={\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}} is the Lorentz factor associated with the speed v, and c is the speed of light.

== Derivation ==

There are several ways to arrive at the correct expression for four-momentum. One way is to first define the four-velocity u = dx/dτ and simply define p = mu, being content that it is a four-vector with the correct units and correct behavior.
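This first definition is easy to check numerically. The following sketch in Python (the particle mass and velocity are arbitrary illustrative values, not taken from the text) builds p = mu from m and v and verifies that its Minkowski norm squared is −m²c², as required by the norm computed above.

```python
import math

c = 299_792_458.0  # speed of light, m/s


def four_momentum(m, v):
    """Contravariant four-momentum (E/c, px, py, pz) of a particle
    with rest mass m (kg) and three-velocity v = (vx, vy, vz) in m/s."""
    vx, vy, vz = v
    gamma = 1.0 / math.sqrt(1.0 - (vx**2 + vy**2 + vz**2) / c**2)  # Lorentz factor
    E = gamma * m * c**2                                           # relativistic energy
    return (E / c, gamma * m * vx, gamma * m * vy, gamma * m * vz)


def minkowski_norm_sq(p):
    """p . p using the metric signature (-1, 1, 1, 1)."""
    p0, px, py, pz = p
    return -p0**2 + px**2 + py**2 + pz**2


m = 9.109e-31                # electron rest mass in kg (illustrative choice)
v = (0.6 * c, 0.0, 0.0)      # three-velocity (illustrative choice)
p = four_momentum(m, v)

print(minkowski_norm_sq(p))  # agrees with -(m*c)**2 up to rounding
print(-(m * c) ** 2)
```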
Another, more satisfactory, approach is to begin with the principle of least action and use the Lagrangian framework to derive the four-momentum, including the expression for the energy. One may at once, using the observations detailed below, define four-momentum from the action S. Given that in general, for a closed system with generalized coordinates qi and canonical momenta pi, {\displaystyle p_{i}={\frac {\partial S}{\partial q_{i}}}={\frac {\partial S}{\partial x_{i}}},\quad E=-{\frac {\partial S}{\partial t}}=-c\cdot {\frac {\partial S}{\partial x^{0}}},} it is immediate (recalling x0 = ct, x1 = x, x2 = y, x3 = z and x0 = −x0, x1 = x1, x2 = x2, x3 = x3 in the present metric convention) that {\displaystyle p_{\mu }={\frac {\partial S}{\partial x^{\mu }}}=\left(-{E \over c},\mathbf {p} \right)} is a covariant four-vector with the three-vector part being the canonical momentum. The action S is given by {\displaystyle S=-mc\int ds=\int L\,dt,\quad L=-mc^{2}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}},} where L is the relativistic Lagrangian for a free particle. From this, {\displaystyle \delta S=\left[-mu_{\mu }\delta x^{\mu }\right]_{t_{1}}^{t_{2}}+m\int _{t_{1}}^{t_{2}}\delta x^{\mu }{\frac {du_{\mu }}{ds}}ds=-mu_{\mu }\delta x^{\mu }={\frac {\partial S}{\partial x^{\mu }}}\delta x^{\mu }=-p_{\mu }\delta x^{\mu },} where the second step employs the equations of motion duμ/ds = 0, (δxμ)t1 = 0, and (δxμ)t2 ≡ δxμ as in the observations above. Now compare the last three expressions to find {\displaystyle p^{\mu }=\partial ^{\mu }[S]={\frac {\partial S}{\partial x_{\mu }}}=mu^{\mu }=m\left({\frac {c}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}},{\frac {v_{x}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}},{\frac {v_{y}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}},{\frac {v_{z}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}\right),} with norm −m²c², and the famed result for the relativistic energy, {\displaystyle E={\frac {mc^{2}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}=m_{r}c^{2},} where mr is the now unfashionable relativistic mass, follows. By comparing the expressions for momentum and energy directly, one has {\displaystyle \mathbf {p} ={\frac {E}{c^{2}}}\mathbf {v} ,} which holds for massless particles as well. Squaring the expressions for energy and three-momentum and relating them gives the energy–momentum relation, {\displaystyle {\frac {E^{2}}{c^{2}}}=|\mathbf {p} |^{2}+m^{2}c^{2}.} Substituting {\displaystyle p_{\mu }\leftrightarrow -{\frac {\partial S}{\partial x^{\mu }}}} in the equation for the norm gives the relativistic Hamilton–Jacobi equation, {\displaystyle \eta ^{\mu \nu }{\frac {\partial S}{\partial x^{\mu }}}{\frac {\partial S}{\partial x^{\nu }}}=-m^{2}c^{2}.} It is also possible to derive the results from the Lagrangian directly. By definition, {\displaystyle {\begin{aligned}\mathbf {p} &={\frac {\partial L}{\partial \mathbf {v} }}=\left({\partial L \over \partial {\dot {x}}},{\partial L \over \partial {\dot {y}}},{\partial L \over \partial {\dot {z}}}\right)=m(\gamma v_{x},\gamma v_{y},\gamma v_{z})=m\gamma \mathbf {v} =m\mathbf {u} ,\\[3pt]E&=\mathbf {p} \cdot \mathbf {v} -L={\frac {mc^{2}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}},\end{aligned}}} which constitute the standard formulae for the canonical momentum and energy of a closed (time-independent Lagrangian) system.
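As a minimal numerical check of the last two formulae (a sketch in Python; the mass, speed, and finite-difference step are arbitrary assumptions chosen for illustration), differentiating the free-particle Lagrangian gives the canonical momentum, and the Legendre transform E = p·v − L gives the energy; both reproduce γmv and γmc².

```python
import math

c = 299_792_458.0   # speed of light, m/s


def lagrangian(m, speed):
    """Relativistic free-particle Lagrangian L = -m c^2 sqrt(1 - v^2/c^2)."""
    return -m * c**2 * math.sqrt(1.0 - speed**2 / c**2)


m = 1.0             # rest mass in kg (illustrative)
v = 0.8 * c         # speed (illustrative)
h = 1.0e3           # finite-difference step in m/s for dL/dv

p = (lagrangian(m, v + h) - lagrangian(m, v - h)) / (2 * h)   # canonical momentum dL/dv
E = p * v - lagrangian(m, v)                                  # Legendre transform

gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)
print(p, gamma * m * v)        # should agree to several digits
print(E, gamma * m * c**2)     # should agree to several digits
```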
With this approach it is less clear that the energy and momentum are parts of a four-vector. The energy and the three-momentum are separately conserved quantities for isolated systems in the Lagrangian framework. Hence four-momentum is conserved as well. More on this below.

More pedestrian approaches include expected behavior in electrodynamics. In this approach, the starting point is application of the Lorentz force law and Newton's second law in the rest frame of the particle. The transformation properties of the electromagnetic field tensor, including invariance of electric charge, are then used to transform to the lab frame, and the resulting expression (again the Lorentz force law) is interpreted in the spirit of Newton's second law, leading to the correct expression for the relativistic three-momentum. The disadvantage, of course, is that it isn't immediately clear that the result applies to all particles, whether charged or not, and that it doesn't yield the complete four-vector. It is also possible to avoid electromagnetism and use well-tuned thought experiments involving well-trained physicists throwing billiard balls, utilizing knowledge of the velocity addition formula and assuming conservation of momentum. This too gives only the three-vector part.

== Conservation of four-momentum ==

As shown above, there are three conservation laws (not independent; the last two imply the first and vice versa): the four-momentum p (either covariant or contravariant) is conserved; the total energy E = p0c is conserved; and the 3-space momentum {\displaystyle \mathbf {p} =\left(p^{1},p^{2},p^{3}\right)} is conserved (not to be confused with the classic non-relativistic momentum mv). Note that the invariant mass of a system of particles may be more than the sum of the particles' rest masses, since kinetic energy in the system center-of-mass frame and potential energy from forces between the particles contribute to the invariant mass. As an example, two particles with four-momenta (5 GeV/c, 4 GeV/c, 0, 0) and (5 GeV/c, −4 GeV/c, 0, 0) each have (rest) mass 3 GeV/c² separately, but their total mass (the system mass) is 10 GeV/c². If these particles were to collide and stick, the mass of the composite object would be 10 GeV/c².

One practical application from particle physics of the conservation of the invariant mass involves combining the four-momenta pA and pB of two daughter particles produced in the decay of a heavier particle with four-momentum pC to find the mass of the heavier particle. Conservation of four-momentum gives pCμ = pAμ + pBμ, while the mass M of the heavier particle is given by −pC ⋅ pC = M²c². By measuring the energies and three-momenta of the daughter particles, one can reconstruct the invariant mass of the two-particle system, which must be equal to M. This technique is used, e.g., in experimental searches for Z′ bosons at high-energy particle colliders, where the Z′ boson would show up as a bump in the invariant mass spectrum of electron–positron or muon–antimuon pairs.

If the mass of an object does not change, the Minkowski inner product of its four-momentum and corresponding four-acceleration Aμ is simply zero. The four-acceleration is proportional to the proper time derivative of the four-momentum divided by the particle's mass, so {\displaystyle p^{\mu }A_{\mu }=\eta _{\mu \nu }p^{\mu }A^{\nu }=\eta _{\mu \nu }p^{\mu }{\frac {d}{d\tau }}{\frac {p^{\nu }}{m}}={\frac {1}{2m}}{\frac {d}{d\tau }}p\cdot p={\frac {1}{2m}}{\frac {d}{d\tau }}\left(-m^{2}c^{2}\right)=0.}
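The invariant-mass reconstruction described in this section is straightforward to express in code. The following Python sketch (using natural units with c = 1, so energies and momenta are in GeV and masses in GeV/c²) combines the two daughter four-momenta from the example above and recovers the parent mass M.

```python
import math


def invariant_mass(p_list, c=1.0):
    """Invariant mass of a system of four-momenta, each given as an
    (E/c, px, py, pz) tuple, using the (-1, 1, 1, 1) signature:
    M*c = sqrt((sum E/c)^2 - |sum p|^2)."""
    e_over_c = sum(p[0] for p in p_list)
    px = sum(p[1] for p in p_list)
    py = sum(p[2] for p in p_list)
    pz = sum(p[3] for p in p_list)
    return math.sqrt(e_over_c**2 - (px**2 + py**2 + pz**2)) / c


# The two daughter particles from the example in the text (GeV, c = 1):
pA = (5.0,  4.0, 0.0, 0.0)
pB = (5.0, -4.0, 0.0, 0.0)

print(invariant_mass([pA]))      # 3.0  -> each daughter has rest mass 3 GeV/c^2
print(invariant_mass([pA, pB]))  # 10.0 -> the system (parent) mass M
```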
== Canonical momentum in the presence of an electromagnetic potential ==

For a charged particle of charge q, moving in an electromagnetic field given by the electromagnetic four-potential {\displaystyle A=\left(A^{0},A^{1},A^{2},A^{3}\right)=\left({\phi \over c},A_{x},A_{y},A_{z}\right),} where φ is the scalar potential and A = (Ax, Ay, Az) the vector potential, the components of the (not gauge-invariant) canonical momentum four-vector P are {\displaystyle P^{\mu }=p^{\mu }+qA^{\mu }.} This, in turn, allows the potential energy of the charged particle in an electrostatic potential and the Lorentz force on the charged particle moving in a magnetic field to be incorporated in a compact way, in relativistic quantum mechanics.
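For completeness, a small sketch of the minimal-coupling formula above (Python; the charge, kinetic four-momentum, and potentials are arbitrary illustrative numbers, not taken from the text):

```python
c = 299_792_458.0        # speed of light, m/s
q = 1.602176634e-19      # particle charge in coulombs (illustrative: elementary charge)


def canonical_four_momentum(p, A):
    """P^mu = p^mu + q*A^mu, component by component.
    p is the kinetic four-momentum (E/c, px, py, pz);
    A is the four-potential (phi/c, Ax, Ay, Az)."""
    return tuple(p_i + q * A_i for p_i, A_i in zip(p, A))


# Illustrative SI values only:
p = (2.7e-22, 1.0e-22, 0.0, 0.0)   # kinetic four-momentum, kg*m/s
A = (10.0 / c, 1.0e-5, 0.0, 0.0)   # phi = 10 V scalar potential, Ax = 1e-5 V*s/m

print(canonical_four_momentum(p, A))
```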
== Four-momentum in curved spacetime ==

In the case when there is a moving physical system with a continuous distribution of matter in curved spacetime, the primary expression for four-momentum is a four-vector with a covariant index: {\displaystyle P_{\mu }=\left({\frac {E}{c}},-\mathbf {P} \right).} The four-momentum Pμ is expressed through the energy E of the physical system and its relativistic momentum P. At the same time, the four-momentum Pμ can be represented as the sum of two non-local four-vectors of integral type: {\displaystyle P_{\mu }=p_{\mu }+K_{\mu }.} The four-vector pμ is the generalized four-momentum associated with the action of fields on particles; the four-vector Kμ is the four-momentum of the fields arising from the action of particles on the fields. The energy E and momentum P, as well as the components of the four-vectors pμ and Kμ, can be calculated if the Lagrangian density {\displaystyle {\mathcal {L}}={\mathcal {L}}_{p}+{\mathcal {L}}_{f}} of the system is given. The following formulas are obtained for the energy and momentum of the system: {\displaystyle E=\int _{V}{\frac {\partial }{\partial \mathbf {v} }}\left({\frac {{\mathcal {L}}_{p}}{u^{0}}}\right)\cdot \mathbf {v} u^{0}{\sqrt {-g}}dx^{1}dx^{2}dx^{3}-\int _{V}\left({\mathcal {L}}_{p}+{\mathcal {L}}_{f}\right){\sqrt {-g}}dx^{1}dx^{2}dx^{3}+\sum _{n=1}^{N}\left(\mathbf {v} _{n}\cdot {\frac {\partial L_{f}}{\partial \mathbf {v} _{n}}}\right),} {\displaystyle \mathbf {P} =\int _{V}{\frac {\partial }{\partial \mathbf {v} }}\left({\frac {{\mathcal {L}}_{p}}{u^{0}}}\right)u^{0}{\sqrt {-g}}dx^{1}dx^{2}dx^{3}+\sum _{n=1}^{N}{\frac {\partial L_{f}}{\partial \mathbf {v} _{n}}}.} Here ℒp is that part of the Lagrangian density that contains terms with four-currents; v is the velocity of the matter particles; u0 is the time component of the four-velocity of the particles; g is the determinant of the metric tensor; {\displaystyle L_{f}=\int _{V}{\mathcal {L}}_{f}{\sqrt {-g}}dx^{1}dx^{2}dx^{3}} is the part of the Lagrangian associated with the Lagrangian density ℒf; and vn is the velocity of the particle of matter with number n.

== See also ==

Four-force
Four-gradient
Pauli–Lubanski pseudovector
Wikipedia/Energy–momentum_4-vector
The National Science Teaching Association (NSTA), founded in 1944 (as the National Science Teachers Association) and headquartered in Arlington, Virginia, is an association of science teachers in the United States and is the largest organization of science teachers worldwide. NSTA's current membership of roughly 40,000 includes science teachers, science supervisors, administrators, scientists, business and industry representatives, and others involved in and committed to science education. The Association publishes a professional journal for each level of science teaching; a newspaper, NSTA Reports; and many other educational books and professional publications. Each year NSTA conducts a national conference and a series of area conferences. These events attract over 30,000 attendees annually. The Association serves as an advocate for science educators by keeping its members and the general public informed about national issues and trends in science education.

== History ==

NSTA was formed by the merger of two existing non-professional organizations, the American Science Teachers Association and the American Council of Science Teachers, at a July 1944 meeting in Pittsburgh, Pennsylvania. The organization was initially headquartered at Cornell University. Its first permanent headquarters, purchased in 1972, was located on Connecticut Avenue in Washington, D.C.; the organization then moved to Arlington, Virginia in 1994.

== Position statements ==

NSTA is engaged in an ongoing effort to "identify the qualities and standards of good science education," publishing its findings in the form of position statements. These position statements are developed by science educators, scientists, and other national experts in science education, and the input of NSTA's membership is solicited before final approval by the board of directors. Over 35 topics are covered, including The Nature of Science, Safety and Science Instruction, The Teaching of Evolution, Environmental Education, Responsible Use of Live Animals and Dissection in the Science Classroom, Gender Equity in Science Education, and Use of the Metric System. In 2018, the NSTA urged teachers to "emphasize to students that no scientific controversy exists regarding the basic facts of climate change."

== Science Matters ==

Science Matters is a major public awareness and engagement campaign designed to rekindle a national sense of urgency and action among schools and families about the importance of science education and science literacy. Science Matters builds on the success of the Building a Presence for Science program, first launched in 1997 as an e-networking initiative to assist teachers of science with professional development opportunities. The Building a Presence for Science network, now the Science Matters network, reaches readers in 34 states and the District of Columbia.

== Publications ==

Peer-reviewed journals: Science and Children (elementary level, established in 1963); Science Scope (middle level, established in 1983); The Science Teacher (high school, established in 1950); Journal of College Science Teaching; NSTA Recommends, review recommendations of science-teaching materials; and Connected Science Learning, linking in-school and out-of-school STEM learning. Books: NSTA's publishing arm, NSTA Press, publishes 20–25 new titles per year. The NSTA Science Store offers selected publications from other publishers in addition to NSTA Press books.
== NSTA student chapters ==

In addition to state/province chapters and associated groups, NSTA has over 100 student chapters. NSTA and the student chapters are separate but interdependent organizations that have elected to ally themselves to encourage professional development and networking of preservice teachers of science from across the United States and Canada.

== NSTA affiliates ==

As of 2018, NSTA has the following affiliates: Association for Multicultural Science Education (AMSE); Association for Science Teacher Education (ASTE); Association of Science-Technology Centers (ASTC); Council for Elementary Science International (CESI); Council of State Science Supervisors (CSSS); NARST: A Worldwide Organization for Improving Science Teaching and Learning Through Research; National Middle Level Science Teachers Association (NMLSTA); National Science Education Leadership Association (NSELA); and Society for College Science Teachers (SCST).

== Outstanding Science Trade Books Award ==

This award is a joint project of NSTA and the Children’s Book Council. It has been awarded since 1973.

== Best STEM Books Award ==

Best STEM Books is a joint project of NSTA and the CBC since 2017 that represents the year’s best children’s books with STEM content.

== See also ==

Virginia Association of Science Teachers
Children’s Book Council

== External links ==

National Science Teaching Association – official website
Wikipedia/Science_and_Children
Energy development is the field of activities focused on obtaining sources of energy from natural resources. These activities include the production of renewable, nuclear, and fossil fuel derived sources of energy, and the recovery and reuse of energy that would otherwise be wasted. Energy conservation and efficiency measures reduce the demand for energy development, and can benefit society by mitigating environmental problems. Societies use energy for transportation, manufacturing, illumination, heating and air conditioning, and communication, for industrial, commercial, agricultural and domestic purposes. Energy resources may be classified as primary resources, where the resource can be used in substantially its original form, or as secondary resources, where the energy source must be converted into a more conveniently usable form. Non-renewable resources are significantly depleted by human use, whereas renewable resources are produced by ongoing processes that can sustain indefinite human exploitation. Thousands of people are employed in the energy industry. The conventional industry comprises the petroleum industry, the natural gas industry, the electrical power industry, and the nuclear industry. New energy industries include the renewable energy industry, comprising alternative and sustainable manufacture, distribution, and sale of alternative fuels.

== Classification of resources ==

Energy resources may be classified as primary resources, suitable for end use without conversion to another form, or secondary resources, where the usable form of energy required substantial conversion from a primary source. Examples of primary energy resources are wind power, solar power, wood fuel, fossil fuels such as coal, oil and natural gas, and uranium. Secondary resources are those such as electricity, hydrogen, or other synthetic fuels. Another important classification is based on the time required to regenerate an energy resource. "Renewable resources" are those that recover their capacity in a time frame significant to human needs. Examples are hydroelectric power or wind power, when the natural phenomena that are the primary source of energy are ongoing and not depleted by human demands. Non-renewable resources are those that are significantly depleted by human usage and that will not recover their potential significantly during human lifetimes. An example of a non-renewable energy source is coal, which does not form naturally at a rate that would support human use.

== Fossil fuels ==

Fossil fuel (primary non-renewable fossil) sources burn coal or hydrocarbon fuels, which are the remains of the decomposition of plants and animals. There are three main types of fossil fuels: coal, petroleum, and natural gas. Another fossil fuel, liquefied petroleum gas (LPG), is principally derived from the production of natural gas. Heat from burning fossil fuel is used either directly for space heating and process heating, or converted to mechanical energy for vehicles, industrial processes, or electrical power generation. These fossil fuels are part of the carbon cycle and allow solar energy stored in the fuel to be released. The use of fossil fuels in the 18th and 19th century set the stage for the Industrial Revolution. Fossil fuels make up the bulk of the world's current primary energy sources. In 2005, 81% of the world's energy needs were met from fossil sources. The technology and infrastructure for the use of fossil fuels already exist.
Liquid fuels derived from petroleum deliver a great deal of usable energy per unit of weight or volume, which is advantageous when compared with lower energy density sources such as batteries. Fossil fuels are currently economical for decentralized energy use. Energy dependence on imported fossil fuels creates energy security risks for dependent countries. Oil dependence in particular has led to war, funding of radicals, monopolization, and socio-political instability. Fossil fuels are non-renewable resources, which will eventually decline in production and become exhausted. While the processes that created fossil fuels are ongoing, fuels are consumed far more quickly than the natural rate of replenishment. Extracting fuels becomes increasingly costly as society consumes the most accessible fuel deposits. Extraction of fossil fuels results in environmental degradation, such as the strip mining and mountaintop removal for coal.

Fuel efficiency is a form of thermal efficiency, meaning the efficiency of a process that converts the chemical potential energy contained in a carrier fuel into kinetic energy or work. The fuel economy of a particular vehicle is its energy efficiency, given as a ratio of distance travelled per unit of fuel consumed. Weight-specific efficiency (efficiency per unit weight) may be stated for freight, and passenger-specific efficiency (vehicle efficiency per passenger) for passenger transport. The inefficient atmospheric combustion (burning) of fossil fuels in vehicles, buildings, and power plants contributes to urban heat islands.

Conventional production of oil peaked, conservatively, between 2007 and 2010. In 2010, it was estimated that an investment of $8 trillion in non-renewable resources would be required to maintain current levels of production for 25 years. In 2010, governments subsidized fossil fuels by an estimated $500 billion a year. Fossil fuels are also a source of greenhouse gas emissions, leading to concerns about global warming if consumption is not reduced. The combustion of fossil fuels leads to the release of pollution into the atmosphere. The fossil fuels are mainly carbon compounds. During combustion, carbon dioxide is released, along with nitrogen oxides, soot and other fine particulates. Carbon dioxide is the main contributor to recent climate change. Other emissions from fossil fuel power stations include sulphur dioxide, carbon monoxide (CO), hydrocarbons, volatile organic compounds (VOC), mercury, arsenic, lead, cadmium, and other heavy metals including traces of uranium. A typical coal plant generates billions of kilowatt hours of electrical power per year.

== Nuclear ==

=== Fission ===

Nuclear power is the use of nuclear fission to generate useful heat and electricity. Fission of uranium produces nearly all economically significant nuclear power. Radioisotope thermoelectric generators form a very small component of energy generation, mostly in specialized applications such as deep space vehicles. Nuclear power plants, excluding naval reactors, provided about 5.7% of the world's energy and 13% of the world's electricity in 2012. In 2013, the IAEA reported that there were 437 operational nuclear power reactors in 31 countries, although not every reactor is producing electricity. In addition, there are approximately 140 naval vessels using nuclear propulsion in operation, powered by some 180 reactors.
As of 2013, attaining a net energy gain from sustained nuclear fusion reactions, excluding natural fusion power sources such as the Sun, remains an ongoing area of international physics and engineering research. More than 60 years after the first attempts, commercial fusion power production remains unlikely before 2050.

There is an ongoing debate about nuclear power. Proponents, such as the World Nuclear Association, the IAEA and Environmentalists for Nuclear Energy, contend that nuclear power is a safe, sustainable energy source that reduces carbon emissions. Opponents contend that nuclear power poses many threats to people and the environment. Nuclear power plant accidents include the Chernobyl disaster (1986), the Fukushima Daiichi nuclear disaster (2011), and the Three Mile Island accident (1979). There have also been some nuclear submarine accidents. In terms of lives lost per unit of energy generated, analysis has determined that nuclear power has caused fewer fatalities per unit of energy generated than the other major sources of energy generation. Energy production from coal, petroleum, natural gas and hydropower has caused a greater number of fatalities per unit of energy generated, due to air pollution and energy accident effects. However, the economic costs of nuclear power accidents are high, and meltdowns can take decades to clean up. The human costs of evacuating affected populations and lost livelihoods are also significant. Such comparisons set nuclear power's latent deaths, such as those from cancer, against the immediate deaths caused by other energy sources per unit of energy generated (GWeyr); the underlying study does not include fossil fuel related cancer and other indirect deaths in its "severe accident" classification, which covers accidents with more than 5 fatalities.

As of 2012, according to the IAEA, there were 68 civil nuclear power reactors under construction worldwide in 15 countries, approximately 28 of which are in the People's Republic of China (PRC). The most recent nuclear power reactor to be connected to the electrical grid, as of May 2013, came online on February 17, 2013, at the Hongyanhe Nuclear Power Plant in the PRC. In the United States, two new Generation III reactors are under construction at Vogtle. U.S. nuclear industry officials expect five new reactors to enter service by 2020, all at existing plants. In 2013, four aging, uncompetitive reactors were permanently closed.

Recent experiments in extraction of uranium use polymer ropes that are coated with a substance that selectively absorbs uranium from seawater. This process could make the considerable volume of uranium dissolved in seawater exploitable for energy production. Since ongoing geologic processes carry uranium to the sea in amounts comparable to the amount that would be extracted by this process, in a sense the sea-borne uranium becomes a sustainable resource.

Nuclear power is a low-carbon method of producing electricity, with an analysis of the literature on its total life cycle emission intensity finding that it is similar to renewable sources in a comparison of greenhouse gas (GHG) emissions per unit of energy generated. Since the 1970s, nuclear fuel has displaced about 64 gigatonnes of carbon dioxide equivalent (GtCO2-eq) greenhouse gases that would otherwise have resulted from the burning of oil, coal or natural gas in fossil-fuel power stations.
==== Nuclear power phase-out and pull-backs ====

Japan's 2011 Fukushima Daiichi nuclear accident, which occurred in a reactor design from the 1960s, prompted a rethink of nuclear safety and nuclear energy policy in many countries. Germany decided to close all its reactors by 2022, and Italy has banned nuclear power. Following Fukushima, in 2011 the International Energy Agency halved its estimate of additional nuclear generating capacity to be built by 2035.

===== Fukushima =====

Following the 2011 Fukushima Daiichi nuclear disaster – the second worst nuclear incident, which displaced 50,000 households after radioactive material leaked into the air, soil and sea, and with subsequent radiation checks leading to bans on some shipments of vegetables and fish – a global public support survey for energy sources by Ipsos (2011) was published, and nuclear fission was found to be the least popular.

==== Fission economics ====

The economics of new nuclear power plants is a controversial subject, since there are diverging views on this topic, and multibillion-dollar investments ride on the choice of an energy source. Nuclear power plants typically have high capital costs for building the plant, but low direct fuel costs. In recent years there has been a slowdown of electricity demand growth, and financing has become more difficult, which affects large projects such as nuclear reactors, with very large upfront costs and long project cycles which carry a large variety of risks. In Eastern Europe, a number of long-established projects are struggling to find finance, notably Belene in Bulgaria and the additional reactors at Cernavoda in Romania, and some potential backers have pulled out. Where cheap gas is available and its future supply relatively secure, this also poses a major problem for nuclear projects. Analysis of the economics of nuclear power must take into account who bears the risks of future uncertainties. To date all operating nuclear power plants were developed by state-owned or regulated utility monopolies, where many of the risks associated with construction costs, operating performance, fuel price, and other factors were borne by consumers rather than suppliers. Many countries have now liberalized the electricity market, where these risks, and the risk of cheaper competitors emerging before capital costs are recovered, are borne by plant suppliers and operators rather than consumers, which leads to a significantly different evaluation of the economics of new nuclear power plants.

==== Costs ====

Costs are likely to go up for currently operating and new nuclear power plants, due to increased requirements for on-site spent fuel management and elevated design basis threats. While first-of-their-kind designs, such as the EPRs under construction, are behind schedule and over budget, of the seven South Korean APR-1400s presently under construction worldwide, two are in South Korea at the Hanul Nuclear Power Plant and four are at the largest nuclear station construction project in the world as of 2016, in the United Arab Emirates at the planned Barakah nuclear power plant. The first reactor, Barakah-1, is 85% completed and on schedule for grid connection during 2017. Two of the four EPRs under construction (in Finland and France) are significantly behind schedule and substantially over cost.

== Renewable sources ==

Renewable energy is generally defined as energy that comes from resources which are naturally replenished on a human timescale, such as sunlight, wind, rain, tides, waves and geothermal heat.
Renewable energy replaces conventional fuels in four distinct areas: electricity generation, hot water/space heating, motor fuels, and rural (off-grid) energy services. Including traditional biomass usage, about 19% of global energy consumption is accounted for by renewable resources. Wind powered energy production is being turned to as a prominent renewable energy source, with global wind power capacity increasing by 12% in 2021. While not the case for all countries, in 58% of sampled countries renewable energy consumption was found to have a positive impact on economic growth. At the national level, at least 30 nations around the world already have renewable energy contributing more than 20% of energy supply. National renewable energy markets are projected to continue to grow strongly in the coming decade and beyond. Unlike other energy sources, renewable energy sources are not as restricted by geography. Additionally, deployment of renewable energy produces economic benefits as well as combating climate change. Rural electrification has been studied at multiple sites, with positive effects found on commercial spending, appliance use, and general activities requiring electricity. Renewable energy growth in at least 38 countries has been driven by high electricity usage rates. International support for promoting renewable sources like solar and wind has continued to grow.

While many renewable energy projects are large-scale, renewable technologies are also suited to rural and remote areas and developing countries, where energy is often crucial in human development. To ensure human development continues sustainably, governments around the world are beginning to research potential ways to implement renewable sources into their countries and economies. For example, the UK Government’s Department of Energy and Climate Change 2050 Pathways created a mapping technique to educate the public on land competition between energy supply technologies. This tool provides users the ability to understand the limitations and potential that their surrounding land and country have in terms of energy production.

=== Hydroelectricity ===

Hydroelectricity is electric power generated by hydropower: the force of falling or flowing water. In 2015 hydropower generated 16.6% of the world's total electricity and 70% of all renewable electricity, and was expected to increase about 3.1% each year for the following 25 years. Hydropower is produced in 150 countries, with the Asia-Pacific region generating 32 percent of global hydropower in 2010. China is the largest hydroelectricity producer, with 721 terawatt-hours of production in 2010, representing around 17 percent of domestic electricity use. There are now three hydroelectricity plants larger than 10 GW: the Three Gorges Dam in China, the Itaipu Dam across the Brazil/Paraguay border, and the Guri Dam in Venezuela. The cost of hydroelectricity is relatively low, making it a competitive source of renewable electricity. The average cost of electricity from a hydro plant larger than 10 megawatts is 3 to 5 U.S. cents per kilowatt-hour. Hydro is also a flexible source of electricity, since plants can be ramped up and down very quickly to adapt to changing energy demands. However, damming interrupts the flow of rivers and can harm local ecosystems, and building large dams and reservoirs often involves displacing people and wildlife.
Once a hydroelectric complex is constructed, the project produces no direct waste, and has a considerably lower output level of the greenhouse gas carbon dioxide than fossil fuel powered energy plants.

=== Wind ===

Wind power harnesses the power of the wind to propel the blades of wind turbines. These turbines cause the rotation of magnets, which creates electricity. Wind towers are usually built together on wind farms. There are offshore and onshore wind farms. Global wind power capacity has expanded rapidly to 336 GW in June 2014, and wind energy production was around 4% of total worldwide electricity usage, and growing rapidly. Wind power is widely used in Europe, Asia, and the United States. Several countries have achieved relatively high levels of wind power penetration, such as 21% of stationary electricity production in Denmark, 18% in Portugal, 16% in Spain, 14% in Ireland, and 9% in Germany in 2010. By 2011, at times over 50% of electricity in Germany and Spain came from wind and solar power. As of 2011, 83 countries around the world are using wind power on a commercial basis. Many of the world's largest onshore wind farms are located in the United States, China, and India. Most of the world's largest offshore wind farms are located in Denmark, Germany and the United Kingdom. The two largest offshore wind farms are currently the 630 MW London Array and Gwynt y Môr.

=== Solar ===

=== Biofuels ===

A biofuel is a fuel that contains energy from geologically recent carbon fixation. These fuels are produced from living organisms. Examples of this carbon fixation occur in plants and microalgae. These fuels are made by biomass conversion (biomass refers to recently living organisms, most often plants or plant-derived materials). This biomass can be converted to convenient energy-containing substances in three different ways: thermal conversion, chemical conversion, and biochemical conversion. This biomass conversion can result in fuel in solid, liquid, or gas form. This new biomass can be used for biofuels. Biofuels have increased in popularity because of rising oil prices and the need for energy security. Bioethanol is an alcohol made by fermentation, mostly from carbohydrates produced in sugar or starch crops such as corn or sugarcane. Cellulosic biomass, derived from non-food sources such as trees and grasses, is also being developed as a feedstock for ethanol production. Ethanol can be used as a fuel for vehicles in its pure form, but it is usually used as a gasoline additive to increase octane and improve vehicle emissions. Bioethanol is widely used in the USA and in Brazil. Current plant design does not provide for converting the lignin portion of plant raw materials to fuel components by fermentation. Biodiesel is made from vegetable oils and animal fats. Biodiesel can be used as a fuel for vehicles in its pure form, but it is usually used as a diesel additive to reduce levels of particulates, carbon monoxide, and hydrocarbons from diesel-powered vehicles. Biodiesel is produced from oils or fats using transesterification and is the most common biofuel in Europe. Research is also underway on producing renewable fuels by decarboxylation. In 2010, worldwide biofuel production reached 105 billion liters (28 billion gallons US), up 17% from 2009, and biofuels provided 2.7% of the world's fuels for road transport, a contribution largely made up of ethanol and biodiesel.
Global ethanol fuel production reached 86 billion liters (23 billion gallons US) in 2010, with the United States and Brazil as the world's top producers, accounting together for 90% of global production. The world's largest biodiesel producer is the European Union, accounting for 53% of all biodiesel production in 2010. As of 2011, mandates for blending biofuels exist in 31 countries at the national level and in 29 states or provinces. The International Energy Agency has a goal for biofuels to meet more than a quarter of world demand for transportation fuels by 2050 to reduce dependence on petroleum and coal.

=== Geothermal ===

Geothermal energy is thermal energy generated and stored in the Earth. Thermal energy is the energy that determines the temperature of matter. The geothermal energy of the Earth's crust originates from the original formation of the planet (20%) and from radioactive decay of minerals (80%). The geothermal gradient, which is the difference in temperature between the core of the planet and its surface, drives a continuous conduction of thermal energy in the form of heat from the core to the surface. The adjective geothermal originates from the Greek roots γη (ge), meaning earth, and θερμος (thermos), meaning hot. Earth's internal heat is thermal energy generated from radioactive decay and continual heat loss from Earth's formation. Temperatures at the core–mantle boundary may reach over 4000 °C (7,200 °F). The high temperature and pressure in Earth's interior cause some rock to melt and solid mantle to behave plastically, resulting in portions of mantle convecting upward, since it is lighter than the surrounding rock. Rock and water are heated in the crust, sometimes up to 370 °C (700 °F). From hot springs, geothermal energy has been used for bathing since Paleolithic times and for space heating since ancient Roman times, but it is now better known for electricity generation. Worldwide, 11,400 megawatts (MW) of geothermal power was online in 24 countries in 2012. An additional 28 gigawatts of direct geothermal heating capacity was installed for district heating, space heating, spas, industrial processes, desalination and agricultural applications as of 2010. Geothermal power is cost effective, reliable, sustainable, and environmentally friendly, but has historically been limited to areas near tectonic plate boundaries. Recent technological advances have dramatically expanded the range and size of viable resources, especially for applications such as home heating, opening a potential for widespread exploitation. Geothermal wells release greenhouse gases trapped deep within the earth, but these emissions are much lower per energy unit than those of fossil fuels. As a result, geothermal power has the potential to help mitigate global warming if widely deployed in place of fossil fuels. The Earth's geothermal resources are theoretically more than adequate to supply humanity's energy needs, but only a very small fraction may be profitably exploited. Drilling and exploration for deep resources is very expensive. Forecasts for the future of geothermal power depend on assumptions about technology, energy prices, subsidies, and interest rates. Pilot programs like EWEB's customer opt-in Green Power Program show that customers would be willing to pay a little more for a renewable energy source like geothermal. But as a result of government-assisted research and industry experience, the cost of generating geothermal power has decreased by 25% over the past two decades.
In 2001, geothermal energy cost between two and ten US cents per kWh.

=== Oceanic ===

Marine Renewable Energy (MRE) or marine power (also sometimes referred to as ocean energy, ocean power, or marine and hydrokinetic energy) refers to the energy carried by ocean waves, currents, tides, shifts in salinity gradients, and ocean temperature differences. MRE has the potential to become a reliable and renewable energy source because of the cyclical nature of the oceans. The movement of water in the world's oceans creates a vast store of kinetic energy, or energy in motion. This energy can be harnessed to generate electricity to power homes, transport, and industries. The term marine energy encompasses both wave power, i.e. power from surface waves, and tidal power, i.e. power obtained from the kinetic energy of large bodies of moving water. Offshore wind power is not a form of marine energy, as wind power is derived from the wind, even if the wind turbines are placed over water. The oceans have a tremendous amount of energy and are close to many if not most concentrated populations. Ocean energy has the potential to provide a substantial amount of new renewable energy around the world.

Marine energy technology is in its first stage of development. To be developed, MRE needs efficient methods of storing, transporting, and capturing ocean power, so it can be used where needed. Over the past year, countries around the world have started implementing market strategies to commercialize MRE. Canada and China introduced incentives, such as feed-in tariffs (FiTs), which are above-market prices for MRE that allow investors and project developers a stable income. Other financial strategies consist of subsidies, grants, and funding from public-private partnerships (PPPs). China alone approved 100 ocean projects in 2019. Portugal and Spain recognize the potential of MRE in accelerating decarbonization, which is fundamental to meeting the goals of the Paris Agreement. Both countries are focusing on solar and offshore wind auctions to attract private investment, ensure cost-effectiveness, and accelerate MRE growth. Ireland sees MRE as a key component to reduce its carbon footprint. The Offshore Renewable Energy Development Plan (OREDP) supports the exploration and development of the country's significant offshore energy potential. Additionally, Ireland has implemented the Renewable Electricity Support Scheme (RESS), which includes auctions designed to provide financial support for communities, increase technology diversity, and guarantee energy security.

However, while research is increasing, there have been concerns associated with threats to marine mammals, habitats, and potential changes to ocean currents. MRE can be a renewable energy source for coastal communities, helping their transition from fossil fuel, but researchers are calling for a better understanding of its environmental impacts. Because ocean-energy areas are often isolated from both fishing and sea traffic, these zones may provide shelter from humans and predators for some marine species. MRE devices can be an ideal home for many fish, crayfish, mollusks, and barnacles, and may also indirectly affect seabirds and marine mammals because they feed on those species. Similarly, such areas may create an "artificial reef effect" by boosting biodiversity nearby. Noise pollution generated by the technology is limited, allowing fish and mammals living in the area of the installation to return.
In the most recent State of Science Report about MRE, the authors claim that there is no evidence that fish, mammals, or seabirds are injured by collision, noise pollution, or electromagnetic fields. The uncertainty about its environmental impact comes from the small number of MRE devices in the ocean today from which data can be collected.

=== 100% renewable energy ===

The incentive to use 100% renewable energy, for electricity, transport, or even total primary energy supply globally, has been motivated by global warming and other ecological as well as economic concerns. Renewable energy use has grown much faster than previously anticipated. The Intergovernmental Panel on Climate Change has said that there are few fundamental technological limits to integrating a portfolio of renewable energy technologies to meet most of total global energy demand. At the national level, at least 30 nations around the world already have renewable energy contributing more than 20% of energy supply. Also, Stephen W. Pacala and Robert H. Socolow have developed a series of "stabilization wedges" that can allow us to maintain our quality of life while avoiding catastrophic climate change, and "renewable energy sources," in aggregate, constitute the largest number of their "wedges." Mark Z. Jacobson says producing all new energy with wind power, solar power, and hydropower by 2030 is feasible, and that existing energy supply arrangements could be replaced by 2050. Barriers to implementing the renewable energy plan are seen to be "primarily social and political, not technological or economic". Jacobson says that energy costs with a wind, solar, water system should be similar to today's energy costs. Similarly, in the United States, the independent National Research Council has noted that "sufficient domestic renewable resources exist to allow renewable electricity to play a significant role in future electricity generation and thus help confront issues related to climate change, energy security, and the escalation of energy costs ... Renewable energy is an attractive option because renewable resources available in the United States, taken collectively, can supply significantly larger amounts of electricity than the total current or projected domestic demand." Critics of the "100% renewable energy" approach include Vaclav Smil and James E. Hansen. Smil and Hansen are concerned about the variable output of solar and wind power, but Amory Lovins argues that the electricity grid can cope, just as it routinely backs up nonworking coal-fired and nuclear plants with working ones. Google spent $30 million on their "Renewable Energy Cheaper than Coal" project to develop renewable energy and stave off catastrophic climate change. The project was cancelled after concluding that a best-case scenario for rapid advances in renewable energy could only result in emissions 55 percent below the fossil fuel projections for 2050.

== Increased energy efficiency ==

Although increasing the efficiency of energy use is not energy development per se, it may be considered under the topic of energy development since it makes existing energy sources available to do work. Efficient energy use reduces the amount of energy required to provide products and services. For example, insulating a home allows a building to use less heating and cooling energy to maintain a comfortable temperature. Installing fluorescent lamps or natural skylights reduces the amount of energy required for illumination compared to incandescent light bulbs.
Compact fluorescent lights use two-thirds less energy and may last 6 to 10 times longer than incandescent lights. Improvements in energy efficiency are most often achieved by adopting an efficient technology or production process. Reducing energy use may save consumers money, if the energy savings offset the cost of an energy-efficient technology. Reducing energy use reduces emissions. According to the International Energy Agency, improved energy efficiency in buildings, industrial processes and transportation could reduce global energy demand in 2050 to around 8% below today's level, while serving an economy more than twice as big and a population of about 2 billion more people. Energy efficiency and renewable energy are said to be the twin pillars of sustainable energy policy. In many countries energy efficiency is also seen to have a national security benefit, because it can be used to reduce the level of energy imports from foreign countries and may slow down the rate at which domestic energy resources are depleted. It has been found "that for OECD countries, wind, geothermal, hydro and nuclear have the lowest hazard rates among energy sources in production".

== Transmission ==

While new sources of energy are only rarely discovered or made possible by new technology, distribution technology continually evolves. The use of fuel cells in cars, for example, is an anticipated delivery technology. This section presents the various delivery technologies that have been important to historic energy development. They all rely in some way on the energy sources listed in the previous section.

=== Shipping and pipelines ===

Coal, petroleum and their derivatives are delivered by boat, rail, or road. Petroleum and natural gas may also be delivered by pipeline, and coal via a slurry pipeline. Fuels such as gasoline and LPG may also be delivered via aircraft. Natural gas pipelines must maintain a certain minimum pressure to function correctly. The higher costs of ethanol transportation and storage are often prohibitive.

=== Wired energy transfer ===

Electricity grids are the networks used to transmit and distribute power from production source to end user, when the two may be hundreds of kilometres apart. Sources include electrical generation plants such as a nuclear reactor, coal burning power plant, etc. A combination of sub-stations and transmission lines are used to maintain a constant flow of electricity. Grids may suffer from transient blackouts and brownouts, often due to weather damage. During certain extreme space weather events solar wind can interfere with transmissions. Grids also have a predefined carrying capacity or load that cannot safely be exceeded. When power requirements exceed what's available, failures are inevitable. To prevent problems, power is then rationed. Industrialised countries such as Canada, the US, and Australia are among the highest per capita consumers of electricity in the world, which is possible thanks to a widespread electrical distribution network. The US grid is one of the most advanced, although infrastructure maintenance is becoming a problem. CurrentEnergy provides a realtime overview of the electricity supply and demand for California, Texas, and the Northeast of the US. African countries with small-scale electrical grids have a correspondingly low annual per capita usage of electricity. One of the most powerful power grids in the world supplies power to the state of Queensland, Australia.
=== Wireless energy transfer ===

Wireless power transfer is a process whereby electrical energy is transmitted from a power source to an electrical load that does not have a built-in power source, without the use of interconnecting wires. Currently available technology is limited to short distances and relatively low power levels. Orbiting solar power collectors would require wireless transmission of power to Earth. The proposed method involves creating a large beam of microwave-frequency radio waves, which would be aimed at a collector antenna site on the Earth. Formidable technical challenges exist to ensure the safety and profitability of such a scheme.

== Storage ==

Energy storage is accomplished by devices or physical media that store energy to perform useful operation at a later time. A device that stores energy is sometimes called an accumulator. All forms of energy are either potential energy (e.g. chemical, gravitational, electrical energy, temperature differential, latent heat, etc.) or kinetic energy (e.g. momentum). Some technologies provide only short-term energy storage, and others can be very long-term, such as power to gas using hydrogen or methane and the storage of heat or cold between opposing seasons in deep aquifers or bedrock. A wind-up clock stores potential energy (in this case mechanical, in the spring tension), a battery stores readily convertible chemical energy to operate a mobile phone, and a hydroelectric dam stores energy in a reservoir as gravitational potential energy. Ice storage tanks store ice (thermal energy in the form of latent heat) at night to meet peak demand for cooling. Fossil fuels such as coal and gasoline store ancient energy derived from sunlight by organisms that later died, became buried and over time were then converted into these fuels. Even food (which is made by the same process as fossil fuels) is a form of energy stored in chemical form.

== History ==

From prehistory, when humanity discovered fire to warm itself and roast food, through the Middle Ages, in which populations built windmills to grind wheat, to the modern era, in which nations can generate electricity by splitting the atom, humans have endlessly sought new sources of energy. Except for nuclear, geothermal and tidal, all other energy sources come from current solar insolation or from fossil remains of plant and animal life that relied upon sunlight. Ultimately, solar energy itself is the result of the Sun's nuclear fusion. Geothermal power from hot, hardened rock above the magma of the Earth's core is the result of the decay of radioactive materials present beneath the Earth's crust, and nuclear fission relies on man-made fission of heavy radioactive elements in the Earth's crust; in both cases these elements were produced in supernova explosions before the formation of the Solar System.

Since the beginning of the Industrial Revolution, the question of the future of energy supplies has been of interest. In 1865, William Stanley Jevons published The Coal Question, in which he saw that the reserves of coal were being depleted and that oil was an ineffective replacement. In 1914, the U.S. Bureau of Mines stated that the total production was 5.7 billion barrels (910,000,000 m3). In 1956, geophysicist M. King Hubbert deduced that U.S. oil production would peak between 1965 and 1970 and that oil production would peak "within half a century", on the basis of 1956 data.
In 1989, Colin Campbell predicted a peak in oil production. In 2004, OPEC estimated that, with substantial investments, it would nearly double oil output by 2025. === Sustainability === The environmental movement has emphasized sustainability of energy use and development. Renewable energy is sustainable in its production; the available supply will not be diminished for the foreseeable future, that is, for millions or billions of years. "Sustainability" also refers to the ability of the environment to cope with waste products, especially air pollution. Sources which have no direct waste products (such as wind, solar, and hydropower) are often highlighted on this point. With global demand for energy growing, the need to adopt various energy sources is growing. Energy conservation is an alternative or complementary process to energy development. It reduces the demand for energy by using it efficiently. === Resilience === Some observers contend that the idea of "energy independence" is an unrealistic and opaque concept. The alternative concept of "energy resilience" has been offered as a goal better aligned with economic, security, and energy realities. The notion of resilience in energy was detailed in the 1982 book Brittle Power: Energy Strategy for National Security. The authors argued that simply switching to domestic energy would not be inherently secure, because the true weakness is the often interdependent and vulnerable energy infrastructure of a country. Key aspects such as gas lines and the electrical power grid are often centralized and easily susceptible to disruption. They conclude that a "resilient energy supply" is necessary for both national security and the environment. They recommend a focus on energy efficiency and on renewable energy that is decentralized. In 2008, former Intel Corporation Chairman and CEO Andrew Grove looked to energy resilience, arguing that complete independence is unfeasible given the global market for energy. He describes energy resilience as the ability to adjust to interruptions in the supply of energy. To that end, he suggests the U.S. make greater use of electricity. Electricity can be produced from a variety of sources, and a diverse energy supply will be less affected by the disruption in supply of any one source. He reasons that another feature of electrification is that electricity is "sticky", meaning that electricity produced in the U.S. stays there because it cannot be transported overseas. According to Grove, a key aspect of advancing electrification and energy resilience will be converting the U.S. automotive fleet from gasoline-powered to electric-powered. This, in turn, will require the modernization and expansion of the electrical power grid. As organizations such as The Reform Institute have pointed out, advancements associated with the developing smart grid would facilitate the ability of the grid to absorb vehicles connecting to it en masse to charge their batteries. === Present and future === Extrapolations from current knowledge to the future offer a choice of energy futures. Some predictions parallel the Malthusian catastrophe hypothesis; many are complex, model-based scenarios, as pioneered by Limits to Growth. Modeling approaches offer ways to analyze diverse strategies and, ideally, to find a road to rapid and sustainable development of humanity. Short-term energy crises are also a concern of energy development. Some extrapolations lack plausibility, particularly when they predict a continual increase in oil consumption. Energy production usually requires an energy investment.
Drilling for oil or building a wind power plant requires energy. The fossil fuel resources that are left are often increasingly difficult to extract and convert, and may thus require increasingly higher energy investments. If the investment is greater than the value of the energy produced by the resource, it is no longer an effective energy source. Such resources are no longer an energy source but may be exploited for value as raw materials. New technology may lower the energy investment required to extract and convert the resources, although ultimately basic physics sets limits that cannot be exceeded. Between 1950 and 1984, as the Green Revolution transformed agriculture around the globe, world grain production increased by 250%. The energy for the Green Revolution was provided by fossil fuels in the form of fertilizers (natural gas), pesticides (oil), and hydrocarbon-fueled irrigation. The peaking of world hydrocarbon production (peak oil) may lead to significant changes and require sustainable methods of production. One vision of a sustainable energy future involves all human structures on the earth's surface (i.e., buildings, vehicles and roads) carrying out artificial photosynthesis (using sunlight to split water as a source of hydrogen, and absorbing carbon dioxide to make fertilizer) more efficiently than plants. As the contemporary space industry and related private spaceflight grow, and as manufacturing industries move into Earth orbit or beyond, delivering materials to those regions will require further energy development. Researchers have contemplated space-based solar power for collecting solar energy for use on Earth. Space-based solar power has been researched since the early 1970s. It would require construction of collector structures in space; the advantage over ground-based solar power is the higher intensity of light and the absence of weather to interrupt power collection. == Energy technology == Energy technology is an interdisciplinary engineering science having to do with the efficient, safe, environmentally friendly, and economical extraction, conversion, transportation, storage, and use of energy, targeted towards yielding high efficiency while minimizing side effects on humans, nature, and the environment. For people, energy is an overwhelming need, and as a scarce resource it has been an underlying cause of political conflicts and wars. The gathering and use of energy resources can be harmful to local ecosystems and may have global consequences. Energy is the capacity to do work; people obtain it, for instance, from food. Energy can take different forms, such as kinetic, potential, mechanical, heat and light energy. Energy is required by individuals and by society as a whole for lighting, heating, cooking, running industries, operating transportation and so forth. Broadly, depending on the source, there are two types of energy sources: renewable energy sources and non-renewable energy sources. === Interdisciplinary fields === As an interdisciplinary science, energy technology is linked with many other fields in various, overlapping ways: physics, for thermodynamics and nuclear physics; chemistry, for fuel, combustion, air pollution, flue gas, battery technology and fuel cells; electrical engineering; engineering, often for fluid energy machines such as combustion engines, turbines, pumps and compressors; geography, for geothermal energy and exploration for resources; mining, for petrochemical and fossil fuels;
agriculture and forestry, for sources of renewable energy; meteorology, for wind and solar energy; water and waterways, for hydropower; waste management, for environmental impact; transportation, for energy-saving transportation systems; environmental studies, for the effects of energy use and production on the environment, nature and climate change; lighting technology, for interior and exterior natural and artificial lighting design, installations and energy savings; and energy cost/benefit analysis, for simple payback and life-cycle costing of recommended energy efficiency and conservation measures. === Electrical engineering === Electric power engineering deals with the production and use of electrical energy, which can entail the study of machines such as generators, electric motors and transformers. Infrastructure involves substations and transformer stations, power lines and electrical cable. Load management and power management over networks have a meaningful effect on overall energy efficiency. Electric heating is also widely used and researched. === Thermodynamics === Thermodynamics deals with the fundamental laws of energy conversion and is drawn from theoretical physics. === Thermal and chemical energy === Thermal and chemical energy are intertwined with chemistry and environmental studies. Combustion has to do with burners and chemical engines of all kinds, grates and incinerators, along with their energy efficiency, pollution and operational safety. Exhaust gas purification technology aims to lessen air pollution through various mechanical, thermal and chemical cleaning methods. Emission control technology is a field of process and chemical engineering. Boiler technology deals with the design, construction and operation of steam boilers and turbines (also used in nuclear power generation, see below), drawing on applied mechanics and materials engineering. Energy conversion has to do with internal combustion engines, turbines, pumps, fans and so on, which are used for transportation, mechanical energy and power generation. High thermal and mechanical loads bring about operational safety concerns, which are dealt with through many branches of applied engineering science. === Nuclear energy === Nuclear technology deals with nuclear power production from nuclear reactors, along with the processing of nuclear fuel and the disposal of radioactive waste, drawing on applied nuclear physics, nuclear chemistry and radiation science. Nuclear power generation has been politically controversial in many countries for several decades, but the electrical energy produced through nuclear fission is of worldwide importance. There are high hopes that fusion technologies will one day replace most fission reactors, but this is still a research area of nuclear physics. === Renewable energy === Renewable energy has many branches. ==== Wind power ==== Wind turbines convert wind energy into electricity by connecting a spinning rotor to a generator. Wind turbines draw energy from atmospheric currents and are designed using aerodynamics along with knowledge taken from mechanical and electrical engineering. The wind passes across the aerodynamic rotor blades, creating an area of higher pressure and an area of lower pressure on either side of each blade. The forces of lift and drag are formed by this difference in air pressure. The lift force is stronger than the drag force; therefore the rotor, which is connected to a generator, spins.
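The power such a rotor can capture grows with its swept area and with the cube of the wind speed, and is bounded by the Betz limit (a power coefficient of at most 16/27, about 0.59). The sketch below illustrates this standard relation with an assumed rotor diameter, wind speed and power coefficient; it does not describe any particular turbine.

```python
import math

# Minimal sketch of the standard wind-power relation P = 1/2 * rho * A * v^3 * Cp.
# Rotor diameter, wind speed and power coefficient are assumed for illustration.

AIR_DENSITY = 1.225          # kg/m^3 at sea level
BETZ_LIMIT = 16 / 27         # theoretical maximum power coefficient (~0.593)

def rotor_power_kw(diameter_m: float, wind_speed_ms: float, cp: float) -> float:
    """Mechanical power captured by the rotor, in kilowatts."""
    if not 0 < cp <= BETZ_LIMIT:
        raise ValueError("power coefficient must lie between 0 and the Betz limit")
    swept_area = math.pi * (diameter_m / 2) ** 2
    power_w = 0.5 * AIR_DENSITY * swept_area * wind_speed_ms ** 3 * cp
    return power_w / 1000.0

# Example: an assumed 80 m rotor in a 10 m/s wind with Cp = 0.40
print(f"{rotor_power_kw(80.0, 10.0, 0.40):.0f} kW")  # roughly 1230 kW
```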
Electrical energy is then produced as the aerodynamic forces on the blades are converted into rotation of the generator. Recognized as one of the most efficient renewable energy sources, wind power is becoming increasingly relevant and widely used around the world. Wind power does not use any water in the production of energy, making it a good option for areas without much water. Wind energy could also continue to be produced even if the climate changes in line with current predictions, as it relies solely on wind. ==== Geothermal ==== Deep within the Earth lies an extremely hot layer of molten rock called magma. The very high temperature of the magma heats nearby groundwater. Various technologies have been developed to benefit from this heat, such as different types of power plants (dry, flash or binary), heat pumps, or wells. These processes of harnessing the heat rely, in one form or another, on a turbine spun by the hot water or by the steam it produces. The spinning turbine, being connected to a generator, produces electricity. A more recent innovation involves the use of shallow closed-loop systems that pump heat to and from structures by taking advantage of the nearly constant temperature of soil at a depth of around 10 feet. ==== Hydropower ==== Hydropower draws mechanical energy from rivers, ocean waves and tides. Civil engineering is used to study and build dams, tunnels and waterways and to manage coastal resources through hydrology and geology. A low-speed water turbine spun by flowing water can power an electrical generator to produce electricity. ==== Bioenergy ==== Bioenergy deals with the gathering, processing and use of biomass grown through biological production, agriculture and forestry, from which power plants can draw fuel for burning. Ethanol, methanol (both controversial) or hydrogen for fuel cells can be obtained from these feedstocks and used to generate electricity. ==== Enabling technologies ==== Heat pumps and thermal energy storage are classes of technologies that can enable the utilization of renewable energy sources that would otherwise be inaccessible, either because their temperature is too low for use or because of a time lag between when the energy is available and when it is needed. While enhancing the temperature of available renewable thermal energy, heat pumps have the additional property of leveraging electrical power (or in some cases mechanical or thermal power) by using it to extract additional energy from a low-quality source (such as seawater, lake water, the ground, the air, or waste heat from a process). Thermal storage technologies allow heat or cold to be stored for periods ranging from hours or overnight to interseasonal, and can involve storage of sensible energy (i.e. by changing the temperature of a medium) or latent energy (i.e. through phase changes of a medium, such as between water and slush or ice). Short-term thermal storage can be used for peak-shaving in district heating or electrical distribution systems. Kinds of renewable or alternative energy sources that can be enabled include natural energy (e.g. collected via solar-thermal collectors, or dry cooling towers used to collect winter's cold), waste energy (e.g. from HVAC equipment, industrial processes or power plants), and surplus energy (e.g. seasonally from hydropower projects or intermittently from wind farms). The Drake Landing Solar Community (Alberta, Canada) is illustrative.
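The scale of sensible versus latent storage can be illustrated with textbook material properties: warming a tank of water through a temperature swing stores m·c·ΔT, while melting the same mass of ice stores m·L. In the sketch below, the tank size and temperature swing are assumptions chosen only for illustration.

```python
# Minimal sketch comparing sensible storage (heating water through a
# temperature swing) with latent storage (melting ice), using textbook
# material properties. Tank mass and temperature swing are assumptions.

SPECIFIC_HEAT_WATER = 4.186e3    # J/(kg*K)
LATENT_HEAT_FUSION_ICE = 334e3   # J/kg

def sensible_storage_kwh(mass_kg: float, delta_t_k: float) -> float:
    """Energy stored by changing the temperature of a medium."""
    return mass_kg * SPECIFIC_HEAT_WATER * delta_t_k / 3.6e6

def latent_storage_kwh(mass_kg: float) -> float:
    """Energy stored in the water/ice phase change at 0 degrees C."""
    return mass_kg * LATENT_HEAT_FUSION_ICE / 3.6e6

mass = 1000.0  # kg, an assumed 1 m^3 storage tank
print(f"sensible, 1000 kg water heated 40 K: {sensible_storage_kwh(mass, 40):.1f} kWh")
print(f"latent, 1000 kg of ice melted:       {latent_storage_kwh(mass):.1f} kWh")
```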
At Drake Landing, borehole thermal energy storage allows the community to get 97% of its year-round heat from solar collectors on the garage roofs, with most of the heat collected in summer. Types of storage for sensible energy include insulated tanks, borehole clusters in substrates ranging from gravel to bedrock, deep aquifers, and shallow lined pits that are insulated on top. Some types of storage are capable of storing heat or cold between opposing seasons (particularly if very large), and some storage applications require the inclusion of a heat pump. Latent heat is typically stored in ice tanks or in what are called phase-change materials (PCMs). == See also == World energy supply and consumption Technology Water-energy nexus Policy Energy policy, Energy policy of the United States, Energy policy of China, Energy policy of India, Energy policy of the European Union, Energy policy of the United Kingdom, Energy policy of Russia, Energy policy of Brazil, Energy policy of Canada, Energy policy of the Soviet Union, Energy Industry Liberalization and Privatization (Thailand) General Seasonal thermal energy storage (Interseasonal thermal energy storage), Geomagnetically induced current, Energy harvesting, Timeline of sustainable energy research 2020–present Feedstock Raw material, Biomaterial, Energy consumption, Materials science, Recycling, Upcycling, Downcycling Others Thorium-based nuclear power, List of oil pipelines, List of natural gas pipelines, Ocean thermal energy conversion, Growth of photovoltaics == Journals == Energy Sources, Part A: Recovery, Utilization and Environmental Effects Energy Sources, Part B: Economics, Planning and Policy International Journal of Green Energy == External links == Bureau of Land Management 2012 Renewable Energy Priority Projects Energypedia - a wiki about renewable energies in the context of development cooperation Hidden Health and Environmental Costs Of Energy Production and Consumption In U.S. IEA-ECES - International Energy Agency - Energy Conservation through Energy Storage programme. IEA HPT TCP - International Energy Agency - Technology Collaboration Programme on Heat Pumping Technologies. IEA-SHC - International Energy Agency - Solar Heating and Cooling programme. SDH - Solar District Heating Platform. (European Union)
Wikipedia/Energy_resource
A Zero-Energy Building (ZEB), also known as a Net Zero-Energy (NZE) building, is a building with net zero energy consumption, meaning the total amount of energy used by the building on an annual basis is equal to the amount of renewable energy created on the site or, in other definitions, by renewable energy sources off-site, using technologies such as heat pumps, high-efficiency windows and insulation, and solar panels. The goal is that these buildings contribute less overall greenhouse gas to the atmosphere during operation than similar non-ZNE buildings. They do at times consume non-renewable energy and produce greenhouse gases, but at other times they reduce energy consumption and greenhouse gas production elsewhere by the same amount. The development of zero-energy buildings is encouraged by the desire to have less of an impact on the environment, and their expansion is encouraged by tax breaks and savings on energy costs which make zero-energy buildings financially viable. Terminology tends to vary between countries, agencies, cities, towns, and reports, so a general knowledge of this concept and its various uses is essential for a versatile understanding of clean energy and renewables. The International Energy Agency (IEA) and European Union (EU) most commonly use "Net Zero Energy", with the term "zero net" being mainly used in the US. A similar concept approved and implemented by the European Union and other agreeing countries is the nearly Zero Energy Building (nZEB), with the goal of having all new buildings in the region under nZEB standards by 2020. According to D'Agostino and Mazzarella (2019), the meaning of nZEB is different in each country. This is because countries have different climates, rules, and ways of calculating energy use. These differences make it hard to compare buildings or to set one standard for everyone. == Overview == Typical code-compliant buildings consume 40% of the total fossil fuel energy in the US and European Union and are significant contributors of greenhouse gases. To combat such high energy usage, more and more buildings are starting to implement the carbon neutrality principle, which is viewed as a means to reduce carbon emissions and reduce dependence on fossil fuels. Although zero-energy buildings remain uncommon, even in developed countries, they are gaining importance and popularity. Most zero-energy buildings use the electrical grid for energy storage, but some are independent of the grid and some include energy storage onsite. Buildings that produce a surplus of energy over the year are called "energy-plus buildings", and buildings that consume slightly more energy than they produce are sometimes called "low-energy houses". These buildings produce energy onsite using renewable technologies such as solar and wind, while reducing the overall use of energy with highly efficient lighting and heating, ventilation, and air conditioning (HVAC) technologies. The zero-energy goal is becoming more practical as the costs of alternative energy technologies decrease and the costs of traditional fossil fuels increase. A notable example is the Zero Building in Spain, which demonstrates how an office building can achieve net-zero and even positive energy performance through architectural design, passive strategies, and integrated renewable systems. The development of modern zero-energy buildings became possible largely through the progress made in new energy and construction technologies and techniques. These include highly insulating spray-foam insulation, high-efficiency solar panels, high-efficiency heat pumps and highly insulating, low-emissivity, triple- and quadruple-glazed windows.
These innovations have also been significantly improved by academic research, which collects precise energy performance data on traditional and experimental buildings and provides performance parameters for advanced computer models to predict the efficacy of engineering designs. A study examines how different countries, including Italy, Germany, and Denmark, approach nearly zero energy buildings. It shows that while the technical solutions are often similar, the targets, baselines, and frameworks for meeting these goals vary significantly (D'Agostino & Mazzarella, 2019). Zero-energy buildings can be part of a smart grid. Some advantages of these buildings are the integration of renewable energy resources, the integration of plug-in electric vehicles (so-called vehicle-to-grid), and the implementation of zero-energy concepts. Although the net zero concept is applicable to a wide range of resources, such as water and waste, energy is usually the first resource to be targeted because: energy, particularly electricity and heating fuel such as natural gas or heating oil, is expensive, so reducing energy use can save the building owner money (in contrast, water and waste are inexpensive for the individual building owner); energy, particularly electricity and heating fuel, has a high carbon footprint, so reducing energy use is a major way to reduce the building's carbon footprint; and there are well-established means to significantly reduce the energy use and carbon footprint of buildings, including adding insulation, using heat pumps instead of furnaces, using low-emissivity, triple- or quadruple-glazed windows, and adding solar panels to the roof. In some countries, there are government-sponsored subsidies and tax breaks for installing heat pumps, solar panels, triple- or quadruple-glazed windows and insulation that greatly reduce the cost of getting to a net-zero energy building for the building owner. === Optimizing zero-energy building for climate impact === The introduction of zero-energy buildings makes buildings more energy efficient and reduces the rate of carbon emissions once the building is in operation; however, there is still a lot of pollution associated with a building's embodied carbon. Embodied carbon is the carbon emitted in the making and transportation of a building's materials and the construction of the structure itself; it is responsible for 11% of global GHG emissions and 28% of global building sector emissions. The importance of embodied carbon will grow as it begins to account for the greater portion of a building's carbon emissions. In some newer, energy-efficient buildings, embodied carbon has risen to 47% of the building's lifetime emissions. Focusing on embodied carbon is part of optimizing construction for climate impact, and optimizing for zero carbon emissions requires slightly different considerations from optimizing only for energy efficiency. A 2019 study found that between 2020 and 2030, reducing upfront carbon emissions and switching to clean or renewable energy is more important than increasing building efficiency, because "building a highly energy efficient structure can actually produce more greenhouse gas than a basic code compliant one if carbon-intensive materials are used." The study stated that because net-zero energy codes will not significantly reduce emissions in time, "policy makers and regulators must aim for true net zero carbon buildings, not net zero energy buildings."
One way to reduce embodied carbon is by using low-carbon materials for construction such as straw, wood, linoleum, or cedar. For materials like concrete and steel, options to reduce embodied emissions do exist; however, these are unlikely to be available at large scale in the short term. The optimal design point for greenhouse gas reduction has been found to be four-story multifamily buildings made of low-carbon materials, such as those listed above, which could be a template for low-carbon-emitting structures. == Definitions == Despite sharing the name "zero net energy", there are several definitions of what the term means in practice, with a particular difference in usage between North America and Europe. Zero net site energy use: In this type of ZNE, the amount of energy provided by on-site renewable energy sources is equal to the amount of energy used by the building. In the United States, "zero net energy building" generally refers to this type of building. Zero net source energy use: This type of ZNE generates the same amount of energy as is used, including the energy used to transport the energy to the building. This type accounts for energy losses during electricity generation and transmission. These ZNEs must generate more electricity than zero net site energy buildings. Net zero energy emissions: Outside the United States and Canada, a ZEB is generally defined as one with zero net energy emissions, also known as a zero carbon building (ZCB) or zero emissions building (ZEB). Under this definition the carbon emissions generated from on-site or off-site fossil fuel use are balanced by the amount of on-site renewable energy production. Other definitions include not only the carbon emissions generated by the building in use, but also those generated in the construction of the building and the embodied energy of the structure. Others debate whether the carbon emissions of commuting to and from the building should also be included in the calculation. Recent work in New Zealand has initiated an approach to include building users' transport energy within zero energy building frameworks. Net zero cost: In this type of building, the cost of purchasing energy is balanced by income from sales to the grid of electricity generated on-site. Such a status depends on how a utility credits net electricity generation and on the utility rate structure the building uses. Net off-site zero energy use: A building may be considered a ZEB if 100% of the energy it purchases comes from renewable energy sources, even if the energy is generated off the site. Off-the-grid: Off-the-grid buildings are stand-alone ZEBs that are not connected to an off-site energy utility facility. They require distributed renewable energy generation and energy storage capability (for when the sun is not shining, the wind is not blowing, etc.). An energy autarkic house is a building concept in which the balance of the building's own energy consumption and production can be achieved on an hourly or even finer basis. Energy autarkic houses can be taken off-the-grid.
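The difference between the site and source definitions above can be made concrete with a small annual balance. In the sketch below, the annual figures and the site-to-source conversion factor for grid electricity are hypothetical; real factors differ by region and accounting convention.

```python
# Minimal sketch of annual zero-net-energy balances under the "site" and
# "source" definitions. All annual figures and the site-to-source factor
# for grid electricity are hypothetical.

SITE_TO_SOURCE_FACTOR = 2.5   # assumed primary energy per unit of delivered electricity

annual_use_kwh = 12_000            # electricity consumed by the building
annual_pv_generation_kwh = 12_500  # on-site PV production
exported_kwh = 4_000               # portion of PV sent to the grid
imported_kwh = annual_use_kwh - (annual_pv_generation_kwh - exported_kwh)

# Site balance: on-site generation simply has to match on-site use.
site_balance = annual_pv_generation_kwh - annual_use_kwh

# Source balance: imports are weighted by the losses incurred upstream,
# and exports are credited with the generation they displace.
source_balance = exported_kwh * SITE_TO_SOURCE_FACTOR - imported_kwh * SITE_TO_SOURCE_FACTOR

print(f"imported from grid: {imported_kwh} kWh")
print(f"site balance:   {site_balance:+} kWh  (>= 0 means zero net site energy)")
print(f"source balance: {source_balance:+.0f} kWh (weighted, primary energy)")
```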
Net Zero Energy Building: Based on scientific analysis within the joint research program "Towards Net Zero Energy Solar Buildings", a methodological framework was set up which allows different definitions, in accordance with each country's political targets, specific (climate) conditions and correspondingly formulated requirements for indoor conditions. The overall conceptual understanding of a Net ZEB is an energy-efficient, grid-connected building enabled to generate energy from renewable sources to compensate for its own energy demand (see figure 1). The wording "net" emphasizes the energy exchange between the building and the energy infrastructure. Through the building–grid interaction, Net ZEBs become an active part of the renewable energy infrastructure. This connection to energy grids avoids the need for the seasonal energy storage and oversized on-site systems for renewable energy generation found in energy autonomous buildings. The similarity of both concepts is a pathway of two actions: 1) reduce energy demand by means of energy efficiency measures and passive energy use; 2) generate energy from renewable sources. However, the grid interaction of Net ZEBs, and plans to increase their numbers widely, are prompting consideration of increased flexibility in the shifting of energy loads and the reduction of peak demands. Positive Energy District: Expanding some of the principles of zero-energy buildings to the city district level, Positive Energy Districts (PED) are districts or other urban areas that produce at least as much energy on an annual basis as they consume. The impetus to develop whole positive energy districts instead of single buildings is based on the possibility of sharing resources, managing energy systems efficiently across many buildings and reaching economies of scale. Within this balancing procedure several aspects and explicit choices have to be determined: The building system boundary is split into a physical boundary, which determines which renewable resources are considered (e.g. within the building footprint, on-site or even off-site) and how many buildings are included in the balance (a single building or a cluster of buildings), and a balance boundary, which determines the energy uses included (e.g. heating, cooling, ventilation, hot water, lighting, appliances, IT, central services, electric vehicles, embodied energy, etc.). It should be noted that renewable energy supply options can be prioritized (e.g. by transportation or conversion effort, availability over the lifetime of the building, or replication potential for the future) and therefore form a hierarchy. It may be argued that resources within the building footprint or on-site should be given priority over off-site supply options. The weighting system converts the physical units of different energy carriers into a uniform metric (site/final energy; source/primary energy, with renewable parts included or not; energy cost; equivalent carbon emissions; or even energy or environmental credits) and allows their comparison and compensation among each other in one single balance (e.g. exported PV electricity can compensate for imported biomass). Politically influenced, and therefore possibly asymmetric or time-dependent, conversion/weighting factors can affect the relative value of energy carriers and can influence the required energy generation capacity. The balancing period is often assumed to be one year (suitable to cover all operational energy uses).
A shorter period (monthly or seasonal) could also be considered, as could a balance over the entire life cycle (including embodied energy, which could also be annualized and counted in addition to operational energy uses). The energy balance can be done in two balance types: 1) a balance of delivered/imported and exported energy (suited to the monitoring phase, as self-consumption of energy generated on-site can be included); 2) a balance between (weighted) energy demand and (weighted) energy generation (suited to the design phase, as end users' temporal consumption patterns – e.g., for lighting and appliances – are normally not yet known). Alternatively, a balance based on monthly net values, in which only the residuals per month are summed up to an annual balance, is conceivable. This can be seen either as a load/generation balance or as a special case of the import/export balance in which a "virtual monthly self-consumption" is assumed (see figure 2 and compare). Besides the energy balance, Net ZEBs can be characterized by their ability to match the building's load with its energy generation (load matching) or to work beneficially with respect to the needs of the local grid infrastructure (grid interaction). Both can be expressed by suitable indicators, which are intended as assessment tools only. == Design and construction == The most cost-effective steps toward a reduction in a building's energy consumption usually occur during the design process. To achieve efficient energy use, zero energy design departs significantly from conventional construction practice. Successful zero energy building designers typically combine time-tested passive solar, or artificial conditioning, principles that work with the on-site assets. Sunlight and solar heat, prevailing breezes, and the cool of the earth below a building can provide daylighting and stable indoor temperatures with minimal mechanical means. ZEBs are normally optimized to use passive solar heat gain and shading, combined with thermal mass to stabilize diurnal temperature variations, and in most climates are superinsulated. All the technologies needed to create zero energy buildings are available off-the-shelf today. Sophisticated 3-D building energy simulation tools are available to model how a building will perform with a range of design variables such as building orientation (relative to the daily and seasonal position of the sun), window and door type and placement, overhang depth, insulation type and values of the building elements, air tightness (weatherization), the efficiency of heating, cooling, lighting and other equipment, as well as local climate. These simulations help the designers predict how the building will perform before it is built, and enable them to model the economic and financial implications through cost–benefit analysis or, even more appropriately, life-cycle assessment. Zero-energy buildings are built with significant energy-saving features. The heating and cooling loads are lowered by using high-efficiency equipment (such as heat pumps rather than furnaces; heat pumps are about four times as efficient as furnaces), added insulation (especially in the attic and in the basement of houses), high-efficiency windows (such as low-emissivity, triple-glazed windows), draft-proofing, high-efficiency appliances (particularly modern high-efficiency refrigerators), high-efficiency LED lighting, passive solar gain in winter and passive shading in the summer, natural ventilation, and other techniques.
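One way to see how insulation levels and window choice drive the heating load is a simple steady-state transmission-loss estimate, roughly the sum of U·A over the envelope multiplied by the heating degree-days. The sketch below uses assumed envelope areas, U-values and a degree-day figure purely for illustration, and ignores solar and internal gains.

```python
# Minimal sketch of an envelope heat-loss estimate: annual transmission loss
# is roughly sum(U * A) * heating degree-days. Areas, U-values and the
# degree-day figure are assumptions for illustration; gains are ignored.

HEATING_DEGREE_DAYS = 3500     # K*day per year, assumed cold climate
HOURS_PER_DAY = 24

envelope = {                   # element: (area m^2, U-value W/(m^2*K))
    "walls":   (120.0, 0.15),  # assumed superinsulated wall
    "roof":    (80.0, 0.10),
    "floor":   (80.0, 0.12),
    "windows": (25.0, 0.80),   # assumed triple-glazed, low-e
}

ua_total = sum(area * u for area, u in envelope.values())          # W/K
annual_loss_kwh = ua_total * HEATING_DEGREE_DAYS * HOURS_PER_DAY / 1000.0

print(f"total UA: {ua_total:.1f} W/K")
print(f"annual transmission loss: {annual_loss_kwh:.0f} kWh")
```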
Which of these features are emphasized varies with the climate zone in which construction occurs. Water heating loads can be lowered by using water conservation fixtures, heat recovery units on waste water, solar water heating, and high-efficiency water heating equipment. In addition, daylighting with skylights or solar tubes can provide 100% of daytime illumination within the home. Nighttime illumination is typically provided by fluorescent and LED lighting that uses one-third or less of the power of incandescent lights, without adding unwanted heat. Miscellaneous electric loads can be lessened by choosing efficient appliances and minimizing phantom loads or standby power. Other techniques to reach net zero (depending on climate) are earth-sheltered building principles, superinsulated walls using straw-bale construction, prefabricated building panels and roof elements, plus exterior landscaping for seasonal shading. Once the energy use of the building has been minimized, it can be possible to generate all of that energy on site using roof-mounted solar panels. Zero-energy buildings are often designed to make dual use of energy, including energy from white goods: for example, using refrigerator exhaust to heat domestic water, or using ventilation air and shower drain heat exchangers, office machines and computer servers, and body heat to heat the building. These buildings make use of heat energy that conventional buildings may exhaust outside. They may use heat recovery ventilation, hot water heat recycling, combined heat and power, and absorption chiller units. == Energy harvest == ZEBs harvest available energy to meet their electricity and heating or cooling needs. By far the most common way to harvest energy is to use roof-mounted solar photovoltaic panels that turn the sun's light into electricity. Energy can also be harvested with solar thermal collectors (which use the sun's heat to heat water for the building). Heat pumps can also harvest heat and cold from the air (air-sourced) or the ground near the building (ground-sourced, otherwise known as geothermal). Technically, heat pumps move heat rather than harvest it, but the overall effect in terms of reduced energy use and reduced carbon footprint is similar. In the case of individual houses, various microgeneration technologies may be used to provide heat and electricity to the building, using solar cells or wind turbines for electricity, and biofuels or solar thermal collectors linked to seasonal thermal energy storage (STES) for space heating. An STES can also be used for summer cooling by storing the cold of winter underground. To cope with fluctuations in demand, zero energy buildings are frequently connected to the electricity grid, exporting electricity to the grid when there is a surplus and drawing electricity when not enough is being produced. Other buildings may be fully autonomous. Energy harvesting is most often more effective, in terms of cost and resource utilization, when done on a local but combined scale, for example a group of houses, cohousing, a local district or a village, rather than on an individual-house basis. An energy benefit of such localized energy harvesting is the virtual elimination of electrical transmission and electricity distribution losses. On-site energy harvesting, such as with rooftop-mounted solar panels, eliminates these transmission losses entirely. Energy harvesting in commercial and industrial applications should benefit from the topography of each location.
However, a site that is free of shade can generate large amounts of solar-powered electricity from the building's roof, and almost any site can use geothermal or air-sourced heat pumps. The production of goods under net zero fossil energy consumption requires locations with geothermal, microhydro, solar, and wind resources to sustain the concept. Zero-energy neighborhoods, such as the BedZED development in the United Kingdom and those spreading rapidly in California and China, may use distributed generation schemes. This may in some cases include district heating, community chilled water, shared wind turbines, etc. There are current plans to use ZEB technologies to build entire off-the-grid or net zero energy use cities. === The "energy harvest" versus "energy conservation" debate === One of the key areas of debate in zero energy building design is over the balance between energy conservation and the distributed point-of-use harvesting of renewable energy (solar energy, wind energy, and thermal energy). Most zero energy homes use a combination of these strategies. As a result of significant government subsidies for photovoltaic solar electric systems, wind turbines, etc., there are those who suggest that a ZEB is a conventional house with distributed renewable energy harvesting technologies. Entire developments of such homes have appeared in locations where photovoltaic (PV) subsidies are significant, but many so-called "zero energy homes" still have utility bills. This type of energy harvesting without added energy conservation may not be cost-effective with the current price of electricity generated with photovoltaic equipment, depending on the local price of power-company electricity. The cost, energy and carbon-footprint savings from conservation (e.g., added insulation, triple-glazed windows and heat pumps), compared to those from on-site energy generation (e.g., solar panels), have been published for an upgrade to an existing house. Since the 1980s, passive solar building design and passive house standards have demonstrated heating energy consumption reductions of 70% to 90% in many locations, without active energy harvesting. For new builds, and with expert design, this can be accomplished with little additional construction cost for materials over a conventional building. Very few industry experts have the skills or experience to fully capture the benefits of passive design. Such passive solar designs are much more cost-effective than adding expensive photovoltaic panels on the roof of a conventional inefficient building. A few kilowatts of photovoltaic panels (costing the equivalent of about US$2–3 per annual kWh of production) may only reduce external energy requirements by 15% to 30%. A 29 kW (100,000 BTU/h) conventional air conditioner with a seasonal energy efficiency ratio (SEER) of 14 requires over 7 kW of photovoltaic electricity while it is operating (100,000 BTU/h ÷ 14 BTU/Wh ≈ 7,100 W), and that does not include enough for off-the-grid night-time operation. Passive cooling and superior system engineering techniques can reduce the air conditioning requirement by 70% to 90%. Photovoltaic-generated electricity becomes more cost-effective when the overall demand for electricity is lower. ==== Combined approach in rapid retrofits for existing buildings ==== Companies in Germany and the Netherlands offer rapid climate retrofit packages for existing buildings, which add a custom-designed shell of insulation to the outside of a building, along with upgrades for more sustainable energy use, such as heat pumps.
Similar pilot projects are underway in the US. === Occupant behavior === The energy used in a building can vary greatly depending on the behavior of its occupants. What is accepted as comfortable varies widely. Studies of identical homes have shown dramatic differences in energy use in a variety of climates. A widely cited average ratio of the highest to the lowest energy consumer in identical homes is about 3, with some identical homes using up to 20 times as much heating energy as others. Occupant behavior can vary from differences in setting and programming thermostats, to varying levels of illumination and hot water use, to window and shading system operation, to the number of miscellaneous electric devices or plug loads used. == Utility concerns == Utility companies are typically legally responsible for maintaining the electrical infrastructure that brings power to cities, neighborhoods, and individual buildings. Utility companies typically own this infrastructure up to the property line of an individual parcel, and in some cases own electrical infrastructure on private land as well. In the US, utilities have expressed concern that the use of net metering for ZNE projects threatens their base revenue, which in turn impacts their ability to maintain and service the portion of the electrical grid that they are responsible for. Utilities have expressed concern that states that maintain net metering laws may saddle non-ZNE homes with higher utility costs, as those homeowners would be responsible for paying for grid maintenance while ZNE home owners would theoretically pay nothing if they achieve ZNE status. This creates potential equity issues, as the burden would currently appear to fall on lower-income households. A possible solution to this issue is to create a minimum base charge for all homes connected to the utility grid, which would require ZNE home owners to pay for grid services independently of their electrical use. Additional concerns are that local distribution as well as larger transmission grids have not been designed to convey electricity in two directions, which may be necessary as higher levels of distributed energy generation come on line. Overcoming this barrier could require extensive upgrades to the electrical grid; however, as of 2010, this was not believed to be a major problem until renewable generation reaches much higher levels of penetration. == Development efforts == Wide acceptance of zero-energy building technology may require more government incentives or building code regulations, the development of recognized standards, or significant increases in the cost of conventional energy. The Google photovoltaic campus and the Microsoft 480-kilowatt photovoltaic campus relied on US federal, and especially California, subsidies and financial incentives. California is now providing US$3.2 billion in subsidies for residential and commercial near-zero-energy buildings. The details of other American states' renewable energy subsidies (up to US$5.00 per watt) can be found in the Database of State Incentives for Renewables and Efficiency. The Florida Solar Energy Center has a slide presentation on recent progress in this area. The World Business Council for Sustainable Development has launched a major initiative to support the development of ZEB.
Led by the CEO of United Technologies and the Chairman of Lafarge, the organization has both the support of large global companies and the expertise to mobilize the corporate world and governmental support to make ZEB a reality. Their first report, a survey of key players in real estate and construction, indicates that the costs of building green are overestimated by 300 percent. Survey respondents estimated that greenhouse gas emissions by buildings are 19 percent of the worldwide total, in contrast to the actual value of roughly 40 percent. === Influential zero-energy and low-energy buildings === Those who commissioned construction of passive houses and zero-energy homes over the last three decades were essential to iterative, incremental, cutting-edge technology innovations. Much has been learned from many significant successes, and a few expensive failures. The zero-energy building concept has been a progressive evolution from other low-energy building designs. Among these, the Canadian R-2000 and the German passive house standards have been internationally influential. Collaborative government demonstration projects, such as the superinsulated Saskatchewan House and the International Energy Agency's Task 13, have also played their part. === Net zero energy building definition === The US National Renewable Energy Laboratory (NREL) published a report called Net-Zero Energy Buildings: A Classification System Based on Renewable Energy Supply Options. This was the first report to lay out a classification system for net zero/renewable energy buildings that covers the full spectrum of clean energy sources, both on site and off site. This classification system identifies the following four main categories of net zero energy buildings/sites/campuses: NZEB:A – a footprint renewables Net Zero Energy Building; NZEB:B – a site renewables Net Zero Energy Building; NZEB:C – an imported renewables Net Zero Energy Building; NZEB:D – an off-site purchased renewables Net Zero Energy Building. Applying this US government net zero classification system means that every building can become net zero with the right combination of the key net zero technologies – PV (solar), GHP (geothermal heating and cooling, thermal batteries), EE (energy efficiency), sometimes wind, and electric batteries. A graphical summary of the scale of impact of applying these NREL guidelines for net zero can be seen in the graphic at the Net Zero Foundation titled "Net Zero Effect on U.S. Total Energy Use", showing a possible 39% reduction in US total fossil fuel use by changing US residential and commercial buildings to net zero, or 37% if natural gas continues to be used for cooking at the current level. === Net zero carbon conversion example === Many well-known universities have professed an intent to convert their energy systems completely off fossil fuels. Capitalizing on the continuing developments in both photovoltaics and geothermal heat pump technologies, and in the advancing electric battery field, complete conversion to a carbon-free energy solution is becoming easier. Large-scale hydroelectric power has been around since before 1900. An example of such a project is the Net Zero Foundation's proposal at MIT to take that campus completely off fossil fuel use. This proposal shows the coming application of Net Zero Energy Building technologies at the district energy scale.
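One way to read the NREL categories listed above is as a supply-priority ladder, in which a building is classified by the furthest class of renewable supply it needs in order to cover its annual use. The sketch below is only an illustrative interpretation with hypothetical figures, not NREL's own classification tool.

```python
# Minimal sketch of reading the NREL NZEB categories as a priority ladder:
# footprint (A), then site (B), then imported (C), then purchased off-site (D)
# renewables. This is an illustrative interpretation, not NREL's own tool,
# and all annual figures are hypothetical.

def classify_nzeb(use_kwh, footprint_kwh, site_kwh, imported_kwh, purchased_kwh):
    supply_ladder = [
        ("NZEB:A (footprint renewables)", footprint_kwh),
        ("NZEB:B (site renewables)", site_kwh),
        ("NZEB:C (imported renewables)", imported_kwh),
        ("NZEB:D (off-site purchased renewables)", purchased_kwh),
    ]
    covered = 0.0
    for label, supply in supply_ladder:
        covered += supply
        if covered >= use_kwh:
            return label
    return "not a net zero energy building"

# Example: roof PV alone falls short, but adding ground-mounted site PV closes the gap.
print(classify_nzeb(use_kwh=60_000, footprint_kwh=35_000, site_kwh=30_000,
                    imported_kwh=0, purchased_kwh=0))   # -> NZEB:B (site renewables)
```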
== Advantages and disadvantages == === Advantages === isolation for building owners from future energy price increases; increased comfort due to more uniform interior temperatures (this can be demonstrated with comparative isotherm maps); reduced total cost of ownership due to improved energy efficiency; reduced total net monthly cost of living; reduced risk of loss from grid blackouts; minimal to no future energy price increases for building owners; reduced requirement for energy austerity and carbon emission taxes; improved reliability – photovoltaic systems have 25-year warranties and seldom fail during weather problems (the 1982 photovoltaic systems on the Walt Disney World EPCOT Energy Pavilion remained in use until 2018, even through three hurricanes, and were taken down in 2018 in preparation for a new ride); higher resale value, as potential owners demand more ZEBs than the available supply; the value of a ZEB relative to a similar conventional building should increase every time energy costs increase; and contribution to the greater benefit of society, e.g. providing sustainable renewable energy to the grid and reducing the need for grid expansion. Optimizing bottom-up urban building energy models (UBEM) can improve the accuracy of building energy simulation. === Disadvantages === initial costs can be higher, and effort is required to understand, apply, and qualify for ZEB subsidies, if they exist; very few designers or builders have the necessary skills or experience to build ZEBs; possible declines in future utility company renewable energy costs may lessen the value of capital invested in energy efficiency; the price of new photovoltaic solar cell equipment has been falling at roughly 17% per year, which will lessen the value of capital invested in a solar electric generating system, and current subsidies may be phased out as photovoltaic mass production lowers future prices; it can be a challenge to recover higher initial costs on resale of the building, although new energy rating systems are being introduced gradually; while an individual house may use an average of net zero energy over a year, it may demand energy at the time when peak demand for the grid occurs – in such a case, the capacity of the grid must still provide electricity to all loads, so a ZEB may not reduce the risk of loss from grid blackouts; without an optimized thermal envelope, the embodied energy, heating and cooling energy and resource usage are higher than needed, and ZEBs by definition do not mandate a minimum heating and cooling performance level, thus allowing oversized renewable energy systems to fill the energy gap; solar energy capture using the house envelope only works in locations unobstructed from the sun, and cannot be optimized in north-facing (for the northern hemisphere; south-facing for the southern hemisphere), shaded, or wooded surroundings; ZEBs are not free of carbon emissions – glass, for example, has a high embodied energy and its production emits substantial carbon; and building regulations such as height restrictions or fire codes may prevent implementation of wind or solar power or external additions to an existing thermal envelope. === Zero energy building versus green building === The goal of green building and sustainable architecture is to use resources more efficiently and reduce a building's negative impact on the environment. Zero energy buildings achieve one key goal: exporting as much renewable energy as they use over the course of a year, thereby reducing greenhouse gas emissions.
ZEB goals need to be defined and set, as they are critical to the design process. Zero energy buildings may or may not be considered "green" in all areas, such as reducing waste or using recycled building materials. However, zero energy, or net-zero, buildings do tend to have a much lower ecological impact over the life of the building compared with other "green" buildings that require imported energy and/or fossil fuel to be habitable and to meet the needs of occupants. Both terms, zero energy buildings and green buildings, have similarities and differences. "Green" buildings often focus on operational energy and disregard the embodied carbon footprint from construction. According to the IPCC, embodied carbon will make up half of the total carbon emissions between now (2020) and 2050. Zero energy buildings, on the other hand, are specifically designed to produce enough energy from renewable energy sources to meet their own consumption requirements, while green buildings can be generally defined as buildings that reduce negative impacts on, or positively impact, the natural environment. There are several factors that must be considered before a building is determined to be a green building. Constructing a green building must include efficient use of utilities such as water and energy, use of renewable energy, use of recycling and reuse practices to reduce waste, provision of proper indoor air quality, use of ethically sourced and non-toxic materials, a design that allows the building to adapt to changing environmental conditions, and aspects of the design, construction, and operational process that address the environment and the quality of life of its occupants. The term green building can also refer to the practice of green building, which includes being resource-efficient from design, through construction and operation, and ultimately to deconstruction. The practice of green building differs slightly from zero energy buildings because it considers all environmental impacts, such as the use of materials and water pollution, whereas the scope of zero energy buildings only includes the building's energy consumption and its ability to produce an equal amount, or more, of energy from renewable energy sources. There are many unforeseen design challenges and site conditions involved in efficiently meeting the renewable energy needs of a building and its occupants, as much of this technology is new. Designers must apply holistic design principles and take advantage of the free, naturally occurring assets available, such as passive solar orientation, natural ventilation, daylighting, thermal mass, and night-time cooling. Designers and engineers must also experiment with new materials and technological advances, striving for more affordable and efficient production. A main issue in recent research regarding challenges and implementation is that there are no common standards or measures for defining and checking "nearly" zero energy performance (D'Agostino & Mazzarella, 2019). === Zero energy building versus zero heating building === With advances in ultra-low U-value glazing, a (nearly) zero heating building has been proposed to supersede nearly zero-energy buildings in the EU. The zero heating building reduces reliance on passive solar design and is more open to conventional architectural design. The zero heating building also removes the need for a seasonal or winter utility power reserve.
The annual specific heating demand for the zero-heating house should not exceed 3 kWh/m²a. A zero heating building is simpler to design and to operate; for example, there is no need for modulated sun shading. ==== Certification ==== The two most common certifications for green building are Passive House and LEED. The goal of Passive House is to be energy efficient and to reduce heating and cooling use to well below standard levels. LEED certification is more comprehensive with regard to energy use; a building is awarded credits as it demonstrates sustainable practices across a range of categories. Another certification that designates a building as a net zero energy building exists within the requirements of the Living Building Challenge (LBC): the Net Zero Energy Building (NZEB) certification provided by the International Living Future Institute (ILFI). The designation was developed in November 2011 as the NZEB certification but was then simplified to the Zero Energy Building Certification in 2017. Also included in the list of green building certifications, the BCA Green Mark rating system allows buildings to be evaluated for their performance and impact on the environment. == Worldwide == === International initiatives === As a response to global warming and increasing greenhouse gas emissions, countries around the world have been gradually implementing policies to promote ZEBs. Between 2008 and 2013, researchers from Australia, Austria, Belgium, Canada, Denmark, Finland, France, Germany, Italy, the Republic of Korea, New Zealand, Norway, Portugal, Singapore, Spain, Sweden, Switzerland, the United Kingdom and the US worked together in the joint research program called "Towards Net Zero Energy Solar Buildings". The program was created under the umbrella of the International Energy Agency (IEA) Solar Heating and Cooling Programme (SHC) Task 40 / Energy in Buildings and Communities (EBC, formerly ECBCS) Annex 52, with the intent of harmonizing international definition frameworks for net-zero and very low energy buildings by dividing the work into subtasks. The European Union has mandated that all new buildings must be nearly zero-energy by the end of 2020 (and public buildings by 2018). However, D'Agostino and Mazzarella (2019) note that this directive has been applied inconsistently: some countries use primary energy consumption as the measure, while others prioritize CO₂ emissions or renewable energy input. In 2015, the Paris Agreement was created under the United Nations Framework Convention on Climate Change (UNFCCC) with the intent of keeping the global temperature rise of the 21st century below 2 degrees Celsius, while pursuing efforts to limit the increase to 1.5 degrees Celsius, by limiting greenhouse gas emissions. While there was no enforced compliance, 197 countries signed the international treaty, which bound developed countries legally through mutual cooperation in which each party would update its INDC every five years and report annually to the COP. Due to the advantages of energy efficiency and carbon emission reduction, ZEBs are being widely implemented in many different countries as a solution to energy and environmental problems in the infrastructure sector. === Australia === National trajectory: In Australia, the Trajectory for Low Energy Buildings and its Addendum were agreed by all Commonwealth, state and territory energy ministers in 2019. The Trajectory is a national plan that aims to achieve zero energy and carbon-ready commercial and residential buildings in Australia.
It is a key initiative for meeting Australia's 40% energy productivity improvement target by 2030 under the National Energy Productivity Plan. On 7 July 2023, the Energy and Climate Change Ministerial Council agreed to update the Trajectory for Low Energy Buildings by the end of 2024. The updates to the Trajectory will: support the delivery of a low energy, net zero emissions residential and commercial building sector by 2050; consider the success of the existing program; and help develop the policy pathway for the building sector to achieve net zero by 2050. ZEB in Australia Council House 2 (CH2) is an office building located at 240 Little Collins Street in the Melbourne central business district, Australia. It is used by the City of Melbourne council and, in April 2005, became the first purpose-built office building in Australia to achieve a maximum Six Green Star rating. === Belgium === In Belgium there is a project with the ambition of making the city of Leuven climate-neutral by 2030. === Brazil === In Brazil, Ordinance No. 42 of February 24, 2021 approved the Inmetro Normative Instruction for the Classification of Energy Efficiency of Commercial, Service and Public Buildings (INI-C), which improves the Technical Quality Requirements for the Energy Efficiency Level of Commercial, Service and Public Buildings (RTQ-C) by specifying the criteria and methods for classifying commercial, service and public buildings as to their energy efficiency. Annex D presents the procedures for determining the potential for local renewable energy generation and the assessment conditions for Near Zero Energy Buildings (NZEBs) and Positive Energy Buildings (PEBs). === Canada === The Canadian Home Builders Association – National oversees the Net Zero Homes certification label, a voluntary industry-led labeling initiative. In December 2017, the BC Energy Step Code entered into legal force in British Columbia. Local British Columbia governments may use the standard to incentivize or require a level of energy efficiency in new construction that goes above and beyond the requirements of the base building code. The regulation is designed as a technical roadmap to help the province reach its target that all new buildings attain a net zero energy ready level of performance by 2032. In August 2017, the Government of Canada released Build Smart – Canada's Buildings Strategy as a key driver of the Pan-Canadian Framework on Clean Growth and Climate Change, Canada's national climate strategy. The Build Smart strategy seeks to dramatically increase the energy efficiency of Canadian buildings in pursuit of a net zero energy ready level of performance. In Canada the Net-Zero Energy Home Coalition is an industry association promoting net-zero energy home construction and the adoption of the near net-zero energy home (nNZEH), NZEH Ready and NZEH standards. The Canada Mortgage and Housing Corporation is sponsoring the EQuilibrium Sustainable Housing Competition, which will see the completion of fifteen zero-energy and near-zero-energy demonstration projects across the country starting in 2008. The EcoTerra House in Eastman, Quebec, is Canada's first nearly net-zero energy house built through the CMHC EQuilibrium Sustainable Housing Competition. The house was designed by Assoc. Prof. Dr. Masa Noguchi of the University of Melbourne for Alouette Homes and engineered by Prof. Dr. Andreas K. Athienitis of Concordia University.
In 2014, the public library building in Varennes, Quebec, became the first ZNE institutional building in Canada. The library is also LEED Gold certified. The EcoPlusHome in Bathurst, New Brunswick, is a prefabricated test house built by Maple Leaf Homes with technology from Bosch Thermotechnology. Mohawk College will be building Hamilton's first net zero building. === China === With an estimated population of 1,439,323,776 people, China has become one of the world's leading contributors to greenhouse gas emissions due to its ongoing rapid urbanization. Even with the growth in building infrastructure, China has long been considered a country where overall energy demand has consistently grown less rapidly than its gross domestic product (GDP). Since the late 1970s, China has been using half as much energy as it did in 1997, but due to its dense population and rapid growth of infrastructure, China has become the world's second largest energy consumer and is in a position to become the leading contributor to greenhouse gas emissions in the next century. Since 2010, the Chinese government has released new national policies to raise ZEB design standards and has laid out a series of incentives to increase ZEB projects in China. In November 2015, China's Ministry of Housing and Urban-Rural Development (MOHURD) released a technical guide on passive and low energy green residential buildings. The guide was aimed at improving energy efficiency in China's infrastructure and was the first of its kind to be formally released as a guide for energy efficiency. With the rapid growth in ZEBs over the last three years, a further influx of ZEBs was estimated to be built in China by 2020, in addition to the existing ZEB projects already completed. As a response to the Paris Agreement in 2015, China stated that it had set a target of peaking its carbon emissions around 2030 while also aiming to lower carbon dioxide emissions per unit of GDP by 60-65 percent from 2005 levels. In 2020, Chinese Communist Party leader Xi Jinping declared in his address to the UN General Assembly that China would be carbon neutral by 2060, pushing forward climate change reforms. With more than 95 percent of China's energy originating from fuel sources that emit carbon dioxide, carbon neutrality in China will require an almost complete transition to sources such as solar, wind, hydro, or nuclear power. In order to achieve carbon neutrality, China's proposed energy quota policy will have to incorporate new monitoring mechanisms that ensure accurate measurement of the energy performance of buildings. Future research should investigate the challenges that could arise from the implementation of ZEB policies in China. ==== Net-zero energy projects in China ==== One of the new generation of net-zero energy office buildings successfully constructed is the 71-story Pearl River Tower in Guangzhou, China. Designed by Skidmore, Owings & Merrill LLP, the tower was designed with the idea that the building would generate as much energy as it uses on an annual basis, while following the four steps to net zero energy: reduction, absorption, reclamation, and generation.
While initial plans for the Pearl River Tower included natural gas-fired microturbines for generating electricity, photovoltaic panels integrated into the glazed roof and shading louvers, combined with tactical building design and electricity generation from vertical-axis wind turbines (VAWTs), were chosen instead due to local regulations. === Denmark === The Strategic Research Centre on Zero Energy Buildings was established in 2009 at Aalborg University by a grant from the Danish Council for Strategic Research (DSF), the Programme Commission for Sustainable Energy and Environment, in cooperation with the Technical University of Denmark, the Danish Technological Institute, the Danish Construction Association and some private companies. The purpose of the centre is to develop zero energy building concepts through integrated, intelligent technologies for buildings that ensure considerable energy conservation and optimal application of renewable energy. In cooperation with the industry, the centre will create the necessary basis for long-term sustainable development in the building sector. === Germany === Technische Universität Darmstadt won first place in the international zero energy design 2007 Solar Decathlon competition with a Passivhaus (passive house) design plus renewables, scoring highest in the Architecture, Lighting, and Engineering contests. The Fraunhofer Institute for Solar Energy Systems ISE in Freiburg im Breisgau conducts research on net zero energy, energy-plus and climate-neutral buildings for the next generation of electricity grids. === India === India's first net zero building is Indira Paryavaran Bhawan, located in New Delhi and inaugurated in 2014. Features include passive solar building design and other green technologies. High-efficiency solar panels are proposed. It cools air from the toilet exhaust using a thermal wheel in order to reduce the load on its chiller system. It has many water conservation features. === Iran === In 2011, Payesh Energy House (PEH), or Khaneh Payesh Niroo, a collaboration of the Fajr-e-Toseah Consultant Engineering Company and Vancouver Green Homes Ltd under the management of the Payesh Energy Group (EPG), launched the first net-zero passive house in Iran. This concept makes the design and construction of PEH a sample model and a standardized process for mass production by MAPSA. An example of the new generation of zero energy office buildings is the 24-story OIIC Office Tower, started in 2011 as the OIIC Company headquarters. It uses both modest energy efficiency and substantial distributed renewable energy generation from solar and wind, and is managed by the Rahgostar Naft Company in Tehran, Iran. The tower receives economic support from government subsidies that are now funding many significant fossil-fuel-free efforts. === Ireland === In 2005, a private company launched the world's first standardised passive house in Ireland; this concept makes the design and construction of a passive house a standardised process. Conventional low energy construction techniques have been refined and modelled on the PHPP (Passive House Planning Package) to create the standardised passive house. Building offsite allows high-precision techniques to be utilised and reduces the possibility of errors in construction.
In 2009 the same company started a project to use 23,000 litres of water in a seasonal storage tank, heated by evacuated solar tubes throughout the year, with the aim of providing the house with enough heat throughout the winter months and thus eliminating the need for any electrical heating to keep the house comfortably warm. The system is monitored and documented by a research team from the University of Ulster, and the results will be included as part of a PhD thesis. In 2012 Cork Institute of Technology started renovation work on its 1974 building stock to develop a net zero energy building retrofit. The exemplar project will become Ireland's first zero energy testbed, offering post-occupancy evaluation of actual building performance against design benchmarks. === Jamaica === The first zero energy building in Jamaica and the Caribbean opened at the Mona Campus of the University of the West Indies (UWI) in 2017. The 2,300-square-foot building was designed to inspire more sustainable and energy-efficient buildings in the area. === Japan === After the March 2011 earthquake and the ensuing Fukushima Daiichi nuclear disaster, Japan experienced a severe power crisis that raised awareness of the importance of energy conservation. In 2012 the Ministry of Economy, Trade and Industry, the Ministry of Land, Infrastructure, Transport and Tourism and the Ministry of the Environment summarized the road map for a low-carbon society, which contains the goal of making ZEH and ZEB the standard for new construction in 2020. The Mitsubishi Electric Corporation is constructing Japan's first zero energy office building, set to be completed in October 2020 (as of September 2020). The SUSTIE ZEB test facility, located in Kamakura, Japan, was built to develop ZEB technology; with its net zero certification, the facility is projected to achieve an energy reduction of 103%, that is, to produce slightly more energy than it consumes. Japan has made it a goal that all new houses be net zero energy by 2030. The developer Sekisui House introduced its first net zero home in 2013 and is now planning Japan's first zero energy condominium in Nagoya City, a three-story building with 12 units. There are solar panels on the roof and fuel cells for each unit to provide backup power. === Korea (Republic of) === South Korea's mandatory ZEB requirements, which were applied to buildings with a GFA of 1,000 m2 or more in 2021, will expand to buildings with a GFA of 500 m2 or more in 2022 and will apply to all public buildings starting in 2024. For private buildings, ZEB certification will be mandated for building permits with a GFA of over 100,000 m2 from 2023. After 2025, zero-energy construction requirements for private buildings will be expanded to GFAs over 1,000 m2. The goal of the policy is to convert all public sector buildings to ZEB grade 3 (an energy independence rate of 60%~80%) and all private buildings to ZEB grade 5 (an energy independence rate of 20%~40%) by 2030. EnergyX DY-Building (에너지엑스 DY빌딩), the first commercial net-zero energy building (NZEB, or ZEB grade 1) and the first plus energy building (+ZEB, or ZEB grade plus) in Korea, was opened and introduced in 2023. The energy technology and sustainable architecture platform company EnergyX developed, designed, and engineered the building with its proprietary technologies and services. EnergyX DY-Building received the ZEB certification with an energy independence rate (or energy self-sufficiency rate) of 121.7%.
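The Korean grades above are stated in terms of an energy independence (self-sufficiency) rate, that is, on-site renewable production expressed as a share of the building's energy consumption. The sketch below illustrates that calculation; only the bands explicitly quoted in the text (grade 3 at 60%~80%, grade 5 at 20%~40%, and grade 1 / plus-energy at 100% or more) are encoded, so the mapping is a partial, assumed one rather than the official certification table.

```python
# Hedged sketch of the energy independence rate behind Korean ZEB grades.
# Only the bands quoted in the text are encoded; everything else is a placeholder.

def energy_independence_rate(onsite_production_kwh: float,
                             consumption_kwh: float) -> float:
    """On-site renewable production as a percentage of building consumption."""
    return 100.0 * onsite_production_kwh / consumption_kwh

def zeb_grade(rate_percent: float) -> str:
    """Map a self-sufficiency rate to the grade bands mentioned in the text."""
    if rate_percent >= 100:
        return "grade 1 (net zero) or plus-energy"
    if 60 <= rate_percent <= 80:
        return "grade 3"
    if 20 <= rate_percent <= 40:
        return "grade 5"
    return "other band (not quoted in the text)"

# The EnergyX DY-Building is reported at a 121.7% energy independence rate.
print(f"121.7% -> {zeb_grade(121.7)}")

# A hypothetical public building producing 7,000 kWh against 10,000 kWh of use:
rate = energy_independence_rate(7_000, 10_000)
print(f"{rate:.0f}% -> {zeb_grade(rate)}")
```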
=== Malaysia === In October 2007, the Malaysia Energy Centre (PTM) successfully completed the development and construction of the PTM Zero Energy Office (ZEO) Building. The building was designed to be a super-energy-efficient building using only 286 kWh/day. The renewable energy – photovoltaic combination is expected to result in a net zero energy requirement from the grid. The building is currently undergoing a fine-tuning process by the local energy management team, and findings are expected to be published within a year. In 2016, the Sustainable Energy Development Authority Malaysia (SEDA Malaysia) started a voluntary initiative called the Low Carbon Building Facilitation Program to support the current low carbon cities program in Malaysia. Under the program, several demonstration projects managed to reduce energy use and carbon emissions by more than 50 percent, and some managed to save more than 75 percent. Continuous improvement of super energy efficient buildings, with significant implementation of on-site renewable energy, has allowed a few of them to become nearly zero energy (nZEB) as well as net-zero energy buildings (NZEB). In March 2018, SEDA Malaysia started the Zero Energy Building Facilitation Program. Malaysia also has its own sustainable building rating tool for low carbon and zero energy buildings, called GreenPASS, which was developed by the Construction Industry Development Board Malaysia (CIDB) in 2012 and is currently administered and promoted by SEDA Malaysia. GreenPASS is officially known as Construction Industry Standard (CIS) 20:2012. === Netherlands === In September 2006, the Dutch headquarters of the World Wildlife Fund (WWF) in Zeist was opened. This earth-friendly building gives back more energy than it uses. All materials in the building were tested against strict requirements laid down by the WWF and the architect. === Norway === In February 2009, the Research Council of Norway assigned the Faculty of Architecture and Fine Art at the Norwegian University of Science and Technology to host the Research Centre on Zero Emission Buildings (ZEB), which is one of eight new national Centres for Environment-friendly Energy Research (FME). The main objective of the FME centres is to contribute to the development of good technologies for environmentally friendly energy and to raise the level of Norwegian expertise in this area. In addition, they should help to generate new industrial activity and new jobs. Over the next eight years, the FME centre ZEB will develop competitive products and solutions for existing and new buildings that will lead to market penetration of zero emission buildings related to their production, operation and demolition. === Singapore === Singapore unveiled a prominent net-zero energy building development at the National University of Singapore. The building, called SDE4, is located within a group of three buildings in its School of Design and Environment (SDE). The design achieved a Green Mark Platinum certification, as the building produces as much energy as it consumes with its solar-panel-covered rooftop and hybrid cooling system, along with many integrated systems to achieve optimum energy efficiency. This development was the first new-build zero-energy building to come to fruition in Singapore, and the first zero-energy building at NUS.
The first retrofitted zero energy building to be developed in Singapore, a building at the Building and Construction Authority (BCA) Academy, was launched by the Minister for National Development, Mah Bow Tan, at the inaugural Singapore Green Building Week on October 26, 2009. Singapore's Green Building Week (SGBW) promotes sustainable development and celebrates the achievements of successfully designed sustainable buildings. A net-zero energy building unveiled more recently is the SMU Connexion (SMUC), the first net-zero energy building in the city that also utilizes mass engineered timber (MET). It is designed to meet the Building and Construction Authority (BCA) Green Mark Platinum certification and has been in operation since January 2020. === Switzerland === The Swiss MINERGIE-A-Eco label certifies zero energy buildings. The first building with this label, a single-family home, was completed in Mühleberg in 2011. === United Arab Emirates === Masdar City in Abu Dhabi; The Sustainable City in Dubai. === United Kingdom === In December 2006, the government announced that by 2016 all new homes in England will be zero energy buildings. To encourage this, an exemption from Stamp Duty Land Tax is planned. In Wales the plan is for the standard to be met earlier, in 2011, although it is looking more likely that the actual implementation date will be 2012. However, as a result of a unilateral change of policy published at the time of the March 2011 budget, a more limited policy is now planned which, it is estimated, will only mitigate two thirds of the emissions of a new home. Examples include the BedZED development and the Hockerton Housing Project. In January 2019 the Ministry of Housing, Communities and Local Government simply defined 'zero energy' as 'just meets current building standards', neatly solving this problem. === United States === In the US, ZEB research is currently being supported by the US Department of Energy (DOE) Building America Program, including industry-based consortia and research organizations at the National Renewable Energy Laboratory (NREL), the Florida Solar Energy Center (FSEC), Lawrence Berkeley National Laboratory (LBNL), and Oak Ridge National Laboratory (ORNL). From fiscal year 2008 to 2012, DOE plans to award $40 million to four Building America teams: the Building Science Corporation; IBACOS; the Consortium for Advanced Residential Buildings; and the Building Industry Research Alliance, as well as a consortium of academic and building industry leaders. The funds will be used to develop net-zero-energy homes that consume 50% to 70% less energy than conventional homes. DOE is also awarding $4.1 million to two regional building technology application centers that will accelerate the adoption of new and developing energy-efficient technologies. The two centers, located at the University of Central Florida and Washington State University, will serve 17 states, providing information and training on commercially available energy-efficient technologies. The U.S. Energy Independence and Security Act of 2007 created funding from 2008 through 2012 for a new solar air conditioning research and development program, which should soon demonstrate multiple new technology innovations and mass production economies of scale. The 2008 Solar America Initiative funded research and development into the future development of cost-effective zero energy homes in the amount of $148 million in 2008. The solar energy tax credits were extended until the end of 2016. By Executive Order 13514, U.S.
President Barack Obama mandated that by 2015, 15% of existing Federal buildings conform to new energy efficiency standards and 100% of all new Federal buildings be Zero-Net-Energy by 2030. ==== Energy Free Home Challenge ==== In 2007, the philanthropic Siebel Foundation created the Energy Free Home Foundation. The goal was to offer $20 million in global incentive prizes to design and build a 2,000 square foot (186 square meter) three-bedroom, two bathroom home with (1) net-zero annual utility bills that also has (2) high market appeal, and (3) costs no more than a conventional home to construct. The plan included funding to build the top ten entries at $250,000 each, a $10 million first prize, and then a total of 100 such homes to be built and sold to the public. Beginning in 2009, Thomas Siebel made many presentations about his Energy Free Home Challenge. The Siebel Foundation Report stated that the Energy Free Home Challenge was "Launching in late 2009". The Lawrence Berkeley National Laboratory at the University of California, Berkeley participated in writing the "Feasibility of Achieving Zero-Net-Energy, Zero-Net-Cost Homes" for the $20-million Energy Free Home Challenge. If implemented, the Energy Free Home Challenge would have provided increased incentives for improved technology and consumer education about zero energy buildings coming in at the same cost as conventional housing. ==== US Department of Energy Solar Decathlon ==== The US Department of Energy Solar Decathlon is an international competition that challenges collegiate teams to design, build, and operate the most attractive, effective, and energy-efficient solar-powered house. Achieving zero net energy balance is a major focus of the competition. ==== States ==== ===== Arizona ===== Zero Energy House developed by the NAHB Research Center and John Wesley Miller Companies, Tucson. ===== California ===== The State of California has proposed that all new low- and mid-rise residential buildings, and all new commercial buildings, be designed and constructed to ZNE standards beginning in 2020 and 2030, respectively. The requirements, if implemented, will be promulgated via the California Building Code, which is updated on a three-year cycle and which currently mandates some of the highest energy efficiency standards in the United States. California is anticipated to further increase efficiency requirements by 2020, thus avoiding the trends discussed above of building standard housing and achieving ZNE by adding large amounts of renewables. The California Energy Commission is required to perform a cost-benefit analysis to prove that new regulations create a net benefit for residents of the state. West Village, located on the University of California campus in Davis, California, was the largest ZNE-planned community in North America at the time of its opening in 2014. The development contains student housing for approximately 1,980 UC Davis students as well as leasable office space and community amenities including a community center, pool, gym, restaurant and convenience store. Office spaces in the development are currently leased by energy and transportation-related University programs. 
The project was a public-private partnership between the university and West Village Community Partnership LLC, led by Carmel Partners of San Francisco, a private developer, who entered into a 60-year ground lease with the university and was responsible for the design, construction, and implementation of the $300 million project, which is intended to be market-rate housing for Davis. This is unique as the developer designed the project to achieve ZNE at no added cost to themselves or to the residents. Designed and modeled to achieve ZNE, the project uses a mixture of passive elements (roof overhangs, well-insulated walls, radiant heat barriers, ducts in insulated spaces, etc.) as well as active approaches (occupancy sensors on lights, high-efficiency appliances and lighting, etc.). Designed to out-perform California's 2008 Title 24 energy codes by 50%, the project produced 87% of the energy it consumed during its first year in operation. The shortcoming in ZNE status is attributed to several factors, including improperly functioning heat pump water heaters, which have since been fixed. Occupant behavior is significantly different from that anticipated, with the all-student population using more energy on a per-capita basis than typical inhabitants of single-family homes in the area. One of the primary factors driving increased energy use appears to be the increased miscellaneous electrical loads (MEL, or plug loads) in the form of mini-refrigerators, lights, computers, gaming consoles, televisions, and other electronic equipment. The university continues to work with the developer to identify strategies for achieving ZNE status. These approaches include incentivizing occupant behavior and increasing the site's renewable energy capacity, which is a 4 MW photovoltaic array per the original design. The West Village site is also home to the Honda Smart Home US, a beyond-ZNE single-family home that incorporates cutting-edge technologies in energy management, lighting, construction, and water efficiency. The IDeAs Z2 Design Facility is a net zero energy, zero carbon retrofit project occupied since 2007. It uses less than one fourth the energy of a typical U.S. office by applying strategies such as daylighting, radiant heating/cooling with a ground-source heat pump and high energy performance lighting and computing. The remaining energy demand is met with renewable energy from its building-integrated photovoltaic array. In 2009, building owner and occupant Integrated Design Associates (IDeAs) recorded actual measured energy use intensity of 21.17 thousand British thermal units per square foot (66.8 kWh/m2) per year, with 21.72 thousand British thermal units per square foot (68.5 kWh/m2) per year produced, for a net of −0.55 thousand British thermal units per square foot (−1.7 kWh/m2) per year. The building is also carbon neutral, with no gas connection, and with carbon offsets purchased to cover the embodied carbon of the building materials used in the renovation. The Zero Net Energy Center, scheduled to open in 2013 in San Leandro, is to be a 46,000-square-foot electrician training facility created by the International Brotherhood of Electrical Workers Local 595 and the Northern California chapter of the National Electrical Contractors Association. Training will include energy-efficient construction methods. The Green Idea House is a net zero energy, zero-carbon retrofit in Hermosa Beach. 
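The IDeAs Z2 figures above mix two unit systems, thousand Btu per square foot (kBtu/ft²) and kilowatt-hours per square metre. Since 1 kBtu/ft² is about 3.15 kWh/m², the reported net of −0.55 kBtu/ft² (about −1.7 kWh/m²) can be reproduced with a one-line conversion; the sketch below is only a units check on the published numbers, not part of any measurement methodology.

```python
# Units check: reproduce the IDeAs Z2 energy use intensity (EUI) figures.
# The measured values come from the text above; the conversion factors follow
# from the definitions 1 Btu = 1055.06 J and 1 ft = 0.3048 m.

KWH_PER_KBTU = 0.293071                          # 1 thousand Btu in kWh
M2_PER_FT2 = 0.092903                            # 1 square foot in square metres
KBTU_FT2_TO_KWH_M2 = KWH_PER_KBTU / M2_PER_FT2   # about 3.155

used_kbtu_ft2 = 21.17        # measured energy use intensity, 2009
produced_kbtu_ft2 = 21.72    # on-site renewable production intensity, 2009
net_kbtu_ft2 = used_kbtu_ft2 - produced_kbtu_ft2   # negative means net producer

for label, value in [("used", used_kbtu_ft2),
                     ("produced", produced_kbtu_ft2),
                     ("net", net_kbtu_ft2)]:
    print(f"{label:>8}: {value:6.2f} kBtu/ft2 = {value * KBTU_FT2_TO_KWH_M2:6.1f} kWh/m2")
```

Running this reproduces the 66.8, 68.5 and −1.7 kWh/m² values quoted above, confirming that the two unit systems are consistent.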
George LeyVa Middle School Administrative Offices, occupied since fall 2011, is a net zero energy, net zero carbon emissions building of just over 9,000 square feet. With daylighting, variable refrigerant flow HVAC, and displacement ventilation, it is designed to use half the energy of a conventional California school building, and, through a building-integrated solar array, provides 108% of the energy needed to offset its annual electricity use. The excess helps power the remainder of the middle school campus. It is the first publicly funded NZE K–12 building in California. The Stevens Library at Sacred Heart Schools in California is the first net-zero library in the United States, receiving Net Zero Energy Building status from the International Living Future Institute as part of the PG&E Zero Net Energy Pilot Project. The Santa Monica City Services Building is among the first net-zero energy, net-zero water public/municipal buildings in California. Completed in 2020, the 50,000-square-foot addition to the historic Santa Monica City Hall building was designed to provide its own energy and water, and to minimize energy use through efficient building systems. At 402,000 square feet, the California Air Resources Board Southern California Headquarters – Mary D. Nichols Campus is the largest net-zero energy facility in the United States. A photovoltaic system covers 204,903 square feet between the facility rooftop and parking pavilions. The 3.5-megawatt system is anticipated to generate roughly 6,235,000 kWh of renewable energy per year. The facility was dedicated on November 18, 2021. ===== Colorado ===== The Moore House achieves net-zero energy usage with passive solar design, 'tuned' heat-reflective windows, super-insulated and air-tight construction, natural daylighting, solar thermal panels for hot water and space heating, a photovoltaic (PV) system that generates more carbon-free electricity than the house requires, and an energy-recovery ventilator (ERV) for fresh air. The green building strategies used on the Moore House earned it a verified home energy rating system (HERS) score of −3. The NREL Research Support Facility in Golden is a class A office building. Its energy efficiency features include a thermal-storage concrete structure, transpired solar collectors, 70 miles of radiant piping, high-efficiency office equipment, and an energy-efficient data center that reduces the data center's energy use by 50% compared with traditional approaches. The Wayne Aspinall Federal Building in Grand Junction, originally constructed in 1918, became the first net zero energy building listed on the National Register of Historic Places. On-site renewable energy generation is intended to produce 100% of the building's energy throughout the year using the following energy efficiency features: variable refrigerant flow HVAC, a geo-exchange system, advanced metering and building controls, high-efficiency lighting systems, a thermally enhanced building envelope, an interior window system (to maintain the historic windows), and advanced power strips (APS) with individual occupancy sensors. Tutt Library at Colorado College was renovated to be a net-zero library in 2017, making it the largest ZNE academic library. It received an Innovation Award from the National Association of College and University Business Officers. ===== Florida ===== The 1999 side-by-side Florida Solar Energy Center Lakeland demonstration project was called the "Zero Energy Home".
It was a first-generation university effort that significantly influenced the creation of the U.S. Department of Energy, Energy Efficiency and Renewable Energy, Zero Energy Home program. ===== Illinois ===== The Walgreens store located at 741 Chicago Ave, Evanston, is the first of the company's stores to be built or converted to a net zero energy building. It is the first net zero energy retail store to be built and will pave the way for renovating and building net zero energy retail stores in the near future. The Walgreens store includes the following energy efficiency features: a geo-exchange system, energy-efficient building materials, LED lighting with daylight harvesting, and carbon dioxide refrigerant. The Electrical and Computer Engineering building at the University of Illinois at Urbana-Champaign, which was built in 2014, is a net zero building. ===== Iowa ===== The MUM Sustainable Living Center was designed to surpass LEED Platinum qualification. The Maharishi University of Management (MUM) in Fairfield, Iowa, founded by Maharishi Mahesh Yogi (best known for having brought Transcendental Meditation to the West), incorporates principles of Bau-Biologie (a German system that focuses on creating a healthy indoor environment) as well as Maharishi Vedic architecture (an Indian system of architecture focused on the precise orientation, proportions and placement of rooms). The building is one of the few in the country to qualify as net zero, and one of even fewer that can claim the banner of grid positive via its solar power system. A rainwater catchment system and on-site natural wastewater treatment likewise take the building off the (sewer) grid with respect to water and waste treatment. Additional green features include natural daylighting in every room, natural and breathable earth-block walls (made by the program's students), purified rainwater for both potable and non-potable functions, and an on-site water purification and recycling system consisting of plants, algae, and bacteria. ===== Kentucky ===== Richardsville Elementary School, part of the Warren County Public School District in south-central Kentucky, is the first Net Zero energy school in the United States. To reach Net Zero, innovative energy reduction strategies were used by CMTA Consulting Engineers and Sherman Carter Barnhart Architects, including dedicated outdoor air systems (DOAS) with dynamic reset, new IT systems, alternative methods of preparing lunches, and the use of solar photovoltaics. The project has an efficient thermal envelope constructed with insulated concrete form (ICF) walls, geothermal water source heat pumps, low-flow fixtures, and extensive daylighting throughout. It is also the first truly wireless school in Kentucky. Locust Trace AgriScience Center, an agricultural-based vocational school serving Fayette County Public Schools and surrounding districts, features a Net Zero Academic Building engineered by CMTA Consulting Engineers and designed by Tate Hill Jacobs Architects. The facility, located in Lexington, Kentucky, also has a greenhouse, a riding arena with stalls, and a barn. To reach Net Zero in the Academic Building, the project utilizes an air-tight envelope, expanded indoor temperature setpoints in specified areas to more closely model real-world conditions, a solar thermal system, and geothermal water source heat pumps.
The school has further reduced its site impact by minimizing municipal water use, through a dual system consisting of a standard leach field and a constructed wetlands, and by using pervious surfaces to collect, drain, and reuse rainwater for crop irrigation and animal watering. ===== Massachusetts ===== The government of Cambridge has enacted a plan for "net zero" carbon emissions from all buildings in the city by 2040. The John W. Olver Transit Center, designed by Charles Rose Architects Inc., is an intermodal transit hub in Greenfield, Massachusetts. Built with American Recovery and Reinvestment Act funds, the facility was constructed with solar panels, geothermal wells, copper heat screens and other energy-efficient technologies. ===== Michigan ===== The Mission Zero House is the 110-year-old Ann Arbor home of Greenovation.TV host and Environment Report contributor Matthew Grocoff. As of 2011, it is the oldest home in America to achieve net-zero energy. The owners are chronicling their project on Greenovation.TV and The Environment Report on public radio. The Vineyard Project is a zero energy home (ZEH) thanks to its passive solar design, 3.3 kW of photovoltaics, solar hot water, and geothermal heating and cooling. The home is pre-wired for a future wind turbine and uses only 600 kWh of energy per month, a minimum of 20 kWh of electricity per day, with many days of net-metering backwards. The project also used ICF insulation throughout the entire house and is certified Platinum under the LEED for Homes program. The project was awarded Green Builder Magazine's Home of the Year for 2009. The Lenawee Center for a Sustainable Future, a new campus for the Lenawee Intermediate School District, serves as a living laboratory for the future of agriculture. It is the first Net Zero education building in Michigan, engineered by CMTA Consulting Engineers and designed by The Collaborative, Inc. The project includes solar arrays on the ground as well as the roof, a geothermal heating and cooling system, solar tubes, permeable pavement and sidewalks, a sedum green roof, and an overhang design to regulate building temperature. ===== Missouri ===== In 2010, architectural firm HOK worked with energy and daylighting consultant The Weidt Group to design a 170,735-square-foot (15,861.8 m2) net zero carbon emissions Class A office building prototype in St. Louis, Missouri. The team chronicled its process and results on Netzerocourt.com. ===== New Jersey ===== The 31 Tannery Project, located in Branchburg, New Jersey, serves as the corporate headquarters for Ferreira Construction, the Ferreira Group, and Noveda Technologies. The 42,000-square-foot (3,900 m2) office and shop building was constructed in 2006 and is the first building in the state of New Jersey to meet New Jersey's Executive Order 54. The building is also the first Net Zero Electric Commercial Building in the United States. ===== New York ===== Green Acres, the first true zero-net energy development in America, is located in New Paltz, about 80 miles (130 km) north of New York City. Greenhill Contracting began construction on this development of 25 single-family homes in summer 2008, with designs by BOLDER Architecture. After a full year of occupancy, from March 2009 to March 2010, the solar panels of the first occupied home in Green Acres generated 1,490 kWh more energy than the home consumed. The second occupied home has also achieved zero-net energy use.
As of June 2011, five houses have been completed, purchased and occupied, two are under construction, and several more are being planned. The homes are built of insulated concrete forms with spray foam insulated rafters and triple pane casement windows, heated and cooled by a geothermal system, to create extremely energy-efficient and long-lasting buildings. The heat recovery ventilator provides constant fresh air and, with low or no VOC (volatile organic compound) materials, these homes are very healthy to live in. To the best of our knowledge, Green Acres is the first development of multiple buildings, residential or commercial, that achieves true zero-net energy use in the United States, and the first zero-net energy development of single family homes in the world. Greenhill Contracting has built two luxury zero-net energy homes in Esopus, completed in 2008. One house was the first Energy Star rated zero-net energy home in the Northeast and the first registered zero-net energy home on the US Department of Energy's Builder's Challenge website. These homes were the template for Green Acres and the other zero-net energy homes that Greenhill Contracting has built, in terms of methods and materials. The headquarters of Hudson Solar, a dba of Hudson Valley Clean Energy, Inc., located in Rhinebeck and completed in 2007, was determined by NESEA (the Northeast Sustainable Energy Association) to have become the first proven zero-net energy commercial building in New York State and the ten northeast United States (October 2008). The building consumes less energy than it generates, using a solar electric system to generate power from the sun, geothermal heating and cooling, and solar thermal collectors to heat all its hot water. ===== Oklahoma ===== The first 5,000-square-foot (460 m2) zero-energy design home was built in 1979 with support from President Carter's new United States Department of Energy. It relied heavily on passive solar building design for space heat, water heat and space cooling. It heated and cooled itself effectively in a climate where the summer peak temperature was 110 degrees Fahrenheit, and the winter low temperature was −10 F. It did not use active solar systems. It is a double envelope house that uses a gravity-fed natural convection air flow design to circulate passive solar heat from 1,000 square feet (93 m2) of south-facing glass on its greenhouse through a thermal buffer zone in the winter. A swimming pool in the greenhouse provided thermal mass for winter heat storage. In the summer, air from two 24-inch (610 mm) 100-foot-long (30 m) underground earth tubes is used to cool the thermal buffer zone and exhaust heat through 7200 cfm of outer-envelope roof vents. ===== Oregon ===== Net Zero Energy Building Certification launched in 2011, with an international following. The first project, Painters Hall, is Pringle Creek's Community Center, café, office, art gallery, and event venue. Originally built in the 1930s, Painters Hall was renovated to LEED Platinum Net Zero energy building standards in 2010, demonstrating the potential of converting existing building stock into high‐performance, sustainable building sites. Painters Hall features simple low-cost solutions for energy reduction, such as natural daylighting and passive cooling lighting, that save money and increase comfort. A district ground-source geothermal loop serves the building's GSHP for highly efficient heating and air conditioning. 
Excess generation from the 20.2 kW rooftop solar array offsets pumping for the neighborhood's geothermal loop system. Open to the public, Painters Hall is a hub for gatherings of friends, neighbors, and visitors at the heart of a neighborhood designed around nature and community. ===== Pennsylvania ===== The Phipps Center for Sustainable Landscapes in Pittsburgh was designed to be one of the greenest buildings in the world. It achieved Net Zero Energy Building Certification from the Living Building Challenge in February 2014 and is pursuing full certification. The Phipps Center uses energy conservation technologies such as solar hot water collectors, carbon dioxide sensors, and daylighting, as well as renewable energy technologies, to achieve net zero energy status. The Lombardo Welcome Center at Millersville University became the first building in the state to be certified zero-energy, the largest step in Millersville University's goal of becoming carbon neutral by 2040. According to the International Living Future Institute, the Lombardo Welcome Center is one of the highest-performing buildings in the country, generating 75% more energy than it currently uses. ===== Rhode Island ===== In Newport, the Paul W. Crowley East Bay MET School is the first net zero project to be constructed in Rhode Island. It is a 17,000 sq ft building housing eight large classrooms, seven bathrooms and a kitchen. It will have PV panels to supply all necessary electricity for the building and a geothermal well as its source of heat. ===== Tennessee ===== civitas, designed by archimania in Memphis, Tennessee, is a case study home on the banks of the Mississippi River, currently under construction. It aims to embrace cultural, climatic, and economic challenges, and will set a precedent for Southeastern high-performance design. ===== Texas ===== The University of North Texas (UNT) constructed a Zero Energy Research Laboratory on its 300-acre research campus, Discovery Park, in Denton, Texas. The project was funded at over $1,150,000 and will primarily benefit students in mechanical and energy engineering (UNT became the first university to offer degrees in mechanical and energy engineering in 2006). The 1,200-square-foot structure is now completed; a ribbon-cutting ceremony for the University of North Texas' Zero Energy Laboratory was held on April 20, 2012. The West Irving Library in Irving, Texas, became the first net zero library in Texas in 2011, running entirely off solar energy, and has since produced a surplus. It has LEED Gold certification. ===== Vermont ===== The Putney School's net zero Field House was opened on October 10, 2009. In use for over a year as of December 2010, the Field House used 48,374 kWh and produced a total of 51,371 kWh during its first 12 months of operation, thus performing slightly better than net zero. Also in December, the building won an AIA-Vermont Honor Award. The Charlotte Vermont House designed by Pill-Maharam Architects is a verified net zero energy house completed in 2007. The project won the Northeast Sustainable Energy Association's Net Zero Energy award in 2009. == See also == == References == == Further reading == Nisson, J. D. Ned; and Gautam Dutt, "The Superinsulated Home Book", John Wiley & Sons, 1985, ISBN 978-0-471-88734-8, ISBN 978-0-471-81343-9. Markvart, Thomas, editor, "Solar Electricity", John Wiley & Sons, 2nd edition, 2000, ISBN 978-0-471-98853-3.
Clarke, Joseph; "Energy Simulation in Building Design", 2nd ed., Butterworth-Heinemann, 2001, ISBN 978-0-7506-5082-3. National Renewable Energy Laboratory, 2000 ZEB meeting report. Noguchi, Masa, ed., "The Quest for Zero Carbon Housing Solutions", Open House International, Vol. 33, No. 3, 2008. Voss, Karsten; Musall, Eike: "Net zero energy buildings – International projects of carbon neutrality in buildings", Munich, 2011, ISBN 978-3-920034-80-5.
Wikipedia/Zero-energy_building
This list compares various energies in joules (J), organized by order of magnitude. == Below 1 J == == 1 to 10⁵ J == == 10⁶ to 10¹¹ J == == 10¹² to 10¹⁷ J == == 10¹⁸ to 10²³ J == == Over 10²⁴ J == == SI multiples == The joule is named after James Prescott Joule. As with every SI unit named after a person, its symbol starts with an upper case letter (J), but when written in full, it follows the rules for capitalisation of a common noun; i.e., joule becomes capitalised at the beginning of a sentence and in titles but is otherwise in lower case. == See also == Conversion of units of energy Energy conversion efficiency Energy density Metric system Outline of energy Scientific notation TNT equivalent == Notes ==
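The grouping used by the section headings above is purely by decade: an energy value in joules falls into the band whose powers of ten bracket it. The sketch below bins a few well-known energies this way; the example values are common physical quantities and are not taken from the original tables.

```python
# Hedged sketch: bin an energy (in joules) into the decade bands used by the
# section headings above. Example energies are illustrative, well-known values.
import math

BANDS = [
    (None, 1e0,  "below 1 J"),
    (1e0,  1e6,  "1 to 10^5 J"),
    (1e6,  1e12, "10^6 to 10^11 J"),
    (1e12, 1e18, "10^12 to 10^17 J"),
    (1e18, 1e24, "10^18 to 10^23 J"),
    (1e24, None, "over 10^24 J"),
]

def band(energy_j: float) -> str:
    for lower, upper, label in BANDS:
        if (lower is None or energy_j >= lower) and (upper is None or energy_j < upper):
            return label
    return "unclassified"

def order_of_magnitude(energy_j: float) -> int:
    return math.floor(math.log10(energy_j))

for e in (4.0e-19,   # roughly the energy of a visible-light photon
          4.2e3,     # about one kilocalorie (food Calorie)
          3.6e6,     # one kilowatt-hour
          4.2e15):   # about one megaton of TNT
    print(f"{e:9.3g} J -> 10^{order_of_magnitude(e)} J, {band(e)}")
```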
Wikipedia/Orders_of_magnitude_(energy)
Affine gauge theory is classical gauge theory where gauge fields are affine connections on the tangent bundle over a smooth manifold X {\displaystyle X} . For instance, these are gauge theory of dislocations in continuous media when X = R 3 {\displaystyle X=\mathbb {R} ^{3}} , the generalization of metric-affine gravitation theory when X {\displaystyle X} is a world manifold and, in particular, gauge theory of the fifth force. == Affine tangent bundle == Being a vector bundle, the tangent bundle T X {\displaystyle TX} of an n {\displaystyle n} -dimensional manifold X {\displaystyle X} admits a natural structure of an affine bundle A T X {\displaystyle ATX} , called the affine tangent bundle, possessing bundle atlases with affine transition functions. It is associated to a principal bundle A F X {\displaystyle AFX} of affine frames in tangent space over X {\displaystyle X} , whose structure group is a general affine group G A ( n , R ) {\displaystyle GA(n,\mathbb {R} )} . The tangent bundle T X {\displaystyle TX} is associated to a principal linear frame bundle F X {\displaystyle FX} , whose structure group is a general linear group G L ( n , R ) {\displaystyle GL(n,\mathbb {R} )} . This is a subgroup of G A ( n , R ) {\displaystyle GA(n,\mathbb {R} )} so that the latter is a semidirect product of G L ( n , R ) {\displaystyle GL(n,\mathbb {R} )} and a group T n {\displaystyle T^{n}} of translations. There is the canonical imbedding of F X {\displaystyle FX} to A F X {\displaystyle AFX} onto a reduced principal subbundle which corresponds to the canonical structure of a vector bundle T X {\displaystyle TX} as the affine one. Given linear bundle coordinates ( x μ , x ˙ μ ) , x ˙ ′ μ = ∂ x ′ μ ∂ x ν x ˙ ν , ( 1 ) {\displaystyle (x^{\mu },{\dot {x}}^{\mu }),\qquad {\dot {x}}'^{\mu }={\frac {\partial x'^{\mu }}{\partial x^{\nu }}}{\dot {x}}^{\nu },\qquad \qquad (1)} on the tangent bundle T X {\displaystyle TX} , the affine tangent bundle can be provided with affine bundle coordinates ( x μ , x ~ μ = x ˙ μ + a μ ( x α ) ) , x ~ ′ μ = ∂ x ′ μ ∂ x ν x ~ ν + b μ ( x α ) . ( 2 ) {\displaystyle (x^{\mu },{\widetilde {x}}^{\mu }={\dot {x}}^{\mu }+a^{\mu }(x^{\alpha })),\qquad {\widetilde {x}}'^{\mu }={\frac {\partial x'^{\mu }}{\partial x^{\nu }}}{\widetilde {x}}^{\nu }+b^{\mu }(x^{\alpha }).\qquad \qquad (2)} and, in particular, with the linear coordinates (1). == Affine gauge fields == The affine tangent bundle A T X {\displaystyle ATX} admits an affine connection A {\displaystyle A} which is associated to a principal connection on an affine frame bundle A F X {\displaystyle AFX} . In affine gauge theory, it is treated as an affine gauge field. Given the linear bundle coordinates (1) on A T X = T X {\displaystyle ATX=TX} , an affine connection A {\displaystyle A} is represented by a connection tangent-valued form A = d x λ ⊗ [ ∂ λ + ( Γ λ μ ν ( x α ) x ˙ ν + σ λ μ ( x α ) ) ∂ ˙ μ ] . ( 3 ) {\displaystyle A=dx^{\lambda }\otimes [\partial _{\lambda }+(\Gamma _{\lambda }{}^{\mu }{}_{\nu }(x^{\alpha }){\dot {x}}^{\nu }+\sigma _{\lambda }^{\mu }(x^{\alpha })){\dot {\partial }}_{\mu }].\qquad \qquad (3)} This affine connection defines a unique linear connection Γ = d x λ ⊗ [ ∂ λ + Γ λ μ ν ( x α ) x ˙ ν ∂ ˙ μ ] ( 4 ) {\displaystyle \Gamma =dx^{\lambda }\otimes [\partial _{\lambda }+\Gamma _{\lambda }{}^{\mu }{}_{\nu }(x^{\alpha }){\dot {x}}^{\nu }{\dot {\partial }}_{\mu }]\qquad \qquad (4)} on T X {\displaystyle TX} , which is associated to a principal connection on F X {\displaystyle FX} . 
Conversely, every linear connection Γ {\displaystyle \Gamma } (4) on T X → X {\displaystyle TX\to X} is extended to the affine one A Γ {\displaystyle A\Gamma } on A T X {\displaystyle ATX} which is given by the same expression (4) as Γ {\displaystyle \Gamma } with respect to the bundle coordinates (1) on A T X = T X {\displaystyle ATX=TX} , but it takes a form A Γ = d x λ ⊗ [ ∂ λ + ( Γ λ μ ν ( x α ) x ~ ν + s λ μ ( x α ) ) ∂ ~ μ ] , s λ μ = − Γ λ μ ν a ν + ∂ λ a μ , {\displaystyle A\Gamma =dx^{\lambda }\otimes [\partial _{\lambda }+(\Gamma _{\lambda }{}^{\mu }{}_{\nu }(x^{\alpha }){\widetilde {x}}^{\nu }+s_{\lambda }^{\mu }(x^{\alpha })){\widetilde {\partial }}_{\mu }],\qquad s_{\lambda }^{\mu }=-\Gamma _{\lambda }{}^{\mu }{}_{\nu }a^{\nu }+\partial _{\lambda }a^{\mu },} relative to the affine coordinates (2). Then any affine connection A {\displaystyle A} (3) on A T X → X {\displaystyle ATX\to X} is represented by a sum A = A Γ + σ ( 5 ) {\displaystyle A=A\Gamma +\sigma \qquad \qquad (5)} of the extended linear connection A Γ {\displaystyle A\Gamma } and a basic soldering form σ = σ λ μ ( x α ) d x λ ⊗ ∂ μ ( 6 ) {\displaystyle \sigma =\sigma _{\lambda }^{\mu }(x^{\alpha })dx^{\lambda }\otimes \partial _{\mu }\qquad \qquad (6)} on T X {\displaystyle TX} , where ∂ ˙ μ = ∂ μ {\displaystyle {\dot {\partial }}_{\mu }=\partial _{\mu }} due to the canonical isomorphism V A T X = A T X × X T X {\displaystyle VATX=ATX\times _{X}TX} of the vertical tangent bundle V A T X {\displaystyle VATX} of A T X {\displaystyle ATX} . Relative to the linear coordinates (1), the sum (5) is brought into a sum A = Γ + σ {\displaystyle A=\Gamma +\sigma } of a linear connection Γ {\displaystyle \Gamma } and the soldering form σ {\displaystyle \sigma } (6). In this case, the soldering form σ {\displaystyle \sigma } (6) often is treated as a translation gauge field, though it is not a connection. Let us note that a true translation gauge field (i.e., an affine connection which yields a flat linear connection on T X {\displaystyle TX} ) is well defined only on a parallelizable manifold X {\displaystyle X} . == Gauge theory of dislocations == In field theory, one meets a problem of physical interpretation of translation gauge fields because there are no fields subject to gauge translations u ( x ) → u ( x ) + a ( x ) {\displaystyle u(x)\to u(x)+a(x)} . At the same time, one observes such a field in gauge theory of dislocations in continuous media because, in the presence of dislocations, displacement vectors u k {\displaystyle u^{k}} , k = 1 , 2 , 3 {\displaystyle k=1,2,3} , of small deformations are determined only with accuracy to gauge translations u k → u k + a k ( x ) {\displaystyle u^{k}\to u^{k}+a^{k}(x)} . In this case, let X = R 3 {\displaystyle X=\mathbb {R} ^{3}} , and let an affine connection take a form A = d x i ⊗ ( ∂ i + A i j ( x k ) ∂ ~ j ) {\displaystyle A=dx^{i}\otimes (\partial _{i}+A_{i}^{j}(x^{k}){\widetilde {\partial }}_{j})} with respect to the affine bundle coordinates (2). This is a translation gauge field whose coefficients A l j {\displaystyle A_{l}^{j}} describe plastic distortion, covariant derivatives D j u i = ∂ j u i − A j i {\displaystyle D_{j}u^{i}=\partial _{j}u^{i}-A_{j}^{i}} coincide with elastic distortion, and a strength F j i k = ∂ j A i k − ∂ i A j k {\displaystyle F_{ji}^{k}=\partial _{j}A_{i}^{k}-\partial _{i}A_{j}^{k}} is a dislocation density. 
Equations of gauge theory of dislocations are derived from a gauge invariant Lagrangian density L ( σ ) = μ D i u k D i u k + λ 2 ( D i u i ) 2 − ϵ F k i j F k i j , {\displaystyle L_{(\sigma )}=\mu D_{i}u^{k}D^{i}u_{k}+{\frac {\lambda }{2}}(D_{i}u^{i})^{2}-\epsilon F^{k}{}_{ij}F_{k}{}^{ij},} where μ {\displaystyle \mu } and λ {\displaystyle \lambda } are the Lamé parameters of isotropic media. These equations however are not independent since a displacement field u k ( x ) {\displaystyle u^{k}(x)} can be removed by gauge translations and, thereby, it fails to be a dynamic variable. == Gauge theory of the fifth force == In gauge gravitation theory on a world manifold X {\displaystyle X} , one can consider an affine, but not linear connection on the tangent bundle T X {\displaystyle TX} of X {\displaystyle X} . Given bundle coordinates (1) on T X {\displaystyle TX} , it takes the form (3) where the linear connection Γ {\displaystyle \Gamma } (4) and the basic soldering form σ {\displaystyle \sigma } (6) are considered as independent variables. As was mentioned above, the soldering form σ {\displaystyle \sigma } (6) often is treated as a translation gauge field, though it is not a connection. On another side, one mistakenly identifies σ {\displaystyle \sigma } with a tetrad field. However, these are different mathematical object because a soldering form is a section of the tensor bundle T X ⊗ T ∗ X {\displaystyle TX\otimes T^{*}X} , whereas a tetrad field is a local section of a Lorentz reduced subbundle of a frame bundle F X {\displaystyle FX} . In the spirit of the above-mentioned gauge theory of dislocations, it has been suggested that a soldering field σ {\displaystyle \sigma } can describe sui generi deformations of a world manifold X {\displaystyle X} which are given by a bundle morphism s : T X ∋ ∂ λ → ∂ λ ⌋ ( θ + σ ) = ( δ λ ν + σ λ ν ) ∂ ν ∈ T X , {\displaystyle s:TX\ni \partial _{\lambda }\to \partial _{\lambda }\rfloor (\theta +\sigma )=(\delta _{\lambda }^{\nu }+\sigma _{\lambda }^{\nu })\partial _{\nu }\in TX,} where θ = d x μ ⊗ ∂ μ {\displaystyle \theta =dx^{\mu }\otimes \partial _{\mu }} is a tautological one-form. Then one considers metric-affine gravitation theory ( g , Γ ) {\displaystyle (g,\Gamma )} on a deformed world manifold as that with a deformed pseudo-Riemannian metric g ~ μ ν = s α μ s β ν g α β {\displaystyle {\widetilde {g}}^{\mu \nu }=s_{\alpha }^{\mu }s_{\beta }^{\nu }g^{\alpha \beta }} when a Lagrangian of a soldering field σ {\displaystyle \sigma } takes a form L ( σ ) = 1 2 [ a 1 T μ ν μ T α ν α + a 2 T μ ν α T μ ν α + a 3 T μ ν α T ν μ α + a 4 ϵ μ ν α β T γ μ γ T β ν α − μ σ μ ν σ ν μ + λ σ μ μ σ ν ν ] − g {\displaystyle L_{(\sigma )}={\frac {1}{2}}[a_{1}T^{\mu }{}_{\nu \mu }T_{\alpha }{}^{\nu \alpha }+a_{2}T_{\mu \nu \alpha }T^{\mu \nu \alpha }+a_{3}T_{\mu \nu \alpha }T^{\nu \mu \alpha }+a_{4}\epsilon ^{\mu \nu \alpha \beta }T^{\gamma }{}_{\mu \gamma }T_{\beta \nu \alpha }-\mu \sigma ^{\mu }{}_{\nu }\sigma ^{\nu }{}_{\mu }+\lambda \sigma ^{\mu }{}_{\mu }\sigma ^{\nu }{}_{\nu }]{\sqrt {-g}}} , where ϵ μ ν α β {\displaystyle \epsilon ^{\mu \nu \alpha \beta }} is the Levi-Civita symbol, and T α ν μ = D ν σ α μ − D μ σ α ν {\displaystyle T^{\alpha }{}_{\nu \mu }=D_{\nu }\sigma ^{\alpha }{}_{\mu }-D_{\mu }\sigma ^{\alpha }{}_{\nu }} is the torsion of a linear connection Γ {\displaystyle \Gamma } with respect to a soldering form σ {\displaystyle \sigma } . 
In particular, let us consider this gauge model in the case of small gravitational and soldering fields whose matter source is a point mass. Then one comes to a modified Newtonian potential of the fifth force type. == See also == Connection (affine bundle) Dislocations Fifth force Gauge gravitation theory Metric-affine gravitation theory Classical unified field theories == References == A. Kadic, D. Edelen, A Gauge Theory of Dislocations and Disclinations, Lecture Notes in Physics 174 (Springer, New York, 1983), ISBN 3-540-11977-9 G. Sardanashvily, O. Zakharov, Gauge Gravitation Theory (World Scientific, Singapore, 1992), ISBN 981-02-0799-9 C. Malyshev, The dislocation stress functions from the double curl T(3)-gauge equations: Linearity and look beyond, Annals of Physics 286 (2000) 249. == External links == G. Sardanashvily, Gravity as a Higgs field. III. Nongravitational deviations of gravitational field, arXiv:gr-qc/9411013 .
Wikipedia/Affine_gauge_theory
Physics is an open-access online publication containing commentaries on the best of the peer-reviewed research published in the journals of the American Physical Society. The editor-in-chief of Physics is Matteo Rini. It highlights papers in Physical Review Letters and the Physical Review family of journals. The magazine was established in 2008. == Features == Physics contains three types of commentaries on research papers: journalistic articles ("Focus"), in-depth pieces written by active researchers ("Viewpoints"), and short summaries of individual research papers ("Synopsis") written by editorial staff. Readers get free access to the underlying research papers on which the commentaries are based. == References == == External links == Official website
Wikipedia/Physics_(magazine)
The Physics is an American hip hop group based in Seattle, Washington. It was created in the late 1990s when its members, Thig Natural (Gathigi Gishuru), Monk Wordsmith (Njuguna Gishuru) and Just D'Amato (Justin Hare), were students at O'Dea High School in Seattle. Since 2007, the trio has released three full-length albums, two EPs and several non-album singles. The Physics are an integral part of the Seattle hip hop community, and much of their music is centered on life in Seattle ("Seward Park" and "Coronas on Madrona," for example). They are known for rapping about everyman themes such as romance, working 9-5 jobs, and enjoying life. == Members == The Physics consists of two MCs, real-life brothers Thig Natural (often referred to as Thig Nat) and Monk Wordsmith (who also functions as the group's hype man), and producer Justo. All three members grew up in a neighborhood known as Seattle's South End. Thig and Monk are first-generation Americans of Kenyan descent, and Just D'Amato (also known as Justo) is half-Filipino. The Physics reference their heritage in their music, especially in the Tomorrow People track "Journey of the Drum." Thig is also a photographer who shoots primarily fashion and street photography. Their music frequently features other artists from hip hop and other genres. They have collaborated with R&B singers Mario and Malice Sweet, producer Jake One, and Phonte, THEESatisfaction, Macklemore, Grynch, Sol, Bambu, Dave B, Blue Scholars and more. In recent years, they have recorded and performed with a live band including trumpeter Owuor Arunga, guitarist Eben Haase, keyboardist Sam Wishkoski, and backup vocals from Mario and Malice Sweet. == Distribution == The Physics' three albums, Future Talk, Love Is a Business and Tomorrow People, have all been self-released, as have their free EPs and singles. The Physics' most recent album, Tomorrow People, was funded through Kickstarter. The crowdfunding campaign raised $11,721 from 242 backers, surpassing a goal of $8,000. The project appeared on Kickstarter's "Staff Picks" and "Popular" pages in July and August 2012. == Critical reception and role in the Seattle scene == The Physics' sound has become synonymous with "Seattle summers" for many, due to its hyperlocal lyrics and laid-back, upbeat vibe. "You could hardly imagine better ambassadors of our deep, watery homegrown flow," wrote The Stranger's Larry Mizell Jr. in a 2012 review. Seattle Weekly writer Todd Hamm wrote that the group's most recent album, Tomorrow People, transcends local hip hop to reach a broader appeal: "Their vibe is positive, their production is intricate and well-finished, and they reach a rare level of professionalism with each release." The Seattle Times named Love Is a Business one of the Top 10 local albums of 2011. Radio station 90.3 KEXP ranked Tomorrow People on its "DJ's Top 10 Lists of 2012." Three Piece was named Seattle Weekly's "Best Free EP To Light Up Your Summer" in 2010. The Love Is a Business LP ranked No. 1 on KEXP's hip hop and variety charts for eight weeks each, and the Tomorrow People LP hit No. 1 on KEXP's hip hop charts for all of August 2012. == Performances and Tours == The Physics have performed at Seattle's Bumbershoot festival in 2008 and 2010, and the Capitol Hill Block Party music festival in 2008 and 2010. They played at South by Southwest in Austin, Texas in March 2011, and at Sasquatch! Music Festival at the Gorge Amphitheatre in May 2012. The group opened for Mos Def at CityArts Fest 2012.
In late 2011, The Physics toured with Blue Scholars on the Cinémetroplis Tour, ending with a sold-out show at the Bowery Ballroom in New York City. In fall 2012, The Physics launched Tomorrow People Tour, a West Coast tour with The Bar (Prometheus Brown and Bambu) and Grynch. They played shows from Los Angeles to Vancouver, British Columbia. In December 2012 The Physics joined Blue Scholars again on the national Town All Day Tour. == Discography == === Albums === 2007: Future Talk 2011: Love Is a Business 2012: Tomorrow People 2013: Digital Wildlife 2015: Wish You Were Here === EPs === 2009: High-Society 2010: Three Piece === Non-album Singles === 2012: "After Effect (feat. Grynch)" 2011: "Fix You" 2011: "The Recipe (feat. Craig G)" produced by David Dejesus == References == == External links == Official website The Physics on Bandcamp The Physics on Twitter
Wikipedia/The_Physics_(group)
Physics was an instrumental band from San Diego, California, established by John D. Goff and Denver Lucas in late 1993 after the breakup of Johnny Superbad & the Bulletcatchers. == History == The band featured a rotating cast of musicians from the San Diego experimental underground, but was mainly composed of Denver Lucas on drums, Jeff Coad on synthesizers, John Goff, Will Goff, Jason Soares, Rob Crow, Travis Nelson and Ryan Jencks on guitar, and Matt Lorenz on visuals/projections. This early incarnation came to be known as the "Black Period". Mainly inspired by ideas from quantum theory and Eastern mysticism, Physics was musically influenced by Krautrock, minimalism, early Doom/Drone, and Electronic Kosmische, though the band was often associated with the Math Rock genre. After the untimely death of Denver Lucas in the mid-1990s, the Physics lineup underwent numerous changes, eventually settling on Cameron Jones on drums; this era was later known as the "Gray Period" and ultimately the "White Period". After Physics dissolved in 2000, Jason Soares and Jeff Coad went on to form the more electronic-based Aspects Of Physics, also with Matt Lorenz. Will and John Goff went on to form the electronic band SSI. Rob Crow started Pinback (co-led by Zach Smith from Three Mile Pilot). In 2015, coinciding with the release of the documentary It's Gonna Blow!!! San Diego's Music Underground 1986–1996, Physics reformed for reunion shows in Portland, Oregon, and Los Angeles. == Discography == Black 7, (Dagon Productions), 1994 Physics 1, (Flapping Jet), 1997 Physics 2, (Gravity), 1998 1999-11-21, (Neurot Recordings), 1999 Live: 2.7.98 (EP), (Gold Standard Laboratories), 2000 == References == == External links == Band website
Wikipedia/Physics_(band)
In continuum mechanics, the Cauchy stress tensor (symbol σ {\displaystyle {\boldsymbol {\sigma }}} , named after Augustin-Louis Cauchy), also called true stress tensor or simply stress tensor, completely defines the state of stress at a point inside a material in the deformed state, placement, or configuration. The second-order tensor consists of nine components σ i j {\displaystyle \sigma _{ij}} and relates a unit-length direction vector e to the traction vector T(e) across an imaginary surface perpendicular to e: T ( e ) = e ⋅ σ or T j ( e ) = ∑ i σ i j e i . {\displaystyle \mathbf {T} ^{(\mathbf {e} )}=\mathbf {e} \cdot {\boldsymbol {\sigma }}\quad {\text{or}}\quad T_{j}^{(e)}=\sum _{i}\sigma _{ij}e_{i}.} The SI units of both stress tensor and traction vector are newton per square metre (N/m2) or pascal (Pa), corresponding to the stress scalar. The unit vector is dimensionless. The Cauchy stress tensor obeys the tensor transformation law under a change in the system of coordinates. A graphical representation of this transformation law is the Mohr's circle for stress. The Cauchy stress tensor is used for stress analysis of material bodies experiencing small deformations: it is a central concept in the linear theory of elasticity. For large deformations, also called finite deformations, other measures of stress are required, such as the Piola–Kirchhoff stress tensor, the Biot stress tensor, and the Kirchhoff stress tensor. According to the principle of conservation of linear momentum, if the continuum body is in static equilibrium it can be demonstrated that the components of the Cauchy stress tensor in every material point in the body satisfy the equilibrium equations (Cauchy's equations of motion for zero acceleration). At the same time, according to the principle of conservation of angular momentum, equilibrium requires that the summation of moments with respect to an arbitrary point is zero, which leads to the conclusion that the stress tensor is symmetric, thus having only six independent stress components, instead of the original nine. However, in the presence of couple-stresses, i.e. moments per unit volume, the stress tensor is non-symmetric. This is also the case when the Knudsen number is close to one, K n → 1 {\displaystyle K_{n}\rightarrow 1} , or the continuum is a non-Newtonian fluid, which can lead to rotationally non-invariant fluids, such as polymers. There are certain invariants associated with the stress tensor, whose values do not depend upon the coordinate system chosen, or the area element upon which the stress tensor operates. These are the three eigenvalues of the stress tensor, which are called the principal stresses.
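As a quick numerical illustration of the defining relation T_j^{(e)} = Σ_i σ_ij e_i, the traction across a chosen cut follows directly from the component matrix. The following is a minimal NumPy sketch; the stress values and the cut direction are hypothetical and chosen only for illustration.

```python
import numpy as np

# Hypothetical symmetric Cauchy stress tensor, components in pascals
sigma = np.array([
    [ 50.0,  30.0, 20.0],
    [ 30.0, -45.0, 60.0],
    [ 20.0,  60.0, 75.0],
]) * 1e6

# Unit-length direction vector e (normal of the imaginary cut surface)
e = np.array([1.0, 1.0, 0.0])
e = e / np.linalg.norm(e)

# Traction vector across the surface: T_j = sum_i sigma_ij * e_i, i.e. T = e . sigma
T = e @ sigma

print("traction T(e) in Pa:", T)
print("magnitude |T| in Pa:", np.linalg.norm(T))
```

Because this sample tensor is symmetric, e @ sigma and sigma @ e give the same result; the distinction only matters in the non-symmetric case mentioned above.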
== Euler–Cauchy stress principle – stress vector == The Euler–Cauchy stress principle states that upon any surface (real or imaginary) that divides the body, the action of one part of the body on the other is equivalent (equipollent) to the system of distributed forces and couples on the surface dividing the body, and it is represented by a field T ( n ) {\displaystyle \mathbf {T} ^{(\mathbf {n} )}} , called the traction vector, defined on the surface S {\displaystyle S} and assumed to depend continuously on the surface's unit vector n {\displaystyle \mathbf {n} } .: p.66–96  To formulate the Euler–Cauchy stress principle, consider an imaginary surface S {\displaystyle S} passing through an internal material point P {\displaystyle P} dividing the continuous body into two segments, as seen in Figure 2.1a or 2.1b (one may use either the cutting plane diagram or the diagram with the arbitrary volume inside the continuum enclosed by the surface S {\displaystyle S} ). Following the classical dynamics of Newton and Euler, the motion of a material body is produced by the action of externally applied forces which are assumed to be of two kinds: surface forces F {\displaystyle \mathbf {F} } and body forces b {\displaystyle \mathbf {b} } . Thus, the total force F {\displaystyle {\mathcal {F}}} applied to a body or to a portion of the body can be expressed as: F = b + F {\displaystyle {\mathcal {F}}=\mathbf {b} +\mathbf {F} } Only surface forces will be discussed in this article as they are relevant to the Cauchy stress tensor. When the body is subjected to external surface forces or contact forces F {\displaystyle \mathbf {F} } , following Euler's equations of motion, internal contact forces and moments are transmitted from point to point in the body, and from one segment to the other through the dividing surface S {\displaystyle S} , due to the mechanical contact of one portion of the continuum onto the other (Figure 2.1a and 2.1b). On an element of area Δ S {\displaystyle \Delta S} containing P {\displaystyle P} , with normal vector n {\displaystyle \mathbf {n} } , the force distribution is equipollent to a contact force Δ F {\displaystyle \Delta \mathbf {F} } exerted at point P and surface moment Δ M {\displaystyle \Delta \mathbf {M} } . In particular, the contact force is given by Δ F = T ( n ) Δ S {\displaystyle \Delta \mathbf {F} =\mathbf {T} ^{(\mathbf {n} )}\,\Delta S} where T ( n ) {\displaystyle \mathbf {T} ^{(\mathbf {n} )}} is the mean surface traction. Cauchy's stress principle asserts: p.47–102  that as Δ S {\displaystyle \Delta S} becomes very small and tends to zero the ratio Δ F / Δ S {\displaystyle \Delta \mathbf {F} /\Delta S} becomes d F / d S {\displaystyle d\mathbf {F} /dS} and the couple stress vector Δ M {\displaystyle \Delta \mathbf {M} } vanishes. In specific fields of continuum mechanics the couple stress is assumed not to vanish; however, classical branches of continuum mechanics address non-polar materials which do not consider couple stresses and body moments. The resultant vector d F / d S {\displaystyle d\mathbf {F} /dS} is defined as the surface traction, also called the stress vector, traction, or traction vector, given by T ( n ) = T i ( n ) e i {\displaystyle \mathbf {T} ^{(\mathbf {n} )}=T_{i}^{(\mathbf {n} )}\mathbf {e} _{i}} at the point P {\displaystyle P} associated with a plane with a normal vector n {\displaystyle \mathbf {n} } : T i ( n ) = lim Δ S → 0 Δ F i Δ S = d F i d S . 
{\displaystyle T_{i}^{(\mathbf {n} )}=\lim _{\Delta S\to 0}{\frac {\Delta F_{i}}{\Delta S}}={dF_{i} \over dS}.} This equation means that the stress vector depends on its location in the body and the orientation of the plane on which it is acting. This implies that the balancing action of internal contact forces generates a contact force density or Cauchy traction field T ( n , x , t ) {\displaystyle \mathbf {T} (\mathbf {n} ,\mathbf {x} ,t)} that represents a distribution of internal contact forces throughout the volume of the body in a particular configuration of the body at a given time t {\displaystyle t} . It is not a vector field because it depends not only on the position x {\displaystyle \mathbf {x} } of a particular material point, but also on the local orientation of the surface element as defined by its normal vector n {\displaystyle \mathbf {n} } . Depending on the orientation of the plane under consideration, the stress vector may not necessarily be perpendicular to that plane, i.e. parallel to n {\displaystyle \mathbf {n} } , and can be resolved into two components (Figure 2.1c): one normal to the plane, called normal stress σ n = lim Δ S → 0 Δ F n Δ S = d F n d S , {\displaystyle \mathbf {\sigma _{\mathrm {n} }} =\lim _{\Delta S\to 0}{\frac {\Delta F_{\mathrm {n} }}{\Delta S}}={\frac {dF_{\mathrm {n} }}{dS}},} where d F n {\displaystyle dF_{\mathrm {n} }} is the normal component of the force d F {\displaystyle d\mathbf {F} } to the differential area d S {\displaystyle dS} and the other parallel to this plane, called the shear stress τ = lim Δ S → 0 Δ F s Δ S = d F s d S , {\displaystyle \mathbf {\tau } =\lim _{\Delta S\to 0}{\frac {\Delta F_{\mathrm {s} }}{\Delta S}}={\frac {dF_{\mathrm {s} }}{dS}},} where d F s {\displaystyle dF_{\mathrm {s} }} is the tangential component of the force d F {\displaystyle d\mathbf {F} } to the differential surface area d S {\displaystyle dS} . The shear stress can be further decomposed into two mutually perpendicular vectors. === Cauchy's postulate === According to the Cauchy Postulate, the stress vector T ( n ) {\displaystyle \mathbf {T} ^{(\mathbf {n} )}} remains unchanged for all surfaces passing through the point P {\displaystyle P} and having the same normal vector n {\displaystyle \mathbf {n} } at P {\displaystyle P} , i.e., having a common tangent at P {\displaystyle P} . This means that the stress vector is a function of the normal vector n {\displaystyle \mathbf {n} } only, and is not influenced by the curvature of the internal surfaces. === Cauchy's fundamental lemma === A consequence of Cauchy's postulate is Cauchy's Fundamental Lemma, also called the Cauchy reciprocal theorem,: p.103–130  which states that the stress vectors acting on opposite sides of the same surface are equal in magnitude and opposite in direction. Cauchy's fundamental lemma is equivalent to Newton's third law of motion of action and reaction, and is expressed as − T ( n ) = T ( − n ) . {\displaystyle -\mathbf {T} ^{(\mathbf {n} )}=\mathbf {T} ^{(-\mathbf {n} )}.} == Cauchy's stress theorem—stress tensor == The state of stress at a point in the body is then defined by all the stress vectors T(n) associated with all planes (infinite in number) that pass through that point. However, according to Cauchy's fundamental theorem, also called Cauchy's stress theorem, merely by knowing the stress vectors on three mutually perpendicular planes, the stress vector on any other plane passing through that point can be found through coordinate transformation equations. 
Cauchy's stress theorem states that there exists a second-order tensor field σ(x, t), called the Cauchy stress tensor, independent of n, such that T is a linear function of n: T ( n ) = n ⋅ σ or T j ( n ) = σ i j n i . {\displaystyle \mathbf {T} ^{(\mathbf {n} )}=\mathbf {n} \cdot {\boldsymbol {\sigma }}\quad {\text{or}}\quad T_{j}^{(n)}=\sigma _{ij}n_{i}.} This equation implies that the stress vector T(n) at any point P in a continuum associated with a plane with normal unit vector n can be expressed as a function of the stress vectors on the planes perpendicular to the coordinate axes, i.e. in terms of the components σij of the stress tensor σ. To prove this expression, consider a tetrahedron with three faces oriented in the coordinate planes, and with an infinitesimal area dA oriented in an arbitrary direction specified by a normal unit vector n (Figure 2.2). The tetrahedron is formed by slicing the infinitesimal element along an arbitrary plane with unit normal n. The stress vector on this plane is denoted by T(n). The stress vectors acting on the faces of the tetrahedron are denoted as T(e1), T(e2), and T(e3), and are by definition the components σij of the stress tensor σ. This tetrahedron is sometimes called the Cauchy tetrahedron. The equilibrium of forces, i.e. Euler's first law of motion (Newton's second law of motion), gives: T ( n ) d A − T ( e 1 ) d A 1 − T ( e 2 ) d A 2 − T ( e 3 ) d A 3 = ρ ( h 3 d A ) a , {\displaystyle \mathbf {T} ^{(\mathbf {n} )}\,dA-\mathbf {T} ^{(\mathbf {e} _{1})}\,dA_{1}-\mathbf {T} ^{(\mathbf {e} _{2})}\,dA_{2}-\mathbf {T} ^{(\mathbf {e} _{3})}\,dA_{3}=\rho \left({\frac {h}{3}}dA\right)\mathbf {a} ,} where the right-hand-side represents the product of the mass enclosed by the tetrahedron and its acceleration: ρ is the density, a is the acceleration, and h is the height of the tetrahedron, considering the plane n as the base. The area of the faces of the tetrahedron perpendicular to the axes can be found by projecting dA into each face (using the dot product): d A 1 = ( n ⋅ e 1 ) d A = n 1 d A , {\displaystyle dA_{1}=\left(\mathbf {n} \cdot \mathbf {e} _{1}\right)dA=n_{1}\;dA,} d A 2 = ( n ⋅ e 2 ) d A = n 2 d A , {\displaystyle dA_{2}=\left(\mathbf {n} \cdot \mathbf {e} _{2}\right)dA=n_{2}\;dA,} d A 3 = ( n ⋅ e 3 ) d A = n 3 d A , {\displaystyle dA_{3}=\left(\mathbf {n} \cdot \mathbf {e} _{3}\right)dA=n_{3}\;dA,} and then substituting into the equation to cancel out dA: T ( n ) − T ( e 1 ) n 1 − T ( e 2 ) n 2 − T ( e 3 ) n 3 = ρ ( h 3 ) a . {\displaystyle \mathbf {T} ^{(\mathbf {n} )}-\mathbf {T} ^{(\mathbf {e} _{1})}n_{1}-\mathbf {T} ^{(\mathbf {e} _{2})}n_{2}-\mathbf {T} ^{(\mathbf {e} _{3})}n_{3}=\rho \left({\frac {h}{3}}\right)\mathbf {a} .} To consider the limiting case as the tetrahedron shrinks to a point, h must go to 0 (intuitively, the plane n is translated along n toward O). As a result, the right-hand-side of the equation approaches 0, so T ( n ) = T ( e 1 ) n 1 + T ( e 2 ) n 2 + T ( e 3 ) n 3 . {\displaystyle \mathbf {T} ^{(\mathbf {n} )}=\mathbf {T} ^{(\mathbf {e} _{1})}n_{1}+\mathbf {T} ^{(\mathbf {e} _{2})}n_{2}+\mathbf {T} ^{(\mathbf {e} _{3})}n_{3}.} Assuming a material element (see figure at the top of the page) with planes perpendicular to the coordinate axes of a Cartesian coordinate system, the stress vectors associated with each of the element planes, i.e. T(e1), T(e2), and T(e3) can be decomposed into a normal component and two shear components, i.e. components in the direction of the three coordinate axes. 
For the particular case of a surface with normal unit vector oriented in the direction of the x1-axis, denote the normal stress by σ11, and the two shear stresses as σ12 and σ13: T ( e 1 ) = T 1 ( e 1 ) e 1 + T 2 ( e 1 ) e 2 + T 3 ( e 1 ) e 3 = σ 11 e 1 + σ 12 e 2 + σ 13 e 3 , {\displaystyle \mathbf {T} ^{(\mathbf {e} _{1})}=T_{1}^{(\mathbf {e} _{1})}\mathbf {e} _{1}+T_{2}^{(\mathbf {e} _{1})}\mathbf {e} _{2}+T_{3}^{(\mathbf {e} _{1})}\mathbf {e} _{3}=\sigma _{11}\mathbf {e} _{1}+\sigma _{12}\mathbf {e} _{2}+\sigma _{13}\mathbf {e} _{3},} T ( e 2 ) = T 1 ( e 2 ) e 1 + T 2 ( e 2 ) e 2 + T 3 ( e 2 ) e 3 = σ 21 e 1 + σ 22 e 2 + σ 23 e 3 , {\displaystyle \mathbf {T} ^{(\mathbf {e} _{2})}=T_{1}^{(\mathbf {e} _{2})}\mathbf {e} _{1}+T_{2}^{(\mathbf {e} _{2})}\mathbf {e} _{2}+T_{3}^{(\mathbf {e} _{2})}\mathbf {e} _{3}=\sigma _{21}\mathbf {e} _{1}+\sigma _{22}\mathbf {e} _{2}+\sigma _{23}\mathbf {e} _{3},} T ( e 3 ) = T 1 ( e 3 ) e 1 + T 2 ( e 3 ) e 2 + T 3 ( e 3 ) e 3 = σ 31 e 1 + σ 32 e 2 + σ 33 e 3 , {\displaystyle \mathbf {T} ^{(\mathbf {e} _{3})}=T_{1}^{(\mathbf {e} _{3})}\mathbf {e} _{1}+T_{2}^{(\mathbf {e} _{3})}\mathbf {e} _{2}+T_{3}^{(\mathbf {e} _{3})}\mathbf {e} _{3}=\sigma _{31}\mathbf {e} _{1}+\sigma _{32}\mathbf {e} _{2}+\sigma _{33}\mathbf {e} _{3},} In index notation this is T ( e i ) = T j ( e i ) e j = σ i j e j . {\displaystyle \mathbf {T} ^{(\mathbf {e} _{i})}=T_{j}^{(\mathbf {e} _{i})}\mathbf {e} _{j}=\sigma _{ij}\mathbf {e} _{j}.} The nine components σij of the stress vectors are the components of a second-order Cartesian tensor called the Cauchy stress tensor, which can be used to completely define the state of stress at a point and is given by σ = σ i j = [ T ( e 1 ) T ( e 2 ) T ( e 3 ) ] = [ σ 11 σ 12 σ 13 σ 21 σ 22 σ 23 σ 31 σ 32 σ 33 ] ≡ [ σ x x σ x y σ x z σ y x σ y y σ y z σ z x σ z y σ z z ] ≡ [ σ x τ x y τ x z τ y x σ y τ y z τ z x τ z y σ z ] , {\displaystyle {\boldsymbol {\sigma }}=\sigma _{ij}=\left[{\begin{matrix}\mathbf {T} ^{(\mathbf {e} _{1})}\\\mathbf {T} ^{(\mathbf {e} _{2})}\\\mathbf {T} ^{(\mathbf {e} _{3})}\\\end{matrix}}\right]=\left[{\begin{matrix}\sigma _{11}&\sigma _{12}&\sigma _{13}\\\sigma _{21}&\sigma _{22}&\sigma _{23}\\\sigma _{31}&\sigma _{32}&\sigma _{33}\\\end{matrix}}\right]\equiv \left[{\begin{matrix}\sigma _{xx}&\sigma _{xy}&\sigma _{xz}\\\sigma _{yx}&\sigma _{yy}&\sigma _{yz}\\\sigma _{zx}&\sigma _{zy}&\sigma _{zz}\\\end{matrix}}\right]\equiv \left[{\begin{matrix}\sigma _{x}&\tau _{xy}&\tau _{xz}\\\tau _{yx}&\sigma _{y}&\tau _{yz}\\\tau _{zx}&\tau _{zy}&\sigma _{z}\\\end{matrix}}\right],} where σ11, σ22, and σ33 are normal stresses, and σ12, σ13, σ21, σ23, σ31, and σ32 are shear stresses. The first index i indicates that the stress acts on a plane normal to the Xi -axis, and the second index j denotes the direction in which the stress acts (For example, σ12 implies that the stress is acting on the plane that is normal to the 1st axis i.e.;X1 and acts along the 2nd axis i.e.;X2). A stress component is positive if it acts in the positive direction of the coordinate axes, and if the plane where it acts has an outward normal vector pointing in the positive coordinate direction. 
Thus, using the components of the stress tensor T ( n ) = T ( e 1 ) n 1 + T ( e 2 ) n 2 + T ( e 3 ) n 3 = ∑ i = 1 3 T ( e i ) n i = ( σ i j e j ) n i = σ i j n i e j {\displaystyle {\begin{aligned}\mathbf {T} ^{(\mathbf {n} )}&=\mathbf {T} ^{(\mathbf {e} _{1})}n_{1}+\mathbf {T} ^{(\mathbf {e} _{2})}n_{2}+\mathbf {T} ^{(\mathbf {e} _{3})}n_{3}\\&=\sum _{i=1}^{3}\mathbf {T} ^{(\mathbf {e} _{i})}n_{i}\\&=\left(\sigma _{ij}\mathbf {e} _{j}\right)n_{i}\\&=\sigma _{ij}n_{i}\mathbf {e} _{j}\end{aligned}}} or, equivalently, T j ( n ) = σ i j n i . {\displaystyle T_{j}^{(\mathbf {n} )}=\sigma _{ij}n_{i}.} Alternatively, in matrix form we have [ T 1 ( n ) T 2 ( n ) T 3 ( n ) ] = [ n 1 n 2 n 3 ] ⋅ [ σ 11 σ 12 σ 13 σ 21 σ 22 σ 23 σ 31 σ 32 σ 33 ] . {\displaystyle \left[{\begin{matrix}T_{1}^{(\mathbf {n} )}&T_{2}^{(\mathbf {n} )}&T_{3}^{(\mathbf {n} )}\end{matrix}}\right]=\left[{\begin{matrix}n_{1}&n_{2}&n_{3}\end{matrix}}\right]\cdot \left[{\begin{matrix}\sigma _{11}&\sigma _{12}&\sigma _{13}\\\sigma _{21}&\sigma _{22}&\sigma _{23}\\\sigma _{31}&\sigma _{32}&\sigma _{33}\\\end{matrix}}\right].} The Voigt notation representation of the Cauchy stress tensor takes advantage of the symmetry of the stress tensor to express the stress as a six-dimensional vector of the form: σ = [ σ 1 σ 2 σ 3 σ 4 σ 5 σ 6 ] T ≡ [ σ 11 σ 22 σ 33 σ 23 σ 13 σ 12 ] T . {\displaystyle {\boldsymbol {\sigma }}={\begin{bmatrix}\sigma _{1}&\sigma _{2}&\sigma _{3}&\sigma _{4}&\sigma _{5}&\sigma _{6}\end{bmatrix}}^{\textsf {T}}\equiv {\begin{bmatrix}\sigma _{11}&\sigma _{22}&\sigma _{33}&\sigma _{23}&\sigma _{13}&\sigma _{12}\end{bmatrix}}^{\textsf {T}}.} The Voigt notation is used extensively in representing stress–strain relations in solid mechanics and for computational efficiency in numerical structural mechanics software. === Transformation rule of the stress tensor === It can be shown that the stress tensor is a contravariant second order tensor, which is a statement of how it transforms under a change of the coordinate system. From an xi-system to an xi' -system, the components σij in the initial system are transformed into the components σij' in the new system according to the tensor transformation rule (Figure 2.4): σ i j ′ = a i m a j n σ m n or σ ′ = A σ A T , {\displaystyle \sigma '_{ij}=a_{im}a_{jn}\sigma _{mn}\quad {\text{or}}\quad {\boldsymbol {\sigma }}'=\mathbf {A} {\boldsymbol {\sigma }}\mathbf {A} ^{\textsf {T}},} where A is a rotation matrix with components aij. In matrix form this is [ σ 11 ′ σ 12 ′ σ 13 ′ σ 21 ′ σ 22 ′ σ 23 ′ σ 31 ′ σ 32 ′ σ 33 ′ ] = [ a 11 a 12 a 13 a 21 a 22 a 23 a 31 a 32 a 33 ] [ σ 11 σ 12 σ 13 σ 21 σ 22 σ 23 σ 31 σ 32 σ 33 ] [ a 11 a 21 a 31 a 12 a 22 a 32 a 13 a 23 a 33 ] . 
{\displaystyle \left[{\begin{matrix}\sigma '_{11}&\sigma '_{12}&\sigma '_{13}\\\sigma '_{21}&\sigma '_{22}&\sigma '_{23}\\\sigma '_{31}&\sigma '_{32}&\sigma '_{33}\\\end{matrix}}\right]=\left[{\begin{matrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\\\end{matrix}}\right]\left[{\begin{matrix}\sigma _{11}&\sigma _{12}&\sigma _{13}\\\sigma _{21}&\sigma _{22}&\sigma _{23}\\\sigma _{31}&\sigma _{32}&\sigma _{33}\\\end{matrix}}\right]\left[{\begin{matrix}a_{11}&a_{21}&a_{31}\\a_{12}&a_{22}&a_{32}\\a_{13}&a_{23}&a_{33}\\\end{matrix}}\right].} Expanding the matrix operation, and simplifying terms using the symmetry of the stress tensor, gives σ 11 ′ = a 11 2 σ 11 + a 12 2 σ 22 + a 13 2 σ 33 + 2 a 11 a 12 σ 12 + 2 a 11 a 13 σ 13 + 2 a 12 a 13 σ 23 , σ 22 ′ = a 21 2 σ 11 + a 22 2 σ 22 + a 23 2 σ 33 + 2 a 21 a 22 σ 12 + 2 a 21 a 23 σ 13 + 2 a 22 a 23 σ 23 , σ 33 ′ = a 31 2 σ 11 + a 32 2 σ 22 + a 33 2 σ 33 + 2 a 31 a 32 σ 12 + 2 a 31 a 33 σ 13 + 2 a 32 a 33 σ 23 , σ 12 ′ = a 11 a 21 σ 11 + a 12 a 22 σ 22 + a 13 a 23 σ 33 + ( a 11 a 22 + a 12 a 21 ) σ 12 + ( a 12 a 23 + a 13 a 22 ) σ 23 + ( a 11 a 23 + a 13 a 21 ) σ 13 , σ 23 ′ = a 21 a 31 σ 11 + a 22 a 32 σ 22 + a 23 a 33 σ 33 + ( a 21 a 32 + a 22 a 31 ) σ 12 + ( a 22 a 33 + a 23 a 32 ) σ 23 + ( a 21 a 33 + a 23 a 31 ) σ 13 , σ 13 ′ = a 11 a 31 σ 11 + a 12 a 32 σ 22 + a 13 a 33 σ 33 + ( a 11 a 32 + a 12 a 31 ) σ 12 + ( a 12 a 33 + a 13 a 32 ) σ 23 + ( a 11 a 33 + a 13 a 31 ) σ 13 . {\displaystyle {\begin{aligned}\sigma _{11}'={}&a_{11}^{2}\sigma _{11}+a_{12}^{2}\sigma _{22}+a_{13}^{2}\sigma _{33}+2a_{11}a_{12}\sigma _{12}+2a_{11}a_{13}\sigma _{13}+2a_{12}a_{13}\sigma _{23},\\\sigma _{22}'={}&a_{21}^{2}\sigma _{11}+a_{22}^{2}\sigma _{22}+a_{23}^{2}\sigma _{33}+2a_{21}a_{22}\sigma _{12}+2a_{21}a_{23}\sigma _{13}+2a_{22}a_{23}\sigma _{23},\\\sigma _{33}'={}&a_{31}^{2}\sigma _{11}+a_{32}^{2}\sigma _{22}+a_{33}^{2}\sigma _{33}+2a_{31}a_{32}\sigma _{12}+2a_{31}a_{33}\sigma _{13}+2a_{32}a_{33}\sigma _{23},\\\sigma _{12}'={}&a_{11}a_{21}\sigma _{11}+a_{12}a_{22}\sigma _{22}+a_{13}a_{23}\sigma _{33}\\&+(a_{11}a_{22}+a_{12}a_{21})\sigma _{12}+(a_{12}a_{23}+a_{13}a_{22})\sigma _{23}+(a_{11}a_{23}+a_{13}a_{21})\sigma _{13},\\\sigma _{23}'={}&a_{21}a_{31}\sigma _{11}+a_{22}a_{32}\sigma _{22}+a_{23}a_{33}\sigma _{33}\\&+(a_{21}a_{32}+a_{22}a_{31})\sigma _{12}+(a_{22}a_{33}+a_{23}a_{32})\sigma _{23}+(a_{21}a_{33}+a_{23}a_{31})\sigma _{13},\\\sigma _{13}'={}&a_{11}a_{31}\sigma _{11}+a_{12}a_{32}\sigma _{22}+a_{13}a_{33}\sigma _{33}\\&+(a_{11}a_{32}+a_{12}a_{31})\sigma _{12}+(a_{12}a_{33}+a_{13}a_{32})\sigma _{23}+(a_{11}a_{33}+a_{13}a_{31})\sigma _{13}.\end{aligned}}} The Mohr circle for stress is a graphical representation of this transformation of stresses. === Normal and shear stresses === The magnitude of the normal stress component σn of any stress vector T(n) acting on an arbitrary plane with normal unit vector n at a given point, in terms of the components σij of the stress tensor σ, is the dot product of the stress vector and the normal unit vector: σ n = T ( n ) ⋅ n = T i ( n ) n i = σ i j n i n j . 
{\displaystyle {\begin{aligned}\sigma _{\mathrm {n} }&=\mathbf {T} ^{(\mathbf {n} )}\cdot \mathbf {n} \\&=T_{i}^{(\mathbf {n} )}n_{i}\\&=\sigma _{ij}n_{i}n_{j}.\end{aligned}}} The magnitude of the shear stress component τn, acting orthogonal to the vector n, can then be found using the Pythagorean theorem: τ n = ( T ( n ) ) 2 − σ n 2 = T i ( n ) T i ( n ) − σ n 2 , {\displaystyle {\begin{aligned}\tau _{\mathrm {n} }&={\sqrt {\left(T^{(\mathbf {n} )}\right)^{2}-\sigma _{\mathrm {n} }^{2}}}\\&={\sqrt {T_{i}^{(\mathbf {n} )}T_{i}^{(\mathbf {n} )}-\sigma _{\mathrm {n} }^{2}}},\end{aligned}}} where ( T ( n ) ) 2 = T i ( n ) T i ( n ) = ( σ i j n j ) ( σ i k n k ) = σ i j σ i k n j n k . {\displaystyle \left(T^{(\mathbf {n} )}\right)^{2}=T_{i}^{(\mathbf {n} )}T_{i}^{(\mathbf {n} )}=\left(\sigma _{ij}n_{j}\right)\left(\sigma _{ik}n_{k}\right)=\sigma _{ij}\sigma _{ik}n_{j}n_{k}.} == Balance laws – Cauchy's equations of motion == === Cauchy's first law of motion === According to the principle of conservation of linear momentum, if the continuum body is in static equilibrium it can be demonstrated that the components of the Cauchy stress tensor in every material point in the body satisfy the equilibrium equations: σ j i , j + F i = 0 {\displaystyle \sigma _{ji,j}+F_{i}=0} , where σ j i , j = ∑ j ∂ j σ j i {\displaystyle \sigma _{ji,j}=\sum _{j}\partial _{j}\sigma _{ji}} For example, for a hydrostatic fluid in equilibrium conditions, the stress tensor takes on the form: σ i j = − p δ i j , {\displaystyle {\sigma _{ij}}=-p{\delta _{ij}},} where p {\displaystyle p} is the hydrostatic pressure, and δ i j {\displaystyle {\delta _{ij}}\ } is the kronecker delta. === Cauchy's second law of motion === According to the principle of conservation of angular momentum, equilibrium requires that the summation of moments with respect to an arbitrary point is zero, which leads to the conclusion that the stress tensor is symmetric, thus having only six independent stress components, instead of the original nine: σ i j = σ j i {\displaystyle \sigma _{ij}=\sigma _{ji}} However, in the presence of couple-stresses, i.e. moments per unit volume, the stress tensor is non-symmetric. This also is the case when the Knudsen number is close to one, K n → 1 {\displaystyle K_{n}\rightarrow 1} , or the continuum is a non-Newtonian fluid, which can lead to rotationally non-invariant fluids, such as polymers. == Principal stresses and stress invariants == At every point in a stressed body there are at least three planes, called principal planes, with normal vectors n {\displaystyle \mathbf {n} } , called principal directions, where the corresponding stress vector is perpendicular to the plane, i.e., parallel or in the same direction as the normal vector n {\displaystyle \mathbf {n} } , and where there are no normal shear stresses τ n {\displaystyle \tau _{\mathrm {n} }} . The three stresses normal to these principal planes are called principal stresses. The components σ i j {\displaystyle \sigma _{ij}} of the stress tensor depend on the orientation of the coordinate system at the point under consideration. However, the stress tensor itself is a physical quantity and as such, it is independent of the coordinate system chosen to represent it. There are certain invariants associated with every tensor which are also independent of the coordinate system. For example, a vector is a simple tensor of rank one. In three dimensions, it has three components. 
The value of these components will depend on the coordinate system chosen to represent the vector, but the magnitude of the vector is a physical quantity (a scalar) and is independent of the Cartesian coordinate system chosen to represent the vector (so long as it is orthonormal). Similarly, every second-rank tensor (such as the stress and the strain tensors) has three independent invariant quantities associated with it. One set of such invariants are the principal stresses of the stress tensor, which are just the eigenvalues of the stress tensor. Their direction vectors are the principal directions or eigenvectors. A stress vector parallel to the normal unit vector n {\displaystyle \mathbf {n} } is given by: T ( n ) = λ n = σ n n {\displaystyle \mathbf {T} ^{(\mathbf {n} )}=\lambda \mathbf {n} =\mathbf {\sigma } _{\mathrm {n} }\mathbf {n} } where λ {\displaystyle \lambda } is a constant of proportionality, and in this particular case corresponds to the magnitudes σ n {\displaystyle \sigma _{\mathrm {n} }} of the normal stress vectors or principal stresses. Knowing that T i ( n ) = σ i j n j {\displaystyle T_{i}^{(n)}=\sigma _{ij}n_{j}} and n i = δ i j n j {\displaystyle n_{i}=\delta _{ij}n_{j}} , we have T i ( n ) = λ n i σ i j n j = λ n i σ i j n j − λ n i = 0 ( σ i j − λ δ i j ) n j = 0 {\displaystyle {\begin{aligned}T_{i}^{(n)}&=\lambda n_{i}\\\sigma _{ij}n_{j}&=\lambda n_{i}\\\sigma _{ij}n_{j}-\lambda n_{i}&=0\\\left(\sigma _{ij}-\lambda \delta _{ij}\right)n_{j}&=0\\\end{aligned}}} This is a homogeneous system (i.e. with zero right-hand side) of three linear equations in which n j {\displaystyle n_{j}} are the unknowns. To obtain a nontrivial (non-zero) solution for n j {\displaystyle n_{j}} , the determinant of the coefficient matrix must be equal to zero, i.e. the system is singular. 
Thus, | σ i j − λ δ i j | = | σ 11 − λ σ 12 σ 13 σ 21 σ 22 − λ σ 23 σ 31 σ 32 σ 33 − λ | = 0 {\displaystyle \left|\sigma _{ij}-\lambda \delta _{ij}\right|={\begin{vmatrix}\sigma _{11}-\lambda &\sigma _{12}&\sigma _{13}\\\sigma _{21}&\sigma _{22}-\lambda &\sigma _{23}\\\sigma _{31}&\sigma _{32}&\sigma _{33}-\lambda \\\end{vmatrix}}=0} Expanding the determinant leads to the characteristic equation | σ i j − λ δ i j | = − λ 3 + I 1 λ 2 − I 2 λ + I 3 = 0 {\displaystyle \left|\sigma _{ij}-\lambda \delta _{ij}\right|=-\lambda ^{3}+I_{1}\lambda ^{2}-I_{2}\lambda +I_{3}=0} where I 1 = σ 11 + σ 22 + σ 33 = σ k k = tr ( σ ) I 2 = | σ 22 σ 23 σ 32 σ 33 | + | σ 11 σ 13 σ 31 σ 33 | + | σ 11 σ 12 σ 21 σ 22 | = σ 11 σ 22 + σ 22 σ 33 + σ 11 σ 33 − σ 12 2 − σ 23 2 − σ 31 2 = 1 2 ( σ i i σ j j − σ i j σ j i ) = 1 2 [ ( tr ( σ ) ) 2 − tr ( σ 2 ) ] I 3 = det ( σ i j ) = det ( σ ) = σ 11 σ 22 σ 33 + 2 σ 12 σ 23 σ 31 − σ 12 2 σ 33 − σ 23 2 σ 11 − σ 31 2 σ 22 {\displaystyle {\begin{aligned}I_{1}&=\sigma _{11}+\sigma _{22}+\sigma _{33}\\&=\sigma _{kk}={\text{tr}}({\boldsymbol {\sigma }})\\[4pt]I_{2}&={\begin{vmatrix}\sigma _{22}&\sigma _{23}\\\sigma _{32}&\sigma _{33}\\\end{vmatrix}}+{\begin{vmatrix}\sigma _{11}&\sigma _{13}\\\sigma _{31}&\sigma _{33}\\\end{vmatrix}}+{\begin{vmatrix}\sigma _{11}&\sigma _{12}\\\sigma _{21}&\sigma _{22}\\\end{vmatrix}}\\&=\sigma _{11}\sigma _{22}+\sigma _{22}\sigma _{33}+\sigma _{11}\sigma _{33}-\sigma _{12}^{2}-\sigma _{23}^{2}-\sigma _{31}^{2}\\&={\frac {1}{2}}\left(\sigma _{ii}\sigma _{jj}-\sigma _{ij}\sigma _{ji}\right)={\frac {1}{2}}\left[\left({\text{tr}}({\boldsymbol {\sigma }})\right)^{2}-{\text{tr}}\left({\boldsymbol {\sigma }}^{2}\right)\right]\\[4pt]I_{3}&=\det(\sigma _{ij})=\det({\boldsymbol {\sigma }})\\&=\sigma _{11}\sigma _{22}\sigma _{33}+2\sigma _{12}\sigma _{23}\sigma _{31}-\sigma _{12}^{2}\sigma _{33}-\sigma _{23}^{2}\sigma _{11}-\sigma _{31}^{2}\sigma _{22}\\\end{aligned}}} The characteristic equation has three real roots λ i {\displaystyle \lambda _{i}} , i.e. not imaginary due to the symmetry of the stress tensor. The σ 1 = max ( λ 1 , λ 2 , λ 3 ) {\displaystyle \sigma _{1}=\max \left(\lambda _{1},\lambda _{2},\lambda _{3}\right)} , σ 3 = min ( λ 1 , λ 2 , λ 3 ) {\displaystyle \sigma _{3}=\min \left(\lambda _{1},\lambda _{2},\lambda _{3}\right)} and σ 2 = I 1 − σ 1 − σ 3 {\displaystyle \sigma _{2}=I_{1}-\sigma _{1}-\sigma _{3}} , are the principal stresses, functions of the eigenvalues λ i {\displaystyle \lambda _{i}} . The eigenvalues are the roots of the characteristic polynomial. The principal stresses are unique for a given stress tensor. Therefore, from the characteristic equation, the coefficients I 1 {\displaystyle I_{1}} , I 2 {\displaystyle I_{2}} and I 3 {\displaystyle I_{3}} , called the first, second, and third stress invariants, respectively, always have the same value regardless of the coordinate system's orientation. For each eigenvalue, there is a non-trivial solution for n j {\displaystyle n_{j}} in the equation ( σ i j − λ δ i j ) n j = 0 {\displaystyle \left(\sigma _{ij}-\lambda \delta _{ij}\right)n_{j}=0} . These solutions are the principal directions or eigenvectors defining the plane where the principal stresses act. The principal stresses and principal directions characterize the stress at a point and are independent of the orientation. 
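Since the principal stresses are the eigenvalues of the symmetric stress tensor, they and the invariants I1, I2, I3 are conveniently obtained with a standard eigenvalue routine rather than by expanding the cubic by hand. The following is a minimal NumPy sketch with a hypothetical stress state (the MPa values are illustrative); it also applies the transformation rule σ' = AσA^T for a rotation A to confirm numerically that the invariants do not change with the orientation of the coordinate system.

```python
import numpy as np

def invariants(s):
    """First, second and third stress invariants I1, I2, I3."""
    I1 = np.trace(s)
    I2 = 0.5 * (np.trace(s) ** 2 - np.trace(s @ s))
    I3 = np.linalg.det(s)
    return I1, I2, I3

# Hypothetical symmetric Cauchy stress tensor (MPa)
sigma = np.array([
    [90.0, 30.0,   0.0],
    [30.0, 20.0,   0.0],
    [ 0.0,  0.0, -10.0],
])

# Principal stresses = eigenvalues of the symmetric stress tensor
principal = np.sort(np.linalg.eigvalsh(sigma))[::-1]   # sigma_1 >= sigma_2 >= sigma_3
print("principal stresses:", principal)

# Invariants from the components and from the principal values agree
s1, s2, s3 = principal
print("I1, I2, I3 from components      :", invariants(sigma))
print("I1, I2, I3 from principal values:",
      (s1 + s2 + s3, s1 * s2 + s2 * s3 + s3 * s1, s1 * s2 * s3))

# Rotate the coordinate system and apply the transformation rule sigma' = A sigma A^T
theta = np.deg2rad(30.0)
A = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
print("I1, I2, I3 after rotation       :", invariants(A @ sigma @ A.T))
```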
A coordinate system with axes oriented to the principal directions implies that the normal stresses are the principal stresses and the stress tensor is represented by a diagonal matrix: σ i j = [ σ 1 0 0 0 σ 2 0 0 0 σ 3 ] {\displaystyle \sigma _{ij}={\begin{bmatrix}\sigma _{1}&0&0\\0&\sigma _{2}&0\\0&0&\sigma _{3}\end{bmatrix}}} The principal stresses can be combined to form the stress invariants, I 1 {\displaystyle I_{1}} , I 2 {\displaystyle I_{2}} , and I 3 {\displaystyle I_{3}} . The first and third invariants are the trace and determinant, respectively, of the stress tensor. Thus, I 1 = σ 1 + σ 2 + σ 3 I 2 = σ 1 σ 2 + σ 2 σ 3 + σ 3 σ 1 I 3 = σ 1 σ 2 σ 3 {\displaystyle {\begin{aligned}I_{1}&=\sigma _{1}+\sigma _{2}+\sigma _{3}\\I_{2}&=\sigma _{1}\sigma _{2}+\sigma _{2}\sigma _{3}+\sigma _{3}\sigma _{1}\\I_{3}&=\sigma _{1}\sigma _{2}\sigma _{3}\\\end{aligned}}} Because of its simplicity, the principal coordinate system is often useful when considering the state of the elastic medium at a particular point. Principal stresses are often expressed in the following equation for evaluating stresses in the x and y directions or axial and bending stresses on a part.: p.58–59  The principal normal stresses can then be used to calculate the von Mises stress and ultimately the safety factor and margin of safety. σ 1 , σ 2 = σ x + σ y 2 ± ( σ x − σ y 2 ) 2 + τ x y 2 {\displaystyle \sigma _{1},\sigma _{2}={\frac {\sigma _{x}+\sigma _{y}}{2}}\pm {\sqrt {\left({\frac {\sigma _{x}-\sigma _{y}}{2}}\right)^{2}+\tau _{xy}^{2}}}} Taking just the part of the equation under the square root, with a plus and a minus sign respectively, gives the maximum and minimum shear stress. This is shown as: τ max , τ min = ± ( σ x − σ y 2 ) 2 + τ x y 2 {\displaystyle \tau _{\max },\tau _{\min }=\pm {\sqrt {\left({\frac {\sigma _{x}-\sigma _{y}}{2}}\right)^{2}+\tau _{xy}^{2}}}} == Maximum and minimum shear stresses == The maximum shear stress or maximum principal shear stress is equal to one-half the difference between the largest and smallest principal stresses, and acts on the plane that bisects the angle between the directions of the largest and smallest principal stresses, i.e. the plane of the maximum shear stress is oriented 45 ∘ {\displaystyle 45^{\circ }} from the principal stress planes. The maximum shear stress is expressed as τ max = 1 2 | σ max − σ min | {\displaystyle \tau _{\max }={\frac {1}{2}}\left|\sigma _{\max }-\sigma _{\min }\right|} Assuming σ 1 ≥ σ 2 ≥ σ 3 {\displaystyle \sigma _{1}\geq \sigma _{2}\geq \sigma _{3}} , then τ max = 1 2 | σ 1 − σ 3 | {\displaystyle \tau _{\max }={\frac {1}{2}}\left|\sigma _{1}-\sigma _{3}\right|} When the stress tensor is non-zero, the normal stress component acting on the plane for the maximum shear stress is non-zero and it is equal to σ n = 1 2 ( σ 1 + σ 3 ) {\displaystyle \sigma _{\text{n}}={\frac {1}{2}}\left(\sigma _{1}+\sigma _{3}\right)} == Stress deviator tensor == The stress tensor σ i j {\displaystyle \sigma _{ij}} can be expressed as the sum of two other stress tensors: a mean hydrostatic stress tensor or volumetric stress tensor or mean normal stress tensor, π δ i j {\displaystyle \pi \delta _{ij}} , which tends to change the volume of the stressed body; and a deviatoric component called the stress deviator tensor, s i j {\displaystyle s_{ij}} , which tends to distort it. So σ i j = s i j + π δ i j , {\displaystyle \sigma _{ij}=s_{ij}+\pi \delta _{ij},\,} where π {\displaystyle \pi } is the mean stress given by π = σ k k 3 = σ 11 + σ 22 + σ 33 3 = 1 3 I 1 . 
{\displaystyle \pi ={\frac {\sigma _{kk}}{3}}={\frac {\sigma _{11}+\sigma _{22}+\sigma _{33}}{3}}={\frac {1}{3}}I_{1}.\,} Pressure ( p {\displaystyle p} ) is generally defined as negative one-third the trace of the stress tensor minus any stress the divergence of the velocity contributes with, i.e. p = ζ ∇ ⋅ u → − π = ζ ∂ u k ∂ x k − π = ∑ k ζ ∂ u k ∂ x k − π , {\displaystyle p=\zeta \,\nabla \cdot {\vec {u}}-\pi =\zeta \,{\frac {\partial u_{k}}{\partial x_{k}}}-\pi =\sum _{k}\zeta \,{\frac {\partial u_{k}}{\partial x_{k}}}-\pi ,} where ζ {\displaystyle \zeta } is a proportionality constant (viz. the Volume viscosity), ∇ ⋅ {\displaystyle \nabla \cdot } is the divergence operator, x k {\displaystyle x_{k}} is the k:th Cartesian coordinate, u → {\displaystyle {\vec {u}}} is the flow velocity and u k {\displaystyle u_{k}} is the k:th Cartesian component of u → {\displaystyle {\vec {u}}} . The deviatoric stress tensor can be obtained by subtracting the hydrostatic stress tensor from the Cauchy stress tensor: s i j = σ i j − σ k k 3 δ i j , [ s 11 s 12 s 13 s 21 s 22 s 23 s 31 s 32 s 33 ] = [ σ 11 σ 12 σ 13 σ 21 σ 22 σ 23 σ 31 σ 32 σ 33 ] − [ π 0 0 0 π 0 0 0 π ] = [ σ 11 − π σ 12 σ 13 σ 21 σ 22 − π σ 23 σ 31 σ 32 σ 33 − π ] . {\displaystyle {\begin{aligned}s_{ij}&=\sigma _{ij}-{\frac {\sigma _{kk}}{3}}\delta _{ij},\,\\\left[{\begin{matrix}s_{11}&s_{12}&s_{13}\\s_{21}&s_{22}&s_{23}\\s_{31}&s_{32}&s_{33}\end{matrix}}\right]&=\left[{\begin{matrix}\sigma _{11}&\sigma _{12}&\sigma _{13}\\\sigma _{21}&\sigma _{22}&\sigma _{23}\\\sigma _{31}&\sigma _{32}&\sigma _{33}\end{matrix}}\right]-\left[{\begin{matrix}\pi &0&0\\0&\pi &0\\0&0&\pi \end{matrix}}\right]\\&=\left[{\begin{matrix}\sigma _{11}-\pi &\sigma _{12}&\sigma _{13}\\\sigma _{21}&\sigma _{22}-\pi &\sigma _{23}\\\sigma _{31}&\sigma _{32}&\sigma _{33}-\pi \end{matrix}}\right].\end{aligned}}} === Invariants of the stress deviator tensor === As it is a second order tensor, the stress deviator tensor also has a set of invariants, which can be obtained using the same procedure used to calculate the invariants of the stress tensor. It can be shown that the principal directions of the stress deviator tensor s i j {\displaystyle s_{ij}} are the same as the principal directions of the stress tensor σ i j {\displaystyle \sigma _{ij}} . Thus, the characteristic equation is | s i j − λ δ i j | = λ 3 − J 1 λ 2 − J 2 λ − J 3 = 0 , {\displaystyle \left|s_{ij}-\lambda \delta _{ij}\right|=\lambda ^{3}-J_{1}\lambda ^{2}-J_{2}\lambda -J_{3}=0,} where J 1 {\displaystyle J_{1}} , J 2 {\displaystyle J_{2}} and J 3 {\displaystyle J_{3}} are the first, second, and third deviatoric stress invariants, respectively. Their values are the same (invariant) regardless of the orientation of the coordinate system chosen. These deviatoric stress invariants can be expressed as a function of the components of s i j {\displaystyle s_{ij}} or its principal values s 1 {\displaystyle s_{1}} , s 2 {\displaystyle s_{2}} , and s 3 {\displaystyle s_{3}} , or alternatively, as a function of σ i j {\displaystyle \sigma _{ij}} or its principal values σ 1 {\displaystyle \sigma _{1}} , σ 2 {\displaystyle \sigma _{2}} , and σ 3 {\displaystyle \sigma _{3}} . 
Thus, J 1 = s k k = 0 , J 2 = 1 2 s i j s j i = 1 2 tr ⁡ ( s 2 ) = 1 2 ( s 1 2 + s 2 2 + s 3 2 ) = 1 6 [ ( σ 11 − σ 22 ) 2 + ( σ 22 − σ 33 ) 2 + ( σ 33 − σ 11 ) 2 ] + σ 12 2 + σ 23 2 + σ 31 2 = 1 6 [ ( σ 1 − σ 2 ) 2 + ( σ 2 − σ 3 ) 2 + ( σ 3 − σ 1 ) 2 ] = 1 3 I 1 2 − I 2 = 1 2 [ tr ⁡ ( σ 2 ) − 1 3 tr ⁡ ( σ ) 2 ] , J 3 = det ( s i j ) = 1 3 s i j s j k s k i = 1 3 tr ( s 3 ) = 1 3 ( s 1 3 + s 2 3 + s 3 3 ) = s 1 s 2 s 3 = 2 27 I 1 3 − 1 3 I 1 I 2 + I 3 = 1 3 [ tr ( σ 3 ) − tr ⁡ ( σ 2 ) tr ⁡ ( σ ) + 2 9 tr ⁡ ( σ ) 3 ] . {\displaystyle {\begin{aligned}J_{1}&=s_{kk}=0,\\[3pt]J_{2}&={\frac {1}{2}}s_{ij}s_{ji}={\frac {1}{2}}\operatorname {tr} \left({\boldsymbol {s}}^{2}\right)\\&={\frac {1}{2}}\left(s_{1}^{2}+s_{2}^{2}+s_{3}^{2}\right)\\&={\frac {1}{6}}\left[(\sigma _{11}-\sigma _{22})^{2}+(\sigma _{22}-\sigma _{33})^{2}+(\sigma _{33}-\sigma _{11})^{2}\right]+\sigma _{12}^{2}+\sigma _{23}^{2}+\sigma _{31}^{2}\\&={\frac {1}{6}}\left[(\sigma _{1}-\sigma _{2})^{2}+(\sigma _{2}-\sigma _{3})^{2}+(\sigma _{3}-\sigma _{1})^{2}\right]\\&={\frac {1}{3}}I_{1}^{2}-I_{2}={\frac {1}{2}}\left[\operatorname {tr} \left({\boldsymbol {\sigma }}^{2}\right)-{\frac {1}{3}}\operatorname {tr} ({\boldsymbol {\sigma }})^{2}\right],\\[3pt]J_{3}&=\det(s_{ij})\\&={\frac {1}{3}}s_{ij}s_{jk}s_{ki}={\frac {1}{3}}{\text{tr}}\left({\boldsymbol {s}}^{3}\right)\\&={\frac {1}{3}}\left(s_{1}^{3}+s_{2}^{3}+s_{3}^{3}\right)\\&=s_{1}s_{2}s_{3}\\&={\frac {2}{27}}I_{1}^{3}-{\frac {1}{3}}I_{1}I_{2}+I_{3}={\frac {1}{3}}\left[{\text{tr}}({\boldsymbol {\sigma }}^{3})-\operatorname {tr} \left({\boldsymbol {\sigma }}^{2}\right)\operatorname {tr} ({\boldsymbol {\sigma }})+{\frac {2}{9}}\operatorname {tr} ({\boldsymbol {\sigma }})^{3}\right].\,\end{aligned}}} Because s k k = 0 {\displaystyle s_{kk}=0} , the stress deviator tensor is in a state of pure shear. A quantity called the equivalent stress or von Mises stress is commonly used in solid mechanics. The equivalent stress is defined as σ vM = 3 J 2 = 1 2 [ ( σ 1 − σ 2 ) 2 + ( σ 2 − σ 3 ) 2 + ( σ 3 − σ 1 ) 2 ] . {\displaystyle \sigma _{\text{vM}}={\sqrt {3\,J_{2}}}={\sqrt {{\frac {1}{2}}~\left[(\sigma _{1}-\sigma _{2})^{2}+(\sigma _{2}-\sigma _{3})^{2}+(\sigma _{3}-\sigma _{1})^{2}\right]}}\,.} == Octahedral stresses == Considering the principal directions as the coordinate axes, a plane whose normal vector makes equal angles with each of the principal axes (i.e. having direction cosines equal to | 1 / 3 | {\displaystyle |1/{\sqrt {3}}|} ) is called an octahedral plane. There are a total of eight octahedral planes (Figure 6). The normal and shear components of the stress tensor on these planes are called octahedral normal stress σ oct {\displaystyle \sigma _{\text{oct}}} and octahedral shear stress τ oct {\displaystyle \tau _{\text{oct}}} , respectively. Octahedral plane passing through the origin is known as the π-plane (π not to be confused with mean stress denoted by π in above section) . On the π-plane, s i j = 1 3 I {\textstyle s_{ij}={\frac {1}{3}}I} . 
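The decomposition into a hydrostatic part and a stress deviator, together with the equivalent stress defined above, is easy to verify numerically. The following is a minimal NumPy sketch using the same kind of hypothetical stress state as before (the MPa values are illustrative); the octahedral values use the closed forms σ_oct = I1/3 and τ_oct = √(2J2/3) derived in the next paragraph.

```python
import numpy as np

# Hypothetical symmetric Cauchy stress tensor (MPa)
sigma = np.array([
    [90.0, 30.0,   0.0],
    [30.0, 20.0,   0.0],
    [ 0.0,  0.0, -10.0],
])

# Mean (hydrostatic) stress and deviatoric part s_ij = sigma_ij - pi * delta_ij
pi_mean = np.trace(sigma) / 3.0
s = sigma - pi_mean * np.eye(3)

# Deviatoric invariants
J1 = np.trace(s)              # zero by construction
J2 = 0.5 * np.trace(s @ s)    # (1/2) s_ij s_ji
J3 = np.linalg.det(s)

# Equivalent (von Mises) stress and octahedral normal/shear stresses
sigma_vm  = np.sqrt(3.0 * J2)
sigma_oct = pi_mean                     # = I1 / 3
tau_oct   = np.sqrt(2.0 * J2 / 3.0)

print("mean stress pi            :", pi_mean)
print("J1, J2, J3                :", J1, J2, J3)
print("von Mises stress          :", sigma_vm)
print("octahedral normal / shear :", sigma_oct, tau_oct)
```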
Knowing that the stress tensor of point O (Figure 6) in the principal axes is σ i j = [ σ 1 0 0 0 σ 2 0 0 0 σ 3 ] {\displaystyle \sigma _{ij}={\begin{bmatrix}\sigma _{1}&0&0\\0&\sigma _{2}&0\\0&0&\sigma _{3}\end{bmatrix}}} the stress vector on an octahedral plane is then given by: T oct ( n ) = σ i j n i e j = σ 1 n 1 e 1 + σ 2 n 2 e 2 + σ 3 n 3 e 3 = 1 3 ( σ 1 e 1 + σ 2 e 2 + σ 3 e 3 ) {\displaystyle {\begin{aligned}\mathbf {T} _{\text{oct}}^{(\mathbf {n} )}&=\sigma _{ij}n_{i}\mathbf {e} _{j}\\&=\sigma _{1}n_{1}\mathbf {e} _{1}+\sigma _{2}n_{2}\mathbf {e} _{2}+\sigma _{3}n_{3}\mathbf {e} _{3}\\&={\frac {1}{\sqrt {3}}}(\sigma _{1}\mathbf {e} _{1}+\sigma _{2}\mathbf {e} _{2}+\sigma _{3}\mathbf {e} _{3})\end{aligned}}} The normal component of the stress vector at point O associated with the octahedral plane is σ oct = T i ( n ) n i = σ i j n i n j = σ 1 n 1 n 1 + σ 2 n 2 n 2 + σ 3 n 3 n 3 = 1 3 ( σ 1 + σ 2 + σ 3 ) = 1 3 I 1 {\displaystyle {\begin{aligned}\sigma _{\text{oct}}&=T_{i}^{(n)}n_{i}\\&=\sigma _{ij}n_{i}n_{j}\\&=\sigma _{1}n_{1}n_{1}+\sigma _{2}n_{2}n_{2}+\sigma _{3}n_{3}n_{3}\\&={\frac {1}{3}}(\sigma _{1}+\sigma _{2}+\sigma _{3})={\frac {1}{3}}I_{1}\end{aligned}}} which is the mean normal stress or hydrostatic stress. This value is the same in all eight octahedral planes. The shear stress on the octahedral plane is then τ oct = T i ( n ) T i ( n ) − σ oct 2 = [ 1 3 ( σ 1 2 + σ 2 2 + σ 3 2 ) − 1 9 ( σ 1 + σ 2 + σ 3 ) 2 ] 1 2 = 1 3 [ ( σ 1 − σ 2 ) 2 + ( σ 2 − σ 3 ) 2 + ( σ 3 − σ 1 ) 2 ] 1 2 = 1 3 2 I 1 2 − 6 I 2 = 2 3 J 2 {\displaystyle {\begin{aligned}\tau _{\text{oct}}&={\sqrt {T_{i}^{(n)}T_{i}^{(n)}-\sigma _{\text{oct}}^{2}}}\\&=\left[{\frac {1}{3}}\left(\sigma _{1}^{2}+\sigma _{2}^{2}+\sigma _{3}^{2}\right)-{\frac {1}{9}}(\sigma _{1}+\sigma _{2}+\sigma _{3})^{2}\right]^{\frac {1}{2}}\\&={\frac {1}{3}}\left[(\sigma _{1}-\sigma _{2})^{2}+(\sigma _{2}-\sigma _{3})^{2}+(\sigma _{3}-\sigma _{1})^{2}\right]^{\frac {1}{2}}={\frac {1}{3}}{\sqrt {2I_{1}^{2}-6I_{2}}}={\sqrt {{\frac {2}{3}}J_{2}}}\end{aligned}}} == See also == Cauchy momentum equation Critical plane analysis Stress–energy tensor == Notes == == References ==
Wikipedia/Deviatoric_stress_tensor
In physics and probability theory, mean-field theory (MFT) or self-consistent field theory studies the behavior of high-dimensional random (stochastic) models by studying a simpler model that approximates the original by averaging over degrees of freedom (the number of values in the final calculation of a statistic that are free to vary). Such models consider many individual components that interact with each other. The main idea of MFT is to replace all interactions acting on any one body with an average or effective interaction, sometimes called a molecular field. This reduces any many-body problem into an effective one-body problem. The ease of solving MFT problems means that some insight into the behavior of the system can be obtained at a lower computational cost. MFT has since been applied to a wide range of fields outside of physics, including statistical inference, graphical models, neuroscience, artificial intelligence, epidemic models, queueing theory, computer-network performance and game theory, as in the quantal response equilibrium. == Origins == The idea first appeared in physics (statistical mechanics) in the work of Pierre Curie and Pierre Weiss to describe phase transitions. MFT has been used in the Bragg–Williams approximation, models on the Bethe lattice, Landau theory, the Curie–Weiss law for magnetic susceptibility, Flory–Huggins solution theory, and Scheutjens–Fleer theory. Systems with many (sometimes infinite) degrees of freedom are generally hard to solve exactly or compute in closed, analytic form, except for some simple cases (e.g. certain Gaussian random-field theories, the 1D Ising model). Often combinatorial problems arise that make things like computing the partition function of a system difficult. MFT is an approximation method that often makes the original problem solvable and open to calculation, and in some cases MFT may give very accurate approximations. In field theory, the Hamiltonian may be expanded in terms of the magnitude of fluctuations around the mean of the field. In this context, MFT can be viewed as the "zeroth-order" expansion of the Hamiltonian in fluctuations. Physically, this means that an MFT system has no fluctuations, but this coincides with the idea that one is replacing all interactions with a "mean field". Quite often, MFT provides a convenient launch point for studying higher-order fluctuations. For example, when computing the partition function, studying the combinatorics of the interaction terms in the Hamiltonian can sometimes, at best, produce perturbative results or Feynman diagrams that correct the mean-field approximation. == Validity == In general, dimensionality plays an active role in determining whether a mean-field approach will work for any particular problem. There is sometimes a critical dimension above which MFT is valid and below which it is not. Heuristically, many interactions are replaced in MFT by one effective interaction. If the field or particle exhibits many random interactions in the original system, these tend to cancel each other out, so the mean effective interaction is representative and MFT will be more accurate. This is true in cases of high dimensionality, when the Hamiltonian includes long-range forces, or when the particles are extended (e.g. polymers). The Ginzburg criterion is the formal expression of how fluctuations render MFT a poor approximation, often depending upon the number of spatial dimensions in the system of interest. 
== Formal approach (Hamiltonian) == The formal basis for mean-field theory is the Bogoliubov inequality. This inequality states that the free energy of a system with Hamiltonian H = H 0 + Δ H {\displaystyle {\mathcal {H}}={\mathcal {H}}_{0}+\Delta {\mathcal {H}}} has the following upper bound: F ≤ F 0 = d e f ⟨ H ⟩ 0 − T S 0 , {\displaystyle F\leq F_{0}\ {\stackrel {\mathrm {def} }{=}}\ \langle {\mathcal {H}}\rangle _{0}-TS_{0},} where S 0 {\displaystyle S_{0}} is the entropy, and F {\displaystyle F} and F 0 {\displaystyle F_{0}} are Helmholtz free energies. The average is taken over the equilibrium ensemble of the reference system with Hamiltonian H 0 {\displaystyle {\mathcal {H}}_{0}} . In the special case that the reference Hamiltonian is that of a non-interacting system and can thus be written as H 0 = ∑ i = 1 N h i ( ξ i ) , {\displaystyle {\mathcal {H}}_{0}=\sum _{i=1}^{N}h_{i}(\xi _{i}),} where ξ i {\displaystyle \xi _{i}} are the degrees of freedom of the individual components of our statistical system (atoms, spins and so forth), one can consider sharpening the upper bound by minimising the right side of the inequality. The minimising reference system is then the "best" approximation to the true system using non-correlated degrees of freedom and is known as the mean field approximation. For the most common case that the target Hamiltonian contains only pairwise interactions, i.e., H = ∑ ( i , j ) ∈ P V i , j ( ξ i , ξ j ) , {\displaystyle {\mathcal {H}}=\sum _{(i,j)\in {\mathcal {P}}}V_{i,j}(\xi _{i},\xi _{j}),} where P {\displaystyle {\mathcal {P}}} is the set of pairs that interact, the minimising procedure can be carried out formally. Define Tr i ⁡ f ( ξ i ) {\displaystyle \operatorname {Tr} _{i}f(\xi _{i})} as the generalized sum of the observable f {\displaystyle f} over the degrees of freedom of the single component (sum for discrete variables, integrals for continuous ones). The approximating free energy is given by F 0 = Tr 1 , 2 , … , N ⁡ H ( ξ 1 , ξ 2 , … , ξ N ) P 0 ( N ) ( ξ 1 , ξ 2 , … , ξ N ) + k T Tr 1 , 2 , … , N ⁡ P 0 ( N ) ( ξ 1 , ξ 2 , … , ξ N ) log ⁡ P 0 ( N ) ( ξ 1 , ξ 2 , … , ξ N ) , {\displaystyle {\begin{aligned}F_{0}&=\operatorname {Tr} _{1,2,\ldots ,N}{\mathcal {H}}(\xi _{1},\xi _{2},\ldots ,\xi _{N})P_{0}^{(N)}(\xi _{1},\xi _{2},\ldots ,\xi _{N})\\&+kT\,\operatorname {Tr} _{1,2,\ldots ,N}P_{0}^{(N)}(\xi _{1},\xi _{2},\ldots ,\xi _{N})\log P_{0}^{(N)}(\xi _{1},\xi _{2},\ldots ,\xi _{N}),\end{aligned}}} where P 0 ( N ) ( ξ 1 , ξ 2 , … , ξ N ) {\displaystyle P_{0}^{(N)}(\xi _{1},\xi _{2},\dots ,\xi _{N})} is the probability to find the reference system in the state specified by the variables ( ξ 1 , ξ 2 , … , ξ N ) {\displaystyle (\xi _{1},\xi _{2},\dots ,\xi _{N})} . This probability is given by the normalized Boltzmann factor P 0 ( N ) ( ξ 1 , ξ 2 , … , ξ N ) = 1 Z 0 ( N ) e − β H 0 ( ξ 1 , ξ 2 , … , ξ N ) = ∏ i = 1 N 1 Z 0 e − β h i ( ξ i ) = d e f ∏ i = 1 N P 0 ( i ) ( ξ i ) , {\displaystyle {\begin{aligned}P_{0}^{(N)}(\xi _{1},\xi _{2},\ldots ,\xi _{N})&={\frac {1}{Z_{0}^{(N)}}}e^{-\beta {\mathcal {H}}_{0}(\xi _{1},\xi _{2},\ldots ,\xi _{N})}\\&=\prod _{i=1}^{N}{\frac {1}{Z_{0}}}e^{-\beta h_{i}(\xi _{i})}\ {\stackrel {\mathrm {def} }{=}}\ \prod _{i=1}^{N}P_{0}^{(i)}(\xi _{i}),\end{aligned}}} where Z 0 {\displaystyle Z_{0}} is the partition function. Thus F 0 = ∑ ( i , j ) ∈ P Tr i , j ⁡ V i , j ( ξ i , ξ j ) P 0 ( i ) ( ξ i ) P 0 ( j ) ( ξ j ) + k T ∑ i = 1 N Tr i ⁡ P 0 ( i ) ( ξ i ) log ⁡ P 0 ( i ) ( ξ i ) . 
{\displaystyle {\begin{aligned}F_{0}&=\sum _{(i,j)\in {\mathcal {P}}}\operatorname {Tr} _{i,j}V_{i,j}(\xi _{i},\xi _{j})P_{0}^{(i)}(\xi _{i})P_{0}^{(j)}(\xi _{j})\\&+kT\sum _{i=1}^{N}\operatorname {Tr} _{i}P_{0}^{(i)}(\xi _{i})\log P_{0}^{(i)}(\xi _{i}).\end{aligned}}} In order to minimise, we take the derivative with respect to the single-degree-of-freedom probabilities P 0 ( i ) {\displaystyle P_{0}^{(i)}} using a Lagrange multiplier to ensure proper normalization. The end result is the set of self-consistency equations P 0 ( i ) ( ξ i ) = 1 Z 0 e − β h i M F ( ξ i ) , i = 1 , 2 , … , N , {\displaystyle P_{0}^{(i)}(\xi _{i})={\frac {1}{Z_{0}}}e^{-\beta h_{i}^{MF}(\xi _{i})},\quad i=1,2,\ldots ,N,} where the mean field is given by h i MF ( ξ i ) = ∑ { j ∣ ( i , j ) ∈ P } Tr j ⁡ V i , j ( ξ i , ξ j ) P 0 ( j ) ( ξ j ) . {\displaystyle h_{i}^{\text{MF}}(\xi _{i})=\sum _{\{j\mid (i,j)\in {\mathcal {P}}\}}\operatorname {Tr} _{j}V_{i,j}(\xi _{i},\xi _{j})P_{0}^{(j)}(\xi _{j}).} == Applications == Mean field theory can be applied to a number of physical systems so as to study phenomena such as phase transitions. === Ising model === ==== Formal derivation ==== The Bogoliubov inequality, shown above, can be used to find the dynamics of a mean field model of the two-dimensional Ising lattice. A magnetisation function can be calculated from the resultant approximate free energy. The first step is choosing a more tractable approximation of the true Hamiltonian. Using a non-interacting or effective field Hamiltonian, − m ∑ i s i {\displaystyle -m\sum _{i}s_{i}} , the variational free energy is F V = F 0 + ⟨ ( − J ∑ s i s j − h ∑ s i ) − ( − m ∑ s i ) ⟩ 0 . {\displaystyle F_{V}=F_{0}+\left\langle \left(-J\sum s_{i}s_{j}-h\sum s_{i}\right)-\left(-m\sum s_{i}\right)\right\rangle _{0}.} By the Bogoliubov inequality, simplifying this quantity and calculating the magnetisation function that minimises the variational free energy yields the best approximation to the actual magnetisation. The minimiser is m = J ∑ ⟨ s j ⟩ 0 + h , {\displaystyle m=J\sum \langle s_{j}\rangle _{0}+h,} which is the ensemble average of spin. This simplifies to m = tanh ( z J β m ) + h . {\displaystyle m={\text{tanh}}(zJ\beta m)+h.} Equating the effective field felt by all spins to a mean spin value relates the variational approach to the suppression of fluctuations. The physical interpretation of the magnetisation function is then a field of mean values for individual spins. ==== Non-interacting spins approximation ==== Consider the Ising model on a d {\displaystyle d} -dimensional lattice. The Hamiltonian is given by H = − J ∑ ⟨ i , j ⟩ s i s j − h ∑ i s i , {\displaystyle H=-J\sum _{\langle i,j\rangle }s_{i}s_{j}-h\sum _{i}s_{i},} where the ∑ ⟨ i , j ⟩ {\displaystyle \sum _{\langle i,j\rangle }} indicates summation over the pair of nearest neighbors ⟨ i , j ⟩ {\displaystyle \langle i,j\rangle } , and s i , s j = ± 1 {\displaystyle s_{i},s_{j}=\pm 1} are neighboring Ising spins. Let us transform our spin variable by introducing the fluctuation from its mean value m i ≡ ⟨ s i ⟩ {\displaystyle m_{i}\equiv \langle s_{i}\rangle } . We may rewrite the Hamiltonian as H = − J ∑ ⟨ i , j ⟩ ( m i + δ s i ) ( m j + δ s j ) − h ∑ i s i , {\displaystyle H=-J\sum _{\langle i,j\rangle }(m_{i}+\delta s_{i})(m_{j}+\delta s_{j})-h\sum _{i}s_{i},} where we define δ s i ≡ s i − m i {\displaystyle \delta s_{i}\equiv s_{i}-m_{i}} ; this is the fluctuation of the spin. 
If we expand the right side, we obtain one term that is entirely dependent on the mean values of the spins and independent of the spin configurations. This is the trivial term, which does not affect the statistical properties of the system. The next term is the one involving the product of the mean value of the spin and the fluctuation value. Finally, the last term involves a product of two fluctuation values. The mean field approximation consists of neglecting this second-order fluctuation term: H ≈ H MF ≡ − J ∑ ⟨ i , j ⟩ ( m i m j + m i δ s j + m j δ s i ) − h ∑ i s i . {\displaystyle H\approx H^{\text{MF}}\equiv -J\sum _{\langle i,j\rangle }(m_{i}m_{j}+m_{i}\delta s_{j}+m_{j}\delta s_{i})-h\sum _{i}s_{i}.} These fluctuations are enhanced at low dimensions, making MFT a better approximation for high dimensions. Again, the summand can be re-expanded. In addition, we expect that the mean value of each spin is site-independent, since the Ising chain is translationally invariant. This yields H MF = − J ∑ ⟨ i , j ⟩ ( m 2 + 2 m ( s i − m ) ) − h ∑ i s i . {\displaystyle H^{\text{MF}}=-J\sum _{\langle i,j\rangle }{\big (}m^{2}+2m(s_{i}-m){\big )}-h\sum _{i}s_{i}.} The summation over neighboring spins can be rewritten as ∑ ⟨ i , j ⟩ = 1 2 ∑ i ∑ j ∈ n n ( i ) {\displaystyle \sum _{\langle i,j\rangle }={\frac {1}{2}}\sum _{i}\sum _{j\in nn(i)}} , where n n ( i ) {\displaystyle nn(i)} means "nearest neighbor of i {\displaystyle i} ", and the 1 / 2 {\displaystyle 1/2} prefactor avoids double counting, since each bond participates in two spins. Simplifying leads to the final expression H MF = J m 2 N z 2 − ( h + m J z ) ⏟ h eff. ∑ i s i , {\displaystyle H^{\text{MF}}={\frac {Jm^{2}Nz}{2}}-\underbrace {(h+mJz)} _{h^{\text{eff.}}}\sum _{i}s_{i},} where z {\displaystyle z} is the coordination number. At this point, the Ising Hamiltonian has been decoupled into a sum of one-body Hamiltonians with an effective mean field h eff. = h + J z m {\displaystyle h^{\text{eff.}}=h+Jzm} , which is the sum of the external field h {\displaystyle h} and of the mean field induced by the neighboring spins. It is worth noting that this mean field directly depends on the number of nearest neighbors and thus on the dimension of the system (for instance, for a hypercubic lattice of dimension d {\displaystyle d} , z = 2 d {\displaystyle z=2d} ). Substituting this Hamiltonian into the partition function and solving the effective 1D problem, we obtain Z = e − β J m 2 N z 2 [ 2 cosh ⁡ ( h + m J z k B T ) ] N , {\displaystyle Z=e^{-{\frac {\beta Jm^{2}Nz}{2}}}\left[2\cosh \left({\frac {h+mJz}{k_{\text{B}}T}}\right)\right]^{N},} where N {\displaystyle N} is the number of lattice sites. This is a closed and exact expression for the partition function of the system. We may obtain the free energy of the system and calculate critical exponents. In particular, we can obtain the magnetization m {\displaystyle m} as a function of h eff. {\displaystyle h^{\text{eff.}}} . We thus have two equations between m {\displaystyle m} and h eff. {\displaystyle h^{\text{eff.}}} , allowing us to determine m {\displaystyle m} as a function of temperature. This leads to the following observation: For temperatures greater than a certain value T c {\displaystyle T_{\text{c}}} , the only solution is m = 0 {\displaystyle m=0} . The system is paramagnetic. For T < T c {\displaystyle T<T_{\text{c}}} , there are two non-zero solutions: m = ± m 0 {\displaystyle m=\pm m_{0}} . The system is ferromagnetic. 
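The two equations combine into the single self-consistency condition m = tanh(β(h + Jzm)), which can be solved numerically. The following minimal sketch is an illustration only and is not part of the derivation above; the values of J, z and h are arbitrary choices for the example, and k_B is set to 1. It iterates the condition to a fixed point:

import numpy as np

def mean_field_magnetization(T, J=1.0, z=4, h=0.0, tol=1e-10, max_iter=100000):
    # Fixed-point iteration of m = tanh((h + J*z*m)/T), in units with k_B = 1.
    beta = 1.0 / T
    m = 1.0  # start from a fully polarised guess so the iteration can reach a non-zero root
    for _ in range(max_iter):
        m_new = np.tanh(beta * (h + J * z * m))
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

# With J = 1 and z = 4 (square lattice), the non-zero solution disappears
# somewhere between T = 3.9 and T = 4.1 in these units:
for T in (2.0, 3.9, 4.1, 6.0):
    print(T, round(mean_field_magnetization(T), 4))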
T c {\displaystyle T_{\text{c}}} is given by the following relation: T c = J z k B {\displaystyle T_{\text{c}}={\frac {Jz}{k_{B}}}} . This shows that MFT can account for the ferromagnetic phase transition. === Application to other systems === Similarly, MFT can be applied to other types of Hamiltonian as in the following cases: To study the metal–superconductor transition. In this case, the analog of the magnetization is the superconducting gap Δ {\displaystyle \Delta } . The molecular field of a liquid crystal that emerges when the Laplacian of the director field is non-zero. To determine the optimal amino acid side chain packing given a fixed protein backbone in protein structure prediction (see Self-consistent mean field (biology)). To determine the elastic properties of a composite material. Variational minimisation methods such as mean field theory can also be used in statistical inference. == Extension to time-dependent mean fields == In mean field theory, the mean field appearing in the single-site problem is a time-independent scalar or vector quantity. However, this need not always be the case: in a variant of mean field theory called dynamical mean field theory (DMFT), the mean field becomes a time-dependent quantity. For instance, DMFT can be applied to the Hubbard model to study the metal–Mott-insulator transition. == See also == Dynamical mean field theory Mean field game theory == References ==
Wikipedia/Mean_field_theory
In mathematics, generalized functions are objects extending the notion of functions on real or complex numbers. There is more than one recognized theory, for example the theory of distributions. Generalized functions are especially useful for treating discontinuous functions more like smooth functions, and describing discrete physical phenomena such as point charges. They are applied extensively, especially in physics and engineering. Important motivations have been the technical requirements of theories of partial differential equations and group representations. A common feature of some of the approaches is that they build on operator aspects of everyday, numerical functions. The early history is connected with some ideas on operational calculus, and some contemporary developments are closely related to Mikio Sato's algebraic analysis. == Some early history == In the mathematics of the nineteenth century, aspects of generalized function theory appeared, for example in the definition of the Green's function, in the Laplace transform, and in Riemann's theory of trigonometric series, which were not necessarily the Fourier series of an integrable function. These were disconnected aspects of mathematical analysis at the time. The intensive use of the Laplace transform in engineering led to the heuristic use of symbolic methods, called operational calculus. Since justifications were given that used divergent series, these methods were questionable from the point of view of pure mathematics. They are typical of later application of generalized function methods. An influential book on operational calculus was Oliver Heaviside's Electromagnetic Theory of 1899. When the Lebesgue integral was introduced, there was for the first time a notion of generalized function central to mathematics. An integrable function, in Lebesgue's theory, is equivalent to any other which is the same almost everywhere. That means its value at each point is (in a sense) not its most important feature. In functional analysis a clear formulation is given of the essential feature of an integrable function, namely the way it defines a linear functional on other functions. This allows a definition of weak derivative. During the late 1920s and 1930s further basic steps were taken. The Dirac delta function was boldly defined by Paul Dirac (an aspect of his scientific formalism); this was to treat measures, thought of as densities (such as charge density) like genuine functions. Sergei Sobolev, working in partial differential equation theory, defined the first rigorous theory of generalized functions in order to define weak solutions of partial differential equations (i.e. solutions which are generalized functions, but may not be ordinary functions). Others proposing related theories at the time were Salomon Bochner and Kurt Friedrichs. Sobolev's work was extended by Laurent Schwartz. == Schwartz distributions == The most definitive development was the theory of distributions developed by Laurent Schwartz, systematically working out the principle of duality for topological vector spaces. Its main rival in applied mathematics is mollifier theory, which uses sequences of smooth approximations (the 'James Lighthill' explanation). This theory was very successful and is still widely used, but suffers from the main drawback that distributions cannot usually be multiplied: unlike most classical function spaces, they do not form an algebra. For example, it is meaningless to square the Dirac delta function. 
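The difficulty can be made concrete with a short numerical sketch (illustrative only, and not part of the theory described above): approximate the delta function by smooth bumps δ_n(x) = nφ(nx), where φ is a unit-integral Gaussian (an arbitrary choice of mollifier). The δ_n behave like δ in the sense that their total integral stabilises, but the integral of δ_n² grows without bound, so the squares have no distributional limit.

import numpy as np

def delta_n(x, n):
    # Smooth approximation to the Dirac delta: n * phi(n*x), with phi a unit-integral Gaussian.
    return n * np.exp(-0.5 * (n * x) ** 2) / np.sqrt(2.0 * np.pi)

x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]
for n in (10, 100, 1000):
    mass = np.sum(delta_n(x, n)) * dx                  # stays close to 1: delta_n -> delta
    mass_of_square = np.sum(delta_n(x, n) ** 2) * dx   # grows roughly like 0.28 * n: no limit
    print(n, round(mass, 4), round(mass_of_square, 1))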
Work of Schwartz from around 1954 showed this to be an intrinsic difficulty. == Algebras of generalized functions == Some solutions to the multiplication problem have been proposed. One is based on a simple definition of generalized functions given by Yu. V. Egorov (see also his article in Demidov's book in the book list below) that allows arbitrary operations on, and between, generalized functions. Another solution allowing multiplication is suggested by the path integral formulation of quantum mechanics. Since this is required to be equivalent to the Schrödinger theory of quantum mechanics, which is invariant under coordinate transformations, this property must be shared by path integrals. This fixes all products of generalized functions, as shown by H. Kleinert and A. Chervyakov. The result is equivalent to what can be derived from dimensional regularization. Several constructions of algebras of generalized functions have been proposed, among others those by Yu. M. Shirokov and those by E. Rosinger, Y. Egorov, and R. Robinson. In the first case, the multiplication is determined by a regularization of the generalized functions. In the second case, the algebra is constructed as a multiplication of distributions. Both cases are discussed below. === Non-commutative algebra of generalized functions === The algebra of generalized functions can be built up with an appropriate procedure of projection of a function F = F ( x ) {\displaystyle F=F(x)} to its smooth F s m o o t h {\displaystyle F_{\rm {smooth}}} and its singular F s i n g u l a r {\displaystyle F_{\rm {singular}}} parts. The product of generalized functions F {\displaystyle F} and G {\displaystyle G} appears as F G = F smooth G smooth + F smooth G singular + F singular G smooth . ( 1 ) {\displaystyle FG=F_{\rm {smooth}}\,G_{\rm {smooth}}+F_{\rm {smooth}}\,G_{\rm {singular}}+F_{\rm {singular}}\,G_{\rm {smooth}}.\qquad (1)} Such a rule applies to both the space of main functions and the space of operators which act on the space of the main functions. The associativity of multiplication is achieved, and the function signum is defined in such a way that its square is unity everywhere (including the origin of coordinates). Note that the product of singular parts does not appear in the right-hand side of (1); in particular, δ ( x ) 2 = 0 {\displaystyle \delta (x)^{2}=0} . Such a formalism includes the conventional theory of generalized functions (without their product) as a special case. However, the resulting algebra is non-commutative: the generalized functions signum and delta anticommute. Few applications of the algebra were suggested. === Multiplication of distributions === The problem of multiplication of distributions, a limitation of the Schwartz distribution theory, becomes serious for non-linear problems. Various approaches are used today. The simplest one is based on the definition of generalized function given by Yu. V. Egorov. Another approach to construct associative differential algebras is based on J.-F. Colombeau's construction: see Colombeau algebra. These are factor spaces G = M / N {\displaystyle G=M/N} of "moderate" modulo "negligible" nets of functions, where "moderateness" and "negligibility" refer to growth with respect to the index of the family. === Example: Colombeau algebra === A simple example is obtained by using the polynomial scale on N, s = { a m : N → R , n ↦ n m ; m ∈ Z } {\displaystyle s=\{a_{m}:\mathbb {N} \to \mathbb {R} ,n\mapsto n^{m};~m\in \mathbb {Z} \}} . Then for any semi normed algebra (E,P), the factor space will be G s ( E , P ) = { f ∈ E N ∣ ∀ p ∈ P , ∃ m ∈ Z : p ( f n ) = o ( n m ) } { f ∈ E N ∣ ∀ p ∈ P , ∀ m ∈ Z : p ( f n ) = o ( n m ) } .
{\displaystyle G_{s}(E,P)={\frac {\{f\in E^{\mathbb {N} }\mid \forall p\in P,\exists m\in \mathbb {Z} :p(f_{n})=o(n^{m})\}}{\{f\in E^{\mathbb {N} }\mid \forall p\in P,\forall m\in \mathbb {Z} :p(f_{n})=o(n^{m})\}}}.} In particular, for (E, P)=(C,|.|) one gets (Colombeau's) generalized complex numbers (which can be "infinitely large" and "infinitesimally small" and still allow for rigorous arithmetics, very similar to nonstandard numbers). For (E, P) = (C∞(R),{pk}) (where pk is the supremum of all derivatives of order less than or equal to k on the ball of radius k) one gets Colombeau's simplified algebra. === Injection of Schwartz distributions === This algebra "contains" all distributions T of D' via the injection j(T) = (φn ∗ T)n + N, where ∗ is the convolution operation, and φn(x) = n φ(nx). This injection is non-canonical in the sense that it depends on the choice of the mollifier φ, which should be C∞, of integral one and have all its derivatives at 0 vanishing. To obtain a canonical injection, the indexing set can be modified to be N × D(R), with a convenient filter base on D(R) (functions of vanishing moments up to order q). === Sheaf structure === If (E,P) is a (pre-)sheaf of semi normed algebras on some topological space X, then Gs(E, P) will also have this property. This means that the notion of restriction will be defined, which allows to define the support of a generalized function w.r.t. a subsheaf, in particular: For the subsheaf {0}, one gets the usual support (complement of the largest open subset where the function is zero). For the subsheaf E (embedded using the canonical (constant) injection), one gets what is called the singular support, i.e., roughly speaking, the closure of the set where the generalized function is not a smooth function (for E = C∞). === Microlocal analysis === The Fourier transformation being (well-)defined for compactly supported generalized functions (component-wise), one can apply the same construction as for distributions, and define Lars Hörmander's wave front set also for generalized functions. This has an especially important application in the analysis of propagation of singularities. == Other theories == These include: the convolution quotient theory of Jan Mikusinski, based on the field of fractions of convolution algebras that are integral domains; and the theories of hyperfunctions, based (in their initial conception) on boundary values of analytic functions, and now making use of sheaf theory. == Topological groups == Bruhat introduced a class of test functions, the Schwartz–Bruhat functions, on a class of locally compact groups that goes beyond the manifolds that are the typical function domains. The applications are mostly in number theory, particularly to adelic algebraic groups. André Weil rewrote Tate's thesis in this language, characterizing the zeta distribution on the idele group; and has also applied it to the explicit formula of an L-function. == Generalized section == A further way in which the theory has been extended is as generalized sections of a smooth vector bundle. This is on the Schwartz pattern, constructing objects dual to the test objects, smooth sections of a bundle that have compact support. The most developed theory is that of De Rham currents, dual to differential forms. These are homological in nature, in the way that differential forms give rise to De Rham cohomology. They can be used to formulate a very general Stokes' theorem. 
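Returning to the Colombeau-type constructions described earlier, the "moderate" growth condition can be illustrated with a rough numerical sketch. This is schematic only and is not the actual construction: the Gaussian mollifier, the grid and the chosen indices are arbitrary, and a canonical embedding would in addition require a mollifier with vanishing higher moments. Embedding the delta function as the net δ_n(x) = nφ(nx), both the net and its pointwise square grow only polynomially in the index n, which is why such nets, unlike the distributions themselves, can be multiplied in a Colombeau-type algebra.

import numpy as np

def embedded_delta(n, x):
    # Representative delta_n(x) = n * phi(n*x) of the embedded delta function (Gaussian phi).
    return n * np.exp(-0.5 * (n * x) ** 2) / np.sqrt(2.0 * np.pi)

x = np.linspace(-1.0, 1.0, 100001)
for n in (10, 100, 1000):
    d = embedded_delta(n, x)
    # sup|delta_n| grows like n and sup|delta_n**2| like n**2: both polynomially
    # bounded ("moderate"), so both nets define elements of the algebra.
    print(n, round(d.max(), 1), round((d ** 2).max(), 1))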
== See also == Beppo-Levi space Dirac delta function Generalized eigenfunction Distribution (mathematics) Hyperfunction Laplacian of the indicator Rigged Hilbert space Limit of a distribution Generalized space Ultradistribution == Books == Schwartz, L. (1950). Théorie des distributions. Vol. 1. Paris: Hermann. OCLC 889264730. Vol. 2. OCLC 889391733 Beurling, A. (1961). On quasianalyticity and general distributions (multigraphed lectures). Summer Institute, Stanford University. OCLC 679033904. Gelʹfand, Izrailʹ Moiseevič; Vilenkin, Naum Jakovlevič (1964). Generalized Functions. Vol. I–VI. Academic Press. OCLC 728079644. Hörmander, L. (2015) [1990]. The Analysis of Linear Partial Differential Operators (2nd ed.). Springer. ISBN 978-3-642-61497-2. H. Komatsu, Introduction to the theory of distributions, Second edition, Iwanami Shoten, Tokyo, 1983. Colombeau, J.-F. (2000) [1983]. New Generalized Functions and Multiplication of Distributions. Elsevier. ISBN 978-0-08-087195-0. Vladimirov, V.S.; Drozhzhinov, Yu. N.; Zav’yalov, B.I. (2012) [1988]. Tauberian theorems for generalized functions. Springer. ISBN 978-94-009-2831-2. Oberguggenberger, M. (1992). Multiplication of distributions and applications to partial differential equations. Longman. ISBN 978-0-582-08733-0. OCLC 682138968. Morimoto, M. (1993). An introduction to Sato's hyperfunctions. American Mathematical Society. ISBN 978-0-8218-8767-7. Demidov, A.S. (2001). Generalized Functions in Mathematical Physics: Main Ideas and Concepts. Nova Science. ISBN 9781560729051. Grosser, M.; Kunzinger, M.; Oberguggenberger, Michael; Steinbauer, R. (2013) [2001]. Geometric theory of generalized functions with applications to general relativity. Springer. ISBN 978-94-015-9845-3. Estrada, R.; Kanwal, R. (2012). A distributional approach to asymptotics. Theory and applications (2nd ed.). Birkhäuser Boston. ISBN 978-0-8176-8130-2. Vladimirov, V.S. (2002). Methods of the theory of generalized functions. Taylor & Francis. ISBN 978-0-415-27356-5. Kleinert, H. (2009). Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets (5th ed.). World Scientific. ISBN 9789814273572. (online here). See Chapter 11 for products of generalized functions. Pilipovi, S.; Stankovic, B.; Vindas, J. (2012). Asymptotic behavior of generalized functions. World Scientific. ISBN 9789814366847. == References ==
Wikipedia/Generalized_functions
In theoretical physics, Hamiltonian field theory is the field-theoretic analogue to classical Hamiltonian mechanics. It is a formalism in classical field theory alongside Lagrangian field theory. It also has applications in quantum field theory. == Definition == The Hamiltonian for a system of discrete particles is a function of their generalized coordinates and conjugate momenta, and possibly, time. For continua and fields, Hamiltonian mechanics is unsuitable but can be extended by considering a large number of point masses, and taking the continuous limit, that is, infinitely many particles forming a continuum or field. Since each point mass has one or more degrees of freedom, the field formulation has infinitely many degrees of freedom. === One scalar field === The Hamiltonian density is the continuous analogue for fields; it is a function of the fields, the conjugate "momentum" fields, and possibly the space and time coordinates themselves. For one scalar field φ(x, t), the Hamiltonian density is defined from the Lagrangian density by H ( ϕ , π , x , t ) = ϕ ˙ π − L ( ϕ , ∇ ϕ , ∂ ϕ / ∂ t , x , t ) . {\displaystyle {\mathcal {H}}(\phi ,\pi ,\mathbf {x} ,t)={\dot {\phi }}\pi -{\mathcal {L}}(\phi ,\nabla \phi ,\partial \phi /\partial t,\mathbf {x} ,t)\,.} with ∇ the "del" or "nabla" operator, x is the position vector of some point in space, and t is time. The Lagrangian density is a function of the fields in the system, their space and time derivatives, and possibly the space and time coordinates themselves. It is the field analogue to the Lagrangian function for a system of discrete particles described by generalized coordinates. As in Hamiltonian mechanics where every generalized coordinate has a corresponding generalized momentum, the field φ(x, t) has a conjugate momentum field π(x, t), defined as the partial derivative of the Lagrangian density with respect to the time derivative of the field, π = ∂ L ∂ ϕ ˙ , ϕ ˙ ≡ ∂ ϕ ∂ t , {\displaystyle \pi ={\frac {\partial {\mathcal {L}}}{\partial {\dot {\phi }}}}\,,\quad {\dot {\phi }}\equiv {\frac {\partial \phi }{\partial t}}\,,} in which the overdot denotes a partial time derivative ∂/∂t, not a total time derivative d/dt. === Many scalar fields === For many fields φi(x, t) and their conjugates πi(x, t) the Hamiltonian density is a function of them all: H ( ϕ 1 , ϕ 2 , … , π 1 , π 2 , … , x , t ) = ∑ i ϕ i ˙ π i − L ( ϕ 1 , ϕ 2 , … ∇ ϕ 1 , ∇ ϕ 2 , … , ∂ ϕ 1 / ∂ t , ∂ ϕ 2 / ∂ t , … , x , t ) . {\displaystyle {\mathcal {H}}(\phi _{1},\phi _{2},\ldots ,\pi _{1},\pi _{2},\ldots ,\mathbf {x} ,t)=\sum _{i}{\dot {\phi _{i}}}\pi _{i}-{\mathcal {L}}(\phi _{1},\phi _{2},\ldots \nabla \phi _{1},\nabla \phi _{2},\ldots ,\partial \phi _{1}/\partial t,\partial \phi _{2}/\partial t,\ldots ,\mathbf {x} ,t)\,.} where each conjugate field is defined with respect to its field, π i ( x , t ) = ∂ L ∂ ϕ ˙ i . {\displaystyle \pi _{i}(\mathbf {x} ,t)={\frac {\partial {\mathcal {L}}}{\partial {\dot {\phi }}_{i}}}\,.} In general, for any number of fields, the volume integral of the Hamiltonian density gives the Hamiltonian, in three spatial dimensions: H = ∫ H d 3 x . {\displaystyle H=\int {\mathcal {H}}\ d^{3}x\,.} The Hamiltonian density is the Hamiltonian per unit spatial volume. The corresponding dimension is [energy][length]−3, in SI units Joules per metre cubed, J m−3. === Tensor and spinor fields === The above equations and definitions can be extended to vector fields and more generally tensor fields and spinor fields. 
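Before moving on, the scalar-field definitions above can be checked symbolically on a standard example. The sketch below is an illustration only: the free Klein–Gordon-type Lagrangian density L = ½φ̇² − ½|∇φ|² − ½m²φ² (in units with c = 1) is an assumed choice rather than anything prescribed by the formalism itself.

import sympy as sp

phi, phidot, m = sp.symbols('phi phidot m', real=True)
dphi_x, dphi_y, dphi_z = sp.symbols('dphi_x dphi_y dphi_z', real=True)  # components of grad(phi)

# Assumed Lagrangian density of a free scalar field (units with c = 1)
L = (sp.Rational(1, 2) * phidot**2
     - sp.Rational(1, 2) * (dphi_x**2 + dphi_y**2 + dphi_z**2)
     - sp.Rational(1, 2) * m**2 * phi**2)

pi = sp.diff(L, phidot)          # conjugate momentum field: pi = dL/d(phidot) = phidot
H = sp.expand(phidot * pi - L)   # Hamiltonian density: H = phidot*pi - L

print(pi)   # phidot
print(H)    # m**2*phi**2/2 + phidot**2/2 + dphi_x**2/2 + dphi_y**2/2 + dphi_z**2/2

Since π = φ̇ for this Lagrangian, the result is the familiar energy density ½π² + ½|∇φ|² + ½m²φ².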
In physics, tensor fields describe bosons and spinor fields describe fermions. == Equations of motion == The equations of motion for the fields are similar to the Hamiltonian equations for discrete particles. For any number of fields: where again the overdots are partial time derivatives, the variational derivative with respect to the fields δ H δ ϕ i = ∂ H ∂ ϕ i − ∇ ⋅ ∂ H ∂ ( ∇ ϕ i ) , {\displaystyle {\frac {\delta H}{\delta \phi _{i}}}={\frac {\partial {\mathcal {H}}}{\partial \phi _{i}}}-\nabla \cdot {\frac {\partial {\mathcal {H}}}{\partial (\nabla \phi _{i})}}\,,} with · the dot product, must be used instead of simply partial derivatives. == Phase space == The fields φi and conjugates πi form an infinite dimensional phase space, because fields have an infinite number of degrees of freedom. == Poisson bracket == For two functions which depend on the fields φi and πi, their spatial derivatives, and the space and time coordinates, A = ∫ d 3 x A ( ϕ 1 , ϕ 2 , … , π 1 , π 2 , … , ∇ ϕ 1 , ∇ ϕ 2 , … , ∇ π 1 , ∇ π 2 , … , x , t ) , {\displaystyle A=\int d^{3}x{\mathcal {A}}\left(\phi _{1},\phi _{2},\ldots ,\pi _{1},\pi _{2},\ldots ,\nabla \phi _{1},\nabla \phi _{2},\ldots ,\nabla \pi _{1},\nabla \pi _{2},\ldots ,\mathbf {x} ,t\right)\,,} B = ∫ d 3 x B ( ϕ 1 , ϕ 2 , … , π 1 , π 2 , … , ∇ ϕ 1 , ∇ ϕ 2 , … , ∇ π 1 , ∇ π 2 , … , x , t ) , {\displaystyle B=\int d^{3}x{\mathcal {B}}\left(\phi _{1},\phi _{2},\ldots ,\pi _{1},\pi _{2},\ldots ,\nabla \phi _{1},\nabla \phi _{2},\ldots ,\nabla \pi _{1},\nabla \pi _{2},\ldots ,\mathbf {x} ,t\right)\,,} and the fields are zero on the boundary of the volume the integrals are taken over, the field theoretic Poisson bracket is defined as (not to be confused with the anticommutator from quantum mechanics). { A , B } ϕ , π = ∫ d 3 x ∑ i ( δ A δ ϕ i δ B δ π i − δ B δ ϕ i δ A δ π i ) , {\displaystyle \{A,B\}_{\phi ,\pi }=\int d^{3}x\sum _{i}\left({\frac {\delta {\mathcal {A}}}{\delta \phi _{i}}}{\frac {\delta {\mathcal {B}}}{\delta \pi _{i}}}-{\frac {\delta {\mathcal {B}}}{\delta \phi _{i}}}{\frac {\delta {\mathcal {A}}}{\delta \pi _{i}}}\right)\,,} where δ F / δ f {\displaystyle \delta {\mathcal {F}}/\delta f} is the variational derivative δ F δ f = ∂ F ∂ f − ∑ i ∇ i ∂ F ∂ ( ∇ i f ) . {\displaystyle {\frac {\delta {\mathcal {F}}}{\delta f}}={\frac {\partial {\mathcal {F}}}{\partial f}}-\sum _{i}\nabla _{i}{\frac {\partial {\mathcal {F}}}{\partial (\nabla _{i}f)}}\,.} Under the same conditions of vanishing fields on the surface, the following result holds for the time evolution of A (similarly for B): d A d t = { A , H } + ∂ A ∂ t {\displaystyle {\frac {dA}{dt}}=\{A,H\}+{\frac {\partial A}{\partial t}}} which can be found from the total time derivative of A, integration by parts, and using the above Poisson bracket. == Explicit time-independence == The following results are true if the Lagrangian and Hamiltonian densities are explicitly time-independent (they can still have implicit time-dependence via the fields and their derivatives), === Kinetic and potential energy densities === The Hamiltonian density is the total energy density, the sum of the kinetic energy density ( T {\displaystyle {\mathcal {T}}} ) and the potential energy density ( V {\displaystyle {\mathcal {V}}} ), H = T + V . 
{\displaystyle {\mathcal {H}}={\mathcal {T}}+{\mathcal {V}}\,.} === Continuity equation === Taking the partial time derivative of the definition of the Hamiltonian density above, and using the chain rule for implicit differentiation and the definition of the conjugate momentum field, gives the continuity equation: ∂ H ∂ t + ∇ ⋅ S = 0 {\displaystyle {\frac {\partial {\mathcal {H}}}{\partial t}}+\nabla \cdot \mathbf {S} =0} in which the Hamiltonian density can be interpreted as the energy density, and S = ∂ L ∂ ( ∇ ϕ ) ∂ ϕ ∂ t {\displaystyle \mathbf {S} ={\frac {\partial {\mathcal {L}}}{\partial (\nabla \phi )}}{\frac {\partial \phi }{\partial t}}} the energy flux, or flow of energy per unit time per unit surface area. == Relativistic field theory == Covariant Hamiltonian field theory is the relativistic formulation of Hamiltonian field theory. Hamiltonian field theory usually means the symplectic Hamiltonian formalism when applied to classical field theory, that takes the form of the instantaneous Hamiltonian formalism on an infinite-dimensional phase space, and where canonical coordinates are field functions at some instant of time. This Hamiltonian formalism is applied to quantization of fields, e.g., in quantum gauge theory. In Covariant Hamiltonian field theory, canonical momenta pμi corresponds to derivatives of fields with respect to all world coordinates xμ. Covariant Hamilton equations are equivalent to the Euler–Lagrange equations in the case of hyperregular Lagrangians. Covariant Hamiltonian field theory is developed in the Hamilton–De Donder, polysymplectic, multisymplectic and k-symplectic variants. A phase space of covariant Hamiltonian field theory is a finite-dimensional polysymplectic or multisymplectic manifold. Hamiltonian non-autonomous mechanics is formulated as covariant Hamiltonian field theory on fiber bundles over the time axis, i.e. the real line R {\displaystyle \mathbb {R} } . == See also == Analytical mechanics De Donder–Weyl theory Four-vector Canonical quantization Hamiltonian fluid mechanics Covariant classical field theory Polysymplectic manifold Non-autonomous mechanics == Notes == == Citations == == References == Badin, G.; Crisciani, F. (2018). Variational Formulation of Fluid and Geophysical Fluid Dynamics - Mechanics, Symmetries and Conservation Laws -. Springer. p. 218. Bibcode:2018vffg.book.....B. doi:10.1007/978-3-319-59695-2. ISBN 978-3-319-59694-5. S2CID 125902566. Goldstein, Herbert (1980). "Chapter 12: Continuous Systems and Fields". Classical Mechanics (2nd ed.). San Francisco, CA: Addison Wesley. pp. 562–565. ISBN 0201029189. Greiner, W.; Reinhardt, J. (1996), Field Quantization, Springer, ISBN 3-540-59179-6 Fetter, A. L.; Walecka, J. D. (1980). Theoretical Mechanics of Particles and Continua. Dover. pp. 258–259. ISBN 978-0-486-43261-8.
Wikipedia/Covariant_Hamiltonian_field_theory
In mathematical physics, covariant classical field theory represents classical fields by sections of fiber bundles, and their dynamics is phrased in the context of a finite-dimensional space of fields. Nowadays, it is well known that jet bundles and the variational bicomplex are the correct domain for such a description. The Hamiltonian variant of covariant classical field theory is the covariant Hamiltonian field theory where momenta correspond to derivatives of field variables with respect to all world coordinates. Non-autonomous mechanics is formulated as covariant classical field theory on fiber bundles over the time axis R {\displaystyle \mathbb {R} } . == Examples == Many important examples of classical field theories which are of interest in quantum field theory are given below. In particular, these are the theories which make up the Standard model of particle physics. These examples will be used in the discussion of the general mathematical formulation of classical field theory. === Uncoupled theories === Scalar field theory Klein−Gordon theory Spinor theories Dirac theory Weyl theory Majorana theory Gauge theories Maxwell theory Yang–Mills theory. This is the only theory in the uncoupled theory list which contains interactions: Yang–Mills contains self-interactions. === Coupled theories === Yukawa coupling: coupling of scalar and spinor fields. Scalar electrodynamics/chromodynamics: coupling of scalar and gauge fields. Quantum electrodynamics/chromodynamics: coupling of spinor and gauge fields. Despite these being named quantum theories, the Lagrangians can be considered as those of a classical field theory. == Requisite mathematical structures == In order to formulate a classical field theory, the following structures are needed: === Spacetime === A smooth manifold M {\displaystyle M} . This is variously known as the world manifold (for emphasizing the manifold without additional structures such as a metric), spacetime (when equipped with a Lorentzian metric), or the base manifold for a more geometrical viewpoint. ==== Structures on spacetime ==== The spacetime often comes with additional structure. Examples are Metric: a (pseudo-)Riemannian metric g {\displaystyle \mathbf {g} } on M {\displaystyle M} . Metric up to conformal equivalence as well as the required structure of an orientation, needed for a notion of integration over all of the manifold M {\displaystyle M} . ==== Symmetries of spacetime ==== The spacetime M {\displaystyle M} may admit symmetries. For example, if it is equipped with a metric g {\displaystyle \mathbf {g} } , these are the isometries of M {\displaystyle M} , generated by the Killing vector fields. The symmetries form a group Aut ( M ) {\displaystyle {\text{Aut}}(M)} , the automorphisms of spacetime. In this case the fields of the theory should transform in a representation of Aut ( M ) {\displaystyle {\text{Aut}}(M)} . For example, for Minkowski space, the symmetries are the Poincaré group Iso ( 1 , 3 ) {\displaystyle {\text{Iso}}(1,3)} . === Gauge, principal bundles and connections === A Lie group G {\displaystyle G} describing the (continuous) symmetries of internal degrees of freedom. This is referred to as the gauge group. The corresponding Lie algebra through the Lie group–Lie algebra correspondence is denoted g {\displaystyle {\mathfrak {g}}} . A principal G {\displaystyle G} -bundle P {\displaystyle P} , otherwise known as a G {\displaystyle G} -torsor. 
This is sometimes written as P → π M {\displaystyle P\xrightarrow {\pi } M} where π {\displaystyle \pi } is the canonical projection map on P {\displaystyle P} and M {\displaystyle M} is the base manifold. ==== Connections and gauge fields ==== Here we take the view of the connection as a principal connection. In field theory this connection is also viewed as a covariant derivative ∇ {\displaystyle \nabla } whose action on various fields is defined later. A principal connection denoted A {\displaystyle {\mathcal {A}}} is a g {\displaystyle {\mathfrak {g}}} -valued 1-form on P satisfying technical conditions of 'projection' and 'right-equivariance': details found in the principal connection article. Under a trivialization this can be written as a local gauge field A μ ( x ) {\displaystyle A_{\mu }(x)} , a g {\displaystyle {\mathfrak {g}}} -valued 1-form on a trivialization patch U ⊂ M {\displaystyle U\subset M} . It is this local form of the connection which is identified with gauge fields in physics. When the base manifold M {\displaystyle M} is flat, there are simplifications which remove this subtlety. === Associated vector bundles and matter content === An associated vector bundle E → π M {\displaystyle E\xrightarrow {\pi } M} associated to the principal bundle P {\displaystyle P} through a representation ρ . {\displaystyle \rho .} For completeness, given a representation ( V , G , ρ ) {\displaystyle (V,G,\rho )} , the fiber of E {\displaystyle E} is V {\displaystyle V} . A field or matter field is a section of an associated vector bundle. The collection of these, together with gauge fields, is the matter content of the theory. === Lagrangian === A Lagrangian L {\displaystyle L} : given a fiber bundle E ′ → π M {\displaystyle E'\xrightarrow {\pi } M} , the Lagrangian is a function L : E ′ → R {\displaystyle L:E'\rightarrow \mathbb {R} } . Suppose that the matter content is given by sections of E {\displaystyle E} with fibre V {\displaystyle V} from above. Then for example, more concretely we may consider E ′ {\displaystyle E'} to be a bundle where the fibre at p {\displaystyle p} is V ⊗ T p ∗ M {\displaystyle V\otimes T_{p}^{*}M} . This then allows L {\displaystyle L} to be viewed as a functional of a field. This completes the mathematical prerequisites for a large number of interesting theories, including those given in the examples section above. == Theories on flat spacetime == When the base manifold M {\displaystyle M} is flat, that is, (Pseudo-)Euclidean space, there are many useful simplifications that make theories less conceptually difficult to deal with. The simplifications come from the observation that flat spacetime is contractible: it is then a theorem in algebraic topology that any fibre bundle over flat M {\displaystyle M} is trivial. In particular, this allows us to pick a global trivialization of P {\displaystyle P} , and therefore identify the connection globally as a gauge field A μ . {\displaystyle A_{\mu }.} Furthermore, there is a trivial connection A 0 , μ {\displaystyle A_{0,\mu }} which allows us to identify associated vector bundles as E = M × V {\displaystyle E=M\times V} , and then we need not view fields as sections but simply as functions M → V {\displaystyle M\rightarrow V} . In other words, vector bundles at different points are comparable. In addition, for flat spacetime the Levi-Civita connection is the trivial connection on the frame bundle. 
Then the spacetime covariant derivative on tensor or spin-tensor fields is simply the partial derivative in flat coordinates. However the gauge covariant derivative may require a non-trivial connection A μ {\displaystyle A_{\mu }} which is considered to be the gauge field of the theory. === Accuracy as a physical model === In weak gravitational curvature, flat spacetime often serves as a good approximation to weakly curved spacetime. For experiment, this approximation is good. The Standard Model is defined on flat spacetime, and has produced the most accurate precision tests of physics to date. == See also == Classical field theory Exterior algebra Lagrangian system Variational bicomplex Quantum field theory Non-autonomous mechanics Higgs field (classical) == References == Saunders, D.J., "The Geometry of Jet Bundles", Cambridge University Press, 1989, ISBN 0-521-36948-7 Bocharov, A.V. [et al.] "Symmetries and conservation laws for differential equations of mathematical physics", Amer. Math. Soc., Providence, RI, 1999, ISBN 0-8218-0958-X De Leon, M., Rodrigues, P.R., "Generalized Classical Mechanics and Field Theory", Elsevier Science Publishing, 1985, ISBN 0-444-87753-3 Griffiths, P.A., "Exterior Differential Systems and the Calculus of Variations", Boston: Birkhäuser, 1983, ISBN 3-7643-3103-8 Gotay, M.J., Isenberg, J., Marsden, J.E., Montgomery R., Momentum Maps and Classical Fields Part I: Covariant Field Theory, November 2003 arXiv:physics/9801019 Echeverria-Enriquez, A., Munoz-Lecanda, M.C., Roman-Roy, M., Geometry of Lagrangian First-order Classical Field Theories, May 1995 arXiv:dg-ga/9505004 Giachetta, G., Mangiarotti, L., Sardanashvily, G., "Advanced Classical Field Theory", World Scientific, 2009, ISBN 978-981-283-895-7 (arXiv:0811.0331 )
Wikipedia/Covariant_classical_field_theory
In physics, a unified field theory (UFT) is a type of field theory that allows all fundamental forces of nature, including gravity, and all elementary particles to be written in terms of a single physical field. According to quantum field theory, particles are themselves the quanta of fields. Different fields in physics include vector fields such as the electromagnetic field, spinor fields whose quanta are fermionic particles such as electrons, and tensor fields such as the metric tensor field that describes the shape of spacetime and gives rise to gravitation in general relativity. Unified field theories attempt to organize these fields into a single mathematical structure. For over a century, unified field theory has remained an open line of research. The term was coined by Albert Einstein, who attempted to unify his general theory of relativity with electromagnetism in a classical unified field theory. Among other difficulties, this required a new explanation of particles as singularities or solitons instead of field quanta. Later attempts to unify general relativity with other forces incorporate quantum mechanics. The concepts of a "theory of everything" and of a Grand Unified Theory are closely related to unified field theory. A theory of everything attempts to create a complete picture of all events in nature. Grand Unified Theories do not attempt to include the gravitational force and can therefore operate entirely within quantum field theory. The pursuit of a unified field theory has led to significant progress in theoretical physics. == Introduction == Unified field theory attempts to give a single elegant description of the following fields: === Forces === All four of the known fundamental forces are mediated by fields. In the Standard Model of particle physics, three of these result from the exchange of gauge bosons. These are: Strong interaction: the interaction responsible for holding quarks together to form hadrons, and holding protons and neutrons together to form atomic nuclei. The exchange particle that mediates this force is the gluon. Electromagnetic interaction: the familiar interaction that acts on electrically charged particles. The photon is the exchange particle for this force. Weak interaction: a short-range interaction responsible for some forms of radioactivity, that acts on electrons, neutrinos, and quarks. It is mediated by the W and Z bosons. General relativity likewise describes gravitation as the result of the metric tensor field, which describes the shape of spacetime: Gravitational interaction: a long-range attractive interaction that acts on all particles. In hypothetical quantum versions of general relativity, the postulated exchange particle has been named the graviton. === Matter === In the Standard Model, the "matter" particles (electrons, quarks, neutrinos, etc.) are described as the quanta of spinor fields. Gauge boson fields also have quanta, such as photons for the electromagnetic field. === Higgs === The Standard Model has a unique fundamental scalar field, the Higgs field, the quanta of which are called Higgs bosons. == History == === Classic theory === The first successful classical unified field theory was developed by James Clerk Maxwell. In 1820, Hans Christian Ørsted discovered that electric currents exerted forces on magnets, while in 1831, Michael Faraday made the observation that time-varying magnetic fields could induce electric currents.
Until then, electricity and magnetism had been thought of as unrelated phenomena. In 1864, Maxwell published his famous paper on a dynamical theory of the electromagnetic field. This was the first example of a theory that was able to encompass previously separate field theories (namely electricity and magnetism) to provide a unifying theory of electromagnetism. By 1905, Albert Einstein had used the constancy of the speed of light in Maxwell's theory to unify our notions of space and time into an entity we now call spacetime. In 1915, he expanded this theory of special relativity to a description of gravity, general relativity, using a field to describe the curving geometry of four-dimensional (4D) spacetime. In the years following the creation of the general theory, a large number of physicists and mathematicians enthusiastically participated in the attempt to unify the then-known fundamental interactions. Given later developments in this domain, of particular interest are the theories of Hermann Weyl of 1919, who introduced the concept of an (electromagnetic) gauge field in a classical field theory and, two years later, that of Theodor Kaluza, who extended general relativity to five dimensions. Continuing in this latter direction, Oscar Klein proposed in 1926 that the fourth spatial dimension be curled up into a small, unobserved circle. In Kaluza–Klein theory, the gravitational curvature of the extra spatial direction behaves as an additional force similar to electromagnetism. These and other models of electromagnetism and gravity were pursued by Albert Einstein in his attempts at a classical unified field theory. By 1930 Einstein had already considered the Einstein–Maxwell–Dirac system [Dongen]. This system is (heuristically) the super-classical [Varadarajan] limit of (the not mathematically well-defined) quantum electrodynamics. One can extend this system to include the weak and strong nuclear forces to get the Einstein–Yang–Mills–Dirac system. The French physicist Marie-Antoinette Tonnelat published a paper in the early 1940s on the standard commutation relations for the quantized spin-2 field. She continued this work in collaboration with Erwin Schrödinger after World War II. In the 1960s Mendel Sachs proposed a generally covariant field theory that did not require recourse to renormalization or perturbation theory. In 1965, Tonnelat published a book on the state of research on unified field theories. === Modern progress === In 1963, the American physicist Sheldon Glashow proposed that the weak nuclear force, electricity, and magnetism could arise from a partially unified electroweak theory. In 1967, the Pakistani physicist Abdus Salam and the American physicist Steven Weinberg independently revised Glashow's theory by having the masses for the W particle and Z particle arise through spontaneous symmetry breaking with the Higgs mechanism. This unified theory modelled the electroweak interaction as a force mediated by four particles: the photon for the electromagnetic aspect, a neutral Z particle, and two charged W particles for the weak aspect. As a result of the spontaneous symmetry breaking, the weak force becomes short-range and the W and Z bosons acquire masses of 80.4 and 91.2 GeV/c², respectively. Their theory was first given experimental support by the discovery of weak neutral currents in 1973. In 1983, the Z and W bosons were first produced at CERN by Carlo Rubbia's team. For their insights, Glashow, Salam, and Weinberg were awarded the Nobel Prize in Physics in 1979.
Carlo Rubbia and Simon van der Meer received the Prize in 1984. After Gerardus 't Hooft showed the Glashow–Weinberg–Salam electroweak interactions to be mathematically consistent, the electroweak theory became a template for further attempts at unifying forces. In 1974, Sheldon Glashow and Howard Georgi proposed unifying the strong and electroweak interactions into the Georgi–Glashow model, the first Grand Unified Theory, which would have observable effects for energies much above 100 GeV. Since then there have been several proposals for Grand Unified Theories, e.g. the Pati–Salam model, although none is currently universally accepted. A major problem for experimental tests of such theories is the energy scale involved, which is well beyond the reach of current accelerators. Grand Unified Theories make predictions for the relative strengths of the strong, weak, and electromagnetic forces, and in 1991 LEP determined that supersymmetric theories have the correct ratio of couplings for a Georgi–Glashow Grand Unified Theory. Many Grand Unified Theories (but not Pati–Salam) predict that the proton can decay, and if this were to be seen, details of the decay products could give hints at more aspects of the Grand Unified Theory. It is at present unknown if the proton can decay, although experiments have determined a lower bound of 10^35 years for its lifetime. === Current status === Theoretical physicists have not yet formulated a widely accepted, consistent theory that combines general relativity and quantum mechanics to form a theory of everything. Trying to combine the graviton with the strong and electroweak interactions leads to fundamental difficulties and the resulting theory is not renormalizable. The incompatibility of the two theories remains an outstanding problem in the field of physics. == See also == Sheldon Glashow Unification (physics) == References == == Further reading == Jeroen van Dongen Einstein's Unification, Cambridge University Press (July 26, 2010) Varadarajan, V.S. Supersymmetry for Mathematicians: An Introduction (Courant Lecture Notes), American Mathematical Society (July 2004) == External links == On the History of Unified Field Theories, by Hubert F. M. Goenner
Wikipedia/Unified_Field_Theory
In theoretical physics, thermal quantum field theory (thermal field theory for short) or finite temperature field theory is a set of methods to calculate expectation values of physical observables of a quantum field theory at finite temperature. In the Matsubara formalism, the basic idea (due to Felix Bloch) is that the expectation values of operators in a canonical ensemble ⟨ A ⟩ = Tr [ exp ⁡ ( − β H ) A ] Tr [ exp ⁡ ( − β H ) ] {\displaystyle \langle A\rangle ={\frac {{\mbox{Tr}}\,[\exp(-\beta H)A]}{{\mbox{Tr}}\,[\exp(-\beta H)]}}} may be written as expectation values in ordinary quantum field theory where the configuration is evolved by an imaginary time τ = i t ( 0 ≤ τ ≤ β ) {\displaystyle \tau =it(0\leq \tau \leq \beta )} . One can therefore switch to a spacetime with Euclidean signature, where the above trace (Tr) leads to the requirement that all bosonic and fermionic fields be periodic and antiperiodic, respectively, with respect to the Euclidean time direction with periodicity β = 1 / ( k T ) {\displaystyle \beta =1/(kT)} (we are assuming natural units ℏ = 1 {\displaystyle \hbar =1} ). This allows one to perform calculations with the same tools as in ordinary quantum field theory, such as functional integrals and Feynman diagrams, but with compact Euclidean time. Note that the definition of normal ordering has to be altered. In momentum space, this leads to the replacement of continuous frequencies by discrete imaginary (Matsubara) frequencies v n = n / β {\displaystyle v_{n}=n/\beta } and, through the de Broglie relation, to a discretized thermal energy spectrum E n = 2 n π k T {\displaystyle E_{n}=2n\pi kT} . This has been shown to be a useful tool in studying the behavior of quantum field theories at finite temperature. It has been generalized to theories with gauge invariance and was a central tool in the study of a conjectured deconfining phase transition of Yang–Mills theory. In this Euclidean field theory, real-time observables can be retrieved by analytic continuation. The Feynman rules for gauge theories in the Euclidean time formalism, were derived by C. W. Bernard. The Matsubara formalism, also referred to as imaginary time formalism, can be extended to systems with thermal variations. In this approach, the variation in the temperature is recast as a variation in the Euclidean metric. Analysis of the partition function leads to an equivalence between thermal variations and the curvature of the Euclidean space. The alternative to the use of fictitious imaginary times is to use a real-time formalism which come in two forms. A path-ordered approach to real-time formalisms includes the Schwinger–Keldysh formalism and more modern variants. The latter involves replacing a straight time contour from (large negative) real initial time t i {\displaystyle t_{i}} to t i − i β {\displaystyle t_{i}-i\beta } by one that first runs to (large positive) real time t f {\displaystyle t_{f}} and then suitably back to t i − i β {\displaystyle t_{i}-i\beta } . In fact all that is needed is one section running along the real time axis, as the route to the end point, t i − i β {\displaystyle t_{i}-i\beta } , is less important. The piecewise composition of the resulting complex time contour leads to a doubling of fields and more complicated Feynman rules, but obviates the need of analytic continuations of the imaginary-time formalism. The alternative approach to real-time formalisms is an operator based approach using Bogoliubov transformations, known as thermo field dynamics. 
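Returning to the imaginary-time formalism, a standard consequence of the discrete Matsubara frequencies is that loop integrals over frequency become sums. The sketch below is a minimal numerical check, not drawn from any particular source: it assumes units with ℏ = k_B = 1, the bosonic frequencies ω_n = 2πnT, and the textbook closed form (1/2E) coth(E/2T) for the summed free bosonic propagator.

import numpy as np

def bosonic_matsubara_sum(E, T, n_max=200000):
    # Truncated sum  T * sum_n 1/(omega_n**2 + E**2)  over omega_n = 2*pi*n*T, n = -n_max..n_max.
    n = np.arange(-n_max, n_max + 1)
    omega_n = 2.0 * np.pi * n * T
    return T * np.sum(1.0 / (omega_n ** 2 + E ** 2))

E, T = 1.0, 0.5
numeric = bosonic_matsubara_sum(E, T)
analytic = 1.0 / (2.0 * E * np.tanh(E / (2.0 * T)))  # = coth(E/(2T)) / (2E)
print(numeric, analytic)  # the two agree up to the truncation error of the sum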
As well as Feynman diagrams and perturbation theory, other techniques such as dispersion relations and the finite temperature analog of Cutkosky rules can also be used in the real time formulation. An alternative approach which is of interest to mathematical physics is to work with KMS states. == See also == Matsubara frequency Polyakov loop Quantum thermodynamics Quantum statistical mechanics == References ==
Wikipedia/Thermal_field_theory
The Web of Science (WoS; previously known as Web of Knowledge) is a paid-access platform that provides (typically via the internet) access to multiple databases that provide reference and citation data from academic journals, conference proceedings, and other documents in various academic disciplines. Until 1997, it was originally produced by the Institute for Scientific Information. It is currently owned by Clarivate. Web of Science currently contains 79 million records in the core collection and 171 million records on the platform. == History == A citation index is built on the fact that citations in science serve as linkages between similar research items, and lead to matching or related scientific literature, such as journal articles, conference proceedings, abstracts, etc. In addition, literature that shows the greatest impact in a particular field, or more than one discipline, can be located through a citation index. For example, a paper's influence can be determined by linking to all the papers that have cited it. In this way, current trends, patterns, and emerging fields of research can be assessed. Eugene Garfield, the "father of citation indexing of academic literature", who launched the Science Citation Index, which in turn led to the Web of Science, wrote: Citations are the formal, explicit linkages between papers that have particular points in common. A citation index is built around these linkages. It lists publications that have been cited and identifies the sources of the citations. Anyone conducting a literature search can find from one to dozens of additional papers on a subject just by knowing one that has been cited. And every paper that is found provides a list of new citations with which to continue the search. The simplicity of citation indexing is one of its main strengths. === Search answer === Web of Science "is a unifying research tool which enables the user to acquire, analyze, and disseminate database information in a timely manner". This is accomplished because of the creation of a common vocabulary, called ontology, for varied search terms and varied data. Moreover, search terms generate related information across categories. Acceptable content for Web of Science is determined by an evaluation and selection process based on the following criteria: impact, influence, timeliness, peer review, and geographic representation. Web of Science employs various search and analysis capabilities. First, citation indexing is employed, which is enhanced by the capability to search for results across disciplines. The influence, impact, history, and methodology of an idea can be followed from its first instance, notice, or referral to the present day. This technology points to a deficiency with the keyword-only method of searching. Second, subtle trends and patterns relevant to the literature or research of interest, become apparent. Broad trends indicate significant topics of the day, as well as the history relevant to both the work at hand, and particular areas of study. Third, trends can be graphically represented. == Coverage == Expanding the coverage of Web of Science, in November 2009 Thomson Reuters introduced Century of Social Sciences. This service contains files which trace social science research back to the beginning of the 20th century, and Web of Science now has indexing coverage from the year 1900 to the present. 
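In data terms, the citation linkage described above is simply an inverted index: a mapping from each cited publication to the publications that cite it. The toy sketch below uses made-up paper identifiers and is only an illustration of the idea, not of how Web of Science is actually implemented.

from collections import defaultdict

# Hypothetical citation links as (citing paper, cited paper) pairs
links = [("C", "A"), ("C", "B"), ("D", "A"), ("D", "C"), ("E", "A")]

cited_by = defaultdict(list)   # the citation index: cited paper -> papers that cite it
for citing, cited in links:
    cited_by[cited].append(citing)

for paper in sorted(cited_by):
    print(paper, "cited by", cited_by[paper], "-", len(cited_by[paper]), "citations")
# Knowing one relevant paper (say "A") immediately yields "C", "D" and "E",
# whose own reference lists provide new starting points for the search.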
As of February 2017, the multidisciplinary coverage of the Web of Science encompasses: over a billion cited references, 90 million records, covering over 12 thousand high impact journals, and 8.2 million records across 160 thousand conference proceedings, with 15 thousand proceedings added each year. The selection is made on the basis of impact evaluations and comprise academic journals, spanning multiple academic disciplines. The coverage includes: the sciences, social sciences, the arts, and humanities, and goes across disciplines. However, Web of Science does not index all journals. There is a significant and positive correlation between the impact factor and CiteScore. However, an analysis by Elsevier, who created the journal evaluation metric CiteScore, has identified 216 journals from 70 publishers to be in the top 10 percent of the most-cited journals in their subject category based on the CiteScore while they did not have an impact factor. It appears that the impact factor does not provide comprehensive and unbiased coverage of high-quality journals. Similar results can be observed by comparing the impact factor with the SCImago Journal Rank. Furthermore, as of September 2014, the total file count of the Web of Science was over 90 million records, which included over 800 million cited references, covering 5.3 thousand social science publications in 55 disciplines. Titles of foreign-language publications are translated into English and so cannot be found by searches in the original language. In 2018, the Web of Science started embedding partial information about the open access status of works, using Unpaywall data. While marketed as a global point of reference, Scopus and WoS have been characterised as «structurally biased against research produced in non-Western countries, non-English language research, and research from the arts, humanities, and social sciences». After the 2022 Russian invasion of Ukraine, on March 11, 2022, Clarivate – which owns Web of Science – announced that it would cease all commercial activity in Russia and immediately close an office there. == Citation databases == The Web of Science Core Collection consists of six online indexing databases: Science Citation Index Expanded (SCIE), previously entitled Science Citation Index, covers more than 9,200 journals across 178 scientific disciplines. Coverage is from 1900 to present day, with over 53 million records Social Sciences Citation Index (SSCI) covers more than 3,400 journals in the social sciences. Coverage is from 1900 to present, with over 9.3 million records Arts & Humanities Citation Index (AHCI) covers more than 1,800 journals in the arts and humanities. Coverage is from 1975 to present, with over 4.9 million records Emerging Sources Citation Index (ESCI) covers more than 7,800 journals in all disciplines. Coverage is from 2005 to present, with over 3 million records Book Citation Index (BCI) covers more than 116,000 editorially selected books. Coverage is from 2005 to present, with over 53.2 million records Conference Proceedings Citation Index (CPCI) covers more than 205,000 conference proceedings. 
Coverage is from 1990 to present, with over 70.1 million records === Regional databases === Since 2008, the Web of Science hosts a number of regional citation indices: Chinese Science Citation Database, produced in partnership with the Chinese Academy of Sciences, was the first indexing database in a language other than English SciELO Citation Index, established in 2013, covering Brazil, Spain, Portugal, the Caribbean and South Africa, and an additional 12 countries of Latin America Korea Citation Index in 2014, with updates from the National Research Foundation of Korea Russian Science Citation Index in 2015 Arabic Regional Citation Index in 2020 === Contents === The seven citation indices listed above contain references which have been cited by other articles. One may use them to undertake cited reference search, that is, locating articles that cite an earlier, or current publication. One may search citation databases by topic, by author, by source title, and by location. Two chemistry databases, Index Chemicus and Current Chemical Reactions allow for the creation of structure drawings, thus enabling users to locate chemical compounds and reactions. === Abstracting and indexing === The following types of literature are indexed: scholarly books, peer reviewed journals, original research articles, reviews, editorials, chronologies, abstracts, as well as other items. Disciplines included in this index are agriculture, biological sciences, engineering, medical and life sciences, physical and chemical sciences, anthropology, law, library sciences, architecture, dance, music, film, and theater. Seven citation databases encompasses coverage of the above disciplines. === Other databases and products === Among other WoS databases are BIOSIS and The Zoological Record, an electronic index of zoological literature that also serves as the unofficial register of scientific names in zoology. Clarivate owns and markets numerous other products that provide data and analytics, workflow tools, and professional services to researchers, universities, research institutions, and other organizations, such as: InCites Journal Citation Reports Essential Science Indicators ScholarOne Converis == Limitations in the use of citation analysis == As with other scientific approaches, scientometrics and bibliometrics have their own limitations. In 2010, a criticism was voiced pointing toward certain deficiencies of the journal impact factor calculation process, based on Thomson Reuters Web of Science, such as: journal citation distributions usually are highly skewed towards established journals; journal impact factor properties are field-specific and can be easily manipulated by editors, or even by changing the editorial policies; this makes the entire process essentially non-transparent. Regarding the more objective journal metrics, there is a growing view that for greater accuracy it must be supplemented with article-level metrics and peer-review. Studies of methodological quality and reliability have found that "reliability of published research works in several fields may be decreasing with increasing journal rank". Thomson Reuters replied to criticism in general terms by stating that "no one metric can fully capture the complex contributions scholars make to their disciplines, and many forms of scholarly achievement should be considered." == Journal Citation Reports == == See also == == References == == External links == Official website == Further reading == Cantú-Ortiz, Francisco Javier, ed. (2017-10-25). "2. 
Web of Science: The First Citation Index for Data Analytics and Scientometrics". Research Analytics: Boosting University Productivity and Competitiveness through Scientometrics (1st ed.). New York City: CRC Press. pp. 15–30. doi:10.1201/9781315155890. ISBN 978-1-315-15589-0.
Wikipedia/Web_of_Science
The Science Citation Index Expanded (SCIE) is a citation index owned by Clarivate and previously by Thomson Reuters. It was created by Eugene Garfield at the Institute for Scientific Information and launched in 1964 as the Science Citation Index (SCI). It was later distributed via CD/DVD and became available online in 1997, when it acquired its current name. The indexing database covers more than 9,200 notable and significant journals across 178 disciplines, from 1900 to the present. These are alternatively described as the world's leading journals of science and technology because of a rigorous selection process. == Accessibility == The index is available online within Web of Science, as part of its Core Collection (there are also CD and printed editions, covering a smaller number of journals). The database allows researchers to search through over 53 million records from thousands of academic journals published around the world. == Specialty citation indexes == Clarivate previously marketed several subsets of this database, termed "Specialty Citation Indexes", such as the Neuroscience Citation Index and the Chemistry Citation Index; however, these databases are no longer actively maintained. The Chemistry Citation Index was first introduced by Eugene Garfield, a chemist by training. His original "search examples were based on [his] experience as a chemist". In 1992, an electronic and print form of the index was derived from a core of 330 chemistry journals, within which all areas were covered. Additional information was provided from articles selected from 4,000 other journals. All chemistry subdisciplines were covered: organic, inorganic, analytical, physical chemistry, polymer, computational, organometallic, materials chemistry, and electrochemistry. By 2002, the core journal coverage increased to 500 and related article coverage increased to 8,000 other journals. One 1980 study reported the overall citation indexing benefits for chemistry, examining the use of citations as a tool for the study of the sociology of chemistry and illustrating the use of citation data to "observe" chemistry subfields over time. == See also == Arts and Humanities Citation Index, which covers 1,130 journals, beginning with 1975. Emerging Sources Citation Index (ESCI) Google Scholar Impact factor List of academic databases and search engines Journal Citation Reports Social Sciences Citation Index, which covers 1,700 journals, beginning with 1956. == References == == Further reading == Borgman, Christine L.; Furner, Jonathan (2005). "Scholarly Communication and Bibliometrics" (PDF). Annual Review of Information Science and Technology. 36 (1): 3–72. CiteSeerX 10.1.1.210.6040. doi:10.1002/aris.1440360102. Meho, Lokman I.; Yang, Kiduk (2007). "Impact of data sources on citation counts and rankings of LIS faculty: Web of science versus scopus and google scholar" (PDF). Journal of the American Society for Information Science and Technology. 58 (13): 2105. doi:10.1002/asi.20677. Garfield, E.; Sher, I. H. (1963). "New factors in the evaluation of scientific literature through citation indexing" (PDF). American Documentation. 14 (3): 195. doi:10.1002/asi.5090140304. Garfield, E. (1970). "Citation Indexing for Studying Science" (PDF). Nature. 227 (5259): 669–71. Bibcode:1970Natur.227..669G. doi:10.1038/227669a0. PMID 4914589. S2CID 4200369. Garfield, E. (1979). Citation Indexing: Its Theory and Application in Science, Technology, and Humanities. Information Sciences Series. 
New York: Wiley-Interscience. ISBN 978-0-89495-024-7. == External links == Introduction to SCIE Master journal list Chemical Information Sources/ Author and Citation Searches. on WikiBooks. Cited Reference Searching: An Introduction. Thomson Reuters. Chemistry Citation Index. Chinweb.
Wikipedia/Science_Citation_Index_Expanded
Following is a list of the frequently occurring equations in the theory of special relativity. == Postulates of Special Relativity == To derive the equations of special relativity, one must start with two postulates: The laws of physics are invariant under transformations between inertial frames. In other words, the laws of physics will be the same whether you are testing them in a frame 'at rest', or a frame moving with a constant velocity relative to the 'rest' frame. The speed of light in a perfect classical vacuum ( c 0 {\displaystyle c_{0}} ) is measured to be the same by all observers in inertial frames and is, moreover, finite but nonzero. This speed acts as a supremum for the speed of local transmission of information in the universe. In this context, "speed of light" really refers to the speed supremum of information transmission or of the movement of ordinary (nonnegative mass) matter, locally, as in a classical vacuum. Thus, a more accurate description would refer to c 0 {\displaystyle c_{0}} rather than the speed of light per se. However, light and other massless particles do theoretically travel at c 0 {\displaystyle c_{0}} under vacuum conditions and experiment has nonfalsified this notion with fairly high precision. Regardless of whether light itself does travel at c 0 {\displaystyle c_{0}} , though, c 0 {\displaystyle c_{0}} does act as such a supremum, and that is the assumption which matters for relativity. From these two postulates, all of special relativity follows. In the following, the relative velocity v between two inertial frames is restricted fully to the x-direction of a Cartesian coordinate system. == Kinematics == === Lorentz transformation === The following notations are used very often in special relativity: Lorentz factor γ = 1 1 − β 2 {\displaystyle \gamma ={\frac {1}{\sqrt {1-\beta ^{2}}}}} where β = v c {\displaystyle \beta ={\frac {v}{c}}} and v is the relative velocity between two inertial frames. For two frames at rest relative to each other, γ = 1, and γ increases with the relative velocity between the two inertial frames. As the relative velocity approaches the speed of light, γ → ∞. Time dilation (different times t and t' at the same position x in the same inertial frame) t ′ = γ t {\displaystyle t'=\gamma t} In this example the time measured in the frame on the vehicle, t, is known as the proper time. The proper time between two events - such as the event of light being emitted on the vehicle and the event of light being received on the vehicle - is the time between the two events in a frame where the events occur at the same location. So, above, the emission and reception of the light both took place in the vehicle's frame, making the time that an observer in the vehicle's frame would measure the proper time. Length contraction (different positions x and x' at the same instant t in the same inertial frame) ℓ ′ = ℓ γ {\displaystyle \ell '={\frac {\ell }{\gamma }}} This is the formula for length contraction. As there exists a proper time for time dilation, there exists a proper length for length contraction, which in this case is ℓ. The proper length of an object is the length of the object in the frame in which the object is at rest. Also, this contraction only affects the dimensions of the object which are parallel to the relative velocity between the object and observer. Thus, lengths perpendicular to the direction of motion are unaffected by length contraction. 
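As an illustrative numerical aside (not part of the standard presentation), the following short Python sketch evaluates the Lorentz factor and applies the time-dilation and length-contraction formulas above; the speed 0.8c and the unit values of proper time and proper length are arbitrary assumptions chosen for the example.

```python
import math

c = 299_792_458.0          # speed of light in m/s (exact SI value)

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2/c^2), valid for |v| < c."""
    beta = v / c
    return 1.0 / math.sqrt(1.0 - beta**2)

v = 0.8 * c                 # assumed relative speed between the two frames
gamma = lorentz_factor(v)   # equals 5/3 for beta = 0.8

proper_time = 1.0           # seconds, measured in the moving (vehicle) frame
proper_length = 1.0         # metres, measured in the object's rest frame

print(f"gamma            = {gamma:.6f}")
print(f"dilated time     = {gamma * proper_time:.6f} s")    # t' = gamma * t
print(f"contracted length= {proper_length / gamma:.6f} m")  # l' = l / gamma
```

For β = 0.8 the Lorentz factor is exactly 5/3, so one second of proper time appears as about 1.667 s and a one-metre proper length contracts to 0.6 m.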
Lorentz transformation x ′ = γ ( x − v t ) {\displaystyle x'=\gamma \left(x-vt\right)} y ′ = y {\displaystyle y'=y\,} z ′ = z {\displaystyle z'=z\,} t ′ = γ ( t − v x c 2 ) {\displaystyle t'=\gamma \left(t-{\frac {vx}{c^{2}}}\right)} Velocity addition V x ′ = V x − v 1 − V x v c 2 {\displaystyle V'_{x}={\frac {V_{x}-v}{1-{\frac {V_{x}v}{c^{2}}}}}} V y ′ = V y γ ( 1 − V x v c 2 ) {\displaystyle V'_{y}={\frac {V_{y}}{\gamma \left(1-{\frac {V_{x}v}{c^{2}}}\right)}}} V z ′ = V z γ ( 1 − V x v c 2 ) {\displaystyle V'_{z}={\frac {V_{z}}{\gamma \left(1-{\frac {V_{x}v}{c^{2}}}\right)}}} == The metric and four-vectors == In what follows, bold sans serif is used for 4-vectors while normal bold roman is used for ordinary 3-vectors. Inner product (i.e. notion of length) a ⋅ b = η ( a , b ) {\displaystyle {\boldsymbol {\mathsf {a}}}\cdot {\boldsymbol {\mathsf {b}}}=\eta ({\boldsymbol {\mathsf {a}}},{\boldsymbol {\mathsf {b}}})} where η {\displaystyle \eta } is known as the metric tensor. In special relativity, the metric tensor is the Minkowski metric: η = ( − 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ) {\displaystyle \eta ={\begin{pmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}}} Space-time interval d s 2 = d x 2 + d y 2 + d z 2 − c 2 d t 2 = ( c d t d x d y d z ) ( − 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ) ( c d t d x d y d z ) {\displaystyle ds^{2}=dx^{2}+dy^{2}+dz^{2}-c^{2}dt^{2}={\begin{pmatrix}cdt&dx&dy&dz\end{pmatrix}}{\begin{pmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}}{\begin{pmatrix}cdt\\dx\\dy\\dz\end{pmatrix}}} In the above, ds2 is known as the spacetime interval. This inner product is invariant under the Lorentz transformation, that is, η ( a ′ , b ′ ) = η ( Λ a , Λ b ) = η ( a , b ) {\displaystyle \eta ({\boldsymbol {\mathsf {a}}}',{\boldsymbol {\mathsf {b}}}')=\eta \left(\Lambda {\boldsymbol {\mathsf {a}}},\Lambda {\boldsymbol {\mathsf {b}}}\right)=\eta ({\boldsymbol {\mathsf {a}}},{\boldsymbol {\mathsf {b}}})} The sign of the metric and the placement of the ct, ct', cdt, and cdt′ time-based terms can vary depending on the author's choice. For instance, many times the time-based terms are placed first in the four-vectors, with the spatial terms following. Also, sometimes η is replaced with −η, making the spatial terms produce negative contributions to the dot product or spacetime interval, while the time term makes a positive contribution. These differences can be used in any combination, so long as the choice of standards is followed completely throughout the computations performed. === Lorentz transforms === It is possible to express the above coordinate transformation via a matrix. To simplify things, it can be best to replace t, t′, dt, and dt′ with ct, ct', cdt, and cdt′, which has the dimensions of distance. So: x ′ = γ x − γ β c t {\displaystyle x'=\gamma x-\gamma \beta ct\,} y ′ = y {\displaystyle y'=y\,} z ′ = z {\displaystyle z'=z\,} c t ′ = γ c t − γ β x {\displaystyle ct'=\gamma ct-\gamma \beta x\,} then in matrix form: ( c t ′ x ′ y ′ z ′ ) = ( γ − γ β 0 0 − γ β γ 0 0 0 0 1 0 0 0 0 1 ) ( c t x y z ) {\displaystyle {\begin{pmatrix}ct'\\x'\\y'\\z'\end{pmatrix}}={\begin{pmatrix}\gamma &-\gamma \beta &0&0\\-\gamma \beta &\gamma &0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}}{\begin{pmatrix}ct\\x\\y\\z\end{pmatrix}}} The vectors in the above transformation equation are known as four-vectors, in this case they are specifically the position four-vectors. 
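As a numerical cross-check of the statements above (with an arbitrary event and a boost speed of 0.6c assumed purely for illustration), the following Python sketch builds the boost matrix acting on (ct, x, y, z), transforms an event, and confirms that the spacetime interval is unchanged.

```python
import numpy as np

c = 1.0                        # work in units where c = 1 for brevity
beta = 0.6                     # assumed boost speed v/c along x
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# Boost matrix acting on (ct, x, y, z), as written in the text
L = np.array([[ gamma,      -gamma*beta, 0.0, 0.0],
              [-gamma*beta,  gamma,      0.0, 0.0],
              [ 0.0,         0.0,        1.0, 0.0],
              [ 0.0,         0.0,        0.0, 1.0]])

eta = np.diag([-1.0, 1.0, 1.0, 1.0])        # Minkowski metric, signature (-,+,+,+)

event = np.array([2.0, 1.0, 0.5, -0.3])     # arbitrary event (ct, x, y, z)
event_prime = L @ event

interval  = event @ eta @ event             # -c^2 t^2 + x^2 + y^2 + z^2
interval2 = event_prime @ eta @ event_prime

print(np.isclose(interval, interval2))      # True: the interval is invariant
```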
In general, in special relativity, four-vectors can be transformed from one reference frame to another as follows: a ′ = Λ a {\displaystyle {\boldsymbol {\mathsf {a}}}'=\Lambda {\boldsymbol {\mathsf {a}}}} In the above, a ′ {\displaystyle {\boldsymbol {\mathsf {a}}}'} and a {\displaystyle {\boldsymbol {\mathsf {a}}}} are the four-vector and the transformed four-vector, respectively, and Λ is the transformation matrix, which, for a given transformation is the same for all four-vectors one might want to transform. So a ′ {\displaystyle {\boldsymbol {\mathsf {a}}}'} can be a four-vector representing position, velocity, or momentum, and the same Λ can be used when transforming between the same two frames. The most general Lorentz transformation includes boosts and rotations; the components are complicated and the transformation requires spinors. === 4-vectors and frame-invariant results === Invariance and unification of physical quantities both arise from four-vectors. The inner product of a 4-vector with itself is equal to a scalar (by definition of the inner product), and since the 4-vectors are physical quantities their magnitudes correspond to physical quantities also. == Doppler shift == General doppler shift: ν ′ = γ ν ( 1 − β cos ⁡ θ ) {\displaystyle \nu '=\gamma \nu \left(1-\beta \cos \theta \right)} Doppler shift for emitter and observer moving right towards each other (or directly away): ν ′ = ν 1 − β 1 + β {\displaystyle \nu '=\nu {\frac {\sqrt {1-\beta }}{\sqrt {1+\beta }}}} Doppler shift for emitter and observer moving in a direction perpendicular to the line connecting them: ν ′ = γ ν {\displaystyle \nu '=\gamma \nu } == See also == == References == == Sources == Encyclopaedia of Physics (2nd Edition), R.G. Lerner, G.L. Trigg, VHC publishers, 1991, (Verlagsgesellschaft) 3-527-26954-1, (VHC Inc.) 0-89573-752-3 Dynamics and Relativity, J.R. Forshaw, A.G. Smith, Wiley, 2009, ISBN 978-0-470-01460-8 Relativity DeMystified, D. McMahon, Mc Graw Hill (USA), 2006, ISBN 0-07-145545-0 The Cambridge Handbook of Physics Formulas, G. Woan, Cambridge University Press, 2010, ISBN 978-0-521-57507-2. An Introduction to Mechanics, D. Kleppner, R.J. Kolenkow, Cambridge University Press, 2010, ISBN 978-0-521-19821-9
Wikipedia/Relativistic_equations
In physics, relativistic angular momentum refers to the mathematical formalisms and physical concepts that define angular momentum in special relativity (SR) and general relativity (GR). The relativistic quantity is subtly different from the three-dimensional quantity in classical mechanics. Angular momentum is an important dynamical quantity derived from position and momentum. It is a measure of an object's rotational motion and resistance to changes in its rotation. Also, in the same way momentum conservation corresponds to translational symmetry, angular momentum conservation corresponds to rotational symmetry – the connection between symmetries and conservation laws is made by Noether's theorem. While these concepts were originally discovered in classical mechanics, they are also true and significant in special and general relativity. In terms of abstract algebra, the invariance of angular momentum, four-momentum, and other symmetries in spacetime, are described by the Lorentz group, or more generally the Poincaré group. Physical quantities that remain separate in classical physics are naturally combined in SR and GR by enforcing the postulates of relativity. Most notably, the space and time coordinates combine into the four-position, and energy and momentum combine into the four-momentum. The components of these four-vectors depend on the frame of reference used, and change under Lorentz transformations to other inertial frames or accelerated frames. Relativistic angular momentum is less obvious. The classical definition of angular momentum is the cross product of position x with momentum p to obtain a pseudovector x × p, or alternatively as the exterior product to obtain a second order antisymmetric tensor x ∧ p. What does this combine with, if anything? There is another vector quantity not often discussed – it is the time-varying moment of mass polar-vector (not the moment of inertia) related to the boost of the centre of mass of the system, and this combines with the classical angular momentum pseudovector to form an antisymmetric tensor of second order, in exactly the same way as the electric field polar-vector combines with the magnetic field pseudovector to form the electromagnetic field antisymmetric tensor. For rotating mass–energy distributions (such as gyroscopes, planets, stars, and black holes) instead of point-like particles, the angular momentum tensor is expressed in terms of the stress–energy tensor of the rotating object. In special relativity alone, in the rest frame of a spinning object, there is an intrinsic angular momentum analogous to the "spin" in quantum mechanics and relativistic quantum mechanics, although for an extended body rather than a point particle. In relativistic quantum mechanics, elementary particles have spin and this is an additional contribution to the orbital angular momentum operator, yielding the total angular momentum tensor operator. In any case, the intrinsic "spin" addition to the orbital angular momentum of an object can be expressed in terms of the Pauli–Lubanski pseudovector. == Definitions == === Orbital 3d angular momentum === For reference and background, two closely related forms of angular momentum are given. 
In classical mechanics, the orbital angular momentum of a particle with instantaneous three-dimensional position vector x = (x, y, z) and momentum vector p = (px, py, pz), is defined as the axial vector L = x × p {\displaystyle \mathbf {L} =\mathbf {x} \times \mathbf {p} } which has three components, that are systematically given by cyclic permutations of Cartesian directions (e.g. change x to y, y to z, z to x, repeat) L x = y p z − z p y , L y = z p x − x p z , L z = x p y − y p x . {\displaystyle {\begin{aligned}L_{x}&=yp_{z}-zp_{y}\,,\\L_{y}&=zp_{x}-xp_{z}\,,\\L_{z}&=xp_{y}-yp_{x}\,.\end{aligned}}} A related definition is to conceive orbital angular momentum as a plane element. This can be achieved by replacing the cross product by the exterior product in the language of exterior algebra, and angular momentum becomes a contravariant second order antisymmetric tensor L = x ∧ p {\displaystyle \mathbf {L} =\mathbf {x} \wedge \mathbf {p} } or writing x = (x1, x2, x3) = (x, y, z) and momentum vector p = (p1, p2, p3) = (px, py, pz), the components can be compactly abbreviated in tensor index notation L i j = x i p j − x j p i {\displaystyle L^{ij}=x^{i}p^{j}-x^{j}p^{i}} where the indices i and j take the values 1, 2, 3. On the other hand, the components can be systematically displayed fully in a 3 × 3 antisymmetric matrix L = ( L 11 L 12 L 13 L 21 L 22 L 23 L 31 L 32 L 33 ) = ( 0 L x y L x z L y x 0 L y z L z x L z y 0 ) = ( 0 L x y − L z x − L x y 0 L y z L z x − L y z 0 ) = ( 0 x p y − y p x − ( z p x − x p z ) − ( x p y − y p x ) 0 y p z − z p y z p x − x p z − ( y p z − z p y ) 0 ) {\displaystyle {\begin{aligned}\mathbf {L} &={\begin{pmatrix}L^{11}&L^{12}&L^{13}\\L^{21}&L^{22}&L^{23}\\L^{31}&L^{32}&L^{33}\\\end{pmatrix}}={\begin{pmatrix}0&L_{xy}&L_{xz}\\L_{yx}&0&L_{yz}\\L_{zx}&L_{zy}&0\end{pmatrix}}={\begin{pmatrix}0&L_{xy}&-L_{zx}\\-L_{xy}&0&L_{yz}\\L_{zx}&-L_{yz}&0\end{pmatrix}}\\&={\begin{pmatrix}0&xp_{y}-yp_{x}&-(zp_{x}-xp_{z})\\-(xp_{y}-yp_{x})&0&yp_{z}-zp_{y}\\zp_{x}-xp_{z}&-(yp_{z}-zp_{y})&0\end{pmatrix}}\end{aligned}}} This quantity is additive, and for an isolated system, the total angular momentum of a system is conserved. === Dynamic mass moment === In classical mechanics, the three-dimensional quantity for a particle of mass m moving with velocity u N = m ( x − t u ) = m x − t p {\displaystyle \mathbf {N} =m\left(\mathbf {x} -t\mathbf {u} \right)=m\mathbf {x} -t\mathbf {p} } has the dimensions of mass moment – length multiplied by mass. It is equal to the mass of the particle or system of particles multiplied by the distance from the space origin to the centre of mass (COM) at the time origin (t = 0), as measured in the lab frame. There is no universal symbol, nor even a universal name, for this quantity. Different authors may denote it by other symbols if any (for example μ), may designate other names, and may define N to be the negative of what is used here. The above form has the advantage that it resembles the familiar Galilean transformation for position, which in turn is the non-relativistic boost transformation between inertial frames. 
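As a brief numerical illustration of these classical definitions (all values below are arbitrary assumptions, not taken from the article), the following Python sketch checks that the antisymmetric tensor Lij contains exactly the components of the cross-product pseudovector, and also evaluates the mass moment N just defined.

```python
import numpy as np

# Arbitrary illustrative values
m = 2.0                                   # mass
x = np.array([1.0, 2.0, 3.0])             # position
u = np.array([0.5, -1.0, 0.25])           # velocity
p = m * u                                 # non-relativistic momentum
t = 4.0                                   # time

# Angular momentum as a pseudovector and as an antisymmetric tensor
L_vec = np.cross(x, p)                              # L = x x p
L_ten = np.outer(x, p) - np.outer(p, x)             # L^{ij} = x^i p^j - x^j p^i

# The (2,3), (3,1), (1,2) entries of the tensor reproduce the vector components
print(np.allclose(L_vec, [L_ten[1, 2], L_ten[2, 0], L_ten[0, 1]]))  # True

# Dynamic mass moment N = m(x - t u) for the same particle
N = m * (x - t * u)
print(N)
```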
This vector is also additive: for a system of particles, the vector sum is the resultant ∑ n N n = ∑ n m n ( x n − t u n ) = ( x C O M ∑ n m n − t ∑ n m n u n ) = M tot ( x C O M − u C O M t ) {\displaystyle \sum _{n}\mathbf {N} _{n}=\sum _{n}m_{n}\left(\mathbf {x} _{n}-t\mathbf {u} _{n}\right)=\left(\mathbf {x} _{\mathrm {COM} }\sum _{n}m_{n}-t\sum _{n}m_{n}\mathbf {u} _{n}\right)=M_{\text{tot}}(\mathbf {x} _{\mathrm {COM} }-\mathbf {u} _{\mathrm {COM} }t)} where the system's centre of mass position and velocity and total mass are respectively x C O M = ∑ n m n x n ∑ n m n , u C O M = ∑ n m n u n ∑ n m n , M tot = ∑ n m n . {\displaystyle {\begin{aligned}\mathbf {x} _{\mathrm {COM} }&={\frac {\sum _{n}m_{n}\mathbf {x} _{n}}{\sum _{n}m_{n}}},\\[3pt]\mathbf {u} _{\mathrm {COM} }&={\frac {\sum _{n}m_{n}\mathbf {u} _{n}}{\sum _{n}m_{n}}},\\[3pt]M_{\text{tot}}&=\sum _{n}m_{n}.\end{aligned}}} For an isolated system, N is conserved in time, which can be seen by differentiating with respect to time. The angular momentum L is a pseudovector, but N is an "ordinary" (polar) vector, and is therefore invariant under inversion. The resultant Ntot for a multiparticle system has the physical visualization that, whatever the complicated motion of all the particles are, they move in such a way that the system's COM moves in a straight line. This does not necessarily mean all particles "follow" the COM, nor that all particles all move in almost the same direction simultaneously, only that the collective motion of the particles is constrained in relation to the centre of mass. In special relativity, if the particle moves with velocity u relative to the lab frame, then E = γ ( u ) m 0 c 2 , p = γ ( u ) m 0 u {\displaystyle {\begin{aligned}E&=\gamma (\mathbf {u} )m_{0}c^{2},&\mathbf {p} &=\gamma (\mathbf {u} )m_{0}\mathbf {u} \end{aligned}}} where γ ( u ) = 1 1 − u ⋅ u c 2 {\displaystyle \gamma (\mathbf {u} )={\frac {1}{\sqrt {1-{\frac {\mathbf {u} \cdot \mathbf {u} }{c^{2}}}}}}} is the Lorentz factor and m is the mass (i.e. the rest mass) of the particle. The corresponding relativistic mass moment in terms of m, u, p, E, in the same lab frame is N = E c 2 x − p t = m γ ( u ) ( x − u t ) . {\displaystyle \mathbf {N} ={\frac {E}{c^{2}}}\mathbf {x} -\mathbf {p} t=m\gamma (\mathbf {u} )(\mathbf {x} -\mathbf {u} t).} The Cartesian components are N x = m x − p x t = E c 2 x − p x t = m γ ( u ) ( x − u x t ) N y = m y − p y t = E c 2 y − p y t = m γ ( u ) ( y − u y t ) N z = m z − p z t = E c 2 z − p z t = m γ ( u ) ( z − u z t ) {\displaystyle {\begin{aligned}N_{x}=mx-p_{x}t&={\frac {E}{c^{2}}}x-p_{x}t=m\gamma (u)(x-u_{x}t)\\N_{y}=my-p_{y}t&={\frac {E}{c^{2}}}y-p_{y}t=m\gamma (u)(y-u_{y}t)\\N_{z}=mz-p_{z}t&={\frac {E}{c^{2}}}z-p_{z}t=m\gamma (u)(z-u_{z}t)\end{aligned}}} == Special relativity == === Coordinate transformations for a boost in the x direction === Consider a coordinate frame F′ which moves with velocity v = (v, 0, 0) relative to another frame F, along the direction of the coincident xx′ axes. The origins of the two coordinate frames coincide at times t = t′ = 0. 
The mass–energy E = mc2 and momentum components p = (px, py, pz) of an object, as well as position coordinates x = (x, y, z) and time t in frame F are transformed to E′ = m′c2, p′ = (px′, py′, pz′), x′ = (x′, y′, z′), and t′ in F′ according to the Lorentz transformations t ′ = γ ( v ) ( t − v x c 2 ) , E ′ = γ ( v ) ( E − v p x ) x ′ = γ ( v ) ( x − v t ) , p x ′ = γ ( v ) ( p x − v E c 2 ) y ′ = y , p y ′ = p y z ′ = z , p z ′ = p z {\displaystyle {\begin{aligned}t'&=\gamma (v)\left(t-{\frac {vx}{c^{2}}}\right)\,,\quad &E'&=\gamma (v)\left(E-vp_{x}\right)\\x'&=\gamma (v)(x-vt)\,,\quad &p_{x}'&=\gamma (v)\left(p_{x}-{\frac {vE}{c^{2}}}\right)\\y'&=y\,,\quad &p_{y}'&=p_{y}\\z'&=z\,,\quad &p_{z}'&=p_{z}\\\end{aligned}}} The Lorentz factor here applies to the velocity v, the relative velocity between the frames. This is not necessarily the same as the velocity u of an object. For the orbital 3-angular momentum L as a pseudovector, we have L x ′ = y ′ p z ′ − z ′ p y ′ = L x L y ′ = z ′ p x ′ − x ′ p z ′ = γ ( v ) ( L y − v N z ) L z ′ = x ′ p y ′ − y ′ p x ′ = γ ( v ) ( L z + v N y ) {\displaystyle {\begin{aligned}L_{x}'&=y'p_{z}'-z'p_{y}'=L_{x}\\L_{y}'&=z'p_{x}'-x'p_{z}'=\gamma (v)(L_{y}-vN_{z})\\L_{z}'&=x'p_{y}'-y'p_{x}'=\gamma (v)(L_{z}+vN_{y})\\\end{aligned}}} In the second terms of Ly′ and Lz′, the y and z components of the cross product v × N can be inferred by recognizing cyclic permutations of vx = v and vy = vz = 0 with the components of N, − v N z = v z N x − v x N z = ( v × N ) y v N y = v x N y − v y N x = ( v × N ) z {\displaystyle {\begin{aligned}-vN_{z}&=v_{z}N_{x}-v_{x}N_{z}=\left(\mathbf {v} \times \mathbf {N} \right)_{y}\\vN_{y}&=v_{x}N_{y}-v_{y}N_{x}=\left(\mathbf {v} \times \mathbf {N} \right)_{z}\\\end{aligned}}} Now, Lx is parallel to the relative velocity v, and the other components Ly and Lz are perpendicular to v. The parallel–perpendicular correspondence can be facilitated by splitting the entire 3-angular momentum pseudovector into components parallel (∥) and perpendicular (⊥) to v, in each frame, L = L ∥ + L ⊥ , L ′ = L ∥ ′ + L ⊥ ′ . {\displaystyle \mathbf {L} =\mathbf {L} _{\parallel }+\mathbf {L} _{\perp }\,,\quad \mathbf {L} '=\mathbf {L} _{\parallel }'+\mathbf {L} _{\perp }'\,.} Then the component equations can be collected into the pseudovector equations L ∥ ′ = L ∥ L ⊥ ′ = γ ( v ) ( L ⊥ + v × N ) {\displaystyle {\begin{aligned}\mathbf {L} _{\parallel }'&=\mathbf {L} _{\parallel }\\\mathbf {L} _{\perp }'&=\gamma (\mathbf {v} )\left(\mathbf {L} _{\perp }+\mathbf {v} \times \mathbf {N} \right)\\\end{aligned}}} Therefore, the components of angular momentum along the direction of motion do not change, while the components perpendicular do change. By contrast to the transformations of space and time, time and the spatial coordinates change along the direction of motion, while those perpendicular do not. These transformations are true for all v, not just for motion along the xx′ axes. 
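The following Python sketch (with an arbitrary particle state and a boost speed of 0.6c assumed for illustration) cross-checks these component rules: it Lorentz-transforms t, x, E and p, recomputes L′ and N′ directly from their definitions, and compares with the quoted transformation formulas.

```python
import numpy as np

c = 1.0
v = 0.6                                    # boost speed along x (assumed)
g = 1.0 / np.sqrt(1.0 - v**2 / c**2)       # gamma(v)

# Arbitrary particle data in frame F
m, t = 1.0, 2.0
u = np.array([0.3, 0.4, -0.2])             # particle 3-velocity
gu = 1.0 / np.sqrt(1.0 - (u @ u) / c**2)   # gamma(u)
x = np.array([1.5, -0.7, 2.0])
E = gu * m * c**2
p = gu * m * u

L = np.cross(x, p)                         # orbital angular momentum
N = (E / c**2) * x - p * t                 # relativistic mass moment

# Lorentz-transform the ingredients to frame F'
tp = g * (t - v * x[0] / c**2)
xp = np.array([g * (x[0] - v * t), x[1], x[2]])
Ep = g * (E - v * p[0])
pp = np.array([g * (p[0] - v * E / c**2), p[1], p[2]])

L_direct = np.cross(xp, pp)
N_direct = (Ep / c**2) * xp - pp * tp

# Component transformation rules quoted in the text
L_rule = np.array([L[0], g * (L[1] - v * N[2]), g * (L[2] + v * N[1])])
N_rule = np.array([N[0], g * (N[1] + v * L[2] / c**2), g * (N[2] - v * L[1] / c**2)])

print(np.allclose(L_direct, L_rule), np.allclose(N_direct, N_rule))  # True True
```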
Considering L as a tensor, we get a similar result L ⊥ ′ = γ ( v ) ( L ⊥ + v ∧ N ) {\displaystyle \mathbf {L} _{\perp }'=\gamma (\mathbf {v} )\left(\mathbf {L} _{\perp }+\mathbf {v} \wedge \mathbf {N} \right)} where v z N x − v x N z = ( v ∧ N ) z x v x N y − v y N x = ( v ∧ N ) x y {\displaystyle {\begin{aligned}v_{z}N_{x}-v_{x}N_{z}&=\left(\mathbf {v} \wedge \mathbf {N} \right)_{zx}\\v_{x}N_{y}-v_{y}N_{x}&=\left(\mathbf {v} \wedge \mathbf {N} \right)_{xy}\\\end{aligned}}} The boost of the dynamic mass moment along the x direction is N x ′ = m ′ x ′ − p x ′ t ′ = N x N y ′ = m ′ y ′ − p y ′ t ′ = γ ( v ) ( N y + v L z c 2 ) N z ′ = m ′ z ′ − p z ′ t ′ = γ ( v ) ( N z − v L y c 2 ) {\displaystyle {\begin{aligned}N_{x}'&=m'x'-p_{x}'t'=N_{x}\\N_{y}'&=m'y'-p_{y}'t'=\gamma (v)\left(N_{y}+{\frac {vL_{z}}{c^{2}}}\right)\\N_{z}'&=m'z'-p_{z}'t'=\gamma (v)\left(N_{z}-{\frac {vL_{y}}{c^{2}}}\right)\\\end{aligned}}} Collecting parallel and perpendicular components as before N ∥ ′ = N ∥ N ⊥ ′ = γ ( v ) ( N ⊥ − 1 c 2 v × L ) {\displaystyle {\begin{aligned}\mathbf {N} _{\parallel }'&=\mathbf {N} _{\parallel }\\\mathbf {N} _{\perp }'&=\gamma (\mathbf {v} )\left(\mathbf {N} _{\perp }-{\frac {1}{c^{2}}}\mathbf {v} \times \mathbf {L} \right)\\\end{aligned}}} Again, the components parallel to the direction of relative motion do not change, those perpendicular do change. === Vector transformations for a boost in any direction === So far these are only the parallel and perpendicular decompositions of the vectors. The transformations on the full vectors can be constructed from them as follows (throughout here L is a pseudovector for concreteness and compatibility with vector algebra). Introduce a unit vector in the direction of v, given by n = v/v. The parallel components are given by the vector projection of L or N into n L ∥ = ( L ⋅ n ) n , N ∥ = ( N ⋅ n ) n {\displaystyle \mathbf {L} _{\parallel }=(\mathbf {L} \cdot \mathbf {n} )\mathbf {n} \,,\quad \mathbf {N} _{\parallel }=(\mathbf {N} \cdot \mathbf {n} )\mathbf {n} } while the perpendicular component by vector rejection of L or N from n L ⊥ = L − ( L ⋅ n ) n , N ⊥ = N − ( N ⋅ n ) n {\displaystyle \mathbf {L} _{\perp }=\mathbf {L} -(\mathbf {L} \cdot \mathbf {n} )\mathbf {n} \,,\quad \mathbf {N} _{\perp }=\mathbf {N} -(\mathbf {N} \cdot \mathbf {n} )\mathbf {n} } and the transformations are L ′ = γ ( v ) ( L + v n × N ) − ( γ ( v ) − 1 ) ( L ⋅ n ) n N ′ = γ ( v ) ( N − v c 2 n × L ) − ( γ ( v ) − 1 ) ( N ⋅ n ) n {\displaystyle {\begin{aligned}\mathbf {L} '&=\gamma (\mathbf {v} )(\mathbf {L} +v\mathbf {n} \times \mathbf {N} )-(\gamma (\mathbf {v} )-1)(\mathbf {L} \cdot \mathbf {n} )\mathbf {n} \\\mathbf {N} '&=\gamma (\mathbf {v} )\left(\mathbf {N} -{\frac {v}{c^{2}}}\mathbf {n} \times \mathbf {L} \right)-(\gamma (\mathbf {v} )-1)(\mathbf {N} \cdot \mathbf {n} )\mathbf {n} \\\end{aligned}}} or reinstating v = vn, L ′ = γ ( v ) ( L + v × N ) − ( γ ( v ) − 1 ) ( L ⋅ v ) v v 2 N ′ = γ ( v ) ( N − 1 c 2 v × L ) − ( γ ( v ) − 1 ) ( N ⋅ v ) v v 2 {\displaystyle {\begin{aligned}\mathbf {L} '&=\gamma (\mathbf {v} )(\mathbf {L} +\mathbf {v} \times \mathbf {N} )-(\gamma (\mathbf {v} )-1){\frac {(\mathbf {L} \cdot \mathbf {v} )\mathbf {v} }{v^{2}}}\\\mathbf {N} '&=\gamma (\mathbf {v} )\left(\mathbf {N} -{\frac {1}{c^{2}}}\mathbf {v} \times \mathbf {L} \right)-(\gamma (\mathbf {v} )-1){\frac {(\mathbf {N} \cdot \mathbf {v} )\mathbf {v} }{v^{2}}}\\\end{aligned}}} These are very similar to the Lorentz transformations of the electric field E and magnetic field B, see 
Classical electromagnetism and special relativity. Alternatively, starting from the vector Lorentz transformations of time, space, energy, and momentum, for a boost with velocity v, t ′ = γ ( v ) ( t − v ⋅ r c 2 ) , r ′ = r + γ ( v ) − 1 v 2 ( r ⋅ v ) v − γ ( v ) t v , p ′ = p + γ ( v ) − 1 v 2 ( p ⋅ v ) v − γ ( v ) E c 2 v , E ′ = γ ( v ) ( E − v ⋅ p ) , {\displaystyle {\begin{aligned}t'&=\gamma (\mathbf {v} )\left(t-{\frac {\mathbf {v} \cdot \mathbf {r} }{c^{2}}}\right)\,,\\\mathbf {r} '&=\mathbf {r} +{\frac {\gamma (\mathbf {v} )-1}{v^{2}}}(\mathbf {r} \cdot \mathbf {v} )\mathbf {v} -\gamma (\mathbf {v} )t\mathbf {v} \,,\\\mathbf {p} '&=\mathbf {p} +{\frac {\gamma (\mathbf {v} )-1}{v^{2}}}(\mathbf {p} \cdot \mathbf {v} )\mathbf {v} -\gamma (\mathbf {v} ){\frac {E}{c^{2}}}\mathbf {v} \,,\\E'&=\gamma (\mathbf {v} )\left(E-\mathbf {v} \cdot \mathbf {p} \right)\,,\\\end{aligned}}} inserting these into the definitions L ′ = r ′ × p ′ , N ′ = E ′ c 2 r ′ − t ′ p ′ {\displaystyle {\begin{aligned}\mathbf {L} '&=\mathbf {r} '\times \mathbf {p} '\,,&\mathbf {N} '&={\frac {E'}{c^{2}}}\mathbf {r} '-t'\mathbf {p} '\end{aligned}}} gives the transformations. === 4d angular momentum as a bivector === In relativistic mechanics, the COM boost and orbital 3-space angular momentum of a rotating object are combined into a four-dimensional bivector in terms of the four-position X and the four-momentum P of the object M = X ∧ P {\displaystyle \mathbf {M} =\mathbf {X} \wedge \mathbf {P} } In components M α β = X α P β − X β P α {\displaystyle M^{\alpha \beta }=X^{\alpha }P^{\beta }-X^{\beta }P^{\alpha }} which are six independent quantities altogether. Since the components of X and P are frame-dependent, so is M. Three components M i j = x i p j − x j p i = L i j {\displaystyle M^{ij}=x^{i}p^{j}-x^{j}p^{i}=L^{ij}} are those of the familiar classical 3-space orbital angular momentum, and the other three M 0 i = x 0 p i − x i p 0 = c ( t p i − x i E c 2 ) = − c N i {\displaystyle M^{0i}=x^{0}p^{i}-x^{i}p^{0}=c\,\left(tp^{i}-x^{i}{\frac {E}{c^{2}}}\right)=-cN^{i}} are the relativistic mass moment, multiplied by −c. The tensor is antisymmetric; M α β = − M β α {\displaystyle M^{\alpha \beta }=-M^{\beta \alpha }} The components of the tensor can be systematically displayed as a matrix M = ( M 00 M 01 M 02 M 03 M 10 M 11 M 12 M 13 M 20 M 21 M 22 M 23 M 30 M 31 M 32 M 33 ) = ( 0 − N 1 c − N 2 c − N 3 c N 1 c 0 L 12 − L 31 N 2 c − L 12 0 L 23 N 3 c L 31 − L 23 0 ) = ( 0 − N c N T c x ∧ p ) {\displaystyle {\begin{aligned}\mathbf {M} &={\begin{pmatrix}M^{00}&M^{01}&M^{02}&M^{03}\\M^{10}&M^{11}&M^{12}&M^{13}\\M^{20}&M^{21}&M^{22}&M^{23}\\M^{30}&M^{31}&M^{32}&M^{33}\end{pmatrix}}\\[3pt]&=\left({\begin{array}{c|ccc}0&-N^{1}c&-N^{2}c&-N^{3}c\\\hline N^{1}c&0&L^{12}&-L^{31}\\N^{2}c&-L^{12}&0&L^{23}\\N^{3}c&L^{31}&-L^{23}&0\end{array}}\right)\\[3pt]&=\left({\begin{array}{c|c}0&-\mathbf {N} c\\\hline \mathbf {N} ^{\mathrm {T} }c&\mathbf {x} \wedge \mathbf {p} \\\end{array}}\right)\end{aligned}}} in which the last array is a block matrix formed by treating N as a row vector which matrix transposes to the column vector NT, and x ∧ p as a 3 × 3 antisymmetric matrix. The lines are merely inserted to show where the blocks are. Again, this tensor is additive: the total angular momentum of a system is the sum of the angular momentum tensors for each constituent of the system: M tot = ∑ n M n = ∑ n X n ∧ P n . 
{\displaystyle \mathbf {M} _{\text{tot}}=\sum _{n}\mathbf {M} _{n}=\sum _{n}\mathbf {X} _{n}\wedge \mathbf {P} _{n}\,.} Each of the six components forms a conserved quantity when aggregated with the corresponding components for other objects and fields. The angular momentum tensor M is indeed a tensor, the components change according to a Lorentz transformation matrix Λ, as illustrated in the usual way by tensor index notation M ′ α β = X ′ α P ′ β − X ′ β P ′ α = Λ α γ X γ Λ β δ P δ − Λ β δ X δ Λ α γ P γ = Λ α γ Λ β δ ( X γ P δ − X δ P γ ) = Λ α γ Λ β δ M γ δ , {\displaystyle {\begin{aligned}{M'}^{\alpha \beta }&={X'}^{\alpha }{P'}^{\beta }-{X'}^{\beta }{P'}^{\alpha }\\&={\Lambda ^{\alpha }}_{\gamma }X^{\gamma }{\Lambda ^{\beta }}_{\delta }P^{\delta }-{\Lambda ^{\beta }}_{\delta }X^{\delta }{\Lambda ^{\alpha }}_{\gamma }P^{\gamma }\\&={\Lambda ^{\alpha }}_{\gamma }{\Lambda ^{\beta }}_{\delta }\left(X^{\gamma }P^{\delta }-X^{\delta }P^{\gamma }\right)\\&={\Lambda ^{\alpha }}_{\gamma }{\Lambda ^{\beta }}_{\delta }M^{\gamma \delta }\\\end{aligned}},} where, for a boost (without rotations) with normalized velocity β = v/c, the Lorentz transformation matrix elements are Λ 0 0 = γ Λ i 0 = Λ 0 i = − γ β i Λ i j = δ i j + γ − 1 β 2 β i β j {\displaystyle {\begin{aligned}{\Lambda ^{0}}_{0}&=\gamma \\{\Lambda ^{i}}_{0}&={\Lambda ^{0}}_{i}=-\gamma \beta ^{i}\\{\Lambda ^{i}}_{j}&={\delta ^{i}}_{j}+{\frac {\gamma -1}{\beta ^{2}}}\beta ^{i}\beta _{j}\end{aligned}}} and the covariant βi and contravariant βi components of β are the same since these are just parameters. In other words, one can Lorentz-transform the four position and four momentum separately, and then antisymmetrize those newly found components to obtain the angular momentum tensor in the new frame. === Rigid body rotation === For a particle moving in a curve, the cross product of its angular velocity ω (a pseudovector) and position x give its tangential velocity u = ω × x {\displaystyle \mathbf {u} ={\boldsymbol {\omega }}\times \mathbf {x} } which cannot exceed a magnitude of c, since in SR the translational velocity of any massive object cannot exceed the speed of light c. Mathematically this constraint is 0 ≤ |u| < c, the vertical bars denote the magnitude of the vector. If the angle between ω and x is θ (assumed to be nonzero, otherwise u would be zero corresponding to no motion at all), then |u| = |ω| |x| sin θ and the angular velocity is restricted by 0 ≤ | ω | < c | x | sin ⁡ θ {\displaystyle 0\leq |{\boldsymbol {\omega }}|<{\frac {c}{|\mathbf {x} |\sin \theta }}} The maximum angular velocity of any massive object therefore depends on the size of the object. For a given |x|, the minimum upper limit occurs when ω and x are perpendicular, so that θ = π/2 and sin θ = 1. For a rotating rigid body rotating with an angular velocity ω, the u is tangential velocity at a point x inside the object. For every point in the object, there is a maximum angular velocity. The angular velocity (pseudovector) is related to the angular momentum (pseudovector) through the moment of inertia tensor I L = I ⋅ ω ⇌ L i = I i j ω j {\displaystyle \mathbf {L} =\mathbf {I} \cdot {\boldsymbol {\omega }}\quad \rightleftharpoons \quad L_{i}=I_{ij}\omega _{j}} (the dot · denotes tensor contraction on one index). The relativistic angular momentum is also limited by the size of the object. == Spin in special relativity == === Four-spin === A particle may have a "built-in" angular momentum independent of its motion, called spin and denoted s. 
It is a 3d pseudovector like orbital angular momentum L. The spin has a corresponding spin magnetic moment, so if the particle is subject to interactions (like electromagnetic fields or spin-orbit coupling), the direction of the particle's spin vector will change, but its magnitude will be constant. The extension to special relativity is straightforward. For some lab frame F, let F′ be the rest frame of the particle and suppose the particle moves with constant 3-velocity u. Then F′ is boosted with the same velocity and the Lorentz transformations apply as usual; it is more convenient to use β = u/c. As a four-vector in special relativity, the four-spin S generally takes the usual form of a four-vector with a timelike component st and spatial components s, in the lab frame S ≡ ( S 0 , S 1 , S 2 , S 3 ) = ( s t , s x , s y , s z ) {\displaystyle \mathbf {S} \equiv \left(S^{0},S^{1},S^{2},S^{3}\right)=(s_{t},s_{x},s_{y},s_{z})} although in the rest frame of the particle, it is defined so the timelike component is zero and the spatial components are those of particle's actual spin vector, in the notation here s′, so in the particle's frame S ′ ≡ ( S ′ 0 , S ′ 1 , S ′ 2 , S ′ 3 ) = ( 0 , s x ′ , s y ′ , s z ′ ) {\displaystyle \mathbf {S} '\equiv \left({S'}^{0},{S'}^{1},{S'}^{2},{S'}^{3}\right)=\left(0,s_{x}',s_{y}',s_{z}'\right)} Equating norms leads to the invariant relation s t 2 − s ⋅ s = − s ′ ⋅ s ′ {\displaystyle s_{t}^{2}-\mathbf {s} \cdot \mathbf {s} =-\mathbf {s} '\cdot \mathbf {s} '} so if the magnitude of spin is given in the rest frame of the particle and lab frame of an observer, the magnitude of the timelike component st is given in the lab frame also. The covariant constraint on the spin is orthogonality to the velocity vector, U α S α = 0 {\displaystyle U_{\alpha }S^{\alpha }=0} In 3-vector notation for explicitness, the transformations are s t = β ⋅ s s ′ = s + γ 2 γ + 1 β ( β ⋅ s ) − γ β s t {\displaystyle {\begin{aligned}s_{t}&={\boldsymbol {\beta }}\cdot \mathbf {s} \\\mathbf {s} '&=\mathbf {s} +{\frac {\gamma ^{2}}{\gamma +1}}{\boldsymbol {\beta }}\left({\boldsymbol {\beta }}\cdot \mathbf {s} \right)-\gamma {\boldsymbol {\beta }}s_{t}\end{aligned}}} The inverse relations s t = γ β ⋅ s ′ s = s ′ + γ 2 γ + 1 β ( β ⋅ s ′ ) {\displaystyle {\begin{aligned}s_{t}&=\gamma {\boldsymbol {\beta }}\cdot \mathbf {s} '\\\mathbf {s} &=\mathbf {s} '+{\frac {\gamma ^{2}}{\gamma +1}}{\boldsymbol {\beta }}\left({\boldsymbol {\beta }}\cdot \mathbf {s} '\right)\end{aligned}}} are the components of spin the lab frame, calculated from those in the particle's rest frame. Although the spin of the particle is constant for a given particle, it appears to be different in the lab frame. === The Pauli–Lubanski pseudovector === The Pauli–Lubanski pseudovector S μ = 1 2 ε μ ν ρ σ J ν ρ P σ , {\displaystyle S_{\mu }={\frac {1}{2}}\varepsilon _{\mu \nu \rho \sigma }J^{\nu \rho }P^{\sigma },} applies to both massive and massless particles. == Spin–orbital decomposition == In general, the total angular momentum tensor splits into an orbital component and a spin component, J μ ν = M μ ν + S μ ν . {\displaystyle J^{\mu \nu }=M^{\mu \nu }+S^{\mu \nu }~.} This applies to a particle, a mass–energy–momentum distribution, or field. == Angular momentum of a mass–energy–momentum distribution == === Angular momentum from the mass–energy–momentum tensor === The following is a summary from MTW. Throughout for simplicity, Cartesian coordinates are assumed. 
In special and general relativity, a distribution of mass–energy–momentum, e.g. a fluid, or a star, is described by the stress–energy tensor Tβγ (a second order tensor field depending on space and time). Since T00 is the energy density, Tj0 for j = 1, 2, 3 is the jth component of the object's 3d momentum per unit volume, and Tij form components of the stress tensor including shear and normal stresses, the orbital angular momentum density about the reference event X̄β is given by a 3rd order tensor M α β γ = ( X α − X ¯ α ) T β γ − ( X β − X ¯ β ) T α γ {\displaystyle {\mathcal {M}}^{\alpha \beta \gamma }=\left(X^{\alpha }-{\bar {X}}^{\alpha }\right)T^{\beta \gamma }-\left(X^{\beta }-{\bar {X}}^{\beta }\right)T^{\alpha \gamma }} This is antisymmetric in α and β. In special and general relativity, T is a symmetric tensor, but in other contexts (e.g., quantum field theory), it may not be. Let Ω be a region of 4d spacetime. The boundary is a 3d spacetime hypersurface ("spacetime surface volume" as opposed to "spatial surface area"), denoted ∂Ω, where "∂" means "boundary". Integrating the angular momentum density over a 3d spacetime hypersurface yields the angular momentum tensor about X̄, M α β ( X ¯ ) = ∮ ∂ Ω M α β γ d Σ γ {\displaystyle M^{\alpha \beta }\left({\bar {X}}\right)=\oint _{\partial \Omega }{\mathcal {M}}^{\alpha \beta \gamma }d\Sigma _{\gamma }} where dΣγ is the volume 1-form playing the role of a unit vector normal to a 2d surface in ordinary 3d Euclidean space. The integral is taken over the coordinates X, not over the reference point X̄ (written Y in the component expression below). The integral within a spacelike surface of constant time is M i j = ∮ ∂ Ω M i j 0 d Σ 0 = ∮ ∂ Ω [ ( X i − Y i ) T j 0 − ( X j − Y j ) T i 0 ] d x d y d z {\displaystyle M^{ij}=\oint _{\partial \Omega }{\mathcal {M}}^{ij0}d\Sigma _{0}=\oint _{\partial \Omega }\left[\left(X^{i}-Y^{i}\right)T^{j0}-\left(X^{j}-Y^{j}\right)T^{i0}\right]dx\,dy\,dz} which collectively form the angular momentum tensor. === Angular momentum about the centre of mass === There is an intrinsic angular momentum in the centre-of-mass frame, in other words, the angular momentum about any event X COM = ( X COM 0 , X COM 1 , X COM 2 , X COM 3 ) {\displaystyle \mathbf {X} _{\text{COM}}=\left(X_{\text{COM}}^{0},X_{\text{COM}}^{1},X_{\text{COM}}^{2},X_{\text{COM}}^{3}\right)} on the worldline of the object's center of mass. Since T00 is the energy density of the object, the spatial coordinates of the center of mass are given by X COM i = 1 m 0 ∫ ∂ Ω X i T 00 d x d y d z {\displaystyle X_{\text{COM}}^{i}={\frac {1}{m_{0}}}\int _{\partial \Omega }X^{i}T^{00}dxdydz} Setting Y = XCOM gives the orbital angular momentum density about the centre of mass of the object. === Angular momentum conservation === The conservation of energy–momentum is given in differential form by the continuity equation ∂ γ T β γ = 0 {\displaystyle \partial _{\gamma }T^{\beta \gamma }=0} where ∂γ is the four-gradient. (In non-Cartesian coordinates and general relativity this would be replaced by the covariant derivative). 

The total angular momentum conservation is given by another continuity equation ∂ γ J α β γ = 0 {\displaystyle \partial _{\gamma }{\mathcal {J}}^{\alpha \beta \gamma }=0} The integral equations use Gauss' theorem in spacetime ∫ V ∂ γ T β γ c d t d x d y d z = ∮ ∂ V T β γ d 3 Σ γ = 0 ∫ V ∂ γ J α β γ c d t d x d y d z = ∮ ∂ V J α β γ d 3 Σ γ = 0 {\displaystyle {\begin{aligned}\int _{\mathcal {V}}\partial _{\gamma }T^{\beta \gamma }\,cdt\,dx\,dy\,dz&=\oint _{\partial {\mathcal {V}}}T^{\beta \gamma }d^{3}\Sigma _{\gamma }=0\\\int _{\mathcal {V}}\partial _{\gamma }{\mathcal {J}}^{\alpha \beta \gamma }\,cdt\,dx\,dy\,dz&=\oint _{\partial {\mathcal {V}}}{\mathcal {J}}^{\alpha \beta \gamma }d^{3}\Sigma _{\gamma }=0\end{aligned}}} == Torque in special relativity == The torque acting on a point-like particle is defined as the derivative of the angular momentum tensor given above with respect to proper time: Γ = d M d τ = X ∧ F {\displaystyle {\boldsymbol {\Gamma }}={\frac {d\mathbf {M} }{d\tau }}=\mathbf {X} \wedge \mathbf {F} } or in tensor components: Γ α β = X α F β − X β F α {\displaystyle \Gamma _{\alpha \beta }=X_{\alpha }F_{\beta }-X_{\beta }F_{\alpha }} where F is the 4d force acting on the particle at the event X. As with angular momentum, torque is additive, so for an extended object one sums or integrates over the distribution of mass. == Angular momentum as the generator of spacetime boosts and rotations == The angular momentum tensor is the generator of boosts and rotations for the Lorentz group. Lorentz boosts can be parametrized by rapidity, and a 3d unit vector n pointing in the direction of the boost, which combine into the "rapidity vector" ζ = ζ n = n tanh − 1 ⁡ β {\displaystyle {\boldsymbol {\zeta }}=\zeta \mathbf {n} =\mathbf {n} \tanh ^{-1}\beta } where β = v/c is the speed of the relative motion divided by the speed of light. Spatial rotations can be parametrized by the axis–angle representation, the angle θ and a unit vector a pointing in the direction of the axis, which combine into an "axis-angle vector" θ = θ a {\displaystyle {\boldsymbol {\theta }}=\theta \mathbf {a} } Each unit vector only has two independent components, the third is determined from the unit magnitude. Altogether there are six parameters of the Lorentz group; three for rotations and three for boosts. The (homogeneous) Lorentz group is 6-dimensional. The boost generators K and rotation generators J can be combined into one generator for Lorentz transformations; M the antisymmetric angular momentum tensor, with components M 0 i = − M i 0 = K i , M i j = ε i j k J k . {\displaystyle M^{0i}=-M^{i0}=K_{i}\,,\quad M^{ij}=\varepsilon _{ijk}J_{k}\,.} and correspondingly, the boost and rotation parameters are collected into another antisymmetric four-dimensional matrix ω, with entries: ω 0 i = − ω i 0 = ζ i , ω i j = ε i j k θ k , {\displaystyle \omega _{0i}=-\omega _{i0}=\zeta _{i}\,,\quad \omega _{ij}=\varepsilon _{ijk}\theta _{k}\,,} where the summation convention over the repeated indices i, j, k has been used to prevent clumsy summation signs. The general Lorentz transformation is then given by the matrix exponential Λ ( ζ , θ ) = exp ⁡ ( 1 2 ω α β M α β ) = exp ⁡ ( ζ ⋅ K + θ ⋅ J ) {\displaystyle \Lambda ({\boldsymbol {\zeta }},{\boldsymbol {\theta }})=\exp \left({\frac {1}{2}}\omega _{\alpha \beta }M^{\alpha \beta }\right)=\exp \left({\boldsymbol {\zeta }}\cdot \mathbf {K} +{\boldsymbol {\theta }}\cdot \mathbf {J} \right)} and the summation convention has been applied to the repeated matrix indices α and β. 
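As a small numerical sketch of the generator picture (assuming a pure boost along x with β = 0.6, and with the sign convention chosen so that exponentiating −ζKx reproduces the boost matrix written earlier; authors fold this sign into K in different ways), the matrix exponential of the boost generator can be compared against the closed-form γ, −γβ matrix:

```python
import numpy as np
from scipy.linalg import expm

beta = 0.6
zeta = np.arctanh(beta)                     # rapidity
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# Boost generator along x acting on (ct, x, y, z); sign conventions vary by author
K_x = np.zeros((4, 4))
K_x[0, 1] = K_x[1, 0] = 1.0

boost_from_generator = expm(-zeta * K_x)    # matrix exponential of the generator

boost_closed_form = np.array([[ gamma,      -gamma*beta, 0.0, 0.0],
                              [-gamma*beta,  gamma,      0.0, 0.0],
                              [ 0.0,         0.0,        1.0, 0.0],
                              [ 0.0,         0.0,        0.0, 1.0]])

print(np.allclose(boost_from_generator, boost_closed_form))   # True
```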
The general Lorentz transformation Λ is the transformation law for any four vector A = (A0, A1, A2, A3), giving the components of this same 4-vector in another inertial frame of reference A ′ = Λ ( ζ , θ ) A {\displaystyle \mathbf {A} '=\Lambda ({\boldsymbol {\zeta }},{\boldsymbol {\theta }})\mathbf {A} } The angular momentum tensor forms 6 of the 10 generators of the Poincaré group, the other four are the components of the four-momentum for spacetime translations. == Angular momentum in general relativity == The angular momentum of test particles in a gently curved background is more complicated in GR but can be generalized in a straightforward manner. If the Lagrangian is expressed with respect to angular variables as the generalized coordinates, then the angular momenta are the functional derivatives of the Lagrangian with respect to the angular velocities. Referred to Cartesian coordinates, these are typically given by the off-diagonal shear terms of the spacelike part of the stress–energy tensor. If the spacetime supports a Killing vector field tangent to a circle, then the angular momentum about the axis is conserved. One also wishes to study the effect of a compact, rotating mass on its surrounding spacetime. The prototype solution is of the Kerr metric, which describes the spacetime around an axially symmetric black hole. It is obviously impossible to draw a point on the event horizon of a Kerr black hole and watch it circle around. However, the solution does support a constant of the system that acts mathematically similarly to an angular momentum. In general relativity where gravitational waves exist, the asymptotic symmetry group in asymptotically flat spacetimes is not the expected ten-dimensional Poincaré group of special relativity, but the infinite-dimensional group formulated in 1962 by Bondi, van der Burg, Metzner, and Sachs, the so-called BMS group, which contains an infinite superset of the four spacetime translations, named supertranslations. Despite half a century of research, difficulties with “supertranslation ambiguity” persisted in fundamental notions like the angular momentum carried away by gravitational waves. In 2020, novel supertranslation-invariant definitions of angular momentum began to be formulated by different researchers. Supertranslation invariance of angular momentum and other Lorentz charges in general relativity continues to be an active area of research. == See also == Thomas precession – Relativistic correction Angular momentum of light – Physical quantity carried in photons Two-body problem in general relativity Relativistic mechanics – Theory of motion and forces for objects close to the speed of light Mathisson–Papapetrou–Dixon equations – General relativity equation == References == == Further reading == === Special relativity === R. Torretti (1996). Relativity and Geometry. Dover Books on Physics Series. Courier Dover Publications. ISBN 0-486-69046-6. === General relativity === L. Blanchet; A. Spallicci; B. Whiting (2011). Mass and motion in general relativity. Fundamental theories of physics. Vol. 162. Springer. p. 87. ISBN 978-90-481-3015-3. M. Ludvigsen (1999). General Relativity: A Geometric Approach. Cambridge University Press. p. 77. ISBN 0-521-63976-X. N. Ashby, D.F. Bartlett, W.Wyss (1990). General Relativity and Gravitation 1989: Proceedings of the 12th International Conference on General Relativity and Gravitation. Cambridge University Press. ISBN 0-521-38428-1.{{cite book}}: CS1 maint: multiple names: authors list (link) B.L. 
Hu; M.P. Ryan; M.P. Ryan; C.V. Vishveshwara (2005). Directions in General Relativity: Volume 1: Proceedings of the 1993. Directions in General Relativity: Proceedings of the 1993 International Symposium, Maryland: Papers in Honor of Charles Misner. Vol. 1. Cambridge University Press. p. 347. ISBN 0-521-02139-1. A. Papapetrou (1974). Lectures on General Relativity. Springer. ISBN 90-277-0514-3. == External links == N. Menicucci (2001). "Relativistic Angular Momentum" (PDF). "Special Relativity" (PDF). Archived from the original (PDF) on 2013-11-04. Retrieved 2013-10-30. Wang, Mu-Tao (2023). "Angular momentum and supertranslation in general relativity". arXiv:2303.02424 [gr-qc].
Wikipedia/Angular_momentum_tensor
In science, work is the energy transferred to or from an object via the application of force along a displacement. In its simplest form, for a constant force aligned with the direction of motion, the work equals the product of the force strength and the distance traveled. A force is said to do positive work if it has a component in the direction of the displacement of the point of application. A force does negative work if it has a component opposite to the direction of the displacement at the point of application of the force. For example, when a ball is held above the ground and then dropped, the work done by the gravitational force on the ball as it falls is positive, and is equal to the weight of the ball (a force) multiplied by the distance to the ground (a displacement). If the ball is thrown upwards, the work done by the gravitational force is negative, and is equal to the weight multiplied by the displacement in the upwards direction. Both force and displacement are vectors. The work done is given by the dot product of the two vectors, where the result is a scalar. When the force F is constant and the angle θ between the force and the displacement s is also constant, then the work done is given by: W = F s cos ⁡ θ {\displaystyle W=Fs\cos {\theta }} If the force is variable, then work is given by the line integral: W = ∫ F ⋅ d s {\displaystyle W=\int \mathbf {F} \cdot d\mathbf {s} } where d s {\displaystyle d\mathbf {s} } is the tiny change in displacement vector. Work is a scalar quantity, so it has only magnitude and no direction. Work transfers energy from one place to another, or one form to another. The SI unit of work is the joule (J), the same unit as for energy. == History == The ancient Greek understanding of physics was limited to the statics of simple machines (the balance of forces), and did not include dynamics or the concept of work. During the Renaissance the dynamics of the Mechanical Powers, as the simple machines were called, began to be studied from the standpoint of how far they could lift a load, in addition to the force they could apply, leading eventually to the new concept of mechanical work. The complete dynamic theory of simple machines was worked out by Italian scientist Galileo Galilei in 1600 in Le Meccaniche (On Mechanics), in which he showed the underlying mathematical similarity of the machines as force amplifiers. He was the first to explain that simple machines do not create energy, only transform it. === Early concepts of work === Although work was not formally used until 1826, similar concepts existed before then. Early names for the same concept included moment of activity, quantity of action, latent live force, dynamic effect, efficiency, and even force. In 1637, the French philosopher René Descartes wrote: Lifting 100 lb one foot twice over is the same as lifting 200 lb one foot, or 100 lb two feet. In 1686, the German philosopher Gottfried Leibniz wrote: The same force ["work" in modern terms] is necessary to raise body A of 1 pound (libra) to a height of 4 yards (ulnae), as is necessary to raise body B of 4 pounds to a height of 1 yard. In 1759, John Smeaton described a quantity that he called "power" "to signify the exertion of strength, gravitation, impulse, or pressure, as to produce motion." Smeaton continues that this quantity can be calculated if "the weight raised is multiplied by the height to which it can be raised in a given time," making this definition remarkably similar to Coriolis's. 
=== Etymology and modern usage === The term work (or mechanical work), and the use of the work-energy principle in mechanics, was introduced in the late 1820s independently by French mathematician Gaspard-Gustave Coriolis and French Professor of Applied Mechanics Jean-Victor Poncelet. Both scientists were pursuing a view of mechanics suitable for studying the dynamics and power of machines, for example steam engines lifting buckets of water out of flooded ore mines. According to Rene Dugas, French engineer and historian, it is to Solomon of Caux "that we owe the term work in the sense that it is used in mechanics now". The concept of virtual work, and the use of variational methods in mechanics, preceded the introduction of "mechanical work" but was originally called "virtual moment". It was re-named once the terminology of Poncelet and Coriolis was adopted. == Units == The SI unit of work is the joule (J), named after English physicist James Prescott Joule (1818-1889). According to the International Bureau of Weights and Measures it is defined as "the work done when the point of application of 1 MKS unit of force [newton] moves a distance of 1 metre in the direction of the force." The dimensionally equivalent newton-metre (N⋅m) is sometimes used as the measuring unit for work, but this can be confused with the measurement unit of torque. Usage of N⋅m is discouraged by the SI authority, since it can lead to confusion as to whether the quantity expressed in newton-metres is a torque measurement, or a measurement of work. Another unit for work is the foot-pound, which comes from the English system of measurement. As the unit name suggests, it is the product of pounds for the unit of force and feet for the unit of displacement. One joule is approximately equal to 0.7376 ft-lbs. Non-SI units of work include the newton-metre, erg, the foot-pound, the foot-poundal, the kilowatt hour, the litre-atmosphere, and the horsepower-hour. Due to work having the same physical dimension as heat, occasionally measurement units typically reserved for heat or energy content, such as therm, BTU and calorie, are used as a measuring unit. == Work and energy == The work W done by a constant force of magnitude F on a point that moves a displacement s in a straight line in the direction of the force is the product W = F ⋅ s {\displaystyle W=\mathbf {F} \cdot \mathbf {s} } For example, if a force of 10 newtons (F = 10 N) acts along a point that travels 2 metres (s = 2 m), then W = Fs = (10 N) (2 m) = 20 J. This is approximately the work done lifting a 1 kg object from ground level to over a person's head against the force of gravity. The work is doubled either by lifting twice the weight the same distance or by lifting the same weight twice the distance. Work is closely related to energy. Energy shares the same unit of measurement with work (Joules) because the energy from the object doing work is transferred to the other objects it interacts with when work is being done. The work–energy principle states that an increase in the kinetic energy of a rigid body is caused by an equal amount of positive work done on the body by the resultant force acting on that body. Conversely, a decrease in kinetic energy is caused by an equal amount of negative work done by the resultant force. Thus, if the net work is positive, then the particle's kinetic energy increases by the amount of the work. If the net work done is negative, then the particle's kinetic energy decreases by the amount of work. 
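The following Python sketch (an illustrative aid; the 3 kg mass and the 1.5 s of motion are assumed values) reproduces the 20 J example above as a dot product and numerically checks the work–energy principle for a constant net force acting on a body initially at rest.

```python
import numpy as np

# The 20 J example from the text: 10 N acting over 2 m along the force direction
F = np.array([10.0, 0.0, 0.0])       # newtons
s = np.array([2.0, 0.0, 0.0])        # metres
print(F @ s)                          # 20.0 J

# Work-energy check: constant net force on a mass starting from rest
m = 3.0                               # kg (assumed)
a = F / m                             # acceleration from Newton's second law
T = 1.5                               # seconds of motion (assumed)
displacement = 0.5 * a * T**2         # straight-line displacement from rest
v_final = a * T                       # final velocity

work = F @ displacement
delta_kinetic = 0.5 * m * (v_final @ v_final)
print(np.isclose(work, delta_kinetic))   # True: W equals the change in kinetic energy
```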
From Newton's second law, it can be shown that work on a free (no fields), rigid (no internal degrees of freedom) body is equal to the change in kinetic energy Ek corresponding to the linear velocity and angular velocity of that body, W = Δ E k . {\displaystyle W=\Delta E_{\text{k}}.} The work of forces generated by a potential function is known as potential energy and the forces are said to be conservative. Therefore, work on an object that is merely displaced in a conservative force field, without change in velocity or rotation, is equal to minus the change of potential energy Ep of the object, W = − Δ E p . {\displaystyle W=-\Delta E_{\text{p}}.} These formulas show that work is the energy associated with the action of a force, so work subsequently possesses the physical dimensions, and units, of energy. The work/energy principles discussed here are identical to electric work/energy principles. == Constraint forces == Constraint forces determine the object's displacement in the system, limiting it within a range. For example, in the case of a slope plus gravity, the object is stuck to the slope and, when attached to a taut string, it cannot move in an outwards direction to make the string any 'tauter'. The constraint eliminates all displacements in that direction; that is, the velocity in the direction of the constraint is limited to 0, so that the constraint forces do not perform work on the system. For a mechanical system, constraint forces eliminate movement in directions that characterize the constraint. Thus the virtual work done by the forces of constraint is zero, a result which is only true if friction forces are excluded. Fixed, frictionless constraint forces do not perform work on the system, as the angle between the motion and the constraint forces is always 90°. Examples of workless constraints are: rigid interconnections between particles, sliding motion on a frictionless surface, and rolling contact without slipping. For example, in a pulley system like the Atwood machine, the internal forces on the rope and at the supporting pulley do no work on the system. Therefore, work need only be computed for the gravitational forces acting on the bodies. Another example is the centripetal force exerted inwards by a string on a ball in uniform circular motion: it constrains the ball to circular motion, restricting its movement away from the centre of the circle. This force does zero work because it is perpendicular to the velocity of the ball. The magnetic force on a charged particle is F = qv × B, where q is the charge, v is the velocity of the particle, and B is the magnetic field. The result of a cross product is always perpendicular to both of the original vectors, so F ⊥ v. The dot product of two perpendicular vectors is always zero, so the rate of work F ⋅ v is zero, and the magnetic force does not do work. It can change the direction of motion but never change the speed. == Mathematical calculation == For moving objects, the quantity of work/time (power) is integrated along the trajectory of the point of application of the force. Thus, at any instant, the rate of the work done by a force (measured in joules/second, or watts) is the scalar product of the force (a vector), and the velocity vector of the point of application. This scalar product of force and velocity is known as instantaneous power.
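As a quick check of the statement above that the magnetic force does no work, the following sketch (an editorial addition with purely illustrative values, not part of the original article) evaluates the instantaneous power F ⋅ v for F = qv × B:

```python
import numpy as np

q = 1.6e-19                             # particle charge (C), illustrative value
v = np.array([2.0e5, -1.0e5, 3.0e4])    # particle velocity (m/s), illustrative
B = np.array([0.0, 0.0, 1.2])           # magnetic field (T), illustrative

F = q * np.cross(v, B)                  # magnetic force F = q v x B
P = np.dot(F, v)                        # instantaneous power delivered by that force
print(P)                                # about 0: F is perpendicular to v, so no work is done
```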
Just as velocities may be integrated over time to obtain a total distance, by the fundamental theorem of calculus, the total work along a path is similarly the time-integral of instantaneous power applied along the trajectory of the point of application. Work is the result of a force on a point that follows a curve X, with a velocity v, at each instant. The small amount of work δW that occurs over an instant of time dt is calculated as δ W = F ⋅ d s = F ⋅ v d t {\displaystyle \delta W=\mathbf {F} \cdot d\mathbf {s} =\mathbf {F} \cdot \mathbf {v} dt} where the F ⋅ v is the power over the instant dt. The sum of these small amounts of work over the trajectory of the point yields the work, W = ∫ t 1 t 2 F ⋅ v d t = ∫ t 1 t 2 F ⋅ d s d t d t = ∫ C F ⋅ d s , {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot \mathbf {v} \,dt=\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot {\tfrac {d\mathbf {s} }{dt}}\,dt=\int _{C}\mathbf {F} \cdot d\mathbf {s} ,} where C is the trajectory from x(t1) to x(t2). This integral is computed along the trajectory of the particle, and is therefore said to be path dependent. If the force is always directed along this line, and the magnitude of the force is F, then this integral simplifies to W = ∫ C F d s {\displaystyle W=\int _{C}F\,ds} where s is displacement along the line. If F is constant, in addition to being directed along the line, then the integral simplifies further to W = ∫ C F d s = F ∫ C d s = F s {\displaystyle W=\int _{C}F\,ds=F\int _{C}ds=Fs} where s is the displacement of the point along the line. This calculation can be generalized for a constant force that is not directed along the line, followed by the particle. In this case the dot product F ⋅ ds = F cos θ ds, where θ is the angle between the force vector and the direction of movement, that is W = ∫ C F ⋅ d s = F s cos ⁡ θ . {\displaystyle W=\int _{C}\mathbf {F} \cdot d\mathbf {s} =Fs\cos \theta .} When a force component is perpendicular to the displacement of the object (such as when a body moves in a circular path under a central force), no work is done, since the cosine of 90° is zero. Thus, no work can be performed by gravity on a planet with a circular orbit (this is ideal, as all orbits are slightly elliptical). Also, no work is done on a body moving circularly at a constant speed while constrained by mechanical force, such as moving at constant speed in a frictionless ideal centrifuge. === Work done by a variable force === Calculating the work as "force times straight path segment" would only apply in the most simple of circumstances, as noted above. If force is changing, or if the body is moving along a curved path, possibly rotating and not necessarily rigid, then only the path of the application point of the force is relevant for the work done, and only the component of the force parallel to the application point velocity is doing work (positive work when in the same direction, and negative when in the opposite direction of the velocity). This component of force can be described by the scalar quantity called scalar tangential component (F cos(θ), where θ is the angle between the force and the velocity). And then the most general definition of work can be formulated as follows: Thus, the work done for a variable force can be expressed as a definite integral of force over displacement. If the displacement as a variable of time is given by ∆x(t), then work done by the variable force from t1 to t2 is: W = ∫ t 1 t 2 F ( t ) ⋅ v ( t ) d t = ∫ t 1 t 2 P ( t ) d t . 
{\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {F} (t)\cdot \mathbf {v} (t)dt=\int _{t_{1}}^{t_{2}}P(t)dt.} Thus, the work done for a variable force can be expressed as a definite integral of power over time. === Torque and rotation === A force couple results from equal and opposite forces, acting on two different points of a rigid body. The sum (resultant) of these forces may cancel, but their effect on the body is the couple or torque T. The work of the torque is calculated as δ W = T ⋅ ω d t , {\displaystyle \delta W=\mathbf {T} \cdot {\boldsymbol {\omega }}\,dt,} where the T ⋅ ω is the power over the instant dt. The sum of these small amounts of work over the trajectory of the rigid body yields the work, W = ∫ t 1 t 2 T ⋅ ω d t . {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {T} \cdot {\boldsymbol {\omega }}\,dt.} This integral is computed along the trajectory of the rigid body with an angular velocity ω that varies with time, and is therefore said to be path dependent. If the angular velocity vector maintains a constant direction, then it takes the form, ω = ϕ ˙ S , {\displaystyle {\boldsymbol {\omega }}={\dot {\phi }}\mathbf {S} ,} where ϕ {\displaystyle \phi } is the angle of rotation about the constant unit vector S. In this case, the work of the torque becomes, W = ∫ t 1 t 2 T ⋅ ω d t = ∫ t 1 t 2 T ⋅ S d ϕ d t d t = ∫ C T ⋅ S d ϕ , {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {T} \cdot {\boldsymbol {\omega }}\,dt=\int _{t_{1}}^{t_{2}}\mathbf {T} \cdot \mathbf {S} {\frac {d\phi }{dt}}dt=\int _{C}\mathbf {T} \cdot \mathbf {S} \,d\phi ,} where C is the trajectory from ϕ ( t 1 ) {\displaystyle \phi (t_{1})} to ϕ ( t 2 ) {\displaystyle \phi (t_{2})} . This integral depends on the rotational trajectory ϕ ( t ) {\displaystyle \phi (t)} , and is therefore path-dependent. If the torque τ {\displaystyle \tau } is aligned with the angular velocity vector so that, T = τ S , {\displaystyle \mathbf {T} =\tau \mathbf {S} ,} and both the torque and angular velocity are constant, then the work takes the form, W = ∫ t 1 t 2 τ ϕ ˙ d t = τ ( ϕ 2 − ϕ 1 ) . {\displaystyle W=\int _{t_{1}}^{t_{2}}\tau {\dot {\phi }}\,dt=\tau (\phi _{2}-\phi _{1}).} This result can be understood more simply by considering the torque as arising from a force of constant magnitude F, being applied perpendicularly to a lever arm at a distance r {\displaystyle r} , as shown in the figure. This force will act through the distance along the circular arc l = s = r ϕ {\displaystyle l=s=r\phi } , so the work done is W = F s = F r ϕ . {\displaystyle W=Fs=Fr\phi .} Introduce the torque τ = Fr, to obtain W = F r ϕ = τ ϕ , {\displaystyle W=Fr\phi =\tau \phi ,} as presented above. Notice that only the component of torque in the direction of the angular velocity vector contributes to the work. == Work and potential energy == The scalar product of a force F and the velocity v of its point of application defines the power input to a system at an instant of time. Integration of this power over the trajectory of the point of application, C = x(t), defines the work input to the system by the force. === Path dependence === Therefore, the work done by a force F on an object that travels along a curve C is given by the line integral: W = ∫ C F ⋅ d x = ∫ t 1 t 2 F ⋅ v d t , {\displaystyle W=\int _{C}\mathbf {F} \cdot d\mathbf {x} =\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot \mathbf {v} dt,} where dx(t) defines the trajectory C and v is the velocity along this trajectory. 
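The line integral just written can also be evaluated numerically. A minimal sketch follows, assuming an arbitrary constant force and an arbitrary curved path chosen only for illustration (not taken from the article); it integrates the instantaneous power F ⋅ v over time and compares the result with F ⋅ (x(t2) − x(t1)), which is the exact work of a constant force:

```python
import numpy as np

# Constant force and an arbitrary curved path x(t); both chosen only for illustration.
F = np.array([3.0, 0.0, -1.0])                                # constant force (N)
t = np.linspace(0.0, 2.0, 2001)                               # time samples on [t1, t2]
x = np.column_stack((np.sin(t), t**2, t))                     # trajectory x(t) in metres
v = np.column_stack((np.cos(t), 2.0 * t, np.ones_like(t)))    # velocity dx/dt

power = v @ F                                                 # instantaneous power F . v
W_numerical = np.sum(0.5 * (power[1:] + power[:-1]) * np.diff(t))   # trapezoid rule for the time integral
W_constant_force = F @ (x[-1] - x[0])                         # F . (x(t2) - x(t1)), exact for constant F

print(W_numerical, W_constant_force)                          # both approximately 0.73 J
```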
In general this integral requires that the path along which the velocity is defined, so the evaluation of work is said to be path dependent. The time derivative of the integral for work yields the instantaneous power, d W d t = P ( t ) = F ⋅ v . {\displaystyle {\frac {dW}{dt}}=P(t)=\mathbf {F} \cdot \mathbf {v} .} === Path independence === If the work for an applied force is independent of the path, then the work done by the force, by the gradient theorem, defines a potential function which is evaluated at the start and end of the trajectory of the point of application. This means that there is a potential function U(x), that can be evaluated at the two points x(t1) and x(t2) to obtain the work over any trajectory between these two points. It is tradition to define this function with a negative sign so that positive work is a reduction in the potential, that is W = ∫ C F ⋅ d x = ∫ x ( t 1 ) x ( t 2 ) F ⋅ d x = U ( x ( t 1 ) ) − U ( x ( t 2 ) ) . {\displaystyle W=\int _{C}\mathbf {F} \cdot d\mathbf {x} =\int _{\mathbf {x} (t_{1})}^{\mathbf {x} (t_{2})}\mathbf {F} \cdot d\mathbf {x} =U(\mathbf {x} (t_{1}))-U(\mathbf {x} (t_{2})).} The function U(x) is called the potential energy associated with the applied force. The force derived from such a potential function is said to be conservative. Examples of forces that have potential energies are gravity and spring forces. In this case, the gradient of work yields ∇ W = − ∇ U = − ( ∂ U ∂ x , ∂ U ∂ y , ∂ U ∂ z ) = F , {\displaystyle \nabla W=-\nabla U=-\left({\frac {\partial U}{\partial x}},{\frac {\partial U}{\partial y}},{\frac {\partial U}{\partial z}}\right)=\mathbf {F} ,} and the force F is said to be "derivable from a potential." Because the potential U defines a force F at every point x in space, the set of forces is called a force field. The power applied to a body by a force field is obtained from the gradient of the work, or potential, in the direction of the velocity V of the body, that is P ( t ) = − ∇ U ⋅ v = F ⋅ v . {\displaystyle P(t)=-\nabla U\cdot \mathbf {v} =\mathbf {F} \cdot \mathbf {v} .} === Work by gravity === In the absence of other forces, gravity results in a constant downward acceleration of every freely moving object. Near Earth's surface the acceleration due to gravity is g = 9.8 m⋅s−2 and the gravitational force on an object of mass m is Fg = mg. It is convenient to imagine this gravitational force concentrated at the center of mass of the object. If an object with weight mg is displaced upwards or downwards a vertical distance y2 − y1, the work W done on the object is: W = F g ( y 2 − y 1 ) = F g Δ y = m g Δ y {\displaystyle W=F_{g}(y_{2}-y_{1})=F_{g}\Delta y=mg\Delta y} where Fg is weight (pounds in imperial units, and newtons in SI units), and Δy is the change in height y. Notice that the work done by gravity depends only on the vertical movement of the object. The presence of friction does not affect the work done on the object by its weight. ==== Gravity in 3D space ==== The force of gravity exerted by a mass M on another mass m is given by F = − G M m r 2 r ^ = − G M m r 3 r , {\displaystyle \mathbf {F} =-{\frac {GMm}{r^{2}}}{\hat {\mathbf {r} }}=-{\frac {GMm}{r^{3}}}\mathbf {r} ,} where r is the position vector from M to m and r̂ is the unit vector in the direction of r. Let the mass m move at the velocity v; then the work of gravity on this mass as it moves from position r(t1) to r(t2) is given by W = − ∫ r ( t 1 ) r ( t 2 ) G M m r 3 r ⋅ d r = − ∫ t 1 t 2 G M m r 3 r ⋅ v d t . 
{\displaystyle W=-\int _{\mathbf {r} (t_{1})}^{\mathbf {r} (t_{2})}{\frac {GMm}{r^{3}}}\mathbf {r} \cdot d\mathbf {r} =-\int _{t_{1}}^{t_{2}}{\frac {GMm}{r^{3}}}\mathbf {r} \cdot \mathbf {v} \,dt.} Notice that the position and velocity of the mass m are given by r = r e r , v = d r d t = r ˙ e r + r θ ˙ e t , {\displaystyle \mathbf {r} =r\mathbf {e} _{r},\qquad \mathbf {v} ={\frac {d\mathbf {r} }{dt}}={\dot {r}}\mathbf {e} _{r}+r{\dot {\theta }}\mathbf {e} _{t},} where er and et are the radial and tangential unit vectors directed relative to the vector from M to m, and we use the fact that d e r / d t = θ ˙ e t . {\displaystyle d\mathbf {e} _{r}/dt={\dot {\theta }}\mathbf {e} _{t}.} Use this to simplify the formula for work of gravity to, W = − ∫ t 1 t 2 G m M r 3 ( r e r ) ⋅ ( r ˙ e r + r θ ˙ e t ) d t = − ∫ t 1 t 2 G m M r 3 r r ˙ d t = G M m r ( t 2 ) − G M m r ( t 1 ) . {\displaystyle W=-\int _{t_{1}}^{t_{2}}{\frac {GmM}{r^{3}}}(r\mathbf {e} _{r})\cdot \left({\dot {r}}\mathbf {e} _{r}+r{\dot {\theta }}\mathbf {e} _{t}\right)dt=-\int _{t_{1}}^{t_{2}}{\frac {GmM}{r^{3}}}r{\dot {r}}dt={\frac {GMm}{r(t_{2})}}-{\frac {GMm}{r(t_{1})}}.} This calculation uses the fact that d d t r − 1 = − r − 2 r ˙ = − r ˙ r 2 . {\displaystyle {\frac {d}{dt}}r^{-1}=-r^{-2}{\dot {r}}=-{\frac {\dot {r}}{r^{2}}}.} The function U = − G M m r , {\displaystyle U=-{\frac {GMm}{r}},} is the gravitational potential function, also known as gravitational potential energy. The negative sign follows the convention that work is gained from a loss of potential energy. === Work by a spring === Consider a spring that exerts a horizontal force F = (−kx, 0, 0) that is proportional to its deflection in the x direction independent of how a body moves. The work of this spring on a body moving along the space with the curve X(t) = (x(t), y(t), z(t)), is calculated using its velocity, v = (vx, vy, vz), to obtain W = ∫ 0 t F ⋅ v d t = − ∫ 0 t k x v x d t = − 1 2 k x 2 . {\displaystyle W=\int _{0}^{t}\mathbf {F} \cdot \mathbf {v} dt=-\int _{0}^{t}kxv_{x}dt=-{\frac {1}{2}}kx^{2}.} For convenience, consider contact with the spring occurs at t = 0, then the integral of the product of the distance x and the x-velocity, xvxdt, over time t is ⁠1/2⁠x2. The work is the product of the distance times the spring force, which is also dependent on distance; hence the x2 result. === Work by a gas === The work W {\displaystyle W} done by a body of gas on its surroundings is: W = ∫ a b P d V {\displaystyle W=\int _{a}^{b}P\,dV} where P is pressure, V is volume, and a and b are initial and final volumes. == Work–energy principle == The principle of work and kinetic energy (also known as the work–energy principle) states that the work done by all forces acting on a particle (the work of the resultant force) equals the change in the kinetic energy of the particle. That is, the work W done by the resultant force on a particle equals the change in the particle's kinetic energy E k {\displaystyle E_{\text{k}}} , W = Δ E k = 1 2 m v 2 2 − 1 2 m v 1 2 {\displaystyle W=\Delta E_{\text{k}}={\frac {1}{2}}mv_{2}^{2}-{\frac {1}{2}}mv_{1}^{2}} where v 1 {\displaystyle v_{1}} and v 2 {\displaystyle v_{2}} are the speeds of the particle before and after the work is done, and m is its mass. The derivation of the work–energy principle begins with Newton's second law of motion and the resultant force on a particle. Computation of the scalar product of the force with the velocity of the particle evaluates the instantaneous power added to the system. 
(Constraints define the direction of movement of the particle by ensuring there is no component of velocity in the direction of the constraint force. This also means the constraint forces do not add to the instantaneous power.) The time integral of this scalar equation yields work from the instantaneous power, and kinetic energy from the scalar product of acceleration with velocity. The fact that the work–energy principle eliminates the constraint forces underlies Lagrangian mechanics. This section focuses on the work–energy principle as it applies to particle dynamics. In more general systems work can change the potential energy of a mechanical device, the thermal energy in a thermal system, or the electrical energy in an electrical device. Work transfers energy from one place to another or one form to another. === Derivation for a particle moving along a straight line === In the case the resultant force F is constant in both magnitude and direction, and parallel to the velocity of the particle, the particle is moving with constant acceleration a along a straight line. The relation between the net force and the acceleration is given by the equation F = ma (Newton's second law), and the particle displacement s can be expressed by the equation s = v 2 2 − v 1 2 2 a {\displaystyle s={\frac {v_{2}^{2}-v_{1}^{2}}{2a}}} which follows from v 2 2 = v 1 2 + 2 a s {\displaystyle v_{2}^{2}=v_{1}^{2}+2as} (see Equations of motion). The work of the net force is calculated as the product of its magnitude and the particle displacement. Substituting the above equations, one obtains: W = F s = m a s = m a v 2 2 − v 1 2 2 a = 1 2 m v 2 2 − 1 2 m v 1 2 = Δ E k {\displaystyle W=Fs=mas=ma{\frac {v_{2}^{2}-v_{1}^{2}}{2a}}={\frac {1}{2}}mv_{2}^{2}-{\frac {1}{2}}mv_{1}^{2}=\Delta E_{\text{k}}} Other derivation: W = F s = m a s = m v 2 2 − v 1 2 2 s s = 1 2 m v 2 2 − 1 2 m v 1 2 = Δ E k {\displaystyle W=Fs=mas=m{\frac {v_{2}^{2}-v_{1}^{2}}{2s}}s={\frac {1}{2}}mv_{2}^{2}-{\frac {1}{2}}mv_{1}^{2}=\Delta E_{\text{k}}} In the general case of rectilinear motion, when the net force F is not constant in magnitude, but is constant in direction, and parallel to the velocity of the particle, the work must be integrated along the path of the particle: W = ∫ t 1 t 2 F ⋅ v d t = ∫ t 1 t 2 F v d t = ∫ t 1 t 2 m a v d t = m ∫ t 1 t 2 v d v d t d t = m ∫ v 1 v 2 v d v = 1 2 m ( v 2 2 − v 1 2 ) . {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot \mathbf {v} dt=\int _{t_{1}}^{t_{2}}F\,v\,dt=\int _{t_{1}}^{t_{2}}ma\,v\,dt=m\int _{t_{1}}^{t_{2}}v\,{\frac {dv}{dt}}\,dt=m\int _{v_{1}}^{v_{2}}v\,dv={\tfrac {1}{2}}m\left(v_{2}^{2}-v_{1}^{2}\right).} === General derivation of the work–energy principle for a particle === For any net force acting on a particle moving along any curvilinear path, it can be demonstrated that its work equals the change in the kinetic energy of the particle by a simple derivation analogous to the equation above. 
It is known as the work–energy principle: W = ∫ t 1 t 2 F ⋅ v d t = m ∫ t 1 t 2 a ⋅ v d t = m 2 ∫ t 1 t 2 d v 2 d t d t = m 2 ∫ v 1 2 v 2 2 d v 2 = m v 2 2 2 − m v 1 2 2 = Δ E k {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot \mathbf {v} dt=m\int _{t_{1}}^{t_{2}}\mathbf {a} \cdot \mathbf {v} dt={\frac {m}{2}}\int _{t_{1}}^{t_{2}}{\frac {dv^{2}}{dt}}\,dt={\frac {m}{2}}\int _{v_{1}^{2}}^{v_{2}^{2}}dv^{2}={\frac {mv_{2}^{2}}{2}}-{\frac {mv_{1}^{2}}{2}}=\Delta E_{\text{k}}} The identity a ⋅ v = 1 2 d v 2 d t {\textstyle \mathbf {a} \cdot \mathbf {v} ={\frac {1}{2}}{\frac {dv^{2}}{dt}}} requires some algebra. From the identity v 2 = v ⋅ v {\textstyle v^{2}=\mathbf {v} \cdot \mathbf {v} } and definition a = d v d t {\textstyle \mathbf {a} ={\frac {d\mathbf {v} }{dt}}} it follows d v 2 d t = d ( v ⋅ v ) d t = d v d t ⋅ v + v ⋅ d v d t = 2 d v d t ⋅ v = 2 a ⋅ v . {\displaystyle {\frac {dv^{2}}{dt}}={\frac {d(\mathbf {v} \cdot \mathbf {v} )}{dt}}={\frac {d\mathbf {v} }{dt}}\cdot \mathbf {v} +\mathbf {v} \cdot {\frac {d\mathbf {v} }{dt}}=2{\frac {d\mathbf {v} }{dt}}\cdot \mathbf {v} =2\mathbf {a} \cdot \mathbf {v} .} The remaining part of the above derivation is just simple calculus, same as in the preceding rectilinear case. === Derivation for a particle in constrained movement === In particle dynamics, a formula equating work applied to a system to its change in kinetic energy is obtained as a first integral of Newton's second law of motion. It is useful to notice that the resultant force used in Newton's laws can be separated into forces that are applied to the particle and forces imposed by constraints on the movement of the particle. Remarkably, the work of a constraint force is zero, therefore only the work of the applied forces need be considered in the work–energy principle. To see this, consider a particle P that follows the trajectory X(t) with a force F acting on it. Isolate the particle from its environment to expose constraint forces R, then Newton's Law takes the form F + R = m X ¨ , {\displaystyle \mathbf {F} +\mathbf {R} =m{\ddot {\mathbf {X} }},} where m is the mass of the particle. ==== Vector formulation ==== Note that n dots above a vector indicates its nth time derivative. The scalar product of each side of Newton's law with the velocity vector yields F ⋅ X ˙ = m X ¨ ⋅ X ˙ , {\displaystyle \mathbf {F} \cdot {\dot {\mathbf {X} }}=m{\ddot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }},} because the constraint forces are perpendicular to the particle velocity. Integrate this equation along its trajectory from the point X(t1) to the point X(t2) to obtain ∫ t 1 t 2 F ⋅ X ˙ d t = m ∫ t 1 t 2 X ¨ ⋅ X ˙ d t . {\displaystyle \int _{t_{1}}^{t_{2}}\mathbf {F} \cdot {\dot {\mathbf {X} }}dt=m\int _{t_{1}}^{t_{2}}{\ddot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}dt.} The left side of this equation is the work of the applied force as it acts on the particle along the trajectory from time t1 to time t2. This can also be written as W = ∫ t 1 t 2 F ⋅ X ˙ d t = ∫ X ( t 1 ) X ( t 2 ) F ⋅ d X . {\displaystyle W=\int _{t_{1}}^{t_{2}}\mathbf {F} \cdot {\dot {\mathbf {X} }}dt=\int _{\mathbf {X} (t_{1})}^{\mathbf {X} (t_{2})}\mathbf {F} \cdot d\mathbf {X} .} This integral is computed along the trajectory X(t) of the particle and is therefore path dependent. 
The right side of the first integral of Newton's equations can be simplified using the following identity 1 2 d d t ( X ˙ ⋅ X ˙ ) = X ¨ ⋅ X ˙ , {\displaystyle {\frac {1}{2}}{\frac {d}{dt}}({\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }})={\ddot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }},} (see product rule for derivation). Now it is integrated explicitly to obtain the change in kinetic energy, Δ K = m ∫ t 1 t 2 X ¨ ⋅ X ˙ d t = m 2 ∫ t 1 t 2 d d t ( X ˙ ⋅ X ˙ ) d t = m 2 X ˙ ⋅ X ˙ ( t 2 ) − m 2 X ˙ ⋅ X ˙ ( t 1 ) = 1 2 m Δ v 2 , {\displaystyle \Delta K=m\int _{t_{1}}^{t_{2}}{\ddot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}dt={\frac {m}{2}}\int _{t_{1}}^{t_{2}}{\frac {d}{dt}}({\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }})dt={\frac {m}{2}}{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}(t_{2})-{\frac {m}{2}}{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}(t_{1})={\frac {1}{2}}m\Delta \mathbf {v} ^{2},} where the kinetic energy of the particle is defined by the scalar quantity, K = m 2 X ˙ ⋅ X ˙ = 1 2 m v 2 {\displaystyle K={\frac {m}{2}}{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}={\frac {1}{2}}m{\mathbf {v} ^{2}}} ==== Tangential and normal components ==== It is useful to resolve the velocity and acceleration vectors into tangential and normal components along the trajectory X(t), such that X ˙ = v T and X ¨ = v ˙ T + v 2 κ N , {\displaystyle {\dot {\mathbf {X} }}=v\mathbf {T} \quad {\text{and}}\quad {\ddot {\mathbf {X} }}={\dot {v}}\mathbf {T} +v^{2}\kappa \mathbf {N} ,} where v = | X ˙ | = X ˙ ⋅ X ˙ . {\displaystyle v=|{\dot {\mathbf {X} }}|={\sqrt {{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}}}.} Then, the scalar product of velocity with acceleration in Newton's second law takes the form Δ K = m ∫ t 1 t 2 v ˙ v d t = m 2 ∫ t 1 t 2 d d t v 2 d t = m 2 v 2 ( t 2 ) − m 2 v 2 ( t 1 ) , {\displaystyle \Delta K=m\int _{t_{1}}^{t_{2}}{\dot {v}}v\,dt={\frac {m}{2}}\int _{t_{1}}^{t_{2}}{\frac {d}{dt}}v^{2}\,dt={\frac {m}{2}}v^{2}(t_{2})-{\frac {m}{2}}v^{2}(t_{1}),} where the kinetic energy of the particle is defined by the scalar quantity, K = m 2 v 2 = m 2 X ˙ ⋅ X ˙ . {\displaystyle K={\frac {m}{2}}v^{2}={\frac {m}{2}}{\dot {\mathbf {X} }}\cdot {\dot {\mathbf {X} }}.} The result is the work–energy principle for particle dynamics, W = Δ K . {\displaystyle W=\Delta K.} This derivation can be generalized to arbitrary rigid body systems. === Moving in a straight line (skid to a stop) === Consider the case of a vehicle moving along a straight horizontal trajectory under the action of a driving force and gravity that sum to F. The constraint forces between the vehicle and the road define R, and we have F + R = m X ¨ . {\displaystyle \mathbf {F} +\mathbf {R} =m{\ddot {\mathbf {X} }}.} For convenience let the trajectory be along the X-axis, so X = (d, 0) and the velocity is V = (v, 0), then R ⋅ V = 0, and F ⋅ V = Fxv, where Fx is the component of F along the X-axis, so F x v = m v ˙ v . {\displaystyle F_{x}v=m{\dot {v}}v.} Integration of both sides yields ∫ t 1 t 2 F x v d t = m 2 v 2 ( t 2 ) − m 2 v 2 ( t 1 ) . {\displaystyle \int _{t_{1}}^{t_{2}}F_{x}vdt={\frac {m}{2}}v^{2}(t_{2})-{\frac {m}{2}}v^{2}(t_{1}).} If Fx is constant along the trajectory, then the integral of velocity is distance, so F x ( d ( t 2 ) − d ( t 1 ) ) = m 2 v 2 ( t 2 ) − m 2 v 2 ( t 1 ) . {\displaystyle F_{x}(d(t_{2})-d(t_{1}))={\frac {m}{2}}v^{2}(t_{2})-{\frac {m}{2}}v^{2}(t_{1}).} As an example consider a car skidding to a stop, where k is the coefficient of friction and w is the weight of the car. 
Then the force along the trajectory is Fx = −kw. The velocity v of the car can be determined from the length s of the skid using the work–energy principle, k w s = w 2 g v 2 , or v = 2 k s g . {\displaystyle kws={\frac {w}{2g}}v^{2},\quad {\text{or}}\quad v={\sqrt {2ksg}}.} This formula uses the fact that the mass of the vehicle is m = w/g. === Coasting down an inclined surface (gravity racing) === Consider the case of a vehicle that starts at rest and coasts down an inclined surface (such as mountain road), the work–energy principle helps compute the minimum distance that the vehicle travels to reach a velocity V, of say 60 mph (88 fps). Rolling resistance and air drag will slow the vehicle down so the actual distance will be greater than if these forces are neglected. Let the trajectory of the vehicle following the road be X(t) which is a curve in three-dimensional space. The force acting on the vehicle that pushes it down the road is the constant force of gravity F = (0, 0, w), while the force of the road on the vehicle is the constraint force R. Newton's second law yields, F + R = m X ¨ . {\displaystyle \mathbf {F} +\mathbf {R} =m{\ddot {\mathbf {X} }}.} The scalar product of this equation with the velocity, V = (vx, vy, vz), yields w v z = m V ˙ V , {\displaystyle wv_{z}=m{\dot {V}}V,} where V is the magnitude of V. The constraint forces between the vehicle and the road cancel from this equation because R ⋅ V = 0, which means they do no work. Integrate both sides to obtain ∫ t 1 t 2 w v z d t = m 2 V 2 ( t 2 ) − m 2 V 2 ( t 1 ) . {\displaystyle \int _{t_{1}}^{t_{2}}wv_{z}dt={\frac {m}{2}}V^{2}(t_{2})-{\frac {m}{2}}V^{2}(t_{1}).} The weight force w is constant along the trajectory and the integral of the vertical velocity is the vertical distance, therefore, w Δ z = m 2 V 2 . {\displaystyle w\Delta z={\frac {m}{2}}V^{2}.} Recall that V(t1)=0. Notice that this result does not depend on the shape of the road followed by the vehicle. In order to determine the distance along the road assume the downgrade is 6%, which is a steep road. This means the altitude decreases 6 feet for every 100 feet traveled—for angles this small the sin and tan functions are approximately equal. Therefore, the distance s in feet down a 6% grade to reach the velocity V is at least s = Δ z 0.06 = 8.3 V 2 g , or s = 8.3 88 2 32.2 ≈ 2000 f t . {\displaystyle s={\frac {\Delta z}{0.06}}=8.3{\frac {V^{2}}{g}},\quad {\text{or}}\quad s=8.3{\frac {88^{2}}{32.2}}\approx 2000\mathrm {ft} .} This formula uses the fact that the weight of the vehicle is w = mg. == Work of forces acting on a rigid body == The work of forces acting at various points on a single rigid body can be calculated from the work of a resultant force and torque. To see this, let the forces F1, F2, ..., Fn act on the points X1, X2, ..., Xn in a rigid body. The trajectories of Xi, i = 1, ..., n are defined by the movement of the rigid body. This movement is given by the set of rotations [A(t)] and the trajectory d(t) of a reference point in the body. Let the coordinates xi i = 1, ..., n define these points in the moving rigid body's reference frame M, so that the trajectories traced in the fixed frame F are given by X i ( t ) = [ A ( t ) ] x i + d ( t ) i = 1 , … , n . 
{\displaystyle \mathbf {X} _{i}(t)=[A(t)]\mathbf {x} _{i}+\mathbf {d} (t)\quad i=1,\ldots ,n.} The velocity of the points Xi along their trajectories are V i = ω × ( X i − d ) + d ˙ , {\displaystyle \mathbf {V} _{i}={\boldsymbol {\omega }}\times (\mathbf {X} _{i}-\mathbf {d} )+{\dot {\mathbf {d} }},} where ω is the angular velocity vector obtained from the skew symmetric matrix [ Ω ] = A ˙ A T , {\displaystyle [\Omega ]={\dot {A}}A^{\mathsf {T}},} known as the angular velocity matrix. The small amount of work by the forces over the small displacements δri can be determined by approximating the displacement by δr = vδt so δ W = F 1 ⋅ V 1 δ t + F 2 ⋅ V 2 δ t + … + F n ⋅ V n δ t {\displaystyle \delta W=\mathbf {F} _{1}\cdot \mathbf {V} _{1}\delta t+\mathbf {F} _{2}\cdot \mathbf {V} _{2}\delta t+\ldots +\mathbf {F} _{n}\cdot \mathbf {V} _{n}\delta t} or δ W = ∑ i = 1 n F i ⋅ ( ω × ( X i − d ) + d ˙ ) δ t . {\displaystyle \delta W=\sum _{i=1}^{n}\mathbf {F} _{i}\cdot ({\boldsymbol {\omega }}\times (\mathbf {X} _{i}-\mathbf {d} )+{\dot {\mathbf {d} }})\delta t.} This formula can be rewritten to obtain δ W = ( ∑ i = 1 n F i ) ⋅ d ˙ δ t + ( ∑ i = 1 n ( X i − d ) × F i ) ⋅ ω δ t = ( F ⋅ d ˙ + T ⋅ ω ) δ t , {\displaystyle \delta W=\left(\sum _{i=1}^{n}\mathbf {F} _{i}\right)\cdot {\dot {\mathbf {d} }}\delta t+\left(\sum _{i=1}^{n}\left(\mathbf {X} _{i}-\mathbf {d} \right)\times \mathbf {F} _{i}\right)\cdot {\boldsymbol {\omega }}\delta t=\left(\mathbf {F} \cdot {\dot {\mathbf {d} }}+\mathbf {T} \cdot {\boldsymbol {\omega }}\right)\delta t,} where F and T are the resultant force and torque applied at the reference point d of the moving frame M in the rigid body. == References == == Bibliography == Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. ISBN 0-534-40842-7. Tipler, Paul (1991). Physics for Scientists and Engineers: Mechanics (3rd ed., extended version ed.). W. H. Freeman. ISBN 0-87901-432-6. == External links == Work–energy principle
Wikipedia/Work-energy_theorem
In physics, a body force is a force that acts throughout the volume of a body. Forces due to gravity, electric fields and magnetic fields are examples of body forces. Body forces contrast with contact forces or surface forces which are exerted to the surface of an object. Fictitious forces such as the centrifugal force, Euler force, and the Coriolis effect are other examples of body forces. == Definition == === Qualitative === A body force is simply a type of force, and so it has the same dimensions as force, [M][L][T]−2. However, it is often convenient to talk about a body force in terms of either the force per unit volume or the force per unit mass. If the force per unit volume is of interest, it is referred to as the force density throughout the system. A body force is distinct from a contact force in that the force does not require contact for transmission. Thus, common forces associated with pressure gradients and conductive and convective heat transmission are not body forces as they require contact between systems to exist. Radiation heat transfer, on the other hand, is a perfect example of a body force. More examples of common body forces include; Gravity, Electric forces acting on an object charged throughout its volume, Magnetic forces acting on currents within an object, such as the braking force that results from eddy currents, Fictitious forces (or inertial forces) can be viewed as body forces. Common inertial forces are, Centrifugal force, Coriolis force, Euler force (or transverse force), which occurs in a rotating reference frame when the rate of rotation of the frame is changing However, fictitious forces are not actually forces. Rather they are corrections to Newton's second law when it is formulated in an accelerating reference frame. (Gravity can also be considered a fictitious force in the context of General Relativity.) === Quantitative === The body force density is defined so that the volume integral (throughout a volume of interest) of it gives the total force acting throughout the body; F b o d y = ∫ V f ( r ) d V , {\displaystyle \mathbf {F} _{\mathrm {body} }=\int \limits _{V}\mathbf {f} (\mathbf {r} )\mathrm {d} V\,,} where dV is an infinitesimal volume element, and f is the external body force density field acting on the system. == Acceleration == Like any other force, a body force will cause an object to accelerate. For a non-rigid object, Newton's second law applied to a small volume element is f ( r ) = ρ ( r ) a ( r ) {\displaystyle \mathbf {f} (\mathbf {r} )=\rho (\mathbf {r} )\mathbf {a} (\mathbf {r} )} , where ρ(r) is the mass density of the substance, ƒ the force density, and a(r) is acceleration, all at point r. == The case of gravity == In the case of a body in the gravitational field on a planet surface, a(r) is nearly constant (g) and uniform. Near the Earth g = 9.81 m s − 2 {\displaystyle g=9.81{\text{ }}\mathrm {ms} ^{-2}} . In this case simply F b o d y = ∫ V ρ ( r ) g d V = ∫ V ρ ( r ) d V ⋅ g = m g {\displaystyle \mathbf {F} _{\mathrm {body} }=\int \limits _{V}\rho (\mathbf {r} )\mathbf {g} \mathrm {d} V=\int \limits _{V}\rho (\mathbf {r} )\mathrm {d} V\cdot \mathbf {g} =m\mathbf {g} } where m is the mass of the body. == See also == Action at a distance Fictitious force Force density Non-contact force Normal force Surface force == References ==
Wikipedia/Body_forces
Peridynamics is a non-local formulation of continuum mechanics that is oriented toward deformations with discontinuities, especially fractures. Originally, bond-based peridynamics was introduced, in which the internal interaction forces between a material point and all the other points with which it can interact are modeled as a central force field. This type of force field can be imagined as a mesh of bonds connecting each point of the body with every other interacting point within a certain distance, which depends on a material property called the peridynamic horizon. Later, to overcome the limitations that the bond-based framework places on the material Poisson's ratio ( 1 / 3 {\displaystyle 1/3} for plane stress and 1 / 4 {\displaystyle 1/4} for plane strain in two-dimensional configurations; 1 / 4 {\displaystyle 1/4} for three-dimensional ones), state-based peridynamics was formulated. Its characteristic feature is that the force exchanged between two points is influenced by the deformation state of all the other bonds within their interaction zones. The characteristic feature of peridynamics, which makes it different from classical local mechanics, is the presence of finite-range bonds between any two points of the material body: it is a feature that brings such formulations close to discrete meso-scale theories of matter. == Etymology == The term peridynamic, as an adjective, was proposed in the year 2000 and comes from the prefix peri-, which means all around, near, or surrounding; and the root dyna, which means force or power. The term peridynamics, as a noun, is a shortened form of the phrase peridynamic model of solid mechanics. == Purpose == A fracture is a mathematical singularity to which the classical equations of continuum mechanics cannot be applied directly. The peridynamic theory has been proposed with the purpose of mathematically modeling the formation and dynamics of fractures in elastic materials. It is founded on integral equations, in contrast with classical continuum mechanics, which is based on partial differential equations. Since partial derivatives do not exist on crack surfaces and other geometric singularities, the classical equations of continuum mechanics cannot be applied directly when such features are present in a deformation. The integral equations of the peridynamic theory remain valid at singularities and can be applied directly, because they do not require partial derivatives. The ability to apply the same equations directly at all points in a mathematical model of a deforming structure helps the peridynamic approach to avoid the need for the special techniques of fracture mechanics like xFEM. For example, in peridynamics, there is no need for a separate crack growth law based on a stress intensity factor. == Definition and basic terminology == In the context of peridynamic theory, physical bodies are treated as made up of a continuous mesh of points which can exchange long-range mutual interaction forces, up to a maximum and well-defined distance δ > 0 {\displaystyle \delta >0} : the peridynamic horizon radius. This perspective is much closer to molecular dynamics than to the mechanics of macroscopic bodies; as a consequence, it is not based on the concept of the stress tensor (which is a local concept) and drifts instead toward the notion of the pairwise force that a material point x {\displaystyle {\bf {x}}} exchanges within its peridynamic horizon.
With a Lagrangian point of view, suited for small displacements, the peridynamic horizon is considered fixed in the reference configuration and, then, deforms with the body. Consider a material body represented by Ω ⊂ R n {\displaystyle \Omega \subset \mathbb {R} ^{n}} , where n {\displaystyle n} can be either 1, 2 or 3. The body has a positive density ρ {\displaystyle \rho } . Its reference configuration at the initial time is denoted by Ω 0 ⊂ R n {\displaystyle \Omega _{0}\subset \mathbb {R} ^{n}} . It is important to note that the reference configuration can either be the stress-free configuration or a specific configuration of the body chosen as a reference. In the context of peridynamics, every point in Ω {\displaystyle \Omega } interacts with all the points x ′ {\displaystyle {\bf {x}}'} within a certain neighborhood defined by d ( x , x ′ ) ≤ δ {\displaystyle d({\bf {x}},{\bf {x}}')\leq \delta } , where δ > 0 {\displaystyle \delta >0} and d ( ⋅ , ⋅ ) {\displaystyle d(\cdot ,\cdot )} represents a suitable distance function on Ω 0 {\displaystyle \Omega _{0}} . This neighborhood is often referred to as B δ ( x ) {\displaystyle B_{\delta }({\bf {x}})} in the literature. It is commonly known as the horizon or the family of x {\displaystyle {\bf {x}}} . The kinematics of x {\displaystyle {\bf {x}}} is described in terms of its displacement from the reference position, denoted as u ( x , t ) : Ω 0 × R + → R n {\displaystyle {\bf {u}}({\bf {x}},t):\Omega _{0}\times \mathbb {R} ^{+}\rightarrow \mathbb {R} ^{n}} . Consequently, the position of x {\displaystyle {\bf {x}}} at a specific time t {\displaystyle t} is determined by y ( x , t ) := x + u ( x , t ) {\displaystyle {\bf {y}}({\bf {x}},t):={\bf {x}}+{\bf {u}}({\bf {x}},t)} . Furthermore, for each pair of interacting points, the change in the length of the bond relative to the initial configuration is tracked over time through the relative strain s ( x , x ′ , t ) {\displaystyle s({\bf {x}},{\bf {x}}',t)} , which can be expressed as: s ( x , x ′ , t ) = | u ( x ′ , t ) − u ( x , t ) | | x ′ − x | , {\displaystyle s\left({\bf {x}},{\bf {x}}',t\right)={\frac {\left|{\bf {u}}\left({\bf {x}}^{\prime },t\right)-{\bf {u}}({\bf {x}},t)\right|}{\left|{\bf {x}}^{\prime }-{\bf {x}}\right|}},} where | ⋅ | {\displaystyle |\cdot |} denotes the Euclidean norm and x ′ ∈ B δ ( x ) ∩ Ω 0 {\displaystyle {\bf {x}}'\in B_{\delta }({\bf {x}})\cap \Omega _{0}} . The interaction between any x {\displaystyle {\bf {x}}} and x ′ {\displaystyle {\bf {x'}}} is referred to as a bond. These pairwise bonds have varying lengths over time in response to the force per unit volume squared, denoted as f ≡ f ( x ′ , x , u ( x ′ ) , u ( x ) , t ) {\displaystyle {\bf {f}}\equiv {\bf {f}}({\bf {x}}',{\bf {x}},{\bf {u}}({\bf {x}}'),{\bf {u}}({\bf {x}}),t)} . This force is commonly known as the pairwise force function or peridynamic kernel, and it encompasses all the constitutive (material-dependent) properties. It describes how the internal forces depend on the deformation. It's worth noting that the dependence of u {\displaystyle {\bf {u}}} on t {\displaystyle t} has been omitted here for the sake of simplicity in notation. Additionally, an external forcing term, b ( x , t ) {\displaystyle \mathbf {b} ({\bf {x}},t)} , is introduced, which results in the following equation of motion, representing the fundamental equation of peridynamics: ρ u t t ( x , t ) = F ( x , t ) . 
{\displaystyle {\rho {\bf {u}}_{tt}({\bf {x}},t)={\bf {F}}({\bf {x}},t)}\,.} where the integral term F ( x , t ) {\displaystyle {\bf {F}}({\bf {x}},t)} is the sum of all of the internal and external per-unit-volume forces acting on x {\displaystyle {\bf {x}}} : F ( x , t ) := ∫ Ω 0 ∩ B δ ( x ) f ( x ′ , x , u ( x ′ ) , u ( x ) ) d V x ′ + b ( x , t ) . {\displaystyle {{\bf {F}}({\bf {x}},t):=\int _{\Omega _{0}\cap B_{\delta }({\bf {x}})}{\bf {f}}\left({\bf {x}}',{\bf {x}},{\bf {u}}\left({\bf {x}}'\right),{\bf {u}}({\bf {x}})\right)dV_{{\bf {x}}'}+{\bf {b}}({\bf {x}},t)}\,.} The vector valued function f {\displaystyle {\bf {f}}} is the force density that x ′ {\displaystyle {\bf {x'}}} exerts on x {\displaystyle {\bf {x}}} . This force density depends on the relative displacement and relative position vectors between x ′ {\displaystyle {\bf {x'}}} and x {\displaystyle {\bf {x}}} . The dimension of f {\displaystyle {\bf {f}}} is [ N / m 6 ] {\displaystyle [N/m^{6}]} . == Bond-based peridynamics == In this formulation of peridynamics, the kernel is determined by the nature of internal forces and physical constraints that governs the interaction between only two material points. For the sake of brevity, the following quantities are defined ξ := x ′ − x {\displaystyle {\bf {\bf {\xi }}}:={\bf {x}}'-{\bf {x}}} and η := u ( x ′ ) − u ( x ) {\displaystyle {\bf {\eta }}:={\bf {u}}({\bf {x}}')-{\bf {u}}({\bf {x}})} so that f ( x ′ − x , u ( x ′ ) − u ( x ) ) ≡ f ( ξ , η ) {\displaystyle {\bf {f}}({\bf {x}}'-{\bf {x}},{\bf {u}}({\bf {x}}')-{\bf {u}}({\bf {x}}))\equiv {\bf {{f}({\bf {\xi }},{\bf {\eta }})}}} === Actio et reactio principle === For any x {\displaystyle {\bf {x}}} and x ′ {\displaystyle {\bf {x'}}} belonging to the neighborhood B δ ( x ) {\displaystyle B_{\delta }({\bf {x}})} , the following relationship holds: f ( − η , − ξ ) = − f ( η , ξ ) {\displaystyle {\bf {f}}(-\eta ,-\xi )=-{\bf {f}}(\eta ,\xi )} . This expression reflects the principle of action and reaction, commonly known as Newton's third law. It guarantees the conservation of linear momentum in a system composed of mutually interacting particles. === Angular momentum conservation === For any x {\displaystyle {\bf {x}}} and x ′ {\displaystyle {\bf {{x}'}}} belonging to the neighborhood B δ ( x ) {\displaystyle B_{\delta }({\bf {x}})} , the following condition holds: ( ξ + η ) × f ( ξ , η ) = 0 {\displaystyle (\xi +\eta )\times {\bf {f}}(\xi ,\eta )=0} . This condition arises from considering the relative deformed ray-vector connecting x {\displaystyle {\bf {x}}} and x ′ {\displaystyle {\bf {{x}'}}} as ξ + η {\displaystyle \xi +\eta } . The condition is satisfied if and only if the pairwise force density vector has the same direction as the relative deformed ray-vector. In other words, f ( ξ , η ) = f ( ξ , η ) ( ξ + η ) {\displaystyle {\bf {f}}(\xi ,\eta )=f(\xi ,\eta )(\xi +\eta )} for all ξ {\displaystyle \xi } and η {\displaystyle \eta } , where f ( ξ , η ) {\displaystyle f(\xi ,\eta )} is a scalar-valued function. 
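The two admissibility conditions above can be checked numerically for a central pairwise force of the form f = h(|ξ + η|, |ξ|)(ξ + η). A minimal sketch follows (an editorial addition; the particular linear, spring-like choice of h is only an illustrative assumption, not a specific material model from the text):

```python
import numpy as np

# Central pairwise force f(xi, eta) = h(|xi + eta|, |xi|) (xi + eta); the particular
# "spring-like" h below is an illustrative assumption, not a specific material model.
def h(deformed_len, initial_len, c=1.0):
    return c * (deformed_len - initial_len) / (initial_len * deformed_len)

def f(xi, eta):
    r = xi + eta                              # relative deformed ray-vector
    return h(np.linalg.norm(r), np.linalg.norm(xi)) * r

xi = np.array([0.8, 0.3, -0.4])               # initial relative position x' - x
eta = np.array([0.02, -0.05, 0.01])           # relative displacement u(x') - u(x)

# Action and reaction (linear momentum): f(-xi, -eta) = -f(xi, eta)
print(np.allclose(f(-xi, -eta), -f(xi, eta)))                  # True

# Angular momentum conservation: (xi + eta) x f(xi, eta) = 0
print(np.allclose(np.cross(xi + eta, f(xi, eta)), 0.0))        # True
```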
=== Hyperelastic material === An hyperelastic material is a material with constitutive relation such that: ∫ Γ f ( ξ , η ) ⋅ d η = 0 , ∀ closed curve Γ , ∀ ξ ≠ 0 , {\displaystyle \int _{\Gamma }{\bf {f}}({\bf {\xi }},{\bf {\eta }})\cdot d{\bf {\eta }}=0\,,\quad \forall {\text{ closed curve }}\Gamma ,\ \ \ \ \forall {\bf {\xi }}\neq {\bf {{0},}}} or, equivalently, by Stokes' theorem ∇ η × f ( ξ , η ) = 0 {\displaystyle \nabla _{\bf {\eta }}\times {\bf {f}}({\bf {\xi }},{\bf {\eta }})={\bf {{0}\,}}} , ∀ ξ , η {\displaystyle \forall \,{\bf {\xi }},\,{\bf {\eta }}} and, thus, f ( ξ , η ) = ∇ η Φ ( ξ , η ) ∀ ξ , η . {\displaystyle {\bf {f}}({\bf {\xi }},{\bf {\eta }})=\nabla _{\bf {\eta }}\Phi ({\bf {\xi }},\,{\bf {\eta }})\,\forall {\bf {\xi }},\,{\bf {\eta }}\,.} In the equation above Φ ( ξ , η ) {\displaystyle \Phi ({\bf {\xi }},{\bf {\eta }})} is the scalar valued potential function in C 2 ( R n ∖ { 0 } × R n ) {\displaystyle C^{2}(\mathbb {R} ^{n}\setminus {\bf {{\{0\}}\times \mathbb {R} ^{n})}}} . Due to the necessity of satisfying angular momentum conservation, the condition below on the scalar valued function f ( ξ , η ) {\displaystyle f({\bf {\xi }},{\bf {\eta }})} follows ∂ f ( ξ , η ) ∂ η = g ( ξ , η ) ( ξ + η ) . {\displaystyle {\frac {\partial f({\bf {\xi }},{\bf {\eta }})}{\partial {\bf {\eta }}}}=g({\bf {\xi }},{\bf {\eta }})({\bf {\xi }}+{\bf {\eta }}).} where g ( ξ , η ) {\displaystyle g({\bf {\xi }},{\bf {\eta }})} is a scalar valued function. Integrating both sides of the equation, the following condition on g ( ξ , η ) {\displaystyle g({\bf {\xi }},{\bf {\eta }})} is obtained f ( ξ , η ) = h ( | ξ + η | , ξ ) ( ξ + η ) {\displaystyle {\bf {f}}({\bf {\xi }},{\bf {\eta }})=h(|{\bf {\xi }}+{\bf {\eta }}|,{\bf {\xi }})({\bf {\xi }}+{\bf {\eta }})} , for h ( | ξ + η | , ξ ) {\displaystyle h(|{\bf {\xi }}+{\bf {\eta }}|,{\bf {\xi }})} a scalar valued function. The elastic nature of f {\displaystyle {\bf {f}}} is evident: the interaction force depends only on the initial relative position between points x {\displaystyle {\bf {x}}} and x ′ {\displaystyle {\bf {x}}'} and the modulus of their relative position, | ξ + η | {\displaystyle |{\bf {\xi }}+{\bf {\eta }}|} , in the deformed configuration Ω t {\displaystyle \Omega _{t}} at time t {\displaystyle t} . Applying the isotropy hypothesis, the dependence on vector ξ {\displaystyle {\bf {\xi }}} can be substituted with a dependence on its modulus | ξ | {\displaystyle |{\bf {\xi }}|} , f ( ξ , η ) = h ( | ξ + η | , | ξ | ) ( ξ + η ) . {\displaystyle {\bf {f}}({\bf {\xi }},{\bf {\eta }})=h(|{\bf {\xi }}+{\bf {\eta }}|,|{\bf {\xi }}|)({\bf {\xi }}+{\bf {\eta }}).} Bond forces can, thus, be considered as modeling a spring net that connects each point x ∈ Ω 0 {\displaystyle {\bf {x}}\in \Omega _{0}} pairwise with x ′ ∈ B δ ( x ) ∩ Ω 0 {\displaystyle {\bf {x}}'\in B_{\delta }({\bf {x}})\cap \Omega _{0}} . 
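The potential structure f = ∇η Φ can likewise be verified numerically for a concrete bond potential. The sketch below assumes a simple quadratic bond energy Φ = (c/2)|ξ|s², with s the bond stretch (an illustrative choice, not a material model taken from the text), and compares a finite-difference gradient of Φ with the corresponding closed-form central force c s n:

```python
import numpy as np

c = 1.0   # bond stiffness constant, illustrative

def stretch(xi, eta):
    return (np.linalg.norm(xi + eta) - np.linalg.norm(xi)) / np.linalg.norm(xi)

def Phi(xi, eta):
    # assumed quadratic bond energy: (c/2) |xi| s^2
    return 0.5 * c * np.linalg.norm(xi) * stretch(xi, eta) ** 2

def f_central(xi, eta):
    # closed-form central force c s n, with n the deformed bond direction
    r = xi + eta
    return c * stretch(xi, eta) * r / np.linalg.norm(r)

def grad_eta_Phi(xi, eta, h=1e-6):
    # central finite-difference gradient of Phi with respect to eta
    g = np.zeros(3)
    for k in range(3):
        d = np.zeros(3)
        d[k] = h
        g[k] = (Phi(xi, eta + d) - Phi(xi, eta - d)) / (2.0 * h)
    return g

xi = np.array([1.0, -0.5, 0.2])
eta = np.array([0.05, 0.02, -0.01])
print(np.allclose(grad_eta_Phi(xi, eta), f_central(xi, eta), atol=1e-6))   # True
```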
=== Linear elastic material === If | η | ≪ 1 {\displaystyle |{\bf {\eta }}|\ll 1} , the peridynamic kernel can be linearised around η = 0 {\displaystyle {\bf {\eta }}={\bf {0}}} : f ( ξ , η ) ≈ f ( ξ , 0 ) + ∂ f ( ξ , η ) ∂ η | η = 0 η ; {\displaystyle {\bf {f}}({\bf {\xi }},{\bf {\eta }})\approx {\bf {f}}({\bf {\xi }},{\bf {{0})+\left.{\frac {\partial {\bf {f}}({\bf {\xi }},{\bf {\eta }})}{\partial {\bf {\eta }}}}\right|_{{\bf {\eta }}={\bf {0}}}{\bf {\eta }};}}} then, a second-order micro-modulus tensor can be defined as C ( ξ ) = ∂ f ( ξ , η ) ∂ η | η = 0 = ξ ⊗ ∂ f ( ξ , η ) ∂ η | η = 0 + f 0 I {\displaystyle {\bf {C}}({\bf {\xi }})=\left.{\frac {\partial {\bf {f}}({\bf {\xi }},{\bf {\eta }})}{\partial {\bf {\eta }}}}\right|_{{\bf {\eta }}={\bf {0}}}={\bf {\xi }}\otimes \left.{\frac {\partial f({\bf {\xi }},{\bf {\eta }})}{\partial {\bf {\eta }}}}\right|_{{\bf {\eta }}={\bf {0}}}+f_{0}I} where f 0 := f ( ξ , 0 ) {\displaystyle f_{0}:=f({\bf {\xi }},{\bf {0}})} and I {\displaystyle I} is the identity tensor. Following application of linear momentum balance, elasticity and isotropy condition, the micro-modulus tensor can be expressed in this form C ( ξ ) = λ ( | ξ | ) ξ ⊗ ξ + f 0 I . {\displaystyle {\bf {C}}({\bf {\xi }})=\lambda (|{\bf {\xi }}|){\bf {\xi }}\otimes {\bf {\xi }}+f_{0}I.} Therefore, for a linearised hyperelastic material, its peridynamic kernel holds the following structure f ( ξ , η ) ≈ f ( ξ , 0 ) + ( λ ( | ξ | ) ξ ⊗ ξ + f 0 I ) η . {\displaystyle {\bf {f}}({\bf {\xi }},{\bf {\eta }})\approx {\bf {f}}({\bf {\xi }},{\bf {0}})+\left(\lambda (|{\bf {\xi }}|){\bf {\xi }}\otimes {\bf {\xi }}+f_{0}I\right){\bf {\eta }}.} === Expressions for the peridynamic kernel === The peridynamic kernel is a versatile function that characterizes the constitutive behavior of materials within the framework of peridynamic theory. One commonly employed formulation of the kernel is used to describe a class of materials known as prototype micro-elastic brittle (PMB) materials. In the case of isotropic PMB materials, the pairwise force is assumed to be linearly proportional to the finite stretch experienced by the material, defined as s := ( | ξ + η | − | ξ | ) / | ξ | {\displaystyle s:=(|{\bf {\xi }}+{\bf {\eta }}|-|{\bf {\xi }}|)/|{\bf {\xi }}|} , so that f ( η , ξ ) = f ( | ξ + η | , | ξ | ) n , {\displaystyle \mathbf {f} ({\bf {\eta }},{\bf {\xi }})=f(|{\bf {\xi }}+{\bf {\eta }}|,|{\bf {\xi }}|){\bf {{n},}}} where n := ( ξ + η ) / | ξ + η | {\displaystyle {\bf {{n}:=({\bf {\xi }}+{\bf {\eta }})/|{\bf {\xi }}+{\bf {\eta }}|}}} and where the scalar function f {\displaystyle f} is defined as follow f = c s μ ( s , t ) = c | ξ + η | − | ξ | | ξ | μ ( s , t ) , {\displaystyle f=cs\mu (s,t)=c\;{\frac {|{\bf {\xi }}+{\bf {\eta }}|-|{\bf {\xi }}|}{|{\bf {\xi }}|}}\mu (s,t),} with μ ( s , t ) = { 1 , if s ( t ′ , ξ ) < s 0 , 0 , otherwise, for all 0 ≤ t ′ ≤ t ; {\displaystyle \mu (s,t)=\left\{{\begin{array}{ll}1\,,&{\text{ if }}s\left(t^{\prime },{\bf {\xi }}\right)<s_{0}\,,\\0\,,&{\text{ otherwise, }}\end{array}}\ \ \ \ {\text{ for all }}0\leq t^{\prime }\leq t\right.;} The constant c {\displaystyle c} is referred to as the micro-modulus constant, and the function μ ( s , t ) {\displaystyle \mu (s,t)} serves to indicate whether, at a given time t ′ ≤ t {\displaystyle t'\leq t} , the bond stretch s {\displaystyle s} associated with the pair ( x , x ′ ) {\displaystyle ({\bf {x,\,x'}})} has surpassed the critical value s 0 {\displaystyle s_{0}} . 
If the critical value is exceeded, the bond is considered broken, and a pairwise force of zero is assigned for all t ≥ t ′ {\displaystyle t\geq t'} . After a comparison between the strain energy density value obtained under isotropic extension respectively employing peridynamics and classical continuum theory framework, the physical coherent value of micro-modulus c {\displaystyle c} can be found c = 18 k π δ 4 , {\displaystyle c={\frac {18k}{\pi \delta ^{4}}},} where k {\displaystyle k} is the material bulk modulus. Following the same approach the micro-modulus constant c {\displaystyle c} can be extended to c ( ξ , δ ) {\displaystyle c({\bf {\xi }},\delta )} , where c {\displaystyle c} is now a micro-modulus function. This function provides a more detailed description of how the intensity of pairwise forces is distributed over the peridynamic horizon B δ ( x ) {\displaystyle B_{\delta }({\bf {x}})} . Intuitively, the intensity of forces decreases as the distance between x {\displaystyle {\bf {x}}} and x ′ ∈ B δ ( x ) {\displaystyle {\bf {x}}'\in B_{\delta }({\bf {x}})} increases, but the specific manner in which this decrease occurs can vary. The micro-modulus function is expressed as c ( ξ , δ ) := c ( 0 , δ ) k ( ξ , δ ) , {\displaystyle c({\bf {\xi }},\delta ):=c({\bf {{0},\delta )k({\bf {\xi }},\delta )\,,}}} where the constant c ( 0 , δ ) {\displaystyle c({\bf {{0},\delta )}}} is obtained by comparing peridynamic strain density with the classical mechanical theories; k ( ξ , δ ) {\displaystyle k({\bf {\xi }},\delta )} is a function defined on Ω 0 {\displaystyle \Omega _{0}} with the following properties (given the restrictions of momentum conservation and isotropy) { k ( ξ , δ ) = k ( − ξ , δ ) , lim ξ → 0 k ( ξ , δ ) = max ξ ∈ R n { k ( ξ , δ ) } , lim ξ → δ k ( ξ , δ ) = 0 , ∫ R n lim δ → 0 k ( ξ , δ ) d x = ∫ R n Δ ( ξ ) d x = 1 , {\displaystyle \left\{{\begin{array}{l}k({\bf {\xi }},\delta )=k(-{\bf {\xi }},\delta )\,,\\\lim _{{\bf {\xi }}\rightarrow {\bf {0}}}k({\bf {\xi }},\delta )=\max _{{\bf {\xi }}\ \in \mathbb {R} ^{n}}\{k({\bf {\xi }},\delta )\}\,,\\\lim _{{\bf {\xi }}\rightarrow \delta }k({\bf {\xi }},\delta )=0\,,\\\int _{\mathbb {R} ^{n}}\lim _{\delta \rightarrow 0}k({\bf {\xi }},\delta )d{\bf {x}}=\int _{\mathbb {R} ^{n}}\Delta ({\bf {\xi }})d{\bf {x}}=1\,,\end{array}}\right.} where Δ ( ξ ) {\displaystyle \Delta ({\bf {\xi }})} is the Dirac delta function. ==== Cylindrical micro-modulus ==== The simplest expression for the micro-modulus function is c ( 0 , δ ) k ( ξ , δ ) = c 1 B δ ( x ′ ) {\displaystyle c({\bf {{0},\delta )k({\bf {\xi }},\delta )=c{\bf {{1}_{B_{\delta }({\bf {x}}')}}}}}} , where 1 A {\displaystyle {\bf {{1}_{A}}}} : X → R {\displaystyle X\rightarrow \mathbb {R} } is the indicator function of the subset A ⊂ X {\displaystyle A\subset X} , defined as 1 A ( x ) := { 1 , x ∈ A , 0 , x ∉ A , ; {\displaystyle \mathbf {1} _{A}(x):={\begin{cases}1,&x\in A\,,\\0,&x\notin A\,,\end{cases}}\;\;;} ==== Triangular micro-modulus ==== It is characterized by k ( ξ , δ ) {\displaystyle k({\bf {\xi }},\delta )} to a be a linear function k ( ξ , δ ) = ( 1 − | ξ | δ ) 1 B δ ( x ′ ) . 
{\displaystyle k({\bf {\xi }},\delta )=\left(1-{\frac {|{\bf {\xi }}|}{\delta }}\right){\bf {{1}_{B_{\delta }({\bf {x}}')}.}}} ==== Normal micro-modulus ==== If one wants to reflects the fact that most common discrete physical systems are characterized by a Maxwell-Boltzmann distribution, in order to include this behavior in peridynamics, the following expression for k ( ξ , δ ) {\displaystyle k({\bf {\xi }},\delta )} can be utilized k ( ξ , δ ) = e − ( | ξ | / δ ) 2 1 B δ ( x ′ ) ; {\displaystyle k({\bf {\xi }},\delta )=e^{-(|{\bf {\xi }}|/\delta )^{2}}{\bf {{1}_{B_{\delta }({\bf {x}}')};}}} ==== Quartic micro-modulus ==== In the literature one can find also the following expression for the k ( ξ , δ ) {\displaystyle k({\bf {\xi }},\delta )} function k ( ξ , δ ) = ( 1 − ( ξ δ ) 2 ) 2 1 B δ ( x ′ ) . {\displaystyle k({\bf {\xi }},\delta )=\left(1-\left({\frac {\xi }{\delta }}\right)^{2}\right)^{2}{\bf {{1}_{B_{\delta }({\bf {x}}')}.}}} Overall, depending on the specific material property to be modeled, there exists a wide range of expressions for the micro-modulus and, in general, for the peridynamic kernel. The above list is, thus, not exhaustive. == Damage == Damage is incorporated in the pairwise force function by allowing bonds to break when their elongation exceeds some prescribed value. After a bond breaks, it no longer sustains any force, and the endpoints are effectively disconnected from each other. When a bond breaks, the force it was carrying is redistributed to other bonds that have not yet broken. This increased load makes it more likely that these other bonds will break. The process of bond breakage and load redistribution, leading to further breakage, is how cracks grow in the peridynamic model. Analytically, the bond breaking is specified inside the expression of the peridynamic kernel, by the function μ ( s , t ) = { 1 , if s ( t ′ , ξ ) < s 0 , 0 , otherwise, for all 0 ≤ t ′ ≤ t ; {\displaystyle \mu (s,t)=\left\{{\begin{array}{ll}1\,,&{\text{ if }}s\left(t^{\prime },{\bf {\xi }}\right)<s_{0}\,,\\0\,,&{\text{ otherwise, }}\end{array}}\ \ \ \ {\text{ for all }}0\leq t^{\prime }\leq t\right.;} If the graph of f ( s , t ) {\displaystyle {\bf {f}}(s,t)} versus bond stretching s {\displaystyle s} is plotted, the action of the bond breaking function μ {\displaystyle \mu } in the fracture formation is clear. However, not only abrupt fracture can be modeled in the peridynamic framework and more general expressions for μ {\displaystyle \mu } can be employed. == State-based peridynamics == The theory described above assumes that each peridynamic bond responds independently of all the others. This is an oversimplification for most materials and leads to restrictions on the types of materials that can be modeled. In particular, this assumption implies that any isotropic linear elastic solid is restricted to a Poisson ratio of 1/4. To address this lack of generality, the idea of peridynamic states was introduced. This framework allows the force density in each bond to depend on the stretches in all the bonds connected to its endpoints, in addition to its own stretch. For example, the force in a bond could depend on the net volume changes at the endpoints. The effect of this volume change, relative to the effect of the bond stretch, determines the Poisson ratio. With peridynamic states, any material that can be modeled within the standard theory of continuum mechanics can be modeled as a peridynamic material, while retaining the advantages of the peridynamic theory for fracture. 
Mathematically, the internal and external force term F ( x , t ) := ∫ Ω 0 ∩ B δ ( x ) f ( x ′ , x , u ( x ′ ) , u ( x ) ) d V x ′ + b ( x , t ) {\displaystyle {{\bf {F}}({\bf {x}},t):=\int _{\Omega _{0}\cap B_{\delta }({\bf {x}})}{\bf {f}}\left({\bf {x}}',{\bf {x}},{\bf {u}}\left({\bf {x}}'\right),{\bf {u}}({\bf {x}})\right)dV_{{\bf {x}}'}+{\bf {b}}({\bf {x}},t)}\,} used in the bond-based formulation is replaced by F ( x , t ) := ∫ B δ ( x ) { T _ [ x , t ] ⟨ x ′ − x ⟩ − T _ [ x ′ , t ] ⟨ x − x ′ ⟩ } d V x ′ + b ( x , t ) , {\displaystyle {\bf {F}}({\bf {x}},t):=\int _{B_{\delta }({\bf {x}})}\left\{{\underline {\mathbf {T} }}[\mathbf {x} ,t]\left\langle \mathbf {x} ^{\prime }-\mathbf {x} \right\rangle -{\underline {\mathbf {T} }}\left[\mathbf {x} ^{\prime },t\right]\left\langle \mathbf {x} -\mathbf {x} ^{\prime }\right\rangle \right\}dV_{\mathbf {x} ^{\prime }}+\mathbf {b} (\mathbf {x} ,t),} where T _ {\displaystyle {\underline {\mathbf {T} }}} is the force vector state field. A general m-order state A _ ⟨ ⋅ ⟩ : B δ ( x ) → L m {\displaystyle {\underline {\mathbf {A} }}\langle \cdot \rangle :B_{\delta }({\bf {x}})\rightarrow {\mathcal {L}}_{m}} is a mathematical object similar to a tensor, except that it is, in general, non-linear, non-continuous, and not finite-dimensional. Vector states are states of order equal to 2. For a so-called simple material, T _ {\displaystyle {\underline {\mathbf {T} }}} is defined as T _ := T ^ _ ( Y _ ) {\displaystyle {\underline {\mathbf {T} }}:={\underline {\mathbf {\hat {T}} }}({\underline {\mathbf {Y} }})} where T ^ _ : V → V {\displaystyle {\underline {\mathbf {\hat {T}} }}:{\mathcal {V}}\rightarrow {\mathcal {V}}} is a Riemann-integrable function on B δ ( x ) {\displaystyle B_{\delta }({\bf {x}})} , and Y _ {\displaystyle {\underline {\mathbf {Y} }}} is called the deformation vector state field, defined by the relation Y _ [ x , t ] ⟨ ξ ⟩ = y ( x + ξ , t ) − y ( x , t ) ∀ x ∈ Ω 0 , ξ ∈ B δ ( x ) , t ≥ 0 {\displaystyle {\underline {\mathbf {Y} }}[\mathbf {x} ,t]\langle {\boldsymbol {\xi }}\rangle =\mathbf {y} (\mathbf {x} +{\boldsymbol {\xi }},t)-\mathbf {y} (\mathbf {x} ,t)\quad \forall \mathbf {x} \in \Omega _{0},\xi \in B_{\delta }({\bf {x}}),t\geq 0} thus Y _ ⟨ x ′ − x ⟩ {\displaystyle {\underline {\mathbf {Y} }}\left\langle \mathbf {x} ^{\prime }-\mathbf {x} \right\rangle } is the image of the bond x ′ − x {\displaystyle \mathbf {x} ^{\prime }-\mathbf {x} } under the deformation, such that Y _ ⟨ ξ ⟩ = 0 if and only if ξ = 0 , {\displaystyle {\underline {\mathbf {Y} }}\langle {\boldsymbol {\xi }}\rangle =\mathbf {0} {\text{ if and only if }}{\boldsymbol {\xi }}=\mathbf {0} ,} which means that two distinct particles never occupy the same point as the deformation progresses. It can be proved that the balance of linear momentum follows from the definition of F ( x , t ) {\displaystyle {\bf {F}}({\bf {x}},t)} , while, if the constitutive relation is such that ∫ B δ ( x ) Y _ ⟨ ξ ⟩ × T _ ⟨ ξ ⟩ d V ξ = 0 ∀ Y _ ∈ V {\displaystyle \int _{B_{\delta }({\bf {x}})}{\underline {\mathbf {Y} }}\langle {\boldsymbol {\xi }}\rangle \times {\underline {\mathbf {T} }}\langle {\boldsymbol {\xi }}\rangle dV_{\boldsymbol {\xi }}=0\quad \forall {\underline {\mathbf {Y} }}\in {\mathcal {V}}} the force vector state field satisfies the balance of angular momentum. 
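The bond-based ingredients described above (micro-modulus kernels, bond stretch, and the bond-breaking function μ) can be combined into a simple meshfree force evaluation. The following Python/NumPy sketch is illustrative only: the function names, the prototype pairwise force law f = c·μ·s·e, and the use of an instantaneous (history-free) breaking criterion are simplifying assumptions, not a reference implementation of any particular peridynamics code.

```python
import numpy as np

def micro_modulus(xi_norm, delta, k_bulk, kind="cylindrical"):
    """Micro-modulus c(xi, delta) = c(0, delta) * k(xi, delta).

    c(0, delta) = 18 k / (pi delta^4) is the 3D bond-based constant quoted
    above; the kernel shapes follow the cylindrical, triangular, normal and
    quartic forms listed in the text. All values here are illustrative.
    """
    c0 = 18.0 * k_bulk / (np.pi * delta ** 4)
    r = xi_norm / delta
    shapes = {
        "cylindrical": np.ones_like(r),
        "triangular": 1.0 - r,
        "normal": np.exp(-r ** 2),
        "quartic": (1.0 - r ** 2) ** 2,
    }
    return np.where(xi_norm <= delta, c0 * shapes[kind], 0.0)

def bond_based_force(X, u, delta, k_bulk, s0, volumes, kind="cylindrical"):
    """Internal force density F(x): sum over the horizon of f(x', x) dV',
    with pairwise force f = c * mu * s * e, where s is the bond stretch,
    e the deformed bond direction, and mu the bond-breaking function
    (evaluated here on the current stretch only, ignoring bond history)."""
    F = np.zeros_like(X)
    for i in range(X.shape[0]):
        xi = X - X[i]                                  # reference bonds xi = x' - x
        xi_norm = np.linalg.norm(xi, axis=1)
        mask = (xi_norm > 0.0) & (xi_norm <= delta)    # neighbors in the horizon
        y = xi[mask] + (u - u[i])[mask]                # deformed bonds
        y_norm = np.linalg.norm(y, axis=1)
        s = (y_norm - xi_norm[mask]) / xi_norm[mask]   # bond stretch
        mu = (s < s0).astype(float)                    # broken bonds carry no force
        c = micro_modulus(xi_norm[mask], delta, k_bulk, kind)
        f = (c * mu * s)[:, None] * (y / y_norm[:, None])
        F[i] = np.sum(f * volumes[mask][:, None], axis=0)
    return F
```

For a grid of nodes X with nodal volumes and a displacement field u, bond_based_force returns the force density at every node, which can then be inserted into the peridynamic equation of motion and integrated in time.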
== Applications == The growing interest in peridynamics come from its capability to fill the gap between atomistic theories of matter and classical local continuum mechanics. It is applied effectively to micro-scale phenomena, such as crack formation and propagation, wave dispersion, intra-granular fracture. These phenomena can be described by appropriate adjustment of the peridynamic horizon radius, which is directly linked to the extent of non-local interactions between points within the material. In addition to the aforementioned research fields, peridynamics' non-local approach to discontinuities has found applications in various other areas. In geo-mechanics, it has been employed to study water-induced soil cracks, geo-material failure, rocks fragmentation, and so on. In biology, peridynamics has been used to model long-range interactions in living tissues, cellular ruptures, cracking of bio-membranes, and more. Furthermore, peridynamics has been extended to thermal diffusion theory, enabling the modeling of heat conduction in materials with discontinuities, defects, inhomogeneities, and cracks. It has also been applied to study advection-diffusion phenomena in multi-phase fluids and to construct models for transient advection-diffusion problems. With its versatility, peridynamics has been used in various multi-physics analyses, including micro-structural analysis, fatigue and heat conduction in composite materials, galvanic corrosion in metals, electricity-induced cracks in dielectric materials and more. == See also == Continuum mechanics Fracture mechanics Movable cellular automaton Molecular dynamics Non-local operator Singularity == References == == Further reading == Bobaru, Florin; Foster, John T.; Geubelle, Philippe H.; Silling, Stewart A., eds. (2016). Handbook of peridynamic modeling. Advances in applied mathematics. Boca Raton London New York: CRC Press, Taylor & Francis Group, a Chapman & Hall book. ISBN 978-1-4822-3044-4. Oterkus, Erkan; Oterkus, Selda; Madenci, Erdogan (2021-04-24). Peridynamic Modeling, Numerical Techniques, and Applications. Elsevier. ISBN 978-0-12-820441-2. Rabczuk, Timon; Ren, Huilong; Zhuang, Xiaoying (2023-02-15). Computational Methods Based on Peridynamics and Nonlocal Operators: Theory and Applications. Springer Nature. ISBN 978-3-031-20906-2. D’Elia, Marta; Li, Xingjie; Seleson, Pablo; Tian, Xiaochuan; Yu, Yue (March 2022). "A Review of Local-to-Nonlocal Coupling Methods in Nonlocal Diffusion and Nonlocal Mechanics". Journal of Peridynamics and Nonlocal Modeling. 4 (1): 1–50. arXiv:1912.06668. doi:10.1007/s42102-020-00038-7. ISSN 2522-896X. S2CID 257114051. Bobaru, Florin; Chen, Ziguang; Jafarzadeh, Siavash (2023-12-01). Corrosion Damage and Corrosion-Assisted Fracture: Peridynamic Modelling and Computations. Elsevier. ISBN 978-0-12-823174-6. == External links == Implementation of finite element and finite difference approximation of Nonlocal models Peridigm, an open-source computational peridynamics code PeriDoX open-source repository for peridynamics and its documentation PeriLab open-source repository for peridynamics written in Julia Sandia Laboratory-Peridynamics Website on peridynamics
Wikipedia/Peridynamics
Surface force denoted fs is the force that acts across an internal or external surface element in a material body. Normal forces and shear forces between objects are types of surface force. All cohesive forces and contact forces between objects are considered as surface forces. Surface force can be decomposed into two perpendicular components: normal forces and shear forces. A normal force acts normally over an area and a shear force acts tangentially over an area. == Equations for surface force == === Surface force due to pressure === f s = p ⋅ A {\displaystyle f_{s}=p\cdot A\ } , where f = force, p = pressure, and A = area on which a uniform pressure acts == Examples == === Pressure related surface force === Since pressure is f o r c e a r e a = N m 2 {\displaystyle {\frac {\mathit {force}}{\mathit {area}}}=\mathrm {\frac {N}{m^{2}}} } , and area is a ( l e n g t h ) ⋅ ( w i d t h ) = m ⋅ m = m 2 {\displaystyle (length)\cdot (width)=\mathrm {m\cdot m} =\mathrm {m^{2}} } , a pressure of 5 N m 2 = 5 P a {\displaystyle 5\ \mathrm {\frac {N}{m^{2}}} =5\ \mathrm {Pa} } over an area of 20 m 2 {\displaystyle 20\ \mathrm {m^{2}} } will produce a surface force of ( 5 P a ) ⋅ ( 20 m 2 ) = 100 N {\displaystyle (5\ \mathrm {Pa} )\cdot (20\ \mathrm {m^{2}} )=100\ \mathrm {N} } . == See also == Body force Contact force == References ==
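As a small illustration of the relation f_s = p · A and the worked example above, the following Python snippet reproduces the same arithmetic; the function name is ad hoc and chosen only for this sketch.

```python
def surface_force(pressure_pa: float, area_m2: float) -> float:
    """Surface force due to a uniform pressure: f_s = p * A (in newtons)."""
    return pressure_pa * area_m2

# Worked example from the text: 5 Pa acting over 20 m^2 gives 100 N.
assert surface_force(5.0, 20.0) == 100.0
```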
Wikipedia/Surface_forces
Solid mechanics (also known as mechanics of solids) is the branch of continuum mechanics that studies the behavior of solid materials, especially their motion and deformation under the action of forces, temperature changes, phase changes, and other external or internal agents. Solid mechanics is fundamental for civil, aerospace, nuclear, biomedical and mechanical engineering, for geology, and for many branches of physics and chemistry such as materials science. It has specific applications in many other areas, such as understanding the anatomy of living beings, and the design of dental prostheses and surgical implants. One of the most common practical applications of solid mechanics is the Euler–Bernoulli beam equation. Solid mechanics extensively uses tensors to describe stresses, strains, and the relationship between them. Solid mechanics is a vast subject because of the wide range of solid materials available, such as steel, wood, concrete, biological materials, textiles, geological materials, and plastics. == Fundamental aspects == A solid is a material that can support a substantial amount of shearing force over a given time scale during a natural or industrial process or action. This is what distinguishes solids from fluids, because fluids also support normal forces which are those forces that are directed perpendicular to the material plane across from which they act and normal stress is the normal force per unit area of that material plane. Shearing forces in contrast with normal forces, act parallel rather than perpendicular to the material plane and the shearing force per unit area is called shear stress. Therefore, solid mechanics examines the shear stress, deformation and the failure of solid materials and structures. The most common topics covered in solid mechanics include: stability of structures - examining whether structures can return to a given equilibrium after disturbance or partial/complete failure, see Structure mechanics dynamical systems and chaos - dealing with mechanical systems highly sensitive to their given initial position thermomechanics - analyzing materials with models derived from principles of thermodynamics biomechanics - solid mechanics applied to biological materials e.g. bones, heart tissue geomechanics - solid mechanics applied to geological materials e.g. ice, soil, rock vibrations of solids and structures - examining vibration and wave propagation from vibrating particles and structures i.e. vital in mechanical, civil, mining, aeronautical, maritime/marine, aerospace engineering fracture and damage mechanics - dealing with crack-growth mechanics in solid materials composite materials - solid mechanics applied to materials made up of more than one compound e.g. reinforced plastics, reinforced concrete, fiber glass variational formulations and computational mechanics - numerical solutions to mathematical equations arising from various branches of solid mechanics e.g. finite element method (FEM) experimental mechanics - design and analysis of experimental methods to examine the behavior of solid materials and structures == Relationship to continuum mechanics == As shown in the following table, solid mechanics inhabits a central place within continuum mechanics. The field of rheology presents an overlap between solid and fluid mechanics. == Response models == A material has a rest shape and its shape departs away from the rest shape due to stress. 
The amount of departure from rest shape is called deformation, the proportion of deformation to original size is called strain. If the applied stress is sufficiently low (or the imposed strain is small enough), almost all solid materials behave in such a way that the strain is directly proportional to the stress; the coefficient of the proportion is called the modulus of elasticity. This region of deformation is known as the linearly elastic region. It is most common for analysts in solid mechanics to use linear material models, due to ease of computation. However, real materials often exhibit non-linear behavior. As new materials are used and old ones are pushed to their limits, non-linear material models are becoming more common. These are basic models that describe how a solid responds to an applied stress: Elasticity – When an applied stress is removed, the material returns to its undeformed state. Linearly elastic materials, those that deform proportionally to the applied load, can be described by the linear elasticity equations such as Hooke's law. Viscoelasticity – These are materials that behave elastically, but also have damping: when the stress is applied and removed, work has to be done against the damping effects and is converted in heat within the material resulting in a hysteresis loop in the stress–strain curve. This implies that the material response has time-dependence. Plasticity – Materials that behave elastically generally do so when the applied stress is less than a yield value. When the stress is greater than the yield stress, the material behaves plastically and does not return to its previous state. That is, deformation that occurs after yield is permanent. Viscoplasticity - Combines theories of viscoelasticity and plasticity and applies to materials like gels and mud. Thermoelasticity - There is coupling of mechanical with thermal responses. In general, thermoelasticity is concerned with elastic solids under conditions that are neither isothermal nor adiabatic. The simplest theory involves the Fourier's law of heat conduction, as opposed to advanced theories with physically more realistic models. == Timeline == 1452–1519 Leonardo da Vinci made many contributions 1638: Galileo Galilei published the book "Two New Sciences" in which he examined the failure of simple structures 1660: Hooke's law by Robert Hooke 1687: Isaac Newton published "Philosophiae Naturalis Principia Mathematica" which contains Newton's laws of motion 1750: Euler–Bernoulli beam equation 1700–1782: Daniel Bernoulli introduced the principle of virtual work 1707–1783: Leonhard Euler developed the theory of buckling of columns 1826: Claude-Louis Navier published a treatise on the elastic behaviors of structures 1873: Carlo Alberto Castigliano presented his dissertation "Intorno ai sistemi elastici", which contains his theorem for computing displacement as partial derivative of the strain energy. This theorem includes the method of least work as a special case 1874: Otto Mohr formalized the idea of a statically indeterminate structure. 1922: Timoshenko corrects the Euler–Bernoulli beam equation 1936: Hardy Cross' publication of the moment distribution method, an important innovation in the design of continuous frames. 1941: Alexander Hrennikoff solved the discretization of plane elasticity problems using a lattice framework 1942: R. Courant divided a domain into finite subregions 1956: J. Turner, R. W. Clough, H. C. Martin, and L. J. 
Topp's paper on the "Stiffness and Deflection of Complex Structures" introduces the name "finite-element method" and is widely recognized as the first comprehensive treatment of the method as it is known today == See also == Strength of materials - Specific definitions and the relationships between stress and strain. Applied mechanics Materials science Continuum mechanics Fracture mechanics Impact (mechanics) Solid-state physics Rigid body == References == === Notes === === Bibliography === L.D. Landau, E.M. Lifshitz, Course of Theoretical Physics: Theory of Elasticity Butterworth-Heinemann, ISBN 0-7506-2633-X J.E. Marsden, T.J. Hughes, Mathematical Foundations of Elasticity, Dover, ISBN 0-486-67865-2 P.C. Chou, N. J. Pagano, Elasticity: Tensor, Dyadic, and Engineering Approaches, Dover, ISBN 0-486-66958-0 R.W. Ogden, Non-linear Elastic Deformation, Dover, ISBN 0-486-69648-0 S. Timoshenko and J.N. Goodier," Theory of elasticity", 3d ed., New York, McGraw-Hill, 1970. G.A. Holzapfel, Nonlinear Solid Mechanics: A Continuum Approach for Engineering, Wiley, 2000 A.I. Lurie, Theory of Elasticity, Springer, 1999. L.B. Freund, Dynamic Fracture Mechanics, Cambridge University Press, 1990. R. Hill, The Mathematical Theory of Plasticity, Oxford University, 1950. J. Lubliner, Plasticity Theory, Macmillan Publishing Company, 1990. J. Ignaczak, M. Ostoja-Starzewski, Thermoelasticity with Finite Wave Speeds, Oxford University Press, 2010. D. Bigoni, Nonlinear Solid Mechanics: Bifurcation Theory and Material Instability, Cambridge University Press, 2012. Y. C. Fung, Pin Tong and Xiaohong Chen, Classical and Computational Solid Mechanics, 2nd Edition, World Scientific Publishing, 2017, ISBN 978-981-4713-64-1.
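The "Response models" section above states that, in the linearly elastic region, strain is proportional to stress through the modulus of elasticity, and that beyond a yield value the response becomes plastic. The short Python sketch below illustrates this in one dimension with an idealized elastic-perfectly-plastic model; the function name and the steel-like material constants are illustrative assumptions, not data from the article.

```python
import numpy as np

def stress_1d(strain, youngs_modulus, yield_stress):
    """Idealized 1D response: linear elasticity (stress = E * strain) below
    the yield value, perfect plasticity (stress capped at the yield stress)
    above it. No unloading or hardening is modeled."""
    return np.clip(youngs_modulus * np.asarray(strain), -yield_stress, yield_stress)

# Steel-like illustrative values: E = 200 GPa, yield stress = 250 MPa.
strain = np.array([0.0005, 0.001, 0.002, 0.005])
print(stress_1d(strain, youngs_modulus=200e9, yield_stress=250e6))
# roughly [1e8, 2e8, 2.5e8, 2.5e8] Pa; past a strain of ~0.00125 the stress saturates
```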
Wikipedia/Theory_of_elasticity
In the case of finite deformations, the Piola–Kirchhoff stress tensors (named for Gabrio Piola and Gustav Kirchhoff) express the stress relative to the reference configuration. This is in contrast to the Cauchy stress tensor which expresses the stress relative to the present configuration. For infinitesimal deformations and rotations, the Cauchy and Piola–Kirchhoff tensors are identical. Whereas the Cauchy stress tensor σ {\displaystyle {\boldsymbol {\sigma }}} relates stresses in the current configuration, the deformation gradient and strain tensors are described by relating the motion to the reference configuration; thus not all tensors describing the state of the material are in either the reference or current configuration. Describing the stress, strain and deformation either in the reference or current configuration would make it easier to define constitutive models (for example, the Cauchy Stress tensor is variant to a pure rotation, while the deformation strain tensor is invariant; thus creating problems in defining a constitutive model that relates a varying tensor, in terms of an invariant one during pure rotation; as by definition constitutive models have to be invariant to pure rotations). The 1st Piola–Kirchhoff stress tensor, P {\displaystyle {\boldsymbol {P}}} is one possible solution to this problem. It defines a family of tensors, which describe the configuration of the body in either the current or the reference state. The first Piola–Kirchhoff stress tensor, P {\displaystyle {\boldsymbol {P}}} , relates forces in the present ("spatial") configuration with areas in the reference ("material") configuration. P = J σ F − T {\displaystyle {\boldsymbol {P}}=J~{\boldsymbol {\sigma }}~{\boldsymbol {F}}^{-T}~} where F {\displaystyle {\boldsymbol {F}}} is the deformation gradient and J = det F {\displaystyle J=\det {\boldsymbol {F}}} is the Jacobian determinant. In terms of components with respect to an orthonormal basis, the first Piola–Kirchhoff stress is given by P i L = J σ i k F L k − 1 = J σ i k ∂ X L ∂ x k {\displaystyle P_{iL}=J~\sigma _{ik}~F_{Lk}^{-1}=J~\sigma _{ik}~{\cfrac {\partial X_{L}}{\partial x_{k}}}~\,\!} Because it relates different coordinate systems, the first Piola–Kirchhoff stress is a two-point tensor. In general, it is not symmetric. The first Piola–Kirchhoff stress is the 3D generalization of the 1D concept of engineering stress. If the material rotates without a change in stress state (rigid rotation), the components of the first Piola–Kirchhoff stress tensor will vary with material orientation. The first Piola–Kirchhoff stress is energy conjugate to the deformation gradient. It relates forces in the current configuration to areas in the reference configuration. The second Piola–Kirchhoff stress tensor, S {\displaystyle {\boldsymbol {S}}} , relates forces in the reference configuration to areas in the reference configuration. The force in the reference configuration is obtained via a mapping that preserves the relative relationship between the force direction and the area normal in the reference configuration. S = J F − 1 ⋅ σ ⋅ F − T . 
{\displaystyle {\boldsymbol {S}}=J~{\boldsymbol {F}}^{-1}\cdot {\boldsymbol {\sigma }}\cdot {\boldsymbol {F}}^{-T}~.} In index notation with respect to an orthonormal basis, S I L = J F I k − 1 F L m − 1 σ k m = J ∂ X I ∂ x k ∂ X L ∂ x m σ k m {\displaystyle S_{IL}=J~F_{Ik}^{-1}~F_{Lm}^{-1}~\sigma _{km}=J~{\cfrac {\partial X_{I}}{\partial x_{k}}}~{\cfrac {\partial X_{L}}{\partial x_{m}}}~\sigma _{km}\!\,\!} This tensor, a one-point tensor, is symmetric. If the material rotates without a change in stress state (rigid rotation), the components of the second Piola–Kirchhoff stress tensor remain constant, irrespective of material orientation. The second Piola–Kirchhoff stress tensor is energy conjugate to the Green–Lagrange finite strain tensor. == References == J. Bonet and R. W. Wood, Nonlinear Continuum Mechanics for Finite Element Analysis, Cambridge University Press.
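The relations between the Cauchy stress and the two Piola–Kirchhoff stresses given above are straightforward to evaluate numerically. The following Python/NumPy sketch is only an illustration of those formulas (the function name and the sample tensors are arbitrary); it also checks that S stays symmetric for a symmetric Cauchy stress and that P = F S, which follows directly from the two definitions.

```python
import numpy as np

def piola_kirchhoff_stresses(F, sigma):
    """First and second Piola-Kirchhoff stress from the Cauchy stress.

    P = J * sigma * F^{-T}           (two-point tensor, generally unsymmetric)
    S = J * F^{-1} * sigma * F^{-T}  (symmetric when sigma is symmetric)
    with J = det(F); F and sigma are 3x3 arrays.
    """
    J = np.linalg.det(F)
    F_inv = np.linalg.inv(F)
    P = J * sigma @ F_inv.T
    S = J * F_inv @ sigma @ F_inv.T
    return P, S

# Illustrative check with an arbitrary deformation gradient and a symmetric
# Cauchy stress.
F = np.array([[1.1, 0.05, 0.0],
              [0.0, 0.95, 0.02],
              [0.0, 0.0,  1.03]])
sigma = np.array([[2.0, 0.3, 0.0],
                  [0.3, 1.5, 0.1],
                  [0.0, 0.1, 0.8]])
P, S = piola_kirchhoff_stresses(F, sigma)
assert np.allclose(S, S.T)        # second P-K stress is symmetric
assert np.allclose(P, F @ S)      # first P-K stress recovered from the second
```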
Wikipedia/Piola-Kirchhoff_stress_tensor
A crystallographic defect is an interruption of the regular patterns of arrangement of atoms or molecules in crystalline solids. The positions and orientations of particles, which are repeating at fixed distances determined by the unit cell parameters in crystals, exhibit a periodic crystal structure, but this is usually imperfect. Several types of defects are often characterized: point defects, line defects, planar defects, bulk defects. Topological homotopy establishes a mathematical method of characterization. == Point defects == Point defects are defects that occur only at or around a single lattice point. They are not extended in space in any dimension. Strict limits for how small a point defect is are generally not defined explicitly. However, these defects typically involve at most a few extra or missing atoms. Larger defects in an ordered structure are usually considered dislocation loops. For historical reasons, many point defects, especially in ionic crystals, are called centers: for example a vacancy in many ionic solids is called a luminescence center, a color center, or F-center. These dislocations permit ionic transport through crystals leading to electrochemical reactions. These are frequently specified using Kröger–Vink notation. Vacancy defects are lattice sites which would be occupied in a perfect crystal, but are vacant. If a neighboring atom moves to occupy the vacant site, the vacancy moves in the opposite direction to the site which used to be occupied by the moving atom. The stability of the surrounding crystal structure guarantees that the neighboring atoms will not simply collapse around the vacancy. In some materials, neighboring atoms actually move away from a vacancy, because they experience attraction from atoms in the surroundings. A vacancy (or pair of vacancies in an ionic solid) is sometimes called a Schottky defect. Interstitial defects are atoms that occupy a site in the crystal structure at which there is usually not an atom. They are generally high energy configurations. Small atoms (mostly impurities) in some crystals can occupy interstices without high energy, such as hydrogen in palladium. A nearby pair of a vacancy and an interstitial is often called a Frenkel defect or Frenkel pair. This is caused when an ion moves into an interstitial site and creates a vacancy. Due to fundamental limitations of material purification methods, materials are never 100% pure, which by definition induces defects in crystal structure. In the case of an impurity, the atom is often incorporated at a regular atomic site in the crystal structure. This is neither a vacant site nor is the atom on an interstitial site and it is called a substitutional defect. The atom is not supposed to be anywhere in the crystal, and is thus an impurity. In some cases where the radius of the substitutional atom (ion) is substantially smaller than that of the atom (ion) it is replacing, its equilibrium position can be shifted away from the lattice site. These types of substitutional defects are often referred to as off-center ions. There are two different types of substitutional defects: Isovalent substitution and aliovalent substitution. Isovalent substitution is where the ion that is substituting the original ion is of the same oxidation state as the ion it is replacing. Aliovalent substitution is where the ion that is substituting the original ion is of a different oxidation state than the ion it is replacing. 
Aliovalent substitutions change the overall charge within the ionic compound, but the ionic compound must be neutral. Therefore, a charge compensation mechanism is required. Hence either one of the metals is partially or fully oxidised or reduced, or ion vacancies are created. Antisite defects occur in an ordered alloy or compound when atoms of different type exchange positions. For example, some alloys have a regular structure in which every other atom is a different species; for illustration assume that type A atoms sit on the corners of a cubic lattice, and type B atoms sit in the center of the cubes. If one cube has an A atom at its center, the atom is on a site usually occupied by a B atom, and is thus an antisite defect. This is neither a vacancy nor an interstitial, nor an impurity. Topological defects are regions in a crystal where the normal chemical bonding environment is topologically different from the surroundings. For instance, in a perfect sheet of graphite (graphene) all atoms are in rings containing six atoms. If the sheet contains regions where the number of atoms in a ring is different from six, while the total number of atoms remains the same, a topological defect has formed. An example is the Stone Wales defect in nanotubes, which consists of two adjacent 5-membered and two 7-membered atom rings. Amorphous solids may contain defects. These are naturally somewhat hard to define, but sometimes their nature can be quite easily understood. For instance, in ideally bonded amorphous silica all Si atoms have 4 bonds to O atoms and all O atoms have 2 bonds to Si atom. Thus e.g. an O atom with only one Si bond (a dangling bond) can be considered a defect in silica. Moreover, defects can also be defined in amorphous solids based on empty or densely packed local atomic neighbourhoods, and the properties of such 'defects' can be shown to be similar to normal vacancies and interstitials in crystals. Complexes can form between different kinds of point defects. For example, if a vacancy encounters an impurity, the two may bind together if the impurity is too large for the lattice. Interstitials can form 'split interstitial' or 'dumbbell' structures where two atoms effectively share an atomic site, resulting in neither atom actually occupying the site. == Line defects == Line defects can be described by gauge theories. Dislocations are linear defects, around which the atoms of the crystal lattice are misaligned. There are two basic types of dislocations, the edge dislocation and the screw dislocation. "Mixed" dislocations, combining aspects of both types, are also common. Edge dislocations are caused by the termination of a plane of atoms in the middle of a crystal. In such a case, the adjacent planes are not straight, but instead bend around the edge of the terminating plane so that the crystal structure is perfectly ordered on either side. The analogy with a stack of paper is apt: if a half a piece of paper is inserted in a stack of paper, the defect in the stack is only noticeable at the edge of the half sheet. The screw dislocation is more difficult to visualise, but basically comprises a structure in which a helical path is traced around the linear defect (dislocation line) by the atomic planes of atoms in the crystal lattice. The presence of dislocation results in lattice strain (distortion). The direction and magnitude of such distortion is expressed in terms of a Burgers vector (b). 
For an edge type, b is perpendicular to the dislocation line, whereas in the case of the screw type it is parallel. In metallic materials, b is aligned with close-packed crystallographic directions and its magnitude is equivalent to one interatomic spacing. Dislocations can move if the atoms from one of the surrounding planes break their bonds and rebond with the atoms at the terminating edge. It is the presence of dislocations and their ability to readily move (and interact) under the influence of stresses induced by external loads that leads to the characteristic malleability of metallic materials. Dislocations can be observed using transmission electron microscopy, field ion microscopy and atom probe techniques. Deep-level transient spectroscopy has been used for studying the electrical activity of dislocations in semiconductors, mainly silicon. Disclinations are line defects corresponding to "adding" or "subtracting" an angle around a line. Basically, this means that if one tracks the crystal orientation around the line defect, one obtains a rotation. They were long thought to play a role only in liquid crystals, but recent developments suggest that they might also have a role in solid materials, e.g. leading to the self-healing of cracks. == Planar defects == Grain boundaries occur where the crystallographic direction of the lattice abruptly changes. This usually occurs when two crystals begin growing separately and then meet. Antiphase boundaries occur in ordered alloys: in this case, the crystallographic direction remains the same, but each side of the boundary has an opposite phase: for example, if the ordering is usually ABABABAB (hexagonal close-packed crystal), an antiphase boundary takes the form of ABABBABA. Stacking faults occur in a number of crystal structures, but the common example is in close-packed structures. They are formed by a local deviation of the stacking sequence of layers in a crystal. An example would be the ABABCABAB stacking sequence. A twin boundary is a defect that introduces a plane of mirror symmetry in the ordering of a crystal. For example, in cubic close-packed crystals, the stacking sequence of a twin boundary would be ABCABCBACBA. On planes of single crystals, steps between atomically flat terraces can also be regarded as planar defects. It has been shown that such defects and their geometry have a significant influence on the adsorption of organic molecules. == Bulk defects == Three-dimensional macroscopic or bulk defects, such as pores, cracks, or inclusions. Voids — small regions where there are no atoms, and which can be thought of as clusters of vacancies. Impurities can cluster together to form small regions of a different phase. These are often called precipitates. == Mathematical classification methods == A successful mathematical classification method for physical lattice defects, which works not only with the theory of dislocations and other defects in crystals but also, e.g., for disclinations in liquid crystals and for excitations in superfluid 3He, is homotopy theory, a branch of topology. == Computer simulation methods == Density functional theory, classical molecular dynamics and kinetic Monte Carlo simulations are widely used to study the properties of defects in solids with computer simulations. Simulating jamming of hard spheres of different sizes and/or in containers with non-commensurate sizes using the Lubachevsky–Stillinger algorithm can be an effective technique for demonstrating some types of crystallographic defects. 
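Since the section above notes that molecular dynamics and Monte Carlo simulations are widely used to study defects, a common post-processing step is to detect vacancies and interstitials by comparing a simulated configuration with the ideal reference lattice (a Wigner–Seitz-style occupancy analysis). The Python sketch below is a minimal illustration under simplifying assumptions (no periodic boundary conditions, no thermal displacements); the function name and the toy lattice are hypothetical.

```python
import numpy as np

def count_point_defects(reference_sites, atom_positions):
    """Assign each atom to its nearest reference lattice site; empty sites
    are reported as vacancies and multiply occupied sites as interstitial-type
    defects. Periodic boundaries are ignored here for brevity."""
    occupancy = np.zeros(len(reference_sites), dtype=int)
    for atom in atom_positions:
        d = np.linalg.norm(reference_sites - atom, axis=1)
        occupancy[np.argmin(d)] += 1
    vacancies = np.flatnonzero(occupancy == 0)
    interstitial_sites = np.flatnonzero(occupancy >= 2)
    return vacancies, interstitial_sites

# Toy example: a 3x3x3 simple-cubic reference lattice with one atom removed
# (a vacancy) and one extra atom squeezed between sites (an interstitial).
grid = np.array([[i, j, k] for i in range(3) for j in range(3) for k in range(3)], float)
atoms = np.delete(grid, 13, axis=0)              # remove the central atom
atoms = np.vstack([atoms, [[0.5, 0.5, 0.0]]])    # add an off-site atom
vac, inter = count_point_defects(grid, atoms)
print("vacant sites:", vac, "multiply occupied sites:", inter)
```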
== See also == Bjerrum defect Crystallographic defects in diamond Kröger–Vink notation F-center == References == == Further reading == Hagen Kleinert, Gauge Fields in Condensed Matter, Vol. II, "Stresses and defects", pp. 743–1456, World Scientific (Singapore, 1989); Paperback ISBN 9971-5-0210-0 Hermann Schmalzried: Solid State Reactions. Verlag Chemie, Weinheim 1981, ISBN 3-527-25872-8.
Wikipedia/Crystallographic_defects
In quantum mechanics, an atomic orbital ( ) is a function describing the location and wave-like behavior of an electron in an atom. This function describes an electron's charge distribution around the atom's nucleus, and can be used to calculate the probability of finding an electron in a specific region around the nucleus. Each orbital in an atom is characterized by a set of values of three quantum numbers n, ℓ, and mℓ, which respectively correspond to electron's energy, its orbital angular momentum, and its orbital angular momentum projected along a chosen axis (magnetic quantum number). The orbitals with a well-defined magnetic quantum number are generally complex-valued. Real-valued orbitals can be formed as linear combinations of mℓ and −mℓ orbitals, and are often labeled using associated harmonic polynomials (e.g., xy, x2 − y2) which describe their angular structure. An orbital can be occupied by a maximum of two electrons, each with its own projection of spin m s {\displaystyle m_{s}} . The simple names s orbital, p orbital, d orbital, and f orbital refer to orbitals with angular momentum quantum number ℓ = 0, 1, 2, and 3 respectively. These names, together with their n values, are used to describe electron configurations of atoms. They are derived from description by early spectroscopists of certain series of alkali metal spectroscopic lines as sharp, principal, diffuse, and fundamental. Orbitals for ℓ > 3 continue alphabetically (g, h, i, k, ...), omitting j because some languages do not distinguish between letters "i" and "j". Atomic orbitals are basic building blocks of the atomic orbital model (or electron cloud or wave mechanics model), a modern framework for visualizing submicroscopic behavior of electrons in matter. In this model, the electron cloud of an atom may be seen as being built up (in approximation) in an electron configuration that is a product of simpler hydrogen-like atomic orbitals. The repeating periodicity of blocks of 2, 6, 10, and 14 elements within sections of periodic table arises naturally from total number of electrons that occupy a complete set of s, p, d, and f orbitals, respectively, though for higher values of quantum number n, particularly when the atom bears a positive charge, energies of certain sub-shells become very similar and so, the order in which they are said to be populated by electrons (e.g., Cr = [Ar]4s13d5 and Cr2+ = [Ar]3d4) can be rationalized only somewhat arbitrarily. == Electron properties == With the development of quantum mechanics and experimental findings (such as the two slit diffraction of electrons), it was found that the electrons orbiting a nucleus could not be fully described as particles, but needed to be explained by wave–particle duality. In this sense, electrons have the following properties: Wave-like properties: Electrons do not orbit a nucleus in the manner of a planet orbiting a star, but instead exist as standing waves. Thus the lowest possible energy an electron can take is similar to the fundamental frequency of a wave on a string. Higher energy states are similar to harmonics of that fundamental frequency. The electrons are never in a single point location, though the probability of interacting with the electron at a single point can be found from the electron's wave function. The electron's charge acts like it is smeared out in space in a continuous distribution, proportional at any point to the squared magnitude of the electron's wave function. 
Particle-like properties: The number of electrons orbiting a nucleus can be only an integer. Electrons jump between orbitals like particles. For example, if one photon strikes the electrons, only one electron changes state as a result. Electrons retain particle-like properties such as: each wave state has the same electric charge as its electron particle. Each wave state has a single discrete spin (spin up or spin down) depending on its superposition. Thus, electrons cannot be described simply as solid particles. An analogy might be that of a large and often oddly shaped "atmosphere" (the electron), distributed around a relatively tiny planet (the nucleus). Atomic orbitals exactly describe the shape of this "atmosphere" only when one electron is present. When more electrons are added, the additional electrons tend to more evenly fill in a volume of space around the nucleus so that the resulting collection ("electron cloud") tends toward a generally spherical zone of probability describing the electron's location, because of the uncertainty principle. One should remember that these orbital 'states', as described here, are merely eigenstates of an electron in its orbit. An actual electron exists in a superposition of states, which is like a weighted average, but with complex number weights. So, for instance, an electron could be in a pure eigenstate (2, 1, 0), or a mixed state ⁠1/2⁠(2, 1, 0) + ⁠1/2⁠ i {\displaystyle i} (2, 1, 1), or even the mixed state ⁠2/5⁠(2, 1, 0) + ⁠3/5⁠ i {\displaystyle i} (2, 1, 1). For each eigenstate, a property has an eigenvalue. So, for the three states just mentioned, the value of n {\displaystyle n} is 2, and the value of l {\displaystyle l} is 1. For the second and third states, the value for m l {\displaystyle m_{l}} is a superposition of 0 and 1. As a superposition of states, it is ambiguous—either exactly 0 or exactly 1—not an intermediate or average value like the fraction ⁠1/2⁠. A superposition of eigenstates (2, 1, 1) and (3, 2, 1) would have an ambiguous n {\displaystyle n} and l {\displaystyle l} , but m l {\displaystyle m_{l}} would definitely be 1. Eigenstates make it easier to deal with the math. You can choose a different basis of eigenstates by superimposing eigenstates from any other basis (see Real orbitals below). === Formal quantum mechanical definition === Atomic orbitals may be defined more precisely in formal quantum mechanical language. They are approximate solutions to the Schrödinger equation for the electrons bound to the atom by the electric field of the atom's nucleus. Specifically, in quantum mechanics, the state of an atom, i.e., an eigenstate of the atomic Hamiltonian, is approximated by an expansion (see configuration interaction expansion and basis set) into linear combinations of anti-symmetrized products (Slater determinants) of one-electron functions. The spatial components of these one-electron functions are called atomic orbitals. (When one considers also their spin component, one speaks of atomic spin orbitals.) A state is actually a function of the coordinates of all the electrons, so that their motion is correlated, but this is often approximated by this independent-particle model of products of single electron wave functions. (The London dispersion force, for example, depends on the correlations of the motion of the electrons.) In atomic physics, the atomic spectral lines correspond to transitions (quantum leaps) between quantum states of an atom. 
These states are labeled by a set of quantum numbers summarized in the term symbol and usually associated with particular electron configurations, i.e., by occupation schemes of atomic orbitals (for example, 1s2 2s2 2p6 for the ground state of neon-term symbol: 1S0). This notation means that the corresponding Slater determinants have a clear higher weight in the configuration interaction expansion. The atomic orbital concept is therefore a key concept for visualizing the excitation process associated with a given transition. For example, one can say for a given transition that it corresponds to the excitation of an electron from an occupied orbital to a given unoccupied orbital. Nevertheless, one has to keep in mind that electrons are fermions ruled by the Pauli exclusion principle and cannot be distinguished from each other. Moreover, it sometimes happens that the configuration interaction expansion converges very slowly and that one cannot speak about simple one-determinant wave function at all. This is the case when electron correlation is large. Fundamentally, an atomic orbital is a one-electron wave function, even though many electrons are not in one-electron atoms, and so the one-electron view is an approximation. When thinking about orbitals, we are often given an orbital visualization heavily influenced by the Hartree–Fock approximation, which is one way to reduce the complexities of molecular orbital theory. === Types of orbital === Atomic orbitals can be the hydrogen-like "orbitals" which are exact solutions to the Schrödinger equation for a hydrogen-like "atom" (i.e., atom with one electron). Alternatively, atomic orbitals refer to functions that depend on the coordinates of one electron (i.e., orbitals) but are used as starting points for approximating wave functions that depend on the simultaneous coordinates of all the electrons in an atom or molecule. The coordinate systems chosen for orbitals are usually spherical coordinates (r, θ, φ) in atoms and Cartesian (x, y, z) in polyatomic molecules. The advantage of spherical coordinates here is that an orbital wave function is a product of three factors each dependent on a single coordinate: ψ(r, θ, φ) = R(r) Θ(θ) Φ(φ). The angular factors of atomic orbitals Θ(θ) Φ(φ) generate s, p, d, etc. functions as real combinations of spherical harmonics Yℓm(θ, φ) (where ℓ and m are quantum numbers). There are typically three mathematical forms for the radial functions R(r) which can be chosen as a starting point for the calculation of the properties of atoms and molecules with many electrons: The hydrogen-like orbitals are derived from the exact solutions of the Schrödinger equation for one electron and a nucleus, for a hydrogen-like atom. The part of the function that depends on distance r from the nucleus has radial nodes and decays as e − α r {\displaystyle e^{-\alpha r}} . The Slater-type orbital (STO) is a form without radial nodes but decays from the nucleus as does a hydrogen-like orbital. The form of the Gaussian type orbital (Gaussians) has no radial nodes and decays as e − α r 2 {\displaystyle e^{-\alpha r^{2}}} . Although hydrogen-like orbitals are still used as pedagogical tools, the advent of computers has made STOs preferable for atoms and diatomic molecules since combinations of STOs can replace the nodes in hydrogen-like orbitals. Gaussians are typically used in molecules with three or more atoms. 
Although not as accurate by themselves as STOs, combinations of many Gaussians can attain the accuracy of hydrogen-like orbitals. == History == The term orbital was introduced by Robert S. Mulliken in 1932 as short for one-electron orbital wave function. Niels Bohr explained around 1913 that electrons might revolve around a compact nucleus with definite angular momentum. Bohr's model was an improvement on the 1911 explanations of Ernest Rutherford, that of the electron moving around a nucleus. Japanese physicist Hantaro Nagaoka published an orbit-based hypothesis for electron behavior as early as 1904. These theories were each built upon new observations starting with simple understanding and becoming more correct and complex. Explaining the behavior of these electron "orbits" was one of the driving forces behind the development of quantum mechanics. === Early models === With J. J. Thomson's discovery of the electron in 1897, it became clear that atoms were not the smallest building blocks of nature, but were rather composite particles. The newly discovered structure within atoms tempted many to imagine how the atom's constituent parts might interact with each other. Thomson theorized that multiple electrons revolve in orbit-like rings within a positively charged jelly-like substance, and between the electron's discovery and 1909, this "plum pudding model" was the most widely accepted explanation of atomic structure. Shortly after Thomson's discovery, Hantaro Nagaoka predicted a different model for electronic structure. Unlike the plum pudding model, the positive charge in Nagaoka's "Saturnian Model" was concentrated into a central core, pulling the electrons into circular orbits reminiscent of Saturn's rings. Few people took notice of Nagaoka's work at the time, and Nagaoka himself recognized a fundamental defect in the theory even at its conception, namely that a classical charged object cannot sustain orbital motion because it is accelerating and therefore loses energy due to electromagnetic radiation. Nevertheless, the Saturnian model turned out to have more in common with modern theory than any of its contemporaries. === Bohr atom === In 1909, Ernest Rutherford discovered that the bulk of the atomic mass was tightly condensed into a nucleus, which was also found to be positively charged. It became clear from his analysis in 1911 that the plum pudding model could not explain atomic structure. In 1913, Rutherford's post-doctoral student, Niels Bohr, proposed a new model of the atom, wherein electrons orbited the nucleus with classical periods, but were permitted to have only discrete values of angular momentum, quantized in units ħ. This constraint automatically allowed only certain electron energies. The Bohr model of the atom fixed the problem of energy loss from radiation from a ground state (by declaring that there was no state below this), and more importantly explained the origin of spectral lines. After Bohr's use of Einstein's explanation of the photoelectric effect to relate energy levels in atoms with the wavelength of emitted light, the connection between the structure of electrons in atoms and the emission and absorption spectra of atoms became an increasingly useful tool in the understanding of electrons in atoms. The most prominent feature of emission and absorption spectra (known experimentally since the middle of the 19th century), was that these atomic spectra contained discrete lines. 
The significance of the Bohr model was that it related the lines in emission and absorption spectra to the energy differences between the orbits that electrons could take around an atom. This was, however, not achieved by Bohr through giving the electrons some kind of wave-like properties, since the idea that electrons could behave as matter waves was not suggested until eleven years later. Still, the Bohr model's use of quantized angular momenta and therefore quantized energy levels was a significant step toward the understanding of electrons in atoms, and also a significant step towards the development of quantum mechanics in suggesting that quantized restraints must account for all discontinuous energy levels and spectra in atoms. With de Broglie's suggestion of the existence of electron matter waves in 1924, and for a short time before the full 1926 Schrödinger equation treatment of hydrogen-like atoms, a Bohr electron "wavelength" could be seen to be a function of its momentum; so a Bohr orbiting electron was seen to orbit in a circle at a multiple of its half-wavelength. The Bohr model for a short time could be seen as a classical model with an additional constraint provided by the 'wavelength' argument. However, this period was immediately superseded by the full three-dimensional wave mechanics of 1926. In our current understanding of physics, the Bohr model is called a semi-classical model because of its quantization of angular momentum, not primarily because of its relationship with electron wavelength, which appeared in hindsight a dozen years after the Bohr model was proposed. The Bohr model was able to explain the emission and absorption spectra of hydrogen. The energies of electrons in the n = 1, 2, 3, etc. states in the Bohr model match those of current physics. However, this did not explain similarities between different atoms, as expressed by the periodic table, such as the fact that helium (two electrons), neon (10 electrons), and argon (18 electrons) exhibit similar chemical inertness. Modern quantum mechanics explains this in terms of electron shells and subshells which can each hold a number of electrons determined by the Pauli exclusion principle. Thus the n = 1 state can hold one or two electrons, while the n = 2 state can hold up to eight electrons in 2s and 2p subshells. In helium, all n = 1 states are fully occupied; the same is true for n = 1 and n = 2 in neon. In argon, the 3s and 3p subshells are similarly fully occupied by eight electrons; quantum mechanics also allows a 3d subshell but this is at higher energy than the 3s and 3p in argon (contrary to the situation for hydrogen) and remains empty. === Modern conceptions and connections to the Heisenberg uncertainty principle === Immediately after Heisenberg discovered his uncertainty principle, Bohr noted that the existence of any sort of wave packet implies uncertainty in the wave frequency and wavelength, since a spread of frequencies is needed to create the packet itself. In quantum mechanics, where all particle momenta are associated with waves, it is the formation of such a wave packet which localizes the wave, and thus the particle, in space. In states where a quantum mechanical particle is bound, it must be localized as a wave packet, and the existence of the packet and its minimum size implies a spread and minimal value in particle wavelength, and thus also momentum and energy. 
In quantum mechanics, as a particle is localized to a smaller region in space, the associated compressed wave packet requires a larger and larger range of momenta, and thus larger kinetic energy. Thus the binding energy to contain or trap a particle in a smaller region of space increases without bound as the region of space grows smaller. Particles cannot be restricted to a geometric point in space, since this would require infinite particle momentum. In chemistry, Erwin Schrödinger, Linus Pauling, Mulliken and others noted that the consequence of Heisenberg's relation was that the electron, as a wave packet, could not be considered to have an exact location in its orbital. Max Born suggested that the electron's position needed to be described by a probability distribution which was connected with finding the electron at some point in the wave-function which described its associated wave packet. The new quantum mechanics did not give exact results, but only the probabilities for the occurrence of a variety of possible such results. Heisenberg held that the path of a moving particle has no meaning if we cannot observe it, as we cannot with electrons in an atom. In the quantum picture of Heisenberg, Schrödinger and others, the Bohr atom number n for each orbital became known as an n-sphere in a three-dimensional atom and was pictured as the most probable energy of the probability cloud of the electron's wave packet which surrounded the atom. == Orbital names == === Orbital notation and subshells === Orbitals have been given names, which are usually given in the form: X t y p e {\displaystyle X\,\mathrm {type} \ } where X is the energy level corresponding to the principal quantum number n; type is a lower-case letter denoting the shape or subshell of the orbital, corresponding to the angular momentum quantum number ℓ. For example, the orbital 1s (pronounced as the individual numbers and letters: "'one' 'ess'") is the lowest energy level (n = 1) and has an angular quantum number of ℓ = 0, denoted as s. Orbitals with ℓ = 1, 2 and 3 are denoted as p, d and f respectively. The set of orbitals for a given n and ℓ is called a subshell, denoted X t y p e y {\displaystyle X\,\mathrm {type} ^{y}\ } . The superscript y shows the number of electrons in the subshell. For example, the notation 2p4 indicates that the 2p subshell of an atom contains 4 electrons. This subshell has 3 orbitals, each with n = 2 and ℓ = 1. === X-ray notation === There is also another, less common system still used in X-ray science known as X-ray notation, which is a continuation of the notations used before orbital theory was well understood. In this system, the principal quantum number is given a letter associated with it. For n = 1, 2, 3, 4, 5, ..., the letters associated with those numbers are K, L, M, N, O, ... respectively. == Hydrogen-like orbitals == The simplest atomic orbitals are those that are calculated for systems with a single electron, such as the hydrogen atom. An atom of any other element ionized down to a single electron (He+, Li2+, etc.) is very similar to hydrogen, and the orbitals take the same form. In the Schrödinger equation for this system of one negative and one positive particle, the atomic orbitals are the eigenstates of the Hamiltonian operator for the energy. They can be obtained analytically, meaning that the resulting orbitals are products of a polynomial series, and exponential and trigonometric functions. (see hydrogen atom). 
For atoms with two or more electrons, the governing equations can be solved only with the use of methods of iterative approximation. Orbitals of multi-electron atoms are qualitatively similar to those of hydrogen, and in the simplest models, they are taken to have the same form. For more rigorous and precise analysis, numerical approximations must be used. A given (hydrogen-like) atomic orbital is identified by unique values of three quantum numbers: n, ℓ, and mℓ. The rules restricting the values of the quantum numbers, and their energies (see below), explain the electron configuration of the atoms and the periodic table. The stationary states (quantum states) of a hydrogen-like atom are its atomic orbitals. However, in general, an electron's behavior is not fully described by a single orbital. Electron states are best represented by time-depending "mixtures" (linear combinations) of multiple orbitals. See Linear combination of atomic orbitals molecular orbital method. The quantum number n first appeared in the Bohr model where it determines the radius of each circular electron orbit. In modern quantum mechanics however, n determines the mean distance of the electron from the nucleus; all electrons with the same value of n lie at the same average distance. For this reason, orbitals with the same value of n are said to comprise a "shell". Orbitals with the same value of n and also the same value of ℓ are even more closely related, and are said to comprise a "subshell". == Quantum numbers == Because of the quantum mechanical nature of the electrons around a nucleus, atomic orbitals can be uniquely defined by a set of integers known as quantum numbers. These quantum numbers occur only in certain combinations of values, and their physical interpretation changes depending on whether real or complex versions of the atomic orbitals are employed. === Complex orbitals === In physics, the most common orbital descriptions are based on the solutions to the hydrogen atom, where orbitals are given by the product between a radial function and a pure spherical harmonic. The quantum numbers, together with the rules governing their possible values, are as follows: The principal quantum number n describes the energy of the electron and is always a positive integer. In fact, it can be any positive integer, but for reasons discussed below, large numbers are seldom encountered. Each atom has, in general, many orbitals associated with each value of n; these orbitals together are sometimes called electron shells. The azimuthal quantum number ℓ describes the orbital angular momentum of each electron and is a non-negative integer. Within a shell where n is some integer n0, ℓ ranges across all (integer) values satisfying the relation 0 ≤ ℓ ≤ n 0 − 1 {\displaystyle 0\leq \ell \leq n_{0}-1} . For instance, the n = 1 shell has only orbitals with ℓ = 0 {\displaystyle \ell =0} , and the n = 2 shell has only orbitals with ℓ = 0 {\displaystyle \ell =0} , and ℓ = 1 {\displaystyle \ell =1} . The set of orbitals associated with a particular value of ℓ are sometimes collectively called a subshell. The magnetic quantum number, m ℓ {\displaystyle m_{\ell }} , describes the projection of the orbital angular momentum along a chosen axis. It determines the magnitude of the current circulating around that axis and the orbital contribution to the magnetic moment of an electron via the Ampèrian loop model. 
Within a subshell ℓ {\displaystyle \ell } , m ℓ {\displaystyle m_{\ell }} obtains the integer values in the range − ℓ ≤ m ℓ ≤ ℓ {\displaystyle -\ell \leq m_{\ell }\leq \ell } . The above results may be summarized in the following table. Each cell represents a subshell, and lists the values of m ℓ {\displaystyle m_{\ell }} available in that subshell. Empty cells represent subshells that do not exist. Subshells are usually identified by their n {\displaystyle n} - and ℓ {\displaystyle \ell } -values. n {\displaystyle n} is represented by its numerical value, but ℓ {\displaystyle \ell } is represented by a letter as follows: 0 is represented by 's', 1 by 'p', 2 by 'd', 3 by 'f', and 4 by 'g'. For instance, one may speak of the subshell with n = 2 {\displaystyle n=2} and ℓ = 0 {\displaystyle \ell =0} as a '2s subshell'. Each electron also has angular momentum in the form of quantum mechanical spin given by spin s = ⁠1/2⁠. Its projection along a specified axis is given by the spin magnetic quantum number, ms, which can be +⁠1/2⁠ or −⁠1/2⁠. These values are also called "spin up" or "spin down" respectively. The Pauli exclusion principle states that no two electrons in an atom can have the same values of all four quantum numbers. If there are two electrons in an orbital with given values for three quantum numbers, (n, ℓ, m), these two electrons must differ in their spin projection ms. The above conventions imply a preferred axis (for example, the z direction in Cartesian coordinates), and they also imply a preferred direction along this preferred axis. Otherwise there would be no sense in distinguishing m = +1 from m = −1. As such, the model is most useful when applied to physical systems that share these symmetries. The Stern–Gerlach experiment—where an atom is exposed to a magnetic field—provides one such example. === Real orbitals === Instead of the complex orbitals described above, it is common, especially in the chemistry literature, to use real atomic orbitals. These real orbitals arise from simple linear combinations of complex orbitals. Using the Condon–Shortley phase convention, real orbitals are related to complex orbitals in the same way that the real spherical harmonics are related to complex spherical harmonics. 
Letting ψ n , ℓ , m {\displaystyle \psi _{n,\ell ,m}} denote a complex orbital with quantum numbers n, ℓ, and m, the real orbitals ψ n , ℓ , m real {\displaystyle \psi _{n,\ell ,m}^{\text{real}}} may be defined by ψ n , ℓ , m real = { 2 ( − 1 ) m Im { ψ n , ℓ , | m | } for m < 0 ψ n , ℓ , | m | for m = 0 2 ( − 1 ) m Re { ψ n , ℓ , | m | } for m > 0 = { i 2 ( ψ n , ℓ , − | m | − ( − 1 ) m ψ n , ℓ , | m | ) for m < 0 ψ n , ℓ , | m | for m = 0 1 2 ( ψ n , ℓ , − | m | + ( − 1 ) m ψ n , ℓ , | m | ) for m > 0 {\displaystyle {\begin{aligned}\psi _{n,\ell ,m}^{\text{real}}&={\begin{cases}{\sqrt {2}}(-1)^{m}{\text{Im}}\left\{\psi _{n,\ell ,|m|}\right\}&{\text{ for }}m<0\\[2pt]\psi _{n,\ell ,|m|}&{\text{ for }}m=0\\[2pt]{\sqrt {2}}(-1)^{m}{\text{Re}}\left\{\psi _{n,\ell ,|m|}\right\}&{\text{ for }}m>0\end{cases}}\\[4pt]&={\begin{cases}{\frac {i}{\sqrt {2}}}\left(\psi _{n,\ell ,-|m|}-(-1)^{m}\psi _{n,\ell ,|m|}\right)&{\text{ for }}m<0\\[2pt]\psi _{n,\ell ,|m|}&{\text{ for }}m=0\\[4pt]{\frac {1}{\sqrt {2}}}\left(\psi _{n,\ell ,-|m|}+(-1)^{m}\psi _{n,\ell ,|m|}\right)&{\text{ for }}m>0\end{cases}}\end{aligned}}} If ψ n , ℓ , m ( r , θ , ϕ ) = R n l ( r ) Y ℓ m ( θ , ϕ ) {\displaystyle \psi _{n,\ell ,m}(r,\theta ,\phi )=R_{nl}(r)Y_{\ell }^{m}(\theta ,\phi )} , with R n l ( r ) {\displaystyle R_{nl}(r)} the radial part of the orbital, this definition is equivalent to ψ n , ℓ , m real ( r , θ , ϕ ) = R n l ( r ) Y ℓ m ( θ , ϕ ) {\displaystyle \psi _{n,\ell ,m}^{\text{real}}(r,\theta ,\phi )=R_{nl}(r)Y_{\ell m}(\theta ,\phi )} where Y ℓ m {\displaystyle Y_{\ell m}} is the real spherical harmonic related to either the real or imaginary part of the complex spherical harmonic Y ℓ m {\displaystyle Y_{\ell }^{m}} . Real spherical harmonics are physically relevant when an atom is embedded in a crystalline solid, in which case there are multiple preferred symmetry axes but no single preferred direction. Real atomic orbitals are also more frequently encountered in introductory chemistry textbooks and shown in common orbital visualizations. In real hydrogen-like orbitals, quantum numbers n and ℓ have the same interpretation and significance as their complex counterparts, but m is no longer a good quantum number (but its absolute value is). Some real orbitals are given specific names beyond the simple ψ n , ℓ , m {\displaystyle \psi _{n,\ell ,m}} designation. Orbitals with quantum number ℓ = 0, 1, 2, 3, 4, 5, 6... are called s, p, d, f, g, h, i, ... orbitals. With this one can already assign names to complex orbitals such as 2 p ± 1 = ψ 2 , 1 , ± 1 {\displaystyle 2{\text{p}}_{\pm 1}=\psi _{2,1,\pm 1}} ; the first symbol is the n quantum number, the second character is the symbol for that particular ℓ quantum number and the subscript is the m quantum number. As an example of how the full orbital names are generated for real orbitals, one may calculate ψ n , 1 , ± 1 real {\displaystyle \psi _{n,1,\pm 1}^{\text{real}}} . From the table of spherical harmonics, ψ n , 1 , ± 1 = R n , 1 Y 1 ± 1 = ∓ R n , 1 3 / 8 π ⋅ ( x ± i y ) / r {\textstyle \psi _{n,1,\pm 1}=R_{n,1}Y_{1}^{\pm 1}=\mp R_{n,1}{\sqrt {3/8\pi }}\cdot (x\pm iy)/r} with r = x 2 + y 2 + z 2 {\textstyle r={\sqrt {x^{2}+y^{2}+z^{2}}}} . 
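Before the explicit p results are written out next, the real-combination rule just given can be checked numerically. The sketch below is illustrative only: it builds a real harmonic from SciPy's complex spherical harmonics (which include the Condon–Shortley phase) and compares the ℓ = 1, m = +1 case against the √(3/4π)·x/r angular form quoted below; the function name real_harmonic and the test angles are assumptions of this sketch, not standard routines.

```python
import numpy as np
from scipy.special import sph_harm  # complex Y_l^m, Condon–Shortley phase included

def real_harmonic(l, m, theta, phi):
    """Real spherical harmonic built from the complex ones via the rule above.
    theta is the polar angle, phi the azimuthal angle."""
    if m == 0:
        return sph_harm(0, l, phi, theta).real
    Y = sph_harm(abs(m), l, phi, theta)   # scipy argument order: (m, l, azimuth, polar)
    return np.sqrt(2) * (-1) ** m * (Y.real if m > 0 else Y.imag)

# The (l = 1, m = +1) combination should reproduce the p_x angular factor
# sqrt(3/(4*pi)) * x/r = sqrt(3/(4*pi)) * sin(theta) * cos(phi).
theta, phi = 0.7, 1.3   # arbitrary test angles
lhs = real_harmonic(1, +1, theta, phi)
rhs = np.sqrt(3 / (4 * np.pi)) * np.sin(theta) * np.cos(phi)
print(np.isclose(lhs, rhs))   # expected: True
```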
Then ψ n , 1 , + 1 real = R n , 1 3 4 π ⋅ x r ψ n , 1 , − 1 real = R n , 1 3 4 π ⋅ y r {\displaystyle {\begin{aligned}\psi _{n,1,+1}^{\text{real}}&=R_{n,1}{\sqrt {\frac {3}{4\pi }}}\cdot {\frac {x}{r}}\\\psi _{n,1,-1}^{\text{real}}&=R_{n,1}{\sqrt {\frac {3}{4\pi }}}\cdot {\frac {y}{r}}\end{aligned}}} Likewise ψ n , 1 , 0 = R n , 1 3 / 4 π ⋅ z / r {\textstyle \psi _{n,1,0}=R_{n,1}{\sqrt {3/4\pi }}\cdot z/r} . As a more complicated example: ψ n , 3 , + 1 real = R n , 3 1 4 21 2 π ⋅ x ⋅ ( 5 z 2 − r 2 ) r 3 {\displaystyle \psi _{n,3,+1}^{\text{real}}=R_{n,3}{\frac {1}{4}}{\sqrt {\frac {21}{2\pi }}}\cdot {\frac {x\cdot (5z^{2}-r^{2})}{r^{3}}}} In all these cases we generate a Cartesian label for the orbital by examining, and abbreviating, the polynomial in x, y, z appearing in the numerator. We ignore any terms in the z, r polynomial except for the term with the highest exponent in z. We then use the abbreviated polynomial as a subscript label for the atomic state, using the same nomenclature as above to indicate the n {\displaystyle n} and ℓ {\displaystyle \ell } quantum numbers. ψ n , 1 , − 1 real = n p y = i 2 ( n p − 1 + n p + 1 ) ψ n , 1 , 0 real = n p z = n p 0 ψ n , 1 , + 1 real = n p x = 1 2 ( n p − 1 − n p + 1 ) ψ n , 3 , + 1 real = n f x z 2 = 1 2 ( n f − 1 − n f + 1 ) {\displaystyle {\begin{aligned}\psi _{n,1,-1}^{\text{real}}&=n{\text{p}}_{y}={\frac {i}{\sqrt {2}}}\left(n{\text{p}}_{-1}+n{\text{p}}_{+1}\right)\\\psi _{n,1,0}^{\text{real}}&=n{\text{p}}_{z}=n{\text{p}}_{0}\\\psi _{n,1,+1}^{\text{real}}&=n{\text{p}}_{x}={\frac {1}{\sqrt {2}}}\left(n{\text{p}}_{-1}-n{\text{p}}_{+1}\right)\\\psi _{n,3,+1}^{\text{real}}&=nf_{xz^{2}}={\frac {1}{\sqrt {2}}}\left(nf_{-1}-nf_{+1}\right)\end{aligned}}} The expressions above all use the Condon–Shortley phase convention, which is favored by quantum physicists. Other conventions exist for the phase of the spherical harmonics. Under these different conventions the p x {\displaystyle {\text{p}}_{x}} and p y {\displaystyle {\text{p}}_{y}} orbitals may appear, for example, as the sum and difference of p + 1 {\displaystyle {\text{p}}_{+1}} and p − 1 {\displaystyle {\text{p}}_{-1}} , contrary to what is shown above. Below is a list of these Cartesian polynomial names for the atomic orbitals. There does not seem to be a reference in the literature as to how to abbreviate the long Cartesian spherical harmonic polynomials for ℓ > 3 {\displaystyle \ell >3} , so there does not seem to be consensus on the naming of g {\displaystyle g} orbitals or higher according to this nomenclature. == Shapes of orbitals == Simple pictures showing orbital shapes are intended to describe the angular forms of regions in space where the electrons occupying the orbital are likely to be found. The diagrams cannot show the entire region where an electron can be found, since according to quantum mechanics there is a non-zero probability of finding the electron (almost) anywhere in space. Instead the diagrams are approximate representations of boundary or contour surfaces where the probability density | ψ(r, θ, φ) |2 has a constant value, chosen so that there is a certain probability (for example 90%) of finding the electron within the contour. Although | ψ |2 as the square of an absolute value is everywhere non-negative, the sign of the wave function ψ(r, θ, φ) is often indicated in each subregion of the orbital picture.
Sometimes the ψ function is graphed to show its phases, rather than | ψ(r, θ, φ) |2 which shows probability density but has no phase (which is lost when taking absolute value, since ψ(r, θ, φ) is a complex number). |ψ(r, θ, φ)|2 orbital graphs tend to have less spherical, thinner lobes than ψ(r, θ, φ) graphs, but have the same number of lobes in the same places, and otherwise are recognizable. This article, to show wave function phase, shows mostly ψ(r, θ, φ) graphs. The lobes can be seen as standing wave interference patterns between the two counter-rotating, ring-resonant traveling wave m and −m modes; the projection of the orbital onto the xy plane has a resonant m wavelength around the circumference. Although rarely shown, the traveling wave solutions can be seen as rotating banded tori; the bands represent phase information. For each m there are two standing wave solutions ⟨m⟩ + ⟨−m⟩ and ⟨m⟩ − ⟨−m⟩. If m = 0, the orbital is vertical, counter rotating information is unknown, and the orbital is z-axis symmetric. If ℓ = 0 there are no counter rotating modes. There are only radial modes and the shape is spherically symmetric. Nodal planes and nodal spheres are surfaces on which the probability density vanishes. The number of nodal surfaces is controlled by the quantum numbers n and ℓ. An orbital with azimuthal quantum number ℓ has ℓ radial nodal planes passing through the origin. For example, the s orbitals (ℓ = 0) are spherically symmetric and have no nodal planes, whereas the p orbitals (ℓ = 1) have a single nodal plane between the lobes. The number of nodal spheres equals n−ℓ−1, consistent with the restriction ℓ ≤ n−1 on the quantum numbers. The principal quantum number controls the total number of nodal surfaces which is n−1. Loosely speaking, n is energy, ℓ is analogous to eccentricity, and m is orientation. In general, n determines size and energy of the orbital for a given nucleus; as n increases, the size of the orbital increases. The higher nuclear charge Z of heavier elements causes their orbitals to contract by comparison to lighter ones, so that the size of the atom remains very roughly constant, even as the number of electrons increases. Also in general terms, ℓ determines an orbital's shape, and mℓ its orientation. However, since some orbitals are described by equations in complex numbers, the shape sometimes depends on mℓ also. Together, the whole set of orbitals for a given ℓ and n fill space as symmetrically as possible, though with increasingly complex sets of lobes and nodes. The single s orbitals ( ℓ = 0 {\displaystyle \ell =0} ) are shaped like spheres. For n = 1 it is roughly a solid ball (densest at center and fades outward exponentially), but for n ≥ 2, each single s orbital is made of spherically symmetric surfaces which are nested shells (i.e., the "wave-structure" is radial, following a sinusoidal radial component as well). See illustration of a cross-section of these nested shells, at right. The s orbitals for all n numbers are the only orbitals with an anti-node (a region of high wave function density) at the center of the nucleus. All other orbitals (p, d, f, etc.) have angular momentum, and thus avoid the nucleus (having a wave node at the nucleus). Recently, there has been an effort to experimentally image the 1s and 2p orbitals in a SrTiO3 crystal using scanning transmission electron microscopy with energy dispersive x-ray spectroscopy. 
Because the imaging was conducted using an electron beam, Coulombic beam-orbital interaction that is often termed the impact parameter effect is included in the outcome (see the figure at right). The shapes of p, d and f orbitals are described verbally here and shown graphically in the Orbitals table below. The three p orbitals for n = 2 have the form of two ellipsoids with a point of tangency at the nucleus (the two-lobed shape is sometimes referred to as a "dumbbell"—there are two lobes pointing in opposite directions from each other). The three p orbitals in each shell are oriented at right angles to each other, as determined by their respective linear combination of values of mℓ. The overall result is a lobe pointing along each direction of the primary axes. Four of the five d orbitals for n = 3 look similar, each with four pear-shaped lobes, each lobe tangent at right angles to two others, and the centers of all four lying in one plane. Three of these planes are the xy-, xz-, and yz-planes—the lobes are between the pairs of primary axes—and the fourth has its lobes along the x and y axes themselves. The fifth and final d orbital consists of three regions of high probability density: a torus in between two pear-shaped regions placed symmetrically on its z axis. The overall total of 18 directional lobes point in every primary axis direction and between every pair. There are seven f orbitals, each with shapes more complex than those of the d orbitals. Additionally, as is the case with the s orbitals, individual p, d, f and g orbitals with n values higher than the lowest possible value exhibit an additional radial node structure which is reminiscent of harmonic waves of the same type, as compared with the lowest (or fundamental) mode of the wave. As with s orbitals, this phenomenon provides p, d, f, and g orbitals at the next higher possible value of n (for example, 3p orbitals vs. the fundamental 2p) with an additional node in each lobe. Still higher values of n further increase the number of radial nodes, for each type of orbital. The shapes of atomic orbitals in a one-electron atom are related to 3-dimensional spherical harmonics. These shapes are not unique, and any linear combination is valid, like a transformation to cubic harmonics; in fact, it is possible to generate sets where all the d's are the same shape, just like the px, py, and pz are the same shape. Although individual orbitals are most often shown independent of each other, the orbitals coexist around the nucleus at the same time. Also, in 1927, Albrecht Unsöld proved that if one sums the electron density of all orbitals of a particular azimuthal quantum number ℓ of the same shell n (e.g., all three 2p orbitals, or all five 3d orbitals) where each orbital is occupied by an electron or each is occupied by an electron pair, then all angular dependence disappears; that is, the resulting total density of all the atomic orbitals in that subshell (those with the same ℓ) is spherical. This is known as Unsöld's theorem. === Orbitals table === This table shows the real hydrogen-like wave functions for all atomic orbitals up to 7s, and therefore covers the occupied orbitals in the ground state of all elements in the periodic table up to radium and some beyond. "ψ" graphs are shown with − and + wave function phases shown in two different colors (arbitrarily red and blue).
The pz orbital is the same as the p0 orbital, but the px and py are formed by taking linear combinations of the p+1 and p−1 orbitals (which is why they are listed under the m = ±1 label). Also, the p+1 and p−1 are not the same shape as the p0, since they are pure spherical harmonics. * No elements with 6f, 7d or 7f electrons have been discovered yet. † Elements with 7p electrons have been discovered, but their electronic configurations are only predicted – save the exceptional Lr, which fills 7p1 instead of 6d1. ‡ For the elements whose highest occupied orbital is a 6d orbital, only some electronic configurations have been confirmed. (Mt, Ds, Rg and Cn are still missing). These are the real-valued orbitals commonly used in chemistry. Only the m = 0 {\displaystyle m=0} orbitals are eigenstates of the orbital angular momentum operator, L ^ z {\displaystyle {\hat {L}}_{z}} . The columns with m = ± 1 , ± 2 , ⋯ {\displaystyle m=\pm 1,\pm 2,\cdots } are combinations of two eigenstates. See comparison in the following picture: === Qualitative understanding of shapes === The shapes of atomic orbitals can be qualitatively understood by considering the analogous case of standing waves on a circular drum. To see the analogy, the mean vibrational displacement of each bit of drum membrane from the equilibrium point over many cycles (a measure of average drum membrane velocity and momentum at that point) must be considered relative to that point's distance from the center of the drum head. If this displacement is taken as being analogous to the probability of finding an electron at a given distance from the nucleus, then it will be seen that the many modes of the vibrating disk form patterns that trace the various shapes of atomic orbitals. The basic reason for this correspondence lies in the fact that the distribution of kinetic energy and momentum in a matter-wave is predictive of where the particle associated with the wave will be. That is, the probability of finding an electron at a given place is also a function of the electron's average momentum at that point, since high electron momentum at a given position tends to "localize" the electron in that position, via the properties of electron wave-packets (see the Heisenberg uncertainty principle for details of the mechanism). This relationship means that certain key features can be observed in both drum membrane modes and atomic orbitals. For example, in all of the modes analogous to s orbitals (the top row in the animated illustration below), it can be seen that the very center of the drum membrane vibrates most strongly, corresponding to the antinode in all s orbitals in an atom. This antinode means the electron is most likely to be at the physical position of the nucleus (which it passes straight through without scattering or striking it), since it is moving (on average) most rapidly at that point, giving it maximal momentum. A mental "planetary orbit" picture closest to the behavior of electrons in s orbitals, all of which have no angular momentum, might perhaps be that of a Keplerian orbit with the orbital eccentricity of 1 but a finite major axis, not physically possible (because the particles would collide), but can be imagined as a limit of orbits with equal major axes but increasing eccentricity. Below, a number of drum membrane vibration modes and the respective wave functions of the hydrogen atom are shown.
A correspondence can be considered where the wave functions of a vibrating drum head are for a two-coordinate system ψ(r, θ) and the wave functions for a vibrating sphere are three-coordinate ψ(r, θ, φ). s-type drum modes and wave functions None of the other sets of modes in a drum membrane have a central antinode, and in all of them the center of the drum does not move. These correspond to a node at the nucleus for all non-s orbitals in an atom. These orbitals all have some angular momentum, and in the planetary model, they correspond to particles in orbit with eccentricity less than 1.0, so that they do not pass straight through the center of the primary body, but keep somewhat away from it. In addition, the drum modes analogous to p and d modes in an atom show spatial irregularity along the different radial directions from the center of the drum, whereas all of the modes analogous to s modes are perfectly symmetrical in radial direction. The non-radial-symmetry properties of non-s orbitals are necessary to localize a particle with angular momentum and a wave nature in an orbital where it must tend to stay away from the central attraction force, since any particle localized at the point of central attraction could have no angular momentum. For these modes, waves in the drum head tend to avoid the central point. Such features again emphasize that the shapes of atomic orbitals are a direct consequence of the wave nature of electrons. p-type drum modes and wave functions d-type drum modes == Orbital energy == In atoms with one electron (hydrogen-like atom), the energy of an orbital (and, consequently, any electron in the orbital) is determined mainly by n {\displaystyle n} . The n = 1 {\displaystyle n=1} orbital has the lowest possible energy in the atom. Each successively higher value of n {\displaystyle n} has a higher energy, but the difference decreases as n {\displaystyle n} increases. For high n {\displaystyle n} , the energy becomes so high that the electron can easily escape the atom. In single-electron atoms, all levels with different ℓ {\displaystyle \ell } within a given n {\displaystyle n} are degenerate in the Schrödinger approximation, and have the same energy. This approximation is broken slightly in the solution to the Dirac equation (where energy depends on n and another quantum number j), and by the effect of the magnetic field of the nucleus and quantum electrodynamics effects. The latter induce tiny binding energy differences especially for s electrons that go nearer the nucleus, since these feel a very slightly different nuclear charge, even in one-electron atoms; see Lamb shift. In atoms with multiple electrons, the energy of an electron depends not only on its orbital, but also on its interactions with other electrons. These interactions depend on the detail of its spatial probability distribution, and so the energy levels of orbitals depend not only on n {\displaystyle n} but also on ℓ {\displaystyle \ell } . Higher values of ℓ {\displaystyle \ell } are associated with higher values of energy; for instance, the 2p state is higher than the 2s state. When ℓ = 2 {\displaystyle \ell =2} , the increase in energy of the orbital becomes so large as to push the energy of the orbital above the energy of the s orbital in the next higher shell; when ℓ = 3 {\displaystyle \ell =3} the energy is pushed into the shell two steps higher. The filling of the 3d orbitals does not occur until the 4s orbitals have been filled.
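For contrast with the multi-electron reordering described above, the one-electron levels discussed at the start of this section depend only on n, with the gaps between successive levels shrinking toward the ionization threshold. A minimal numerical sketch of that n-dependence (the constant and function names here are illustrative choices, not from any particular source):

```python
RYDBERG_EV = 13.6057   # hydrogen ground-state binding energy in eV

def hydrogen_like_energy(n, Z=1):
    """Schrödinger energy of a one-electron (hydrogen-like) level, in eV.
    The value depends only on n; all l within a shell are degenerate."""
    return -RYDBERG_EV * Z ** 2 / n ** 2

for n in range(1, 6):
    print(n, round(hydrogen_like_energy(n), 3))
# -13.606, -3.401, -1.512, -0.85, -0.544: the spacing between successive
# levels decreases as n grows, approaching the ionization limit at 0 eV.
```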
The increase in energy for subshells of increasing angular momentum in larger atoms is due to electron–electron interaction effects, and it is specifically related to the ability of low angular momentum electrons to penetrate more effectively toward the nucleus, where they are subject to less screening from the charge of intervening electrons. Thus, in atoms with higher atomic number, the ℓ {\displaystyle \ell } of electrons becomes more and more of a determining factor in their energy, and the principal quantum numbers n {\displaystyle n} of electrons become less and less important in their energy placement. The energy sequence of the first 35 subshells (e.g., 1s, 2p, 3d, etc.) is given in the following table. Each cell represents a subshell with n {\displaystyle n} and ℓ {\displaystyle \ell } given by its row and column indices, respectively. The number in the cell is the subshell's position in the sequence. For a linear listing of the subshells in terms of increasing energies in multielectron atoms, see the section below. Note: empty cells indicate non-existent sublevels, while numbers in italics indicate sublevels that could (potentially) exist, but which do not hold electrons in any element currently known. == Electron placement and the periodic table == Several rules govern the placement of electrons in orbitals (electron configuration). The first dictates that no two electrons in an atom may have the same set of values of quantum numbers (this is the Pauli exclusion principle). These quantum numbers include the three that define orbitals, as well as the spin magnetic quantum number ms. Thus, two electrons may occupy a single orbital, so long as they have different values of ms. Because ms takes one of only two values (+1/2 or −1/2), at most two electrons can occupy each orbital. Additionally, an electron always tends to fall to the lowest possible energy state. It is possible for it to occupy any orbital so long as it does not violate the Pauli exclusion principle, but if lower-energy orbitals are available, this condition is unstable. The electron will eventually lose energy (by releasing a photon) and drop into the lower orbital. Thus, electrons fill orbitals in the order specified by the energy sequence given above. This behavior is responsible for the structure of the periodic table. The table may be divided into several rows (called 'periods'), numbered starting with 1 at the top. The presently known elements occupy seven periods. If a certain period has number i, it consists of elements whose outermost electrons fall in the ith shell. Niels Bohr was the first to propose (1923) that the periodicity in the properties of the elements might be explained by the periodic filling of the electron energy levels, resulting in the electronic structure of the atom. The periodic table may also be divided into several numbered rectangular 'blocks'. The elements belonging to a given block have this common feature: their highest-energy electrons all belong to the same ℓ-state (but the n associated with that ℓ-state depends upon the period). For instance, the leftmost two columns constitute the 's-block'. The outermost electrons of Li and Be respectively belong to the 2s subshell, and those of Na and Mg to the 3s subshell.
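The subshell energy sequence referred to above, and the explicit filling order quoted next, follow to a good approximation the empirical Madelung rule: subshells fill in order of increasing n + ℓ, with ties broken by lower n. A minimal sketch that generates the sequence (the function name and the cutoff at n = 7 are choices made here for illustration):

```python
def madelung_order(n_max=7):
    """Subshell filling order from the Madelung rule:
    sort by n + l, breaking ties by lower n."""
    letters = "spdfghi"   # l = 0, 1, 2, ...
    subshells = [(n, l) for n in range(1, n_max + 1) for l in range(n)]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{letters[l]}" for n, l in subshells]

# The first 19 entries reproduce the sequence quoted below, ending at 7p.
print(", ".join(madelung_order()[:19]))
```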
The following is the order for filling the "subshell" orbitals, which also gives the order of the "blocks" in the periodic table: 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p The "periodic" nature of the filling of orbitals, as well as emergence of the s, p, d, and f "blocks", is more obvious if this order of filling is given in matrix form, with increasing principal quantum numbers starting the new rows ("periods") in the matrix. Then, each subshell (composed of the first two quantum numbers) is repeated as many times as required for each pair of electrons it may contain. The result is a compressed periodic table, with each entry representing two successive elements: Although this is the general order of orbital filling according to the Madelung rule, there are exceptions, and the actual electronic energies of each element are also dependent upon additional details of the atoms (see Electron configuration § Atoms: Aufbau principle and Madelung rule). The number of electrons in an electrically neutral atom increases with the atomic number. The electrons in the outermost shell, or valence electrons, tend to be responsible for an element's chemical behavior. Elements that contain the same number of valence electrons can be grouped together and display similar chemical properties. === Relativistic effects === For elements with high atomic number Z, the effects of relativity become more pronounced, and especially so for s electrons, which move at relativistic velocities as they penetrate the screening electrons near the core of high-Z atoms. This relativistic increase in momentum for high speed electrons causes a corresponding decrease in wavelength and contraction of 6s orbitals relative to 5d orbitals (by comparison to corresponding s and d electrons in lighter elements in the same column of the periodic table); this results in 6s valence electrons becoming lowered in energy. Examples of significant physical outcomes of this effect include the lowered melting temperature of mercury (which results from 6s electrons not being available for metal bonding) and the golden color of gold and caesium. In the Bohr model, an n = 1 electron has a velocity given by v = Zαc, where Z is the atomic number, α is the fine-structure constant, and c is the speed of light. In non-relativistic quantum mechanics, therefore, any atom with an atomic number greater than 137 would require its 1s electrons to be traveling faster than the speed of light. Even in the Dirac equation, which accounts for relativistic effects, the wave function of the electron for atoms with Z > 137 is oscillatory and unbounded. The significance of element 137, also known as untriseptium, was first pointed out by the physicist Richard Feynman. Element 137 is sometimes informally called feynmanium (symbol Fy). However, Feynman's approximation fails to predict the exact critical value of Z due to the non-point-charge nature of the nucleus and very small orbital radius of inner electrons, resulting in a potential seen by inner electrons which is effectively less than Z. The critical Z value, which makes the atom unstable with regard to high-field breakdown of the vacuum and production of electron–positron pairs, does not occur until Z is about 173. These conditions are not seen except transiently in collisions of very heavy nuclei such as lead or uranium in accelerators, where such electron–positron production from these effects has been claimed to be observed. 
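A quick numerical restatement of the Bohr-model estimate above, v/c = Zα with α the fine-structure constant, shows how fast inner electrons become and where the naive limit sits (the element choices below are arbitrary):

```python
ALPHA = 0.0072973525693   # fine-structure constant

for Z in (1, 29, 79, 137):
    print(Z, round(Z * ALPHA, 3))   # Bohr-model v/c of a 1s electron
# Z = 137 gives v/c close to 1, the naive limit discussed above; the finite
# nuclear size and the Dirac treatment move the true critical charge to
# roughly Z = 173.
```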
There are no nodes in relativistic orbital densities, although individual components of the wave function will have nodes. === pp hybridization (conjectured) === In late period 8 elements, a hybrid of 8p3/2 and 9p1/2 is expected to exist, where "3/2" and "1/2" refer to the total angular momentum quantum number. This "pp" hybrid may be responsible for the p-block of the period due to properties similar to p subshells in ordinary valence shells. Energy levels of 8p3/2 and 9p1/2 come close due to relativistic spin–orbit effects; the 9s subshell should also participate, as these elements are expected to be analogous to the respective 5p elements indium through xenon. == Transitions between orbitals == Bound quantum states have discrete energy levels. When applied to atomic orbitals, this means that the energy differences between states are also discrete. A transition between these states (i.e., an electron absorbing or emitting a photon) can thus happen only if the photon has an energy corresponding with the exact energy difference between said states. Consider two states of the hydrogen atom: State 1: n = 1, ℓ = 0, mℓ = 0 and ms = +1/2 State 2: n = 2, ℓ = 0, mℓ = 0 and ms = −1/2 By quantum theory, state 1 has a fixed energy of E1, and state 2 has a fixed energy of E2. Now, what would happen if an electron in state 1 were to move to state 2? For this to happen, the electron would need to gain an energy of exactly E2 − E1. If the electron receives energy that is less than or greater than this value, it cannot jump from state 1 to state 2. Now, suppose we irradiate the atom with a broad spectrum of light. Photons that reach the atom that have an energy of exactly E2 − E1 will be absorbed by the electron in state 1, and that electron will jump to state 2. However, photons that are greater or lower in energy cannot be absorbed by the electron, because the electron can jump only to one of the orbitals; it cannot jump to a state between orbitals. The result is that only photons of a specific frequency will be absorbed by the atom. This creates a line in the spectrum, known as an absorption line, which corresponds to the energy difference between states 1 and 2. The atomic orbital model thus predicts line spectra, which are observed experimentally. This is one of the main validations of the atomic orbital model. The atomic orbital model is nevertheless an approximation to the full quantum theory, which only recognizes many electron states. The predictions of line spectra are qualitatively useful but are not quantitatively accurate for atoms and ions other than those containing only one electron.
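As a numerical illustration of the two-state example above, the energies can be taken from the simple hydrogen values E_n ≈ −13.6 eV / n² (adequate here, since for the energy difference only n matters), giving the photon energy and the wavelength of the corresponding absorption line; the constant names below are choices of this sketch:

```python
H_PLANCK = 6.62607015e-34   # J s
C_LIGHT = 2.99792458e8      # m/s
EV = 1.602176634e-19        # J per eV

E1 = -13.6057               # n = 1 level, eV
E2 = -13.6057 / 4           # n = 2 level, eV
delta_E = E2 - E1           # ~10.2 eV; only photons of essentially this energy
                            # can drive the state 1 -> state 2 absorption

wavelength_nm = H_PLANCK * C_LIGHT / (delta_E * EV) * 1e9
print(round(delta_E, 2), "eV ->", round(wavelength_nm, 1), "nm")   # about 121.5 nm
```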
Wikipedia/Atomic_orbital_model
Molecular scale electronics, also called single-molecule electronics, is a branch of nanotechnology that uses single molecules, or nanoscale collections of single molecules, as electronic components. Because single molecules constitute the smallest stable structures imaginable, this miniaturization is the ultimate goal for shrinking electrical circuits. The field is often simply termed "molecular electronics", but this term is also used to refer to the distantly related field of conductive polymers and organic electronics, which uses the properties of molecules to affect the bulk properties of a material. A nomenclature distinction has been suggested so that molecular materials for electronics refers to this latter field of bulk applications, while molecular scale electronics refers to the nanoscale single-molecule applications treated here. == Fundamental concepts == Conventional electronics have traditionally been made from bulk materials. Ever since their invention in 1958, the performance and complexity of integrated circuits have undergone exponential growth, a trend named Moore’s law, as feature sizes of the embedded components have shrunk accordingly. As the structures shrink, the sensitivity to deviations increases. In a few technology generations, the composition of the devices must be controlled to a precision of a few atoms for the devices to work. With bulk methods growing increasingly demanding and costly as they near inherent limits, the idea was born that the components could instead be built up atom by atom in a chemistry lab (bottom up) versus carving them out of bulk material (top down). This is the idea behind molecular electronics, with the ultimate miniaturization being components contained in single molecules. In single-molecule electronics, the bulk material is replaced by single molecules. Instead of forming structures by removing or applying material after a pattern scaffold, the atoms are put together in a chemistry lab. In this way, billions of billions of copies are made simultaneously (typically more than 10^20 molecules are made at once) while the composition of the molecules is controlled down to the last atom. The molecules used have properties that resemble traditional electronic components such as a wire, transistor or rectifier. Single-molecule electronics is an emerging field, and entire electronic circuits consisting exclusively of molecular sized compounds are still very far from being realized. However, the unceasing demand for more computing power, along with the inherent limits of lithographic methods as of 2016, makes the transition seem unavoidable. Currently, the focus is on discovering molecules with interesting properties and on finding ways to obtain reliable and reproducible contacts between the molecular components and the bulk material of the electrodes. == Theoretical basis == Molecular electronics operates at distances of less than 100 nanometers. The miniaturization down to single molecules brings the scale down to a regime where quantum mechanics effects are important. In conventional electronic components, electrons can be filled in or drawn out more or less like a continuous flow of electric charge. In contrast, in molecular electronics the transfer of one electron alters the system significantly. For example, when an electron has been transferred from a source electrode to a molecule, the molecule gets charged up, which makes it far harder for the next electron to transfer (see also Coulomb blockade).
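A rough sense of scale for this charging effect comes from comparing the single-electron charging energy E_C = e²/2C with the thermal energy kT. The capacitance values in the sketch below are illustrative guesses (roughly a lithographically defined quantum dot versus a molecular-scale junction), not measured figures:

```python
E_CHARGE = 1.602176634e-19   # C
K_BOLTZMANN = 1.380649e-23   # J/K

def charging_energy_eV(capacitance):
    """Single-electron charging energy e^2 / (2C), returned in eV."""
    return E_CHARGE ** 2 / (2.0 * capacitance) / E_CHARGE

thermal_eV = K_BOLTZMANN * 300 / E_CHARGE      # about 0.026 eV at room temperature
for C in (1e-15, 1e-18):                       # ~1 fF vs ~1 aF (assumed values)
    print(f"C = {C:.0e} F: E_C = {charging_energy_eV(C):.4f} eV, "
          f"kT at 300 K = {thermal_eV:.3f} eV")
# Only when E_C clearly exceeds kT does one extra electron noticeably block the next.
```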
The significant amount of energy due to charging must be accounted for when making calculations about the electronic properties of the setup, and is highly sensitive to distances to conducting surfaces nearby. The theory of single-molecule devices is especially interesting since the system under consideration is an open quantum system in nonequilibrium (driven by voltage). In the low bias voltage regime, the nonequilibrium nature of the molecular junction can be ignored, and the current-voltage traits of the device can be calculated using the equilibrium electronic structure of the system. However, in stronger bias regimes a more sophisticated treatment is required, as there is no longer a variational principle. In the elastic tunneling case (where the passing electron does not exchange energy with the system), the formalism of Rolf Landauer can be used to calculate the transmission through the system as a function of bias voltage, and hence the current. In inelastic tunneling, an elegant formalism based on the non-equilibrium Green's functions of Leo Kadanoff and Gordon Baym, and independently by Leonid Keldysh was advanced by Ned Wingreen and Yigal Meir. This Meir-Wingreen formulation has been used to great success in the molecular electronics community to examine the more difficult and interesting cases where the transient electron exchanges energy with the molecular system (for example through electron-phonon coupling or electronic excitations). Further, connecting single molecules reliably to a larger scale circuit has proven a great challenge, and constitutes a significant hindrance to commercialization. == Examples == Common for molecules used in molecular electronics is that the structures contain many alternating double and single bonds (see also Conjugated system). This is done because such patterns delocalize the molecular orbitals, making it possible for electrons to move freely over the conjugated area. === Wires === The sole purpose of molecular wires is to electrically connect different parts of a molecular electrical circuit. As the assembly of these and their connection to a macroscopic circuit is still not mastered, the focus of research in single-molecule electronics is primarily on the functionalized molecules: molecular wires are characterized by containing no functional groups and are hence composed of plain repetitions of a conjugated building block. Among these are the carbon nanotubes that are quite large compared to the other suggestions but have shown very promising electrical properties. The main problem with the molecular wires is to obtain good electrical contact with the electrodes so that electrons can move freely in and out of the wire. === Transistors === Single-molecule transistors are fundamentally different from the ones known from bulk electronics. The gate in a conventional (field-effect) transistor determines the conductance between the source and drain electrode by controlling the density of charge carriers between them, whereas the gate in a single-molecule transistor controls the possibility of a single electron to jump on and off the molecule by modifying the energy of the molecular orbitals. One of the effects of this difference is that the single-molecule transistor is almost binary: it is either on or off. This opposes its bulk counterparts, which have quadratic responses to gate voltage. It is the quantization of charge into electrons that is responsible for the markedly different behavior compared to bulk electronics. 
Because of the size of a single molecule, the charging due to a single electron is significant and provides means to turn a transistor on or off (see Coulomb blockade). For this to work, the electronic orbitals on the transistor molecule cannot be too well integrated with the orbitals on the electrodes. If they are, an electron cannot be said to be located on the molecule or the electrodes and the molecule will function as a wire. A popular group of molecules that can work as the semiconducting channel material in a molecular transistor is the oligopolyphenylenevinylenes (OPVs), which work by the Coulomb blockade mechanism when placed between the source and drain electrodes in an appropriate way. Fullerenes work by the same mechanism and have also been commonly used. Semiconducting carbon nanotubes have also been demonstrated to work as channel material, but although molecular, these molecules are sufficiently large to behave almost as bulk semiconductors. The size of the molecules, and the low temperature of the measurements being conducted, make the quantum mechanical states well defined. Thus, it is being researched whether the quantum mechanical properties can be used for more advanced purposes than simple transistors (e.g. spintronics). Physicists at the University of Arizona, in collaboration with chemists from the University of Madrid, have designed a single-molecule transistor using a ring-shaped molecule similar to benzene. Physicists at Canada's National Institute for Nanotechnology have designed a single-molecule transistor using styrene. Both groups expect (the designs were experimentally unverified as of June 2005) their respective devices to function at room temperature, and to be controlled by a single electron. === Rectifiers (diodes) === Molecular rectifiers are mimics of their bulk counterparts and have an asymmetric construction so that the molecule can accept electrons at one end but not the other. The molecules have an electron donor (D) at one end and an electron acceptor (A) at the other. This way, the unstable state D+ – A− will be more readily made than D− – A+. The result is that an electric current can be drawn through the molecule if the electrons are added through the acceptor end, but less easily if the reverse is attempted. == Methods == One of the biggest problems with measuring on single molecules is to establish reproducible electrical contact with only one molecule and doing so without short-circuiting the electrodes. Because the current photolithographic technology is unable to produce electrode gaps small enough to contact both ends of the molecules tested (on the order of nanometers), alternative strategies are applied. === Molecular gaps === One way to produce electrodes with a molecular-sized gap between them is break junctions, in which a thin electrode is stretched until it breaks. Another is electromigration. Here a current is led through a thin wire until it melts and the atoms migrate to produce the gap. Further, the reach of conventional photolithography can be enhanced by chemically etching or depositing metal on the electrodes. Probably the easiest way to conduct measurements on several molecules is to use the tip of a scanning tunneling microscope (STM) to contact molecules adhered at the other end to a metal substrate. === Anchoring === A popular way to anchor molecules to the electrodes is to make use of sulfur's high chemical affinity to gold.
In these setups, the molecules are synthesized so that sulfur atoms are placed strategically to function as crocodile clips connecting the molecules to the gold electrodes. Though useful, the anchoring is non-specific and thus anchors the molecules randomly to all gold surfaces. Further, the contact resistance is highly dependent on the precise atomic geometry around the site of anchoring and thereby inherently compromises the reproducibility of the connection. To circumvent the latter issue, experiments have shown that fullerenes could be a good candidate for use instead of sulfur because of the large conjugated π-system that can electrically contact many more atoms at once than one atom of sulfur. === Fullerene nanoelectronics === In polymers, classical organic molecules are composed of both carbon and hydrogen (and sometimes additional elements such as nitrogen, chlorine or sulphur). They are obtained from petroleum and can often be synthesized in large amounts. Most of these molecules are insulating when their length exceeds a few nanometers. However, naturally occurring carbon is conducting, especially graphite recovered from coal or encountered otherwise. From a theoretical viewpoint, graphite is a semi-metal, a category in between metals and semi-conductors. It has a layered structure, each sheet being one atom thick. Between each sheet, the interactions are weak enough to allow an easy manual cleavage. Tailoring the graphite sheet to obtain well defined nanometer-sized objects remains a challenge. However, by the close of the twentieth century, chemists were exploring methods to fabricate extremely small graphitic objects that could be considered single molecules. After studying the interstellar conditions under which carbon is known to form clusters, Richard Smalley's group (Rice University, Texas) set up an experiment in which graphite was vaporized via laser irradiation. Mass spectrometry revealed that clusters containing specific magic numbers of atoms were stable, especially those clusters of 60 atoms. Harry Kroto, an English chemist who assisted in the experiment, suggested a possible geometry for these clusters – atoms covalently bound with the exact symmetry of a soccer ball. Dubbed buckminsterfullerenes, buckyballs, or C60, the clusters retained some properties of graphite, such as conductivity. These objects were rapidly envisioned as possible building blocks for molecular electronics. == Problems == === Artifacts === When trying to measure electronic traits of molecules, artificial phenomena can occur that can be hard to distinguish from truly molecular behavior. Before they were discovered, these artifacts had mistakenly been published as features pertaining to the molecules in question. Applying a voltage drop on the order of volts across a nanometer-sized junction results in a very strong electrical field. The field can cause metal atoms to migrate and eventually close the gap by a thin filament, which can be broken again when carrying a current. The two levels of conductance imitate molecular switching between a conductive and an insulating state of a molecule. Another encountered artifact is when the electrodes undergo chemical reactions due to the high field strength in the gap. When the voltage bias is reversed, the reaction will cause hysteresis in the measurements that can be interpreted as being of molecular origin.
A metallic grain between the electrodes can act as a single electron transistor by the mechanism described above, thus resembling the traits of a molecular transistor. This artifact is especially common with nanogaps produced by the electromigration method. == History and progress == In their treatment of so-called donor-acceptor complexes in the 1940s, Robert Mulliken and Albert Szent-Györgyi advanced the concept of charge transfer in molecules. They subsequently further refined the study of both charge transfer and energy transfer in molecules. Likewise, a 1974 paper from Mark Ratner and Ari Aviram illustrated a theoretical molecular rectifier. In 1988, Aviram described in detail a theoretical single-molecule field-effect transistor. Further concepts were proposed by Forrest Carter of the Naval Research Laboratory, including single-molecule logic gates. A wide range of ideas were presented, under his aegis, at a conference entitled Molecular Electronic Devices in 1988. These were theoretical constructs and not concrete devices. The direct measurement of the electronic traits of individual molecules awaited the development of methods for making molecular-scale electrical contacts. This was no easy task. Thus, the first experiment directly measuring the conductance of a single molecule was only reported in 1995 on a single C60 molecule by C. Joachim and J. K. Gimzewski in their seminal Physical Review Letters paper and later in 1997 by Mark Reed and co-workers on a few hundred molecules. Since then, this branch of the field has advanced rapidly. Likewise, as it has grown possible to measure such properties directly, the theoretical predictions of the early workers have been confirmed substantially. The concept of molecular electronics was published in 1974 when Aviram and Ratner suggested an organic molecule that could work as a rectifier. Having both huge commercial and fundamental interest, much effort was put into proving its feasibility, and 16 years later in 1990, the first demonstration of an intrinsic molecular rectifier was realized by Ashwell and coworkers for a thin film of molecules. The first measurement of the conductance of a single molecule was realised in 1994 by C. Joachim and J. K. Gimzewski and published in 1995 (see the corresponding Phys. Rev. Lett. paper). This was the conclusion of 10 years of research started at IBM TJ Watson, using the scanning tunnelling microscope tip apex to switch a single molecule as already explored by A. Aviram, C. Joachim and M. Pomerantz at the end of the 1980s (see their seminal Chem. Phys. Lett. paper during this period). The trick was to use a UHV scanning tunneling microscope to allow the tip apex to gently touch the top of a single C60 molecule adsorbed on an Au(110) surface. A resistance of 55 MOhms was recorded along with a low voltage linear I-V. The contact was certified by recording the I-z current distance property, which allows measurement of the deformation of the C60 cage under contact. This first experiment was followed by the reported result using a mechanical break junction method to connect two gold electrodes to a sulfur-terminated molecular wire by Mark Reed and James Tour in 1997. The scanning tunneling microscope (STM) and later the atomic force microscope (AFM) have facilitated manipulating single-molecule electronics. Also, theoretical advances in molecular electronics have facilitated further understanding of non-adiabatic charge transfer events at electrode-electrolyte interfaces.
A single-molecule amplifier was implemented by C. Joachim and J.K. Gimzewski in IBM Zurich. This experiment, involving one C60 molecule, demonstrated that one such molecule can provide gain in a circuit via intramolecular quantum interference effects alone. A collaboration of researchers at Hewlett-Packard (HP) and University of California, Los Angeles (UCLA), led by James Heath, Fraser Stoddart, R. Stanley Williams, and Philip Kuekes, has developed molecular electronics based on rotaxanes and catenanes. Work is also occurring on the use of single-wall carbon nanotubes as field-effect transistors. Most of this work is being done by International Business Machines (IBM). Some specific reports of a field-effect transistor based on molecular self-assembled monolayers were shown to be fraudulent in 2002 as part of the Schön scandal. The Aviram-Ratner model for a unimolecular rectifier has been confirmed experimentally. Many rectifying molecules have so far been identified, and the number and efficiency of these systems is growing rapidly. Supramolecular electronics is a new field involving electronics at a supramolecular level. An important issue in molecular electronics is the determination of the resistance of a single molecule (both theoretical and experimental). For example, Bumm, et al. used STM to analyze a single molecular switch in a self-assembled monolayer to determine how conductive such a molecule can be. Another problem faced by this field is the difficulty of performing direct characterization since imaging at the molecular scale is often difficult in many experimental devices. == See also == Molecular electronics Single-molecule magnet Stereoelectronics Organic semiconductor Conductive polymer Molecular conductance Comparison of software for molecular mechanics modeling Unconventional computing == References ==
Wikipedia/Single-molecule_electronics
John Dalton (; 5 or 6 September 1766 – 27 July 1844) was an English chemist, physicist and meteorologist. He introduced the atomic theory into chemistry. He also researched colour blindness; as a result, the umbrella term for red-green congenital colour blindness disorders is Daltonism in several languages. == Early life == John Dalton was born on 5 or 6 September 1766 into a Quaker family in Eaglesfield, near Cockermouth, in Cumberland, England. His father was a weaver. He received his early education from his father and from Quaker John Fletcher, who ran a private school in the nearby village of Pardshaw Hall. Dalton's family was too poor to support him for long and he began to earn his living, from the age of ten, in the service of wealthy local Quaker Elihu Robinson. == Early career == When he was 15, Dalton joined his older brother Jonathan in running a Quaker school in Kendal, Westmorland, about 45 miles (72 km) from his home. Around the age of 23, Dalton may have considered studying law or medicine, but his relatives did not encourage him, perhaps because being a Dissenter, he was barred from attending English universities. He acquired much scientific knowledge from informal instruction by John Gough, a blind philosopher who was gifted in the sciences and arts. At 27, he was appointed teacher of mathematics and natural philosophy at the "Manchester Academy" in Manchester, a dissenting academy (the lineal predecessor, following a number of changes of location, of Harris Manchester College, Oxford). He remained for seven years, until the college's worsening financial situation led to his resignation. Dalton began a new career as a private tutor in the same two subjects. == Scientific work == === Meteorology === Dalton's early life was influenced by a prominent Quaker, Elihu Robinson, a competent meteorologist and instrument maker, from Eaglesfield, Cumberland, who interested him in problems of mathematics and meteorology. During his years in Kendal, Dalton contributed solutions to problems and answered questions on various subjects in The Ladies' Diary and the Gentleman's Diary. In 1787 at age 21 he began his meteorological diary in which, during the succeeding 57 years, he entered more than 200,000 observations. He rediscovered George Hadley's theory of atmospheric circulation (now known as the Hadley cell) around this time. In 1793 Dalton's first publication, Meteorological Observations and Essays, contained the seeds of several of his later discoveries but despite the originality of his treatment, little attention was paid to them by other scholars. A second work by Dalton, Elements of English Grammar (or A new system of grammatical instruction: for the use of schools and academies), was published in 1801. ==== Measuring mountains ==== After leaving the Lake District, Dalton returned annually to spend his holidays studying meteorology, something which involved a lot of hill-walking. Until the advent of aeroplanes and weather balloons, the only way to make measurements of temperature and humidity at altitude was to climb a mountain. Dalton estimated the height using a barometer. The Ordnance Survey did not publish maps for the Lake District until the 1860s. Before then, Dalton was one of the few authorities on the heights of the region's mountains. He was often accompanied by Jonathan Otley, who also made a study of the heights of the local peaks, using Dalton's figures as a comparison to check his work. Otley published his information in his map of 1818. 
Otley became both an assistant and a friend to Dalton. === Colour blindness === In 1794, shortly after his arrival in Manchester, Dalton was elected a member of the Manchester Literary and Philosophical Society, the "Lit & Phil", and a few weeks later he communicated his first paper on "Extraordinary facts relating to the vision of colours", in which he postulated that shortage in colour perception was caused by discoloration of the liquid medium of the eyeball. As both he and his brother were colour blind, he recognised that the condition must be hereditary. Although Dalton's theory was later disproven, his early research into colour vision deficiency was recognized after his lifetime. Examination of his preserved eyeball in 1995 demonstrated that Dalton had deuteranopia, a type of congenital red-green color blindness in which the gene for medium wavelength sensitive (green) photopsins is missing. Individuals with this form of colour blindness see every colour as mapped to blue, yellow or gray, or, as Dalton wrote in his seminal paper, That part of the image which others call red, appears to me little more than a shade, or defect of light; after that the orange, yellow and green seem one colour, which descends pretty uniformly from an intense to a rare yellow, making what I should call different shades of yellow. === Gas laws === In 1800, Dalton became secretary of the Manchester Literary and Philosophical Society, and in the following year he presented an important series of lectures, entitled "Experimental Essays" on the constitution of mixed gases; the pressure of steam and other vapours at different temperatures in a vacuum and in air; on evaporation; and on the thermal expansion of gases. The four essays, presented between 2 and 30 October 1801, were published in the Memoirs of the Literary and Philosophical Society of Manchester in 1802. The second essay opens with the remark, There can scarcely be a doubt entertained respecting the reducibility of all elastic fluids of whatever kind, into liquids; and we ought not to despair of effecting it in low temperatures and by strong pressures exerted upon the unmixed gases further. After describing experiments to ascertain the pressure of steam at various points between 0 and 100 °C (32 and 212 °F), Dalton concluded from observations of the vapour pressure of six different liquids, that the variation of vapour pressure for all liquids is equivalent, for the same variation of temperature, reckoning from vapour of any given pressure. In the fourth essay he remarks, I see no sufficient reason why we may not conclude, that all elastic fluids under the same pressure expand equally by heat—and that for any given expansion of mercury, the corresponding expansion of air is proportionally something less, the higher the temperature. ... It seems, therefore, that general laws respecting the absolute quantity and the nature of heat, are more likely to be derived from elastic fluids than from other substances. He enunciated Gay-Lussac's law, published in 1802 by Joseph Louis Gay-Lussac (Gay-Lussac credited the discovery to unpublished work from the 1780s by Jacques Charles). In the two or three years following the lectures, Dalton published several papers on similar topics. "On the Absorption of Gases by Water and other Liquids" (read as a lecture on 21 October 1803, first published in 1805) contained his law of partial pressures now known as Dalton's law. 
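Dalton's law states that the total pressure of a mixture of non-reacting gases equals the sum of the partial pressures each component would exert alone, so each partial pressure is the component's mole fraction times the total pressure. A minimal sketch using the familiar composition of dry air (the helper function name is an illustrative choice):

```python
def partial_pressures(total_pressure, mole_fractions):
    """Partial pressure of each component: p_i = x_i * p_total (Dalton's law)."""
    return {gas: x * total_pressure for gas, x in mole_fractions.items()}

dry_air = {"N2": 0.7808, "O2": 0.2095, "Ar": 0.0093, "CO2": 0.0004}
p = partial_pressures(101.325, dry_air)            # kPa, for 1 atm total
print({gas: round(v, 2) for gas, v in p.items()})
print(round(sum(p.values()), 3))                   # recovers the 101.325 kPa total
```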
=== Atomic theory === Arguably the most important of all Dalton's investigations are concerned with the atomic theory in chemistry. While his name is inseparably associated with this theory, the origin of Dalton's atomic theory is not fully understood. The theory may have been suggested to him either by researches on ethylene (olefiant gas) and methane (carburetted hydrogen) or by analysis of nitrous oxide (protoxide of azote) and nitrogen dioxide (deutoxide of azote), both views resting on the authority of Thomas Thomson. From 1814 to 1819, Irish chemist William Higgins claimed that Dalton had plagiarised his ideas, but Higgins' theory did not address relative atomic mass. Recent evidence suggests that Dalton's development of thought may have been influenced by the ideas of another Irish chemist Bryan Higgins, who was William's uncle. Bryan believed that an atom was a heavy central particle surrounded by an atmosphere of caloric, the supposed substance of heat at the time. The size of the atom was determined by the diameter of the caloric atmosphere. Based on the evidence, Dalton was aware of Bryan's theory and adopted very similar ideas and language, but he never acknowledged Bryan's anticipation of his caloric model. However, the essential novelty of Dalton's atomic theory is that he provided a method of calculating relative atomic weights for the chemical elements, which provides the means for the assignment of molecular formulas for all chemical substances. Neither Bryan nor William Higgins did this, and Dalton's priority for that crucial innovation is uncontested. A study of Dalton's laboratory notebooks, discovered in the rooms of the Manchester Literary and Philosophical Society, concluded that so far from Dalton being led by his search for an explanation of the law of multiple proportions to the idea that chemical combination consists in the interaction of atoms of definite and characteristic weight, the idea of atoms arose in his mind as a purely physical concept, forced on him by study of the physical properties of the atmosphere and other gases. The first published indications of this idea are to be found at the end of his paper "On the Absorption of Gases by Water and other Liquids" already mentioned. There he says: Why does not water admit its bulk of every kind of gas alike? This question I have duly considered, and though I am not able to satisfy myself completely I am nearly persuaded that the circumstance depends on the weight and number of the ultimate particles of the several gases. He then proposes relative weights for the atoms of a few elements, without going into further detail. However, a recent study of Dalton's laboratory notebook entries concludes he developed the chemical atomic theory in 1803 to reconcile Henry Cavendish’s and Antoine Lavoisier’s analytical data on the composition of nitric acid, not to explain the solubility of gases in water. The main points of Dalton's atomic theory, as it eventually developed, are: Elements are made of extremely small particles called atoms. Atoms of a given element are identical in size, mass and other properties; atoms of different elements differ in size, mass and other properties. Atoms cannot be subdivided, created or destroyed. Atoms of different elements combine in simple whole-number ratios to form chemical compounds. In chemical reactions, atoms are combined, separated or rearranged. 
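The essential quantitative step mentioned above, deriving relative atomic weights from measured mass ratios and an assumed formula, can be made concrete with a small sketch. It uses the modern mass composition of water (roughly 8 parts oxygen to 1 part hydrogen by mass) rather than Dalton's own analytical figures, and contrasts the oxygen weight implied by Dalton's assumed formula OH (discussed below) with that implied by the modern formula H2O.

```python
# Relative atomic weight of oxygen implied by a mass ratio and an assumed formula.
# Water is roughly 8 g of oxygen per 1 g of hydrogen (modern value; Dalton's own
# analyses gave a smaller ratio, so his tables listed a lower value for oxygen).
mass_ratio_O_per_H = 8.0   # grams of oxygen per gram of hydrogen in water

def oxygen_weight(n_H, n_O, ratio, H_weight=1.0):
    """Oxygen weight relative to hydrogen = 1, for an assumed formula H_nH O_nO."""
    # ratio = (n_O * W_O) / (n_H * W_H)  =>  W_O = ratio * n_H * W_H / n_O
    return ratio * n_H * H_weight / n_O

print("Assuming OH  :", oxygen_weight(1, 1, mass_ratio_O_per_H))   # -> 8
print("Assuming H2O :", oxygen_weight(2, 1, mass_ratio_O_per_H))   # -> 16
```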
In his first extended published discussion of the atomic theory (1808), Dalton proposed an additional (and controversial) "rule of greatest simplicity". This rule could not be independently confirmed, but some such assumption was necessary in order to propose formulas for a few simple molecules, upon which the calculation of atomic weights depended. This rule dictated that if the atoms of two different elements were known to form only a single compound, like hydrogen and oxygen forming water or hydrogen and nitrogen forming ammonia, the molecules of that compound shall be assumed to consist of one atom of each element. For elements that combined in multiple ratios, such as the then-known two oxides of carbon or the three oxides of nitrogen, their combinations were assumed to be the simplest ones possible. For example, if two such combinations are known, one must consist of an atom of each element, and the other must consist of one atom of one element and two atoms of the other. This was merely an assumption, derived from faith in the simplicity of nature. No evidence was then available to scientists to deduce how many atoms of each element combine to form molecules. But this or some other such rule was absolutely necessary to any incipient theory, since one needed an assumed molecular formula in order to calculate relative atomic weights. Dalton's "rule of greatest simplicity" caused him to assume that the formula for water was OH and ammonia was NH, quite different from our modern understanding (H2O, NH3). On the other hand, his simplicity rule led him to propose the correct modern formulas for the two oxides of carbon (CO and CO2). Despite the uncertainty at the heart of Dalton's atomic theory, the principles of the theory survived. === Relative atomic weights === Dalton published his first table of relative atomic weights containing six elements (hydrogen, oxygen, nitrogen, carbon, sulfur and phosphorus), relative to the weight of an atom of hydrogen conventionally taken as 1. Since these were only relative weights, they do not have a unit of weight attached to them. Dalton provided no indication in this paper how he had arrived at these numbers, but in his laboratory notebook, dated 6 September 1803, is a list in which he set out the relative weights of the atoms of a number of elements, derived from analysis of water, ammonia, carbon dioxide, etc. by chemists of the time. The extension of this idea to substances in general necessarily led him to the law of multiple proportions, and the comparison with experiment brilliantly confirmed his deduction. In the paper "On the Proportion of the Several Gases in the Atmosphere", read by him in November 1802, the law of multiple proportions appears to be anticipated in the words: The elements of oxygen may combine with a certain portion of nitrous gas or with twice that portion, but with no intermediate quantity. But there is reason to suspect that this sentence may have been added some time after the reading of the paper, which was not published until 1805. Compounds were listed as binary, ternary, quaternary, etc. (molecules composed of two, three, four, etc. atoms) in the New System of Chemical Philosophy depending on the number of atoms a compound had in its simplest, empirical form. Dalton hypothesised the structure of compounds can be represented in whole number ratios. So, one atom of element X combining with one atom of element Y is a binary compound. 
Furthermore, one atom of element X combining with two atoms of element Y or vice versa, is a ternary compound. Many of the first compounds listed in the New System of Chemical Philosophy correspond to modern views, although many others do not. Dalton used his own symbols to visually represent the atomic structure of compounds. They were depicted in the New System of Chemical Philosophy, where he listed 21 elements and 17 simple molecules. === Other investigations === Dalton published papers on such diverse topics as rain and dew and the origin of springs (hydrosphere); on heat, the colour of the sky, steam and the reflection and refraction of light; and on the grammatical subjects of the auxiliary verbs and participles of the English language. === Experimental approach === As an investigator, Dalton was often content with rough and inaccurate instruments, even though better ones were obtainable. Sir Humphry Davy described him as "a very coarse experimenter", who "almost always found the results he required, trusting to his head rather than his hands." On the other hand, historians who have replicated some of his crucial experiments have confirmed Dalton's skill and precision. In the preface to the second part of Volume I of his New System, he says he had so often been misled by taking for granted the results of others that he determined to write "as little as possible but what I can attest by my own experience", but this independence he carried so far that it sometimes resembled lack of receptivity. Thus he distrusted, and probably never fully accepted, Gay-Lussac's conclusions as to the combining volumes of gases. He held unconventional views on chlorine. Even after its elementary character had been settled by Davy, he persisted in using the atomic weights he himself had adopted, even when they had been superseded by the more accurate determinations of other chemists. He always objected to the chemical notation devised by Jöns Jacob Berzelius, although most thought that it was much simpler and more convenient than his own cumbersome system of circular symbols. == Other publications == For Rees's Cyclopædia Dalton contributed articles on Chemistry and Meteorology, but the topics are not known. He contributed 117 Memoirs of the Literary and Philosophical Society of Manchester from 1817 until his death in 1844 while president of that organisation. Of these the earlier are the most important. In one of them, read in 1814, he explains the principles of volumetric analysis, in which he was one of the earliest researchers. In 1840 a paper on phosphates and arsenates, often regarded as a weaker work, was refused by the Royal Society, and he was so incensed that he published it himself. He took the same course soon afterwards with four other papers, two of which ("On the quantity of acids, bases and salts in different varieties of salts" and "On a new and easy method of analysing sugar") contain his discovery, regarded by him as second in importance only to atomic theory, that certain anhydrates, when dissolved in water, cause no increase in its volume, his inference being that the salt enters into the pores of the water. == Public life == Even before he had propounded the atomic theory, Dalton had attained a considerable scientific reputation. In 1803, he was chosen to give a series of lectures on natural philosophy at the Royal Institution in London, and he delivered another series of lectures there in 1809–1810. 
Some witnesses reported that he was deficient in the qualities that make an attractive lecturer, being harsh and indistinct in voice, ineffective in the treatment of his subject, and singularly wanting in the language and power of illustration. In 1810, Sir Humphry Davy asked him to offer himself as a candidate for the fellowship of the Royal Society, but Dalton declined, possibly for financial reasons. In 1822 he was proposed without his knowledge, and on election paid the usual fee. Six years previously he had been made a corresponding member of the French Académie des Sciences, and in 1830 he was elected as one of its eight foreign associates in place of Davy. In 1833, Earl Grey's government conferred on him a pension of £150, raised in 1836 to £300 (equivalent to £17,981 and £35,672 in 2023, respectively). Dalton was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1834. A young James Prescott Joule, who later studied and published (1843) on the nature of heat and its relationship to mechanical work, was a pupil of Dalton in his last years. == Personal life == Dalton never married and had only a few close friends. As a Quaker, he lived a modest and unassuming personal life. For the 26 years prior to his death, Dalton lived in a room in the home of the Rev W. Johns, a published botanist, and his wife, in George Street, Manchester. Dalton and Johns died in the same year (1844). Dalton's daily round of laboratory work and tutoring in Manchester was broken only by annual excursions to the Lake District and occasional visits to London. In 1822 he paid a short visit to Paris, where he met many distinguished resident men of science. He attended several of the earlier meetings of the British Association at York, Oxford, Dublin and Bristol. == Disability and death == Dalton suffered a minor stroke in 1837, and a second in 1838 left him with a speech impairment, although he remained able to perform experiments. In May 1844 he had another stroke; on 26 July, while his hand was trembling, he recorded his last meteorological observation. On 27 July, in Manchester, Dalton fell from his bed and was found dead by his attendant. Dalton was accorded a civic funeral with full honours. His body lay in state in Manchester Town Hall for four days and more than 40,000 people filed past his coffin. The funeral procession included representatives of the city's major civic, commercial, and scientific bodies. He was buried in Manchester in Ardwick Cemetery; the cemetery is now a playing field, but pictures of the original grave may be found in published materials. == Legacy == Much of Dalton's written work, collected by the Manchester Literary and Philosophical Society, was damaged during bombing on 24 December 1940. It prompted Isaac Asimov to say, "John Dalton's records, carefully preserved for a century, were destroyed during the World War II bombing of Manchester. It is not only the living who are killed in war". The damaged papers are in the John Rylands Library. A bust of Dalton, by Chantrey, paid for by public subscription was placed in the entrance hall of the Royal Manchester Institution. Chantrey's large statue of Dalton, erected while Dalton was alive was placed in Manchester Town Hall in 1877. He "is probably the only scientist who got a statue in his lifetime". 
The Manchester-based Swiss phrenologist and sculptor William Bally made a cast of the interior of Dalton's cranium and of a cyst therein, having arrived at the Manchester Royal Infirmary too late to make a cast of the head and face. A cast of the head was made by a Mr Politi, whose arrival at the scene preceded that of Bally. John Dalton Street connects Deansgate and Albert Square in the centre of Manchester. The John Dalton building at Manchester Metropolitan University is occupied by the Faculty of Science and Engineering. Outside it stands William Theed's statue of Dalton, erected in Piccadilly in 1855 and moved to its present site in 1966. A blue plaque commemorates the site of his laboratory at 36 George Street in Manchester. The University of Manchester established two Dalton Chemical Scholarships, two Dalton Mathematical Scholarships, and a Dalton Prize for Natural History. A hall of residence is named Dalton Hall. The Dalton Medal has been awarded only twelve times by the Manchester Literary and Philosophical Society. The Dalton crater on the Moon was named after Dalton. "Daltonism" is a lesser-known synonym of colour-blindness and, in some languages, variations on this have persisted in common usage: for example, 'daltonien' is the French adjectival equivalent of 'colour-blind', and 'daltónico' and 'daltonico' are the Spanish and Italian equivalents. The inorganic section of the UK's Royal Society of Chemistry is named the Dalton Division, and the society's academic journal for inorganic chemistry is called Dalton Transactions. In honour of Dalton's work, many chemists and biochemists use the unit of mass dalton (symbol Da), also known as the unified atomic mass unit, equal to 1/12 the mass of a neutral atom of carbon-12. The dalton is officially accepted for use with the SI. Quaker schools have named buildings after Dalton: for example, a schoolhouse in the primary sector of Ackworth School is called Dalton. Dalton Township in southern Ontario was named after him. In 2001 the name was lost when the township was absorbed into the City of Kawartha Lakes, but in 2002 the Dalton name was affixed to a new park, Dalton Digby Wildlands Provincial Park. Asteroid (12292) Dalton was named after him. == Works == Dalton, John (1834). Meteorological Observations and Essays (2nd ed.). Manchester: Harrison and Crosfield. Retrieved 24 December 2007. Dalton, John (1893). Foundations of the Atomic Theory. Edinburgh: William F. Clay. Retrieved 24 December 2007. – Alembic Club reprint with some of Dalton's papers, along with some by William Hyde Wollaston and Thomas Thomson. Dalton, John (1893). Foundations of the Molecular Theory. Edinburgh: William F. Clay. Retrieved 15 August 2022. – With essays by Joseph Louis Gay-Lussac and Amedeo Avogadro. Dalton, John (1808). A New System of Chemical Philosophy. London. ISBN 978-1-153-05671-7. Retrieved 8 July 2008. John Dalton Papers at John Rylands Library, Manchester. Dalton, John (1808–1827). A New System of Chemical Philosophy (all images freely available for download in a variety of formats from Science History Institute Digital Collections at digital.sciencehistory.org). Dalton, John (1794). Extraordinary Facts Relating to the Vision of Colours: With Observations. Science History Institute Digital Collections. == See also == Pneumatic chemistry == Notes == == References == == Sources == Greenaway, Frank (1966). John Dalton and the Atom. Ithaca, New York: Cornell University Press. Henry, William C. (1854). 
Memoirs of the Life and Scientific Researches of John Dalton. London: Cavendish Society. Retrieved 21 July 2018. Hunt, D. M.; Dulai, K. S.; Bowmaker, J. K.; Mollon, J. D. (1995). "The Chemistry of John Dalton's Color Blindness". Science. 267 (5200): 984–988. Bibcode:1995Sci...267..984H. doi:10.1126/science.7863342. PMID 7863342. S2CID 6764146. Lonsdale, Henry (1874). The Worthies of Cumberland: John Dalton. George Routledge and Sons. Retrieved 24 December 2007. Millington, John Price (1906). John Dalton. London: J. M. Dent & Company. Retrieved 21 July 2018. Patterson, Elizabeth C. (1970). John Dalton and the Atomic Theory. Garden City, New York: Anchor. Rocke, Alan J. (2005). "In Search of El Dorado: John Dalton and the Origins of the Atomic Theory". Social Research. 72 (1): 125–158. doi:10.1353/sor.2005.0003. JSTOR 40972005. S2CID 141350239. Roscoe, Henry E. (1895). John Dalton and the Rise of Modern Chemistry. London: Macmillan. ISBN 9780608325361. Retrieved 24 December 2007. Roscoe, Henry E. & Harden, Arthur (1896). A New View of the Origin of Dalton's Atomic Theory. London: Macmillan. ISBN 978-1-4369-2630-0. Retrieved 24 December 2007. Smith, R. Angus (1856). Memoir of John Dalton and History of the Atomic Theory. London: H. Bailliere. ISBN 978-1-4021-6437-8. Retrieved 24 December 2007. Smyth, A. L. (1998). John Dalton, 1766–1844: A Bibliography of Works by and About Him, With an Annotated List of His Surviving Apparatus and Personal Effects. Manchester Literary and Philosophical Publications. ISBN 978-1-85928-438-4. – Original edition published by Manchester University Press in 1966. Thackray, Arnold (1972). John Dalton: Critical Assessments of His Life and Science. Harvard University Press. ISBN 978-0-674-47525-0. == External links == Media related to John Dalton at Wikimedia Commons Works by or about John Dalton at Wikisource Works by John Dalton at LibriVox (public domain audiobooks) "Dalton, John (1766–1844)". Dictionary of National Biography. Vol. 13. 1888. John Dalton Manuscripts at John Rylands Library
Wikipedia/Dalton_model
The dihydrogen cation or molecular hydrogen ion is a cation (positive ion) with formula H2+. It consists of two hydrogen nuclei (protons) sharing a single electron. It is the simplest molecular ion. The ion can be formed from the ionization of a neutral hydrogen molecule (H2) by electron impact. It is commonly formed in molecular clouds in space by the action of cosmic rays. The dihydrogen cation is of great historical, theoretical, and experimental interest. Historically it is of interest because, having only one electron, the equations of quantum mechanics that describe its structure can be solved approximately in a relatively straightforward way, as long as the motion of the nuclei and relativistic and quantum electrodynamic effects are neglected. The first such solution was derived by Ø. Burrau in 1927, just one year after the wave theory of quantum mechanics was published. The theoretical interest arises because an accurate mathematical description, taking into account the quantum motion of all constituents and also the interaction of the electron with the radiation field, is feasible. The description's accuracy has steadily improved over more than half a century, eventually resulting in a theoretical framework allowing ultra-high-accuracy predictions for the energies of the rotational and vibrational levels in the electronic ground state, which are mostly metastable. In parallel, the experimental approach to the study of the cation has undergone a fundamental evolution with respect to earlier experimental techniques used in the 1960s and 1980s. Employing advanced techniques, such as ion trapping and laser cooling, the rotational and vibrational transitions can be investigated in extremely fine detail. The corresponding transition frequencies can be precisely measured and the results can be compared with the precise theoretical predictions. Another approach for precision spectroscopy relies on cooling in a cryogenic magneto-electric trap (Penning trap); here the cations' motion is cooled resistively and the internal vibration and rotation decay by spontaneous emission. Then, electron spin resonance transitions can be precisely studied. These advances have turned the dihydrogen cations into one more family of bound systems relevant for the determination of fundamental constants of atomic and nuclear physics, after the hydrogen atom family (including hydrogen-like ions) and the helium atom family. == Physical properties == Bonding in H2+ can be described as a covalent one-electron bond, which has a formal bond order of one half. The ground-state energy of the ion is −0.597 hartree. The bond length in the ground state is 2.00 Bohr radii. === Isotopologues === The dihydrogen cation has six isotopologues. Each of the two nuclei can be one of the following: proton (p, the most common one), deuteron (d), or triton (t). 
The six isotopologues are: H2+ (1H2+), the dihydrogen cation, the most common form; [HD]+ ([1H2H]+), the hydrogen deuterium cation; D2+ (2H2+), the dideuterium cation; [HT]+ ([1H3H]+), the hydrogen tritium cation; [DT]+ ([2H3H]+), the deuterium tritium cation; and T2+ (3H2+), the ditritium cation. == Quantum mechanical analysis == === Clamped-nuclei approximation === An approximate description of the dihydrogen cation starts by neglecting the motion of the nuclei, the so-called clamped-nuclei approximation. This is a good approximation because the nuclei (proton, deuteron or triton) are more than a factor of 1000 heavier than the electron. Therefore, the motion of the electron is treated first, for a given (arbitrary) nucleus-nucleus distance R. The electronic energy of the molecule E is computed, and the computation is repeated for different values of R. The nucleus-nucleus repulsive energy e²/(4πε0R) has to be added to the electronic energy, resulting in the total molecular energy Etot(R). The energy E is the eigenvalue of the Schrödinger equation for the single electron. The equation can be solved in a relatively straightforward way due to the lack of electron–electron repulsion (electron correlation). The wave equation (a partial differential equation) separates into two coupled ordinary differential equations when using prolate spheroidal coordinates instead of Cartesian coordinates. The analytical solution of the equation, the wave function, is therefore proportional to a product of two infinite power series. The numerical evaluation of the series can be readily performed on a computer. The analytical solutions for the electronic energy eigenvalues are also a generalization of the Lambert W function, which can be obtained using a computer algebra system within an experimental mathematics approach. Quantum chemistry and physics textbooks usually treat the binding of the molecule in the electronic ground state by the simplest possible ansatz for the wave function: the (normalized) sum of two 1s hydrogen wave functions centered on each nucleus. This ansatz correctly reproduces the binding but is numerically unsatisfactory. ==== Historical notes ==== Early attempts to treat H2+ using the old quantum theory were published in 1922 by Karel Niessen and Wolfgang Pauli, and in 1925 by Harold Urey. The first successful quantum mechanical treatment of H2+ was published by the Danish physicist Øyvind Burrau in 1927, just one year after the publication of wave mechanics by Erwin Schrödinger. In 1928, Linus Pauling published a review putting together the work of Burrau with the work of Walter Heitler and Fritz London on the hydrogen molecule. The complete mathematical solution of the electronic energy problem for H2+ in the clamped-nuclei approximation was provided by Wilson (1928) and Jaffé (1934). Johnson (1940) gives a succinct summary of their solution. 
==== The solutions of the clamped-nuclei Schrödinger equation ==== The electronic Schrödinger wave equation for the molecular hydrogen ion H2+ with two fixed nuclear centers, labeled A and B, and one electron can be written as {\displaystyle \left(-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}+V\right)\psi =E\psi ~,} where V is the electron-nuclear Coulomb potential energy function: {\displaystyle V=-{\frac {e^{2}}{4\pi \varepsilon _{0}}}\left({\frac {1}{r_{a}}}+{\frac {1}{r_{b}}}\right)} and E is the (electronic) energy of a given quantum mechanical state (eigenstate), with the electronic state function ψ = ψ(r) depending on the spatial coordinates of the electron. An additive term 1/R, which is constant for fixed internuclear distance R, has been omitted from the potential V, since it merely shifts the eigenvalue. The distances between the electron and the nuclei are denoted ra and rb. In atomic units (ħ = m = e = 4πε0 = 1) the wave equation is {\displaystyle \left(-{\tfrac {1}{2}}\nabla ^{2}+V\right)\psi =E\psi \qquad {\mbox{with}}\qquad V=-{\frac {1}{r_{a}}}-{\frac {1}{r_{b}}}\;.} We choose the midpoint between the nuclei as the origin of coordinates. It follows from general symmetry principles that the wave functions can be characterized by their symmetry behavior with respect to the point group inversion operation i (r ↦ −r). There are wave functions ψg(r), which are symmetric with respect to i, and there are wave functions ψu(r), which are antisymmetric under this symmetry operation: {\displaystyle \psi _{g/u}(-{\mathbf {r} })={}\pm \psi _{g/u}({\mathbf {r} })\;.} The suffixes g and u (from the German gerade and ungerade) denote the symmetry behavior under the point group inversion operation i. Their use is standard practice for the designation of electronic states of diatomic molecules, whereas for atomic states the terms even and odd are used. The ground state (the lowest state) of H2+ is denoted X2Σ+g or 1sσg, and it is gerade. There is also the first excited state A2Σ+u (2pσu), which is ungerade. The (total) eigenenergies Eg/u for these two lowest-lying states have the same asymptotic expansion in inverse powers of the internuclear distance R: {\displaystyle E_{g/u}=-{\frac {1}{2}}-{\frac {9}{4R^{4}}}+O\left(R^{-6}\right)+\cdots } This expansion and the energy curves include the internuclear repulsion term 1/R. The actual difference between these two energies is called the exchange energy splitting and is given by: {\displaystyle \Delta E=E_{u}-E_{g}={\frac {4}{e}}\,R\,e^{-R}\left[\,1+{\frac {1}{2R}}+O\left(R^{-2}\right)\,\right]} which vanishes exponentially as the internuclear distance R increases. The leading term (4/e)Re−R was first obtained by the Holstein–Herring method. Similarly, asymptotic expansions in powers of 1/R have been obtained to high order by Cizek et al. for the lowest ten discrete states of the molecular hydrogen ion (clamped nuclei case). For general diatomic and polyatomic molecular systems, the exchange energy is thus very difficult to calculate at large internuclear distances, but it is nonetheless needed for long-range interactions, including studies related to magnetism and charge exchange effects. These are of particular importance in stellar and atmospheric physics. 
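For readers who want a feel for the numbers, the following is a minimal numerical sketch of the simple LCAO ansatz mentioned in the clamped-nuclei discussion above (the normalized sum of two 1s orbitals), using the standard closed-form overlap, Coulomb and resonance integrals in atomic units. It reproduces the qualitative binding but, as noted above, is quantitatively poor: it gives a bond length near 2.5 bohr and a well depth near 0.065 hartree, compared with the accurate clamped-nuclei values of 2.00 bohr and about 0.10 hartree.

```python
import numpy as np

# Simple LCAO treatment of H2+ in atomic units (hbar = m_e = e = 4*pi*eps0 = 1).
# psi_± = (1sA ± 1sB) / sqrt(2(1 ± S)); standard closed-form integrals for 1s orbitals.
def lcao_energies(R):
    S   = np.exp(-R) * (1 + R + R**2 / 3)             # overlap <A|B>
    Jab = 1.0 / R - np.exp(-2 * R) * (1 + 1.0 / R)    # <A|1/r_B|A>, attraction to the far nucleus
    K   = np.exp(-R) * (1 + R)                        # <A|1/r_A|B>, resonance integral
    Haa = -0.5 - Jab
    Hab = -0.5 * S - K
    E_bond = (Haa + Hab) / (1 + S) + 1.0 / R          # total energy, bonding (sigma_g)
    E_anti = (Haa - Hab) / (1 - S) + 1.0 / R          # total energy, antibonding (sigma_u)
    return E_bond, E_anti

R = np.linspace(1.0, 8.0, 1400)
E_bond, E_anti = lcao_energies(R)

i_min = np.argmin(E_bond)
print(f"LCAO equilibrium distance: {R[i_min]:.2f} bohr")        # ~2.49 bohr
print(f"LCAO well depth: {-0.5 - E_bond[i_min]:.4f} hartree")    # ~0.065 hartree below H + p
```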
The energies for the lowest discrete states (the 2Σ+g, 2Σ+u, 2Πu and 2Πg states plotted in a figure in the original article) can be obtained to within arbitrary accuracy using computer algebra from the generalized Lambert W function (see eq. (3) of the cited reference). They were obtained initially by numerical means to within double precision by the most precise program available, namely ODKIL. Although the generalized Lambert W function eigenvalue solutions supersede these asymptotic expansions, in practice they are most useful near the bond length. The complete Hamiltonian of H2+ (as for all centrosymmetric molecules) does not commute with the point group inversion operation i because of the effect of the nuclear hyperfine Hamiltonian. The nuclear hyperfine Hamiltonian can mix the rotational levels of g and u electronic states (called ortho-para mixing) and give rise to ortho-para transitions. === Born-Oppenheimer approximation === Once the energy function Etot(R) has been obtained, one can compute the quantum states of rotational and vibrational motion of the nuclei, and thus of the molecule as a whole. The corresponding 'nuclear' Schrödinger equation is a one-dimensional ordinary differential equation, where the nucleus-nucleus distance R is the independent coordinate. The equation describes the motion of a fictitious particle of mass equal to the reduced mass of the two nuclei, in the potential Etot(R) + VL(R), where the second term is the centrifugal potential due to rotation with angular momentum described by the quantum number L. The eigenenergies of this Schrödinger equation are the total energies of the whole molecule, electronic plus nuclear. === High-accuracy ab initio theory === The Born-Oppenheimer approximation is unsuited for describing the dihydrogen cation accurately enough to explain the results of precision spectroscopy. The full Schrödinger equation for this cation, without the approximation of clamped nuclei, is much more complex, but it can nevertheless be solved numerically, essentially exactly, using a variational approach. In this way, the simultaneous motion of the electron and of the nuclei is treated exactly. When the solutions are restricted to the lowest-energy orbital, one obtains the energies and wavefunctions of the rotational and ro-vibrational states. The numerical uncertainty of the energies and the wave functions found in this way is negligible compared to the systematic error stemming from using the Schrödinger equation rather than fundamentally more accurate equations. Indeed, the Schrödinger equation does not incorporate all relevant physics, as is known from the hydrogen atom problem. More accurate treatments need to consider the physics that is described by the Dirac equation or, even more accurately, by quantum electrodynamics. The most accurate solutions of the ro-vibrational states are found by applying non-relativistic quantum electrodynamics (NRQED) theory. For comparison with experiment, one requires differences of state energies, i.e. transition frequencies. For transitions between ro-vibrational levels having small rotational and moderate vibrational quantum numbers, the frequencies have been calculated with a theoretical fractional uncertainty of approximately 8×10−12. 
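Before turning to the remaining contributions to the uncertainty, the one-dimensional nuclear Schrödinger equation of the Born-Oppenheimer picture described above can be illustrated with a short finite-difference calculation. Since Etot(R) is not given in closed form here, the sketch below substitutes a Morse potential with illustrative parameters loosely matched to the H2+ ground state (depth about 0.10 hartree, minimum at 2.0 bohr); the reduced mass is that of two protons. The resulting level spacings are only indicative.

```python
import numpy as np

# Nuclear motion in the Born-Oppenheimer picture: a particle of reduced mass mu
# in the effective potential Etot(R) + VL(R). Atomic units throughout.
mu = 918.076          # reduced mass of two protons, m_p/2, in electron masses
L  = 0                # rotational quantum number

# Morse stand-in for Etot(R), measured from the dissociation limit (assumed parameters).
De, Re, a = 0.1026, 2.0, 0.72
def V(R):
    morse = De * (1 - np.exp(-a * (R - Re)))**2 - De
    centrifugal = L * (L + 1) / (2 * mu * R**2)
    return morse + centrifugal

# Finite-difference Hamiltonian on a radial grid (Dirichlet boundaries).
R = np.linspace(0.5, 10.0, 1500)
h = R[1] - R[0]
n = len(R)
kinetic = (np.diag(np.full(n, 2.0)) - np.eye(n, k=1) - np.eye(n, k=-1)) / (2 * mu * h**2)
H = kinetic + np.diag(V(R))

E = np.linalg.eigvalsh(H)
bound = E[E < 0]
print("Lowest vibrational levels (hartree):", np.round(bound[:4], 5))
print("Fundamental spacing (cm^-1):", round((bound[1] - bound[0]) * 219474.6))
```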
Additional contributions to the uncertainty of the predicted frequencies arise from the uncertainties of fundamental constants, which are input to the theoretical calculation, especially from the ratio of the proton mass to the electron mass. Using a sophisticated ab initio formalism, the hyperfine energies can also be computed accurately (see below). == Experimental studies == === Precision spectroscopy === Because of its relative simplicity, the dihydrogen cation is the molecule that is most precisely understood, in the sense that theoretical calculations of its energy levels match the experimental results with the highest level of agreement. Specifically, spectroscopically determined pure rotational and ro-vibrational transition frequencies of the particular isotopologue HD+ agree with theoretically computed transition frequencies. Four high-precision experiments yielded comparisons with total fractional uncertainties between 2×10−11 and 5×10−11. The level of agreement is actually limited neither by theory nor by experiment, but rather by the uncertainty of the current values of the masses of the particles, which are used as input parameters to the calculation. In order to measure the transition frequencies with high accuracy, the spectroscopy of the dihydrogen cation had to be performed under special conditions. Therefore, ensembles of HD+ ions were trapped in a quadrupole ion trap under ultra-high vacuum, sympathetically cooled by laser-cooled beryllium ions, and probed using particular spectroscopic techniques. The hyperfine structure of the homonuclear isotopologue H2+ was measured extensively and precisely by Jefferts in 1969. Finally, in 2021, ab initio theory computations were able to provide the quantitative details of the structure with an uncertainty smaller than that of the experimental data, approximately 1 kHz. Some contributions to the measured hyperfine structure have been theoretically confirmed at the level of approximately 50 Hz. The implication of these agreements is that one can deduce a spectroscopic value of the ratio of the electron mass to the reduced proton-deuteron mass, me/mp + me/md, which is an input to the ab initio theory. The ratio is fitted such that theoretical prediction and experimental results agree. The uncertainty of the obtained ratio is comparable to the one obtained from direct mass measurements of the proton, deuteron, electron, and HD+ via cyclotron resonance in Penning traps. == Occurrence in space == === Formation === The dihydrogen ion is formed in nature by the interaction of cosmic rays and the hydrogen molecule. An electron is knocked off, leaving the cation behind: H2 + cosmic ray → H2+ + e− + cosmic ray. Cosmic ray particles have enough energy to ionize many molecules before coming to a stop. The ionization energy of the hydrogen molecule is 15.603 eV. High-speed electrons also cause ionization of hydrogen molecules, with the cross section peaking at an electron energy of around 50 eV. For high-speed protons the ionization cross section peaks at about 70,000 eV, where it is 2.5×10−16 cm2. A cosmic ray proton at lower energy can also strip an electron off a neutral hydrogen molecule to form a neutral hydrogen atom and the dihydrogen cation (p+ + H2 → H + H2+), with a peak cross section at around 8,000 eV of 8×10−16 cm2. 
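The quoted cross sections can be turned into an order-of-magnitude estimate of how far a low-energy cosmic-ray proton travels in a molecular cloud before such an electron-capture event. The gas density used below is an assumed, typical dense-cloud value, not a figure from the text.

```python
# Mean free path for p+ + H2 -> H + H2+ at the quoted peak cross section.
sigma = 8e-16          # cm^2, peak cross section near 8 keV (from the text)
n_H2 = 1e4             # cm^-3, assumed H2 number density of a dense molecular cloud

mfp_cm = 1.0 / (n_H2 * sigma)
print(f"Mean free path: {mfp_cm:.2e} cm "
      f"(~{mfp_cm / 1e5:.2e} km, ~{mfp_cm / 1.496e13:.4f} au)")
# About 1.3e11 cm, i.e. roughly a hundredth of an astronomical unit for these values.
```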
=== Destruction === In nature the ion is destroyed by reacting with other hydrogen molecules: H2+ + H2 → H3+ + H. == Production in the laboratory == In the laboratory, the ion is easily produced by electron bombardment from an electron gun. An artificial plasma discharge cell can also produce the ion. == See also == Symmetry of diatomic molecules Dirac Delta function model (one-dimensional version of H2+) Di-positronium Euler's three-body problem (classical counterpart) Few-body systems Helium atom Helium hydride ion Trihydrogen cation Triatomic hydrogen Lambert W function Molecular astrophysics Holstein–Herring method Three-body problem List of quantum-mechanical systems with analytical solutions == References ==
Wikipedia/Hydrogen_molecule-ion
Supramolecular chemistry refers to the branch of chemistry concerning chemical systems composed of a discrete number of molecules. The strength of the forces responsible for spatial organization of the system ranges from weak intermolecular forces, electrostatic charge, or hydrogen bonding to strong covalent bonding, provided that the electronic coupling strength remains small relative to the energy parameters of the component. While traditional chemistry concentrates on the covalent bond, supramolecular chemistry examines the weaker and reversible non-covalent interactions between molecules. These forces include hydrogen bonding, metal coordination, hydrophobic forces, van der Waals forces, pi–pi interactions and electrostatic effects. Important concepts advanced by supramolecular chemistry include molecular self-assembly, molecular folding, molecular recognition, host–guest chemistry, mechanically-interlocked molecular architectures, and dynamic covalent chemistry. The study of non-covalent interactions is crucial to understanding many biological processes that rely on these forces for structure and function. Biological systems are often the inspiration for supramolecular research. == History == The existence of intermolecular forces was first postulated by Johannes Diderik van der Waals in 1873. However, Nobel laureate Hermann Emil Fischer developed supramolecular chemistry's philosophical roots. In 1894, Fischer suggested that enzyme–substrate interactions take the form of a "lock and key", an idea that anticipated the fundamental principles of molecular recognition and host–guest chemistry. In the early twentieth century non-covalent bonds were understood in gradually greater detail, with the hydrogen bond being described by Latimer and Rodebush in 1920. As the understanding of non-covalent interactions deepened, aided for example by the clear elucidation of DNA structure, chemists started to emphasize their importance. In 1967, Charles J. Pedersen discovered crown ethers, which are ring-like structures capable of chelating certain metal ions. Then, in 1969, Jean-Marie Lehn discovered a class of molecules similar to crown ethers, called cryptands. After that, Donald J. Cram synthesized many variations of crown ethers, as well as separate molecules capable of selective interaction with certain chemicals. The three scientists were awarded the Nobel Prize in Chemistry in 1987 for "development and use of molecules with structure-specific interactions of high selectivity". In 2016, Bernard L. Feringa, Sir J. Fraser Stoddart, and Jean-Pierre Sauvage were awarded the Nobel Prize in Chemistry "for the design and synthesis of molecular machines". The term supermolecule (or supramolecule) was introduced by Karl Lothar Wolf et al. (Übermoleküle) in 1937 to describe hydrogen-bonded acetic acid dimers. The term supermolecule is also used in biochemistry to describe complexes of biomolecules, such as peptides and oligonucleotides composed of multiple strands. Eventually, chemists applied these concepts to synthetic systems. One breakthrough came in the 1960s with the synthesis of the crown ethers by Charles J. Pedersen. Following this work, other researchers such as Donald J. Cram, Jean-Marie Lehn and Fritz Vögtle reported a variety of three-dimensional receptors, and throughout the 1980s research in the area gathered pace rapidly, with concepts such as mechanically interlocked molecular architectures emerging. 
The influence of supramolecular chemistry was established by the 1987 Nobel Prize for Chemistry which was awarded to Donald J. Cram, Jean-Marie Lehn, and Charles J. Pedersen in recognition of their work in this area. The development of selective "host–guest" complexes in particular, in which a host molecule recognizes and selectively binds a certain guest, was cited as an important contribution. == Concepts == === Molecular self-assembly === Molecular self-assembly is the construction of systems without guidance or management from an outside source (other than to provide a suitable environment). The molecules are directed to assemble through non-covalent interactions. Self-assembly may be subdivided into intermolecular self-assembly (to form a supramolecular assembly), and intramolecular self-assembly (or folding as demonstrated by foldamers and polypeptides). Molecular self-assembly also allows the construction of larger structures such as micelles, membranes, vesicles, liquid crystals, and is important to crystal engineering. === Molecular recognition and complexation === Molecular recognition is the specific binding of a guest molecule to a complementary host molecule to form a host–guest complex. Often, the definition of which species is the "host" and which is the "guest" is arbitrary. The molecules are able to identify each other using non-covalent interactions. Key applications of this field are the construction of molecular sensors and catalysis. === Template-directed synthesis === Molecular recognition and self-assembly may be used with reactive species in order to pre-organize a system for a chemical reaction (to form one or more covalent bonds). It may be considered a special case of supramolecular catalysis. Non-covalent bonds between the reactants and a "template" hold the reactive sites of the reactants close together, facilitating the desired chemistry. This technique is particularly useful for situations where the desired reaction conformation is thermodynamically or kinetically unlikely, such as in the preparation of large macrocycles. This pre-organization also serves purposes such as minimizing side reactions, lowering the activation energy of the reaction, and producing desired stereochemistry. After the reaction has taken place, the template may remain in place, be forcibly removed, or may be "automatically" decomplexed on account of the different recognition properties of the reaction product. The template may be as simple as a single metal ion or may be extremely complex. === Mechanically interlocked molecular architectures === Mechanically interlocked molecular architectures consist of molecules that are linked only as a consequence of their topology. Some non-covalent interactions may exist between the different components (often those that were used in the construction of the system), but covalent bonds do not. Supramolecular chemistry, and template-directed synthesis in particular, is key to the efficient synthesis of the compounds. Examples of mechanically interlocked molecular architectures include catenanes, rotaxanes, molecular knots, molecular Borromean rings, 2D [c2]daisy chain polymer and ravels. === Dynamic covalent chemistry === In dynamic covalent chemistry covalent bonds are broken and formed in a reversible reaction under thermodynamic control. While covalent bonds are key to the process, the system is directed by non-covalent forces to form the lowest energy structures. 
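The host–guest binding described under "Molecular recognition and complexation" above can be put on a quantitative footing with the standard 1:1 binding isotherm, Ka = [HG]/([H][G]). The sketch below solves the resulting quadratic for the complex concentration; the association constant and concentrations are illustrative values, not data for any specific host.

```python
import math

def complex_concentration(h_total, g_total, Ka):
    """Equilibrium [HG] for a 1:1 host-guest complex.

    Solves Ka = [HG] / (([H]t - [HG]) * ([G]t - [HG])), the usual quadratic.
    Concentrations in mol/L, Ka in L/mol.
    """
    b = h_total + g_total + 1.0 / Ka
    hg = (b - math.sqrt(b * b - 4.0 * h_total * g_total)) / 2.0
    return hg

# Illustrative numbers: 1 mM host, 1 mM guest, Ka = 1e4 L/mol.
h0, g0, Ka = 1e-3, 1e-3, 1e4
hg = complex_concentration(h0, g0, Ka)
print(f"[HG] = {hg * 1e3:.3f} mM, fraction of guest bound = {hg / g0:.2f}")
```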
=== Biomimetics === Many synthetic supramolecular systems are designed to copy functions of biological systems. These biomimetic architectures can be used to learn about both the biological model and the synthetic implementation. Examples include photoelectrochemical systems, catalytic systems, protein design and self-replication. === Imprinting === Molecular imprinting describes a process by which a host is constructed from small molecules using a suitable molecular species as a template. After construction, the template is removed leaving only the host. The template for host construction may be subtly different from the guest that the finished host binds to. In its simplest form, imprinting uses only steric interactions, but more complex systems also incorporate hydrogen bonding and other interactions to improve binding strength and specificity. === Molecular machinery === Molecular machines are molecules or molecular assemblies that can perform functions such as linear or rotational movement, switching, and entrapment. These devices exist at the boundary between supramolecular chemistry and nanotechnology, and prototypes have been demonstrated using supramolecular concepts. Jean-Pierre Sauvage, Sir J. Fraser Stoddart and Bernard L. Feringa shared the 2016 Nobel Prize in Chemistry for the 'design and synthesis of molecular machines'. == Building blocks == Supramolecular systems are rarely designed from first principles. Rather, chemists have a range of well-studied structural and functional building blocks that they are able to use to build up larger functional architectures. Many of these exist as whole families of similar units, from which the analog with the exact desired properties can be chosen. === Synthetic recognition motifs === The pi-pi charge-transfer interactions of bipyridinium with dioxyarenes or diaminoarenes have been used extensively for the construction of mechanically interlocked systems and in crystal engineering. The use of crown ether binding with metal or ammonium cations is ubiquitous in supramolecular chemistry. The formation of carboxylic acid dimers and other simple hydrogen bonding interactions. The complexation of bipyridines or terpyridines with ruthenium, silver or other metal ions is of great utility in the construction of complex architectures of many individual molecules. The complexation of porphyrins or phthalocyanines around metal ions gives access to catalytic, photochemical and electrochemical properties in addition to the complexation itself. These units are used a great deal by nature. === Macrocycles === Macrocycles are very useful in supramolecular chemistry, as they provide whole cavities that can completely surround guest molecules and may be chemically modified to fine-tune their properties. Cyclodextrins, calixarenes, cucurbiturils and crown ethers are readily synthesized in large quantities, and are therefore convenient for use in supramolecular systems. More complex cyclophanes, and cryptands can be synthesised to provide more tailored recognition properties. Supramolecular metallocycles are macrocyclic aggregates with metal ions in the ring, often formed from angular and linear modules. Common metallocycle shapes in these types of applications include triangles, squares, and pentagons, each bearing functional groups that connect the pieces via "self-assembly." Metallacrowns are metallomacrocycles generated via a similar self-assembly approach from fused chelate-rings. 
=== Structural units === Many supramolecular systems require their components to have suitable spacing and conformations relative to each other, and therefore easily employed structural units are required. Commonly used spacers and connecting groups include polyether chains, biphenyls and triphenyls, and simple alkyl chains. The chemistry for creating and connecting these units is very well understood. Nanoparticles, nanorods, fullerenes and dendrimers offer nanometer-sized structure and encapsulation units. Surfaces can be used as scaffolds for the construction of complex systems and also for interfacing electrochemical systems with electrodes. Regular surfaces can be used for the construction of self-assembled monolayers and multilayers. The understanding of intermolecular interactions in solids has undergone a major renaissance via inputs from different experimental and computational methods in the last decade. This includes high-pressure studies in solids and "in situ" crystallization of compounds which are liquids at room temperature, along with the use of electron density analysis, crystal structure prediction and DFT calculations in the solid state, to enable a quantitative understanding of the nature, energetics and topological properties associated with such interactions in crystals. === Photo-chemically and electro-chemically active units === Porphyrins and phthalocyanines have highly tunable photochemical and electrochemical activity as well as the potential to form complexes. Photochromic and photoisomerizable groups can change their shapes and properties, including binding properties, upon exposure to light. Tetrathiafulvalene (TTF) and quinones have multiple stable oxidation states, and therefore can be used in redox reactions and electrochemistry. Other units, such as benzidine derivatives, viologens, and fullerenes, are useful in supramolecular electrochemical devices. === Biologically-derived units === The extremely strong complexation between avidin and biotin, one of the strongest non-covalent interactions known, has been used as the recognition motif to construct synthetic systems. The binding of enzymes with their cofactors has been used as a route to produce modified enzymes, electrically contacted enzymes, and even photoswitchable enzymes. DNA has been used both as a structural and as a functional unit in synthetic supramolecular systems. == Applications == === Materials technology === Supramolecular chemistry has found many applications; in particular, molecular self-assembly processes have been applied to the development of new materials. Large structures can be readily accessed using bottom-up synthesis as they are composed of small molecules requiring fewer steps to synthesize. Thus most of the bottom-up approaches to nanotechnology are based on supramolecular chemistry. Many smart materials are based on molecular recognition. === Catalysis === A major application of supramolecular chemistry is the design and understanding of catalysts and catalysis. Non-covalent interactions influence the binding of the reactants in a catalytic process. === Medicine === Design based on supramolecular chemistry has led to numerous applications in the creation of functional biomaterials and therapeutics. Supramolecular biomaterials afford a number of modular and generalizable platforms with tunable mechanical, chemical and biological properties. These include systems based on supramolecular assembly of peptides, host–guest macrocycles, high-affinity hydrogen bonding, and metal–ligand interactions. 
A supramolecular approach has been used extensively to create artificial ion channels for the transport of sodium and potassium ions into and out of cells. Supramolecular chemistry is also important to the development of new pharmaceutical therapies by understanding the interactions at a drug binding site. The area of drug delivery has also made critical advances as a result of supramolecular chemistry providing encapsulation and targeted release mechanisms. In addition, supramolecular systems have been designed to disrupt protein–protein interactions that are important to cellular function. === Data storage and processing === Supramolecular chemistry has been used to demonstrate computation functions on a molecular scale. In many cases, photonic or chemical signals have been used in these components, but electrical interfacing of these units has also been shown by supramolecular signal transduction devices. Data storage has been accomplished by the use of molecular switches with photochromic and photoisomerizable units, by electrochromic and redox-switchable units, and even by molecular motion. Synthetic molecular logic gates have been demonstrated on a conceptual level. Even full-scale computations have been achieved by semi-synthetic DNA computers. == See also == Organic chemistry Nanotechnology == Reading == Cook, T. R.; Zheng, Y.; Stang, P. J. (2013). "Metal-organic frameworks and self-assembled supramolecular coordination complexes: Comparing and contrasting the design, synthesis, and functionality of metal-organic materials". Chem. Rev. 113 (1): 734–77. doi:10.1021/cr3002824. PMC 3764682. PMID 23121121. Desiraju, G. R. (2013). "Crystal engineering: From molecule to crystal". J. Am. Chem. Soc. 135 (27): 9952–67. Bibcode:2013JAChS.135.9952D. doi:10.1021/ja403264c. PMID 23750552. Seto, C. T.; Whitesides, G. M. (1993). "Molecular self-assembly through hydrogen bonding: Supramolecular aggregates based on the cyanuric acid-melamine lattice". J. Am. Chem. Soc. 115 (3): 905–916. Bibcode:1993JAChS.115..905S. doi:10.1021/ja00056a014. == References == == External links == 2D and 3D Models of Dodecahedrane and Cuneane Assemblies Supramolecular Chemistry and Supramolecular Chemistry II – Thematic Series in the Open Access Beilstein Journal of Organic Chemistry
Wikipedia/Supermolecule
Atomic force microscopy (AFM) or scanning force microscopy (SFM) is a very-high-resolution type of scanning probe microscopy (SPM), with demonstrated resolution on the order of fractions of a nanometer, more than 1000 times better than the optical diffraction limit. == Overview == Atomic force microscopy (AFM) gathers information by "feeling" or "touching" the surface with a mechanical probe. Piezoelectric elements that facilitate tiny but accurate and precise movements on (electronic) command enable precise scanning. Despite the name, the atomic force microscope does not use the nuclear force. === Abilities and spatial resolution === The AFM has three major abilities: force measurement, topographic imaging, and manipulation. In force measurement, AFMs can be used to measure the forces between the probe and the sample as a function of their mutual separation. This can be applied to perform force spectroscopy, to measure the mechanical properties of the sample, such as the sample's Young's modulus, a measure of stiffness. For imaging, the reaction of the probe to the forces that the sample imposes on it can be used to form an image of the three-dimensional shape (topography) of a sample surface at a high resolution. This is achieved by raster scanning the position of the sample with respect to the tip and recording the height of the probe that corresponds to a constant probe-sample interaction (see § Topographic image for more). The surface topography is commonly displayed as a pseudocolor plot. Although the initial publication about atomic force microscopy by Binnig, Quate and Gerber in 1986 speculated about the possibility of achieving atomic resolution, profound experimental challenges needed to be overcome before atomic resolution of defects and step edges in ambient (liquid) conditions was demonstrated in 1993 by Ohnesorge and Binnig. True atomic resolution of the silicon (111) 7×7 surface had to wait a little longer before it was shown by Giessibl. Subatomic resolution (i.e. the ability to resolve structural details within the electron density of a single atom) has also been achieved by AFM. In manipulation, the forces between tip and sample can also be used to change the properties of the sample in a controlled way. Examples of this include atomic manipulation, scanning probe lithography and local stimulation of cells. Simultaneous with the acquisition of topographical images, other properties of the sample can be measured locally and displayed as an image, often with similarly high resolution. Examples of such properties are mechanical properties like stiffness or adhesion strength and electrical properties such as conductivity or surface potential. In fact, the majority of SPM techniques are extensions of AFM that use this modality. === Other microscopy technologies === The major difference between atomic force microscopy and competing technologies such as optical microscopy and electron microscopy is that AFM does not use lenses or beam irradiation. Therefore, it does not suffer from a limitation in spatial resolution due to diffraction and aberration, and preparing a space for guiding the beam (by creating a vacuum) and staining the sample are not necessary. There are several types of scanning microscopy, including SPM (which includes AFM, scanning tunneling microscopy (STM) and near-field scanning optical microscopy (SNOM/NSOM)), STED microscopy (STED), scanning electron microscopy, and electrochemical AFM (EC-AFM). 
Although SNOM and STED use visible, infrared or even terahertz light to illuminate the sample, their resolution is not constrained by the diffraction limit. === Configuration === Fig. 3 shows an AFM, which typically consists of the following features. Numbers in parentheses correspond to numbered features in Fig. 3. Coordinate directions are defined by the coordinate system (0). The small spring-like cantilever (1) is carried by the support (2). Optionally, a piezoelectric element (typically made of a ceramic material) (3) oscillates the cantilever (1). The sharp tip (4) is fixed to the free end of the cantilever (1). The detector (5) records the deflection and motion of the cantilever (1). The sample (6) is mounted on the sample stage (8). An xyz drive (7) allows the sample (6) and the sample stage (8) to be displaced in the x, y, and z directions with respect to the tip apex (4). Although Fig. 3 shows the drive attached to the sample, the drive can also be attached to the tip, or independent drives can be attached to both, since it is the relative displacement of the sample and tip that needs to be controlled. Controllers and plotter are not shown in Fig. 3. According to the configuration described above, the interaction between tip and sample, which can be an atomic-scale phenomenon, is transduced into changes of the motion of the cantilever, which is a macro-scale phenomenon. Several different aspects of the cantilever motion can be used to quantify the interaction between the tip and sample, most commonly the value of the deflection, the amplitude of an imposed oscillation of the cantilever, or the shift in resonance frequency of the cantilever (see section Imaging Modes). ==== Detector ==== The detector (5) of AFM measures the deflection (displacement with respect to the equilibrium position) of the cantilever and converts it into an electrical signal. The intensity of this signal will be proportional to the displacement of the cantilever. Various methods of detection can be used, e.g. interferometry, optical levers, the piezoelectric method, and STM-based detectors (see section "AFM cantilever deflection measurement"). ==== Image formation ==== This section applies specifically to imaging in § Contact mode. For other imaging modes, the process is similar, except that "deflection" should be replaced by the appropriate feedback variable. When using the AFM to image a sample, the tip is brought into contact with the sample, and the sample is raster scanned along an x–y grid. Most commonly, an electronic feedback loop is employed to keep the probe-sample force constant during scanning. This feedback loop has the cantilever deflection as input, and its output controls the distance along the z axis between the probe support (2 in Fig. 3) and the sample support (8 in Fig. 3). As long as the tip remains in contact with the sample, and the sample is scanned in the x–y plane, height variations in the sample will change the deflection of the cantilever. The feedback then adjusts the height of the probe support so that the deflection is restored to a user-defined value (the setpoint). A properly adjusted feedback loop adjusts the support-sample separation continuously during the scanning motion, such that the deflection remains approximately constant. In this situation, the feedback output equals the sample surface topography to within a small error. 
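The constant-force imaging loop described above can be mimicked with a few lines of code. The sketch below is a deliberately simplified, purely illustrative model: a synthetic one-dimensional topography, a linear contact model for the deflection, and an integral-only controller that adjusts the support height so that the deflection stays at the setpoint; the recorded support height then reproduces the topography.

```python
import numpy as np

# One scan line over a synthetic surface: 2 nm ripples plus a 1 nm step (values illustrative).
x = np.linspace(0.0, 1e-6, 500)                        # 1 um scan line, 500 pixels
topography = 2e-9 * np.sin(2 * np.pi * x / 2e-7) + 1e-9 * (x > 5e-7)

setpoint = 1e-9        # target cantilever deflection (m)
gain = 0.5             # integral gain per pixel (assumed, dimensionless here)

z_support = 0.0
image, tracking_error = [], []
for h in topography:
    # Simplified contact model: extra surface height bends the lever further.
    deflection = setpoint + (h - z_support)
    error = deflection - setpoint
    z_support += gain * error          # feedback raises/lowers the support to null the error
    image.append(z_support)            # the feedback output is the recorded "height" image
    tracking_error.append(error)

image = np.array(image)
print(f"RMS tracking error: {np.sqrt(np.mean(np.square(tracking_error))):.2e} m")
print(f"Max image-vs-topography deviation: {np.max(np.abs(image - topography)):.2e} m")
```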
Historically, a different operation method has been used, in which the sample-probe support distance is kept constant and not controlled by feedback (a servo mechanism). In this mode, usually referred to as "constant-height mode", the deflection of the cantilever is recorded as a function of the sample x–y position. As long as the tip is in contact with the sample, the deflection then corresponds to surface topography. This method is now less commonly used because the forces between tip and sample are not controlled, which can lead to forces high enough to damage the tip or the sample. It is, however, common practice to record the deflection even when scanning in constant force mode, with feedback. This reveals the small tracking error of the feedback, and can sometimes reveal features that the feedback was not able to adjust for. The AFM signals, such as sample height or cantilever deflection, are recorded on a computer during the x–y scan. They are plotted in a pseudocolor image, in which each pixel represents an x–y position on the sample, and the color represents the recorded signal. === History === The AFM was invented by IBM scientists in 1985. The precursor to the AFM, the scanning tunneling microscope (STM), was developed by Gerd Binnig and Heinrich Rohrer in the early 1980s at IBM Research – Zurich, a development that earned them the 1986 Nobel Prize in Physics. Binnig invented the atomic force microscope, and the first experimental implementation was made by Binnig, Quate and Gerber in 1986. The first commercially available atomic force microscope was introduced in 1989. The AFM is one of the foremost tools for imaging, measuring, and manipulating matter at the nanoscale. === Applications === The AFM has been applied to problems in a wide range of disciplines of the natural sciences, including solid-state physics, semiconductor science and technology, molecular engineering, polymer chemistry and physics, surface chemistry, molecular biology, cell biology, and medicine. Applications in the field of solid state physics include (a) the identification of atoms at a surface, (b) the evaluation of interactions between a specific atom and its neighboring atoms, and (c) the study of changes in physical properties arising from changes in an atomic arrangement through atomic manipulation. In molecular biology, AFM can be used to study the structure and mechanical properties of protein complexes and assemblies. For example, AFM has been used to image microtubules and measure their stiffness. In cellular biology, AFM can be used to attempt to distinguish cancer cells from normal cells based on the hardness of the cells, and to evaluate interactions between a specific cell and its neighboring cells in a competitive culture system. AFM can also be used to indent cells, to study how they regulate the stiffness or shape of the cell membrane or wall. In some variations, electric potentials can also be scanned using conducting cantilevers. In more advanced versions, currents can be passed through the tip to probe the electrical conductivity or transport of the underlying surface, but this is a challenging task with few research groups reporting consistent data (as of 2004). AFM techniques such as conductive atomic force microscopy (C-AFM) and Kelvin probe force microscopy (KPFM) are increasingly used in solid-state battery research to analyze local conductivity variations, interfacial potential changes, and degradation mechanisms at the nanoscale.
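Cell-indentation measurements of the kind described above are commonly analysed with a contact-mechanics model such as the Hertz model; the article does not specify a particular model, so the following is only a minimal, illustrative sketch for a spherical indenter, with placeholder parameter values not taken from any particular study.

```python
import numpy as np

# Hertz model for a rigid spherical indenter pressing into a flat, elastic sample:
#   F = (4/3) * E / (1 - nu**2) * sqrt(R) * delta**1.5
# E: Young's modulus, nu: Poisson's ratio, R: tip radius, delta: indentation depth.
# All numerical values below are illustrative placeholders, not measured data.

def hertz_force(delta, youngs_modulus, poisson_ratio=0.5, tip_radius=2e-6):
    """Force (N) predicted by the Hertz model for indentation depth delta (m)."""
    reduced_modulus = youngs_modulus / (1.0 - poisson_ratio**2)
    return (4.0 / 3.0) * reduced_modulus * np.sqrt(tip_radius) * delta**1.5

# Recover a Young's modulus by a linear least-squares fit of F = C * delta**1.5
# to a synthetic force-indentation curve for a 2 kPa sample.
delta = np.linspace(0, 500e-9, 100)                   # indentation depth (m)
force = hertz_force(delta, youngs_modulus=2e3)        # synthetic "measured" forces (N)
C = np.sum(force * delta**1.5) / np.sum(delta**3)     # least-squares slope
fitted_E = C * 0.75 * (1.0 - 0.5**2) / np.sqrt(2e-6)  # invert the Hertz prefactor
print(f"fitted Young's modulus: {fitted_E:.1f} Pa")   # ~2000 Pa
```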
== Principles == The AFM consists of a cantilever with a sharp tip (probe) at its end that is used to scan the specimen surface. The cantilever is typically silicon or silicon nitride with a tip radius of curvature on the order of nanometers. When the tip is brought into proximity of a sample surface, forces between the tip and the sample lead to a deflection of the cantilever according to Hooke's law. Depending on the situation, forces that are measured in AFM include mechanical contact force, van der Waals forces, capillary forces, chemical bonding, electrostatic forces, magnetic forces (see magnetic force microscope, MFM), Casimir forces, solvation forces, etc. Along with force, additional quantities may simultaneously be measured through the use of specialized types of probes (see scanning thermal microscopy, scanning joule expansion microscopy, photothermal microspectroscopy, etc.). The AFM can be operated in a number of modes, depending on the application. In general, possible imaging modes are divided into static (also called contact) modes and a variety of dynamic (non-contact or "tapping") modes where the cantilever is vibrated or oscillated at a given frequency. === Imaging modes === AFM operation is usually described as one of three modes, according to the nature of the tip motion: contact mode, also called static mode (as opposed to the other two modes, which are called dynamic modes); tapping mode, also called intermittent contact, AC mode, or vibrating mode, or, after the detection mechanism, amplitude modulation AFM; and non-contact mode, or, again after the detection mechanism, frequency modulation AFM. Despite the nomenclature, repulsive contact can occur or be avoided both in amplitude modulation AFM and frequency modulation AFM, depending on the settings. ==== Contact mode ==== In contact mode, the tip is "dragged" across the surface of the sample and the contours of the surface are measured either using the deflection of the cantilever directly or, more commonly, using the feedback signal required to keep the cantilever at a constant position. Because the measurement of a static signal is prone to noise and drift, low stiffness cantilevers (i.e. cantilevers with a low spring constant, k) are used to achieve a large enough deflection signal while keeping the interaction force low. Close to the surface of the sample, attractive forces can be quite strong, causing the tip to "snap-in" to the surface. Thus, contact mode AFM is almost always done at a depth where the overall force is repulsive, that is, in firm "contact" with the solid surface. ==== Tapping mode ==== In ambient conditions, most samples develop a liquid meniscus layer. Because of this, keeping the probe tip close enough to the sample for short-range forces to become detectable while preventing the tip from sticking to the surface presents a major problem for contact mode in ambient conditions. Dynamic contact mode (also called intermittent contact, AC mode or tapping mode) was developed to bypass this problem. Nowadays, tapping mode is the most frequently used AFM mode when operating in ambient conditions or in liquids. In tapping mode, the cantilever is driven to oscillate up and down at or near its resonance frequency. This oscillation is commonly achieved with a small piezo element in the cantilever holder, but other possibilities include an AC magnetic field (with magnetic cantilevers), piezoelectric cantilevers, or periodic heating with a modulated laser beam. 
The amplitude of this oscillation usually varies from several nm to 200 nm. In tapping mode, the frequency and amplitude of the driving signal are kept constant, leading to a constant amplitude of the cantilever oscillation as long as there is no drift or interaction with the surface. Forces acting on the cantilever when the tip comes close to the surface, such as van der Waals forces, dipole–dipole interactions, and electrostatic forces, cause the amplitude of the cantilever's oscillation to change (usually decrease) as the tip gets closer to the sample. This amplitude is used as the parameter that goes into the electronic servo that controls the height of the cantilever above the sample. The servo adjusts the height to maintain a set cantilever oscillation amplitude as the cantilever is scanned over the sample. A tapping AFM image is therefore produced by imaging the force of the intermittent contacts of the tip with the sample surface. Although the peak forces applied during the contacting part of the oscillation can be much higher than those typically used in contact mode, tapping mode generally lessens the damage done to the surface and the tip compared with contact mode. This can be explained by the short duration of the applied force, and because the lateral forces between tip and sample are significantly lower in tapping mode than in contact mode. Tapping mode imaging is gentle enough even for the visualization of supported lipid bilayers or adsorbed single polymer molecules (for instance, 0.4 nm thick chains of synthetic polyelectrolytes) in a liquid medium. With proper scanning parameters, the conformation of single molecules can remain unchanged for hours, and even single molecular motors can be imaged while moving. When operating in tapping mode, the phase of the cantilever's oscillation with respect to the driving signal can be recorded as well. This signal channel contains information about the energy dissipated by the cantilever in each oscillation cycle. Samples that contain regions of varying stiffness or with different adhesion properties can give a contrast in this channel that is not visible in the topographic image. Extracting the sample's material properties in a quantitative manner from phase images, however, is often not feasible. ==== Non-contact mode ==== In non-contact atomic force microscopy mode, the tip of the cantilever does not contact the sample surface. The cantilever is instead oscillated at either its resonant frequency (frequency modulation) or just above it (amplitude modulation), where the amplitude of oscillation is typically a few nanometers (<10 nm) down to a few picometers. The van der Waals forces, which are strongest from 1 nm to 10 nm above the surface, or any other long-range forces that extend above the surface, act to decrease the resonance frequency of the cantilever. This decrease in resonance frequency, combined with the feedback loop system, maintains a constant oscillation amplitude or frequency by adjusting the average tip-to-sample distance. Measuring the tip-to-sample distance at each (x,y) data point allows the scanning software to construct a topographic image of the sample surface. Non-contact mode AFM does not suffer from the tip or sample degradation effects that are sometimes observed after taking numerous scans with contact AFM. This makes non-contact AFM preferable to contact AFM for measuring soft samples, e.g. biological samples and organic thin films.
In the case of rigid samples, contact and non-contact images may look the same. However, if a few monolayers of adsorbed fluid are lying on the surface of a rigid sample, the images may look quite different. An AFM operating in contact mode will penetrate the liquid layer to image the underlying surface, whereas in non-contact mode an AFM will oscillate above the adsorbed fluid layer to image both the liquid and the surface. Schemes for dynamic mode operation include frequency modulation, where a phase-locked loop is used to track the cantilever's resonance frequency, and the more common amplitude modulation, where a servo loop keeps the cantilever excitation at a defined amplitude. In frequency modulation, changes in the oscillation frequency provide information about tip-sample interactions. Frequency can be measured with very high sensitivity and thus the frequency modulation mode allows for the use of very stiff cantilevers. Stiff cantilevers provide stability very close to the surface and, as a result, this technique was the first AFM technique to provide true atomic resolution in ultra-high vacuum conditions. In amplitude modulation, changes in the oscillation amplitude or phase provide the feedback signal for imaging. In amplitude modulation, changes in the phase of oscillation can be used to discriminate between different types of materials on the surface. Amplitude modulation can be operated either in the non-contact or in the intermittent contact regime. In dynamic contact mode, the cantilever is oscillated such that the separation distance between the cantilever tip and the sample surface is modulated. Amplitude modulation has also been used in the non-contact regime to image with atomic resolution by using very stiff cantilevers and small amplitudes in an ultra-high vacuum environment. == Topographic image == Image formation is a plotting method that produces a color mapping by changing the x–y position of the tip while scanning and recording the measured variable, i.e. the intensity of the control signal, at each x–y coordinate. The color mapping shows the measured value corresponding to each coordinate. The image expresses the intensity of a value as a hue. Usually, the correspondence between the intensity of a value and a hue is shown as a color scale in the explanatory notes accompanying the image. Image-forming operation modes of the AFM are generally classified into two groups according to whether or not a z-feedback loop (not shown) is used to maintain the tip-sample distance and thereby keep the signal intensity reported by the detector constant. The first group (using the z-feedback loop) is referred to as "constant XX mode" (XX being the quantity kept constant by the z-feedback loop). Topographic image formation is based on the abovementioned "constant XX mode": the z-feedback loop controls the relative distance between the probe and the sample by outputting control signals that keep constant one of the quantities describing the motion of the cantilever, typically its frequency, vibration amplitude, or phase (for instance, a voltage is applied to the z-piezoelectric element, which moves the sample up and down in the z direction). === Topographic image of FM-AFM === When the distance between the probe and the sample is brought into the range where atomic force may be detected, while the cantilever is excited at its natural eigenfrequency (f0), the resonance frequency f of the cantilever may shift from its original resonance frequency.
In other words, in the range where atomic force may be detected, a frequency shift (df = f − f0) will also be observed. When the distance between the probe and the sample is in the non-contact region, the frequency shift becomes increasingly negative as the distance between the probe and the sample gets smaller. When the sample has concavity and convexity, the distance between the tip apex and the sample varies in accordance with that concavity and convexity as the sample is scanned along the x–y direction (without height regulation in the z-direction). As a result, a frequency shift arises. The image in which the frequency values obtained by a raster scan along the x–y direction of the sample surface are plotted against the x–y coordinates of each measurement point is called a constant-height image. On the other hand, df may be kept constant by moving the probe up and down (see (3) of Fig. 5) in the z-direction using negative feedback (the z-feedback loop) during the raster scan of the sample surface along the x–y direction. The image in which the amounts of negative feedback (the distance the probe moves up and down in the z-direction) are plotted against the x–y coordinates of each measurement point is a topographic image. In other words, the topographic image is a trace of the probe tip regulated so that df is constant, and it may also be considered a plot of a surface of constant df. Therefore, the topographic image of the AFM is not the exact surface morphology itself, but actually an image influenced by the bond order between the probe and the sample; however, the topographic image of the AFM is considered to reflect the geometrical shape of the surface more faithfully than the topographic image of a scanning tunneling microscope. == Force spectroscopy == Besides imaging, AFM can be used for force spectroscopy, the direct measurement of tip-sample interaction forces as a function of the gap between the tip and sample. The result of this measurement is called a force-distance curve. For this method, the AFM tip is extended towards and retracted from the surface as the deflection of the cantilever is monitored as a function of piezoelectric displacement. These measurements have been used to measure nanoscale contacts, atomic bonding, van der Waals forces, Casimir forces, dissolution forces in liquids, and single-molecule stretching and rupture forces. AFM has also been used to measure, in an aqueous environment, the dispersion force due to polymer adsorbed on the substrate. Forces of the order of a few piconewtons can now be routinely measured with a vertical distance resolution of better than 0.1 nanometers. Force spectroscopy can be performed with either static or dynamic modes. In dynamic modes, information about the cantilever vibration is monitored in addition to the static deflection. Problems with the technique include no direct measurement of the tip-sample separation and the common need for low-stiffness cantilevers, which tend to "snap" to the surface. These problems are not insurmountable. An AFM that directly measures the tip-sample separation has been developed. The snap-in can be reduced by measuring in liquids or by using stiffer cantilevers, but in the latter case a more sensitive deflection sensor is needed. By applying a small dither to the tip, the stiffness (force gradient) of the bond can be measured as well.
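In practice, a raw force curve records the detector signal against piezo displacement, and it is converted to force versus tip-sample separation using the deflection sensitivity and the cantilever spring constant (Hooke's law). The following is a minimal sketch of that conversion; the sensitivity, spring constant, and sign convention are illustrative assumptions and vary between instruments.

```python
import numpy as np

def force_vs_separation(z_piezo, photodiode_volts, invols=50e-9, spring_constant=0.1):
    """Convert a raw approach curve into force versus tip-sample separation.

    z_piezo          : piezo extension toward the sample (m)
    photodiode_volts : detector signal (V)
    invols           : inverse optical lever sensitivity (m of deflection per volt),
                       typically obtained from the contact region of a curve on a hard surface
    spring_constant  : cantilever spring constant (N/m), e.g. from a thermal-noise calibration
    Sign conventions differ between instruments; this sketch assumes positive deflection
    points away from the sample.
    """
    deflection = photodiode_volts * invols     # cantilever deflection (m)
    force = spring_constant * deflection       # Hooke's law: F = k * d
    separation = z_piezo - deflection          # cantilever bending changes the true gap
    return separation, force

# Hypothetical raw data: a 1 um approach ramp; the detector trace would come from the AFM.
z = np.linspace(0, 1e-6, 500)
v = np.zeros_like(z)                           # placeholder detector signal
separation, force = force_vs_separation(z, v)
```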
=== Biological applications and other === Force spectroscopy is used in biophysics to measure the mechanical properties of living material (such as tissue or cells) or to detect structures of different stiffness buried in the bulk of the sample using stiffness tomography. Another application is to measure the interaction forces between, on the one hand, a material stuck to the tip of the cantilever and, on the other hand, the surface of particles that are either free or occupied by the same material. From the adhesion force distribution curve, a mean value of the forces can be derived, making it possible to map the surface of the particles, whether or not it is covered by the material. AFM has also been used for mechanically unfolding proteins. In such experiments, analysis of the mean unfolding forces with an appropriate model yields information about the unfolding rate and the free-energy-profile parameters of the protein. == Identification of individual surface atoms == The AFM can be used to image atoms and structures on a variety of surfaces. The atom at the apex of the tip "senses" individual atoms on the underlying surface as it begins to form chemical bonds with each of them. Because these chemical interactions subtly alter the tip's vibration frequency, they can be detected and mapped. This principle was used to distinguish between atoms of silicon, tin and lead on an alloy surface, by comparing these atomic fingerprints with values obtained from density functional theory (DFT) simulations. Interaction forces must be measured precisely for each type of atom expected in the sample and then compared with the forces given by DFT simulations. It was found that the tip interacted most strongly with silicon atoms, and interacted 24% and 41% less strongly with tin and lead atoms, respectively. Using this information, each type of atom could be identified in the matrix as the tip was moved across the surface. == Probe == An AFM probe has a sharp tip on the free-swinging end of a cantilever that protrudes from a holder. The dimensions of the cantilever are on the scale of micrometers. The radius of the tip is usually on the scale of a few nanometers to a few tens of nanometers. (Specialized probes exist with much larger end radii, for example probes for indentation of soft materials.) The cantilever holder, also called the holder chip (often 1.6 mm by 3.4 mm in size), allows the operator to hold the AFM cantilever/probe assembly with tweezers and fit it into the corresponding holder clips on the scanning head of the atomic force microscope. This device is most commonly called an "AFM probe", but other names include "AFM tip" and "cantilever" (employing the name of a single part as the name of the whole device). An AFM probe is a particular type of SPM probe. AFM probes are manufactured with MEMS technology. Most AFM probes are made from silicon (Si), but borosilicate glass and silicon nitride are also in use. AFM probes are considered consumables, as they are often replaced when the tip apex becomes dull or contaminated or when the cantilever is broken. They can cost from a couple of tens of dollars up to hundreds of dollars per cantilever for the most specialized cantilever/probe combinations. To use the device, the tip is brought very close to the surface of the object under investigation, and the cantilever is deflected by the interaction between the tip and the surface, which is what the AFM is designed to measure.
A spatial map of the interaction can be made by measuring the deflection at many points on a 2D surface. Several types of interaction can be detected. Depending on the interaction under investigation, the surface of the tip of the AFM probe needs to be modified with a coating. Among the coatings used are gold (for covalent bonding of biological molecules and the detection of their interaction with a surface), diamond (for increased wear resistance), and magnetic coatings (for detecting the magnetic properties of the investigated surface). Another solution for achieving high-resolution magnetic imaging is to equip the probe with a microSQUID. The AFM tips are fabricated using silicon micromachining, and the precise positioning of the microSQUID loop is achieved using electron beam lithography. The additional attachment of a quantum dot to the tip apex of a conductive probe enables surface potential imaging with high lateral resolution, scanning quantum dot microscopy. The surface of the cantilever can also be modified. These coatings are mostly applied in order to increase the reflectance of the cantilever and to improve the deflection signal. == Forces as a function of tip geometry == The forces between the tip and the sample strongly depend on the geometry of the tip. Various studies in recent years have derived expressions for these forces as a function of the tip parameters. Among the different forces between the tip and the sample, the water meniscus forces are of particular interest, in both air and liquid environments. Other forces must also be considered, such as the Coulomb force, van der Waals forces, double-layer interactions, solvation forces, and hydration and hydrophobic forces. === Water meniscus === Water meniscus forces are particularly relevant for AFM measurements in air. Due to the ambient humidity, a thin layer of water is formed between the tip and the sample during air measurements. The resulting capillary force gives rise to a strong attractive force that pulls the tip onto the surface. In fact, the adhesion force measured between tip and sample in ambient air of finite humidity is usually dominated by capillary forces. As a consequence, it is difficult to pull the tip away from the surface. For soft samples, including many polymers and in particular biological materials, the strong adhesive capillary force gives rise to sample degradation and destruction upon imaging in contact mode. Historically, these problems were an important motivation for the development of dynamic imaging in air (e.g. "tapping mode"). During tapping mode imaging in air, capillary bridges still form. Yet, for suitable imaging conditions, the capillary bridges are formed and broken in every oscillation cycle of the cantilever normal to the surface, as can be inferred from an analysis of cantilever amplitude and phase vs. distance curves. As a consequence, destructive shear forces are largely reduced and soft samples can be investigated. In order to quantify the equilibrium capillary force, it is necessary to start from the Laplace equation for pressure: P = γL (1/r1 + 1/r0) ≃ γL/reff {\displaystyle P=\gamma _{L}\left({\frac {1}{r_{1}}}+{\frac {1}{r_{0}}}\right)\simeq {\frac {\gamma _{L}}{r_{eff}}}} where γL is the surface energy and r0 and r1 are defined in the figure.
The pressure is applied over an area of A ≃ 2πR [reff(1 + cos θ) + h] {\displaystyle A\simeq 2\pi R\left[r_{eff}(1+\cos \theta )+h\right]} where θ is the angle between the tip's surface and the liquid's surface, while h is the height difference between the surrounding liquid and the top of the meniscus. The force that pulls the two surfaces together is F = 2πRγL (1 + cos θ + h/reff) {\displaystyle F=2\pi R\gamma _{L}\left(1+\cos \theta +{\frac {h}{r_{eff}}}\right)} The same formula can also be calculated as a function of relative humidity. Gao calculated formulas for different tip geometries. As an example, the force decreases by 20% for a conical tip with respect to a spherical tip. When these forces are calculated, a distinction must be made between the dry-on-wet situation and the wet-on-wet situation. For a spherical tip, the force is: fm = −2πRγL (cos θ + cos φ)(1 − dh/dD) {\displaystyle f_{m}=-2\pi R\gamma _{L}(\cos \theta +\cos \phi )\left(1-{\frac {dh}{dD}}\right)} for dry on wet, fm = −2πRγL (dr0/dD) {\displaystyle f_{m}=-2\pi R\gamma _{L}{\frac {dr_{0}}{dD}}} for wet on wet, where θ is the contact angle of the dry sphere and φ is the immersed angle, as shown in the figure. For a conical tip, the formula becomes: fm = −2πRγL (tan δ/cos δ)(cos θ + sin δ)(hD)(1 − dh/dD) {\displaystyle f_{m}=-2\pi R\gamma _{L}{\frac {\tan \delta }{\cos \delta }}(\cos \theta +\sin \delta )(hD)\left(1-{\frac {dh}{dD}}\right)} for dry on wet, and fm = −2πRγL (1/cos δ + sin δ)(r0)(dr0/dD) {\displaystyle f_{m}=-2\pi R\gamma _{L}\left({\frac {1}{\cos \delta }}+\sin \delta \right)(r_{0})\left({\frac {dr_{0}}{dD}}\right)} for wet on wet, where δ is the half cone angle and r0 and h are parameters of the meniscus profile. == AFM cantilever-deflection measurement == === Beam-deflection measurement === The most common method for cantilever-deflection measurements is the beam-deflection method. In this method, laser light from a solid-state diode is reflected off the back of the cantilever and collected by a position-sensitive detector (PSD) consisting of two closely spaced photodiodes, whose output signal is collected by a differential amplifier. Angular displacement of the cantilever results in one photodiode collecting more light than the other photodiode, producing an output signal (the difference between the photodiode signals normalized by their sum), which is proportional to the deflection of the cantilever. The sensitivity of the beam-deflection method is very high, and a noise floor on the order of 10 fm Hz−1⁄2 can be obtained routinely in a well-designed system. Although this method is sometimes called the "optical lever" method, the signal is not amplified if the beam path is made longer. A longer beam path increases the motion of the reflected spot on the photodiodes, but also widens the spot by the same amount due to diffraction, so that the same amount of optical power is moved from one photodiode to the other. The "optical leverage" (output signal of the detector divided by deflection of the cantilever) is inversely proportional to the numerical aperture of the beam focusing optics, as long as the focused laser spot is small enough to fall completely on the cantilever. It is also inversely proportional to the length of the cantilever.
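The output signal described above, the difference between the two photodiode signals normalized by their sum, can be written as a one-line calculation; the sketch below uses invented photocurrent values purely for illustration.

```python
def normalized_deflection_signal(photodiode_a, photodiode_b):
    """Return the normalized difference (A - B) / (A + B) of two photodiode signals.

    The difference grows as the reflected laser spot moves from one photodiode to the
    other; dividing by the sum removes the dependence on total optical power.
    """
    total = photodiode_a + photodiode_b
    if total == 0:
        raise ValueError("no light reaches the detector")
    return (photodiode_a - photodiode_b) / total

# Invented example values (arbitrary units): a small cantilever deflection shifts
# light from photodiode B toward photodiode A.
print(normalized_deflection_signal(1.02, 0.98))   # 0.02, proportional to the deflection
```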
The relative popularity of the beam-deflection method can be explained by its high sensitivity and simple operation, and by the fact that cantilevers do not require electrical contacts or other special treatments, and can therefore be fabricated relatively cheaply with sharp integrated tips. === Other deflection-measurement methods === Many other methods for measuring cantilever deflection exist. Piezoelectric detection – Cantilevers made from quartz (such as the qPlus configuration), or other piezoelectric materials, can directly detect deflection as an electrical signal. Cantilever oscillations down to 10 pm have been detected with this method. Laser Doppler vibrometry – A laser Doppler vibrometer can be used to produce very accurate deflection measurements for an oscillating cantilever (and is thus only used in non-contact mode). This method is expensive and is only used by relatively few groups. Scanning tunneling microscope (STM) – The first atomic force microscope used an STM, complete with its own feedback mechanism, to measure deflection. This method is very difficult to implement, and is slow to react to deflection changes compared to modern methods. Optical interferometry – Optical interferometry can be used to measure cantilever deflection. Due to the nanometre-scale deflections measured in AFM, the interferometer runs in the sub-fringe regime; thus, any drift in laser power or wavelength has strong effects on the measurement. Optical interferometer measurements must therefore be done with great care (for example, using index-matching fluids between optical fibre junctions) and with very stable lasers; for these reasons, optical interferometry is rarely used. Capacitive detection – Metal-coated cantilevers can form a capacitor with another contact located behind the cantilever. Deflection changes the distance between the contacts and can be measured as a change in capacitance. Piezoresistive detection – Cantilevers can be fabricated with piezoresistive elements that act as a strain gauge. Using a Wheatstone bridge, strain in the AFM cantilever due to deflection can be measured. This is not commonly used in vacuum applications, as the piezoresistive detection dissipates energy from the system, affecting the Q of the resonance. == Piezoelectric scanners == AFM scanners are made from piezoelectric material, which expands and contracts proportionally to an applied voltage. Whether they elongate or contract depends upon the polarity of the voltage applied. Traditionally the tip or sample is mounted on a "tripod" of three piezo crystals, each responsible for scanning in the x, y or z direction. In 1986, the same year as the AFM was invented, a new piezoelectric scanner, the tube scanner, was developed for use in STM. Later tube scanners were incorporated into AFMs. The tube scanner can move the sample in the x, y, and z directions using a single tube piezo with a single interior contact and four external contacts. An advantage of the tube scanner compared to the original tripod design is better vibrational isolation, resulting from the higher resonant frequency of the single-element construction, in combination with a low resonant frequency isolation stage. A disadvantage is that the x-y motion can cause unwanted z motion, resulting in distortion. Another popular design for AFM scanners is the flexure stage, which uses separate piezos for each axis, and couples them through a flexure mechanism.
Scanners are characterized by their sensitivity, which is the ratio of piezo movement to piezo voltage, i.e., by how much the piezo material extends or contracts per applied volt. Due to differences in material or size, the sensitivity varies from scanner to scanner. Sensitivity varies non-linearly with respect to scan size. Piezo scanners exhibit more sensitivity at the end than at the beginning of a scan. This causes the forward and reverse scans to behave differently and display hysteresis between the two scan directions. This can be corrected by applying a non-linear voltage to the piezo electrodes to cause linear scanner movement and calibrating the scanner accordingly. One disadvantage of this approach is that it requires re-calibration because the precise non-linear voltage needed to correct non-linear movement will change as the piezo ages (see below). This problem can be circumvented by adding a linear sensor to the sample stage or piezo stage to detect the true movement of the piezo. Deviations from ideal movement can be detected by the sensor and corrections applied to the piezo drive signal to correct for non-linear piezo movement. This design is known as a "closed loop" AFM. Non-sensored piezo AFMs are referred to as "open loop" AFMs. The sensitivity of piezoelectric materials decreases exponentially with time. This causes most of the change in sensitivity to occur in the initial stages of the scanner's life. Piezoelectric scanners are run for approximately 48 hours before they are shipped from the factory so that they are past the point where they may have large changes in sensitivity. As the scanner ages, the sensitivity will change less with time and the scanner will seldom require recalibration, though various manufacturer manuals recommend monthly to semi-monthly calibration of open loop AFMs. == Advantages and disadvantages == === Advantages === AFM has several advantages over the scanning electron microscope (SEM). Unlike the electron microscope, which provides a two-dimensional projection or a two-dimensional image of a sample, the AFM provides a three-dimensional surface profile. In addition, samples viewed by AFM do not require any special treatments (such as metal/carbon coatings) that would irreversibly change or damage the sample, and AFM does not typically suffer from charging artifacts in the final image. While an electron microscope needs an expensive vacuum environment for proper operation, most AFM modes can work perfectly well in ambient air or even a liquid environment. This makes it possible to study biological macromolecules and even living organisms. In principle, AFM can provide higher resolution than SEM. It has been shown to give true atomic resolution in ultra-high vacuum (UHV) and, more recently, in liquid environments. High resolution AFM is comparable in resolution to scanning tunneling microscopy and transmission electron microscopy. AFM can also be combined with a variety of optical microscopy and spectroscopy techniques such as fluorescence microscopy or infrared spectroscopy, giving rise to scanning near-field optical microscopy and nano-FTIR, further expanding its applicability. Combined AFM-optical instruments have been applied primarily in the biological sciences but have recently attracted strong interest in photovoltaics and energy-storage research, polymer sciences, nanotechnology and even medical research. === Disadvantages === A disadvantage of AFM compared with the scanning electron microscope (SEM) is the single scan image size.
In one pass, the SEM can image an area on the order of square millimeters with a depth of field on the order of millimeters, whereas the AFM can only image a maximum scanning area of about 150×150 micrometers and a maximum height on the order of 10–20 micrometers. One method of improving the scanned area size for AFM is by using parallel probes in a fashion similar to that of millipede data storage. The scanning speed of an AFM is also a limitation. Traditionally, an AFM cannot scan images as fast as an SEM, requiring several minutes for a typical scan, while an SEM is capable of scanning at near real-time, although at relatively low quality. The relatively slow rate of scanning during AFM imaging often leads to thermal drift in the image, making the AFM less suited for measuring accurate distances between topographical features on the image. However, several fast-acting designs have been suggested to increase scanning productivity, including what has been termed videoAFM, with which reasonable-quality images are obtained at video rate, faster than the average SEM. To eliminate image distortions induced by thermal drift, several methods have been introduced. AFM images can also be affected by nonlinearity, hysteresis, and creep of the piezoelectric material, and by cross-talk between the x, y and z axes, which may require software enhancement and filtering. Such filtering could "flatten" out real topographical features. However, newer AFMs utilize real-time correction software (for example, feature-oriented scanning) or closed-loop scanners, which practically eliminate these problems. Some AFMs also use separated orthogonal scanners (as opposed to a single tube), which also serve to eliminate part of the cross-talk problems. As with any other imaging technique, there is the possibility of image artifacts, which could be induced by an unsuitable tip, a poor operating environment, or even by the sample itself, as depicted on the right. These image artifacts are unavoidable; however, their occurrence and effect on results can be reduced through various methods. Artifacts resulting from a too-coarse tip can be caused, for example, by inappropriate handling or by collisions with the sample, through either scanning too fast or scanning an unreasonably rough surface, which wears down the tip. Due to the nature of AFM probes, they cannot normally measure steep walls or overhangs. Specially made cantilevers and AFMs can be used to modulate the probe sideways as well as up and down (as with dynamic contact and non-contact modes) to measure sidewalls, at the cost of more expensive cantilevers, lower lateral resolution and additional artifacts. == Other applications in various fields of study == The latest efforts in integrating nanotechnology and biological research have been successful and show much promise for the future, including in fields such as nanobiomechanics. Since nanoparticles are a potential vehicle of drug delivery, the biological responses of cells to these nanoparticles are continuously being explored to optimize their efficacy and how their design could be improved. Pyrgiotakis et al. were able to study the interaction between CeO2 and Fe2O3 engineered nanoparticles and cells by attaching the engineered nanoparticles to the AFM tip. Studies have taken advantage of AFM to obtain further information on the behavior of live cells in biological media.
Real-time atomic force spectroscopy (or nanoscopy) and dynamic atomic force spectroscopy have been used to study live cells and membrane proteins and their dynamic behavior at high resolution, on the nanoscale. Imaging and obtaining information on the topography and the properties of the cells has also given insight into chemical processes and mechanisms that occur through cell-cell interaction and interactions with other signaling molecules (e.g. ligands). Evans and Calderwood used single cell force microscopy to study cell adhesion forces, bond kinetics/dynamic bond strength, and their role in chemical processes such as cell signaling. Scheuring, Lévy, and Rigaud reviewed studies in which AFM was used to explore the crystal structure of membrane proteins of photosynthetic bacteria. Alsteen et al. have used AFM-based nanoscopy to perform a real-time analysis of the interaction between live mycobacteria and antimycobacterial drugs (specifically isoniazid, ethionamide, ethambutol, and streptomycin), which serves as an example of the more in-depth analysis of pathogen-drug interactions that can be done through AFM. == See also == Science portal == References == == Further reading == Voigtländer, Bert (2019). Atomic Force Microscopy. NanoScience and Technology. Springer. Bibcode:2019afm..book.....V. doi:10.1007/978-3-030-13654-3. ISBN 978-3-030-13653-6. S2CID 199490753. Carpick, Robert W.; Salmeron, Miquel (1997). "Scratching the Surface: Fundamental Investigations of Tribology with Atomic Force Microscopy". Chemical Reviews. 97 (4): 1163–1194. doi:10.1021/cr960068q. ISSN 0009-2665. PMID 11851446. Giessibl, Franz J. (2003). "Advances in atomic force microscopy". Reviews of Modern Physics. 75 (3): 949–983. arXiv:cond-mat/0305119. Bibcode:2003RvMP...75..949G. doi:10.1103/RevModPhys.75.949. ISSN 0034-6861. S2CID 18924292. Garcia, Ricardo; Knoll, Armin; Riedo, Elisa (2014). "Advanced Scanning Probe Lithography". Nature Nanotechnology. 9 (8): 577–87. arXiv:1505.01260. Bibcode:2014NatNa...9..577G. doi:10.1038/NNANO.2014.157. PMID 25091447. S2CID 205450948. García, Ricardo; Pérez, Rubén (2002). "Dynamic atomic force microscopy methods". Surface Science Reports. 47 (6–8): 197–301. Bibcode:2002SurSR..47..197G. doi:10.1016/S0167-5729(02)00077-8. == External links == The Inner Workings of an AFM - An Animated Explanation WeCanFigureThisOut.org
Wikipedia/Atomic_force_microscope
In molecular biology and pharmacology, a small molecule or micromolecule is a low molecular weight (≤ 1000 daltons) organic compound that may regulate a biological process, with a size on the order of 1 nm. Many drugs are small molecules; the terms are equivalent in the literature. Larger structures such as nucleic acids and proteins, and many polysaccharides, are not small molecules, although their constituent monomers (ribo- or deoxyribonucleotides, amino acids, and monosaccharides, respectively) are often considered small molecules. Small molecules may be used as research tools to probe biological function as well as leads in the development of new therapeutic agents. Some can inhibit a specific function of a protein or disrupt protein–protein interactions. Pharmacology usually restricts the term "small molecule" to molecules that bind specific biological macromolecules and act as an effector, altering the activity or function of the target. Small molecules can have a variety of biological functions or applications, serving as cell signaling molecules, drugs in medicine, pesticides in farming, and in many other roles. These compounds can be natural (such as secondary metabolites) or artificial (such as antiviral drugs); they may have a beneficial effect against a disease (such as drugs) or may be detrimental (such as teratogens and carcinogens). == Molecular weight cutoff == The upper molecular-weight limit for a small molecule is approximately 900 daltons, which allows for the possibility of rapid diffusion across cell membranes so that the molecule can reach intracellular sites of action. This molecular weight cutoff is also a necessary but insufficient condition for oral bioavailability, as it allows for transcellular transport through intestinal epithelial cells. In addition to intestinal permeability, the molecule must also possess a reasonably rapid rate of dissolution into water, adequate water solubility, and moderate to low first-pass metabolism. A somewhat lower molecular weight cutoff of 500 daltons (as part of the "rule of five") has been recommended for oral small molecule drug candidates based on the observation that clinical attrition rates are significantly reduced if the molecular weight is kept below this limit. == Drugs == Most pharmaceuticals are small molecules, although some drugs can be proteins (e.g., insulin and other biologic medical products). With the exception of therapeutic antibodies, many proteins are degraded if administered orally and most often cannot cross cell membranes. Small molecules are more likely to be absorbed, although some of them are only absorbed after oral administration if given as prodrugs. One advantage that small molecule drugs (SMDs) have over "large molecule" biologics is that many small molecules can be taken orally whereas biologics generally require injection or another parenteral administration. Small molecule drugs are also typically simpler to manufacture and cheaper for the purchaser. A downside is that not all targets are amenable to modification with small-molecule drugs; bacteria and cancers are often resistant to their effects. == Secondary metabolites == A variety of organisms, including bacteria, fungi, and plants, produce small molecule secondary metabolites, also known as natural products, which play a role in cell signaling, pigmentation, and defense against predation. Secondary metabolites are a rich source of biologically active compounds and hence are often used as research tools and leads for drug discovery.
Examples of secondary metabolites include: == Research tools == Enzymes and receptors are often activated or inhibited by endogenous proteins, but can also be inhibited by endogenous or exogenous small-molecule inhibitors or activators, which can bind to the active site or to an allosteric site. An example is the teratogen and carcinogen phorbol 12-myristate 13-acetate, a plant terpene that activates protein kinase C and thereby promotes cancer, making it a useful investigative tool. There is also interest in creating small molecule artificial transcription factors to regulate gene expression; examples include wrenchnolol (a wrench-shaped molecule). Binding of a ligand can be characterised using a variety of analytical techniques such as surface plasmon resonance, microscale thermophoresis or dual polarisation interferometry to quantify the reaction affinities and kinetic properties and also any induced conformational changes. == Anti-genomic therapeutics == Small-molecule anti-genomic therapeutics, or SMAT, refers to a biodefense technology that targets DNA signatures found in many biological warfare agents. SMATs are new, broad-spectrum drugs that unify antibacterial, antiviral and anti-malarial activities into a single therapeutic that offers substantial cost benefits and logistic advantages for physicians and the military. == See also == Pharmacology Druglikeness Lipinski's rule of five Metabolite Chemogenomics Neurotransmitter Peptidomimetic Macromolecule == References == == External links == Small+Molecule+Libraries at the U.S. National Library of Medicine Medical Subject Headings (MeSH)
Wikipedia/Small_molecule
A 2D geometric model is a geometric model of an object as a two-dimensional figure, usually on the Euclidean or Cartesian plane. Even though all material objects are three-dimensional, a 2D geometric model is often adequate for certain flat objects, such as paper cut-outs and machine parts made of sheet metal. Other examples include circles used as a model of thunderstorms, which can be considered flat when viewed from above. 2D geometric models are also convenient for describing certain types of artificial images, such as technical diagrams, logos, the glyphs of a font, etc. They are an essential tool of 2D computer graphics and are often used as components of 3D geometric models, e.g. to describe the decals to be applied to a car model. Modern architectural practice uses "digital rendering", a technique in which a 2-D geometric model is presented so as to be perceived as a 3-D geometric model, designed through descriptive geometry and computerized equipment. == 2D geometric modeling techniques == simple geometric shapes boundary representation Boolean operations on polygons == See also == 2D geometric primitive Computational geometry Digital image == References ==
Wikipedia/2D_geometric_model
A network solid or covalent network solid (also called atomic crystalline solids or giant covalent structures) is a chemical compound (or element) in which the atoms are bonded by covalent bonds in a continuous network extending throughout the material. In a network solid there are no individual molecules, and the entire crystal or amorphous solid may be considered a macromolecule. Formulas for network solids, like those for ionic compounds, are simple ratios of the component atoms represented by a formula unit. Examples of network solids include diamond with a continuous network of carbon atoms and silicon dioxide or quartz with a continuous three-dimensional network of SiO2 units. Graphite and the mica group of silicate minerals structurally consist of continuous two-dimensional sheets covalently bonded within the layer, with other bond types holding the layers together. Disordered network solids are termed glasses. These are typically formed on rapid cooling of melts so that little time is left for atomic ordering to occur. == Properties == Hardness: Very hard, due to the strong covalent bonds throughout the lattice (deformation can be easier, however, in directions that do not require the breaking of any covalent bonds, as with flexing or sliding of sheets in graphite or mica). Melting point: High, since melting means breaking covalent bonds (rather than merely overcoming weaker intermolecular forces). Solid-phase electrical conductivity: Variable, depending on the nature of the bonding: network solids in which all electrons are used for sigma bonds (e.g. diamond, quartz) are poor conductors, as there are no delocalized electrons. However, network solids with delocalized pi bonds (e.g. graphite) or dopants can exhibit metal-like conductivity. Liquid-phase electrical conductivity: Low, as the macromolecule consists of neutral atoms, meaning that melting does not free up any new charge carriers (as it would for an ionic compound). Solubility: Generally insoluble in any solvent due to the difficulty of solvating such a large molecule. == Examples == Boron nitride (BN) Diamond (carbon, C) Quartz (SiO2) Rhenium diboride (ReB2) Silicon carbide (moissanite, carborundum, SiC) Silicon (Si) Germanium (Ge) Aluminium nitride (AlN) α-tin allotrope (gray tin, Sn) == See also == Molecular solid == References ==
Wikipedia/Network_solid
A Rydberg molecule is an electronically excited chemical species. Electronically excited molecular states are generally quite different in character from electronically excited atomic states. However, particularly for highly electronically excited molecular systems, the ionic core interaction with an excited electron can take on the general aspects of the interaction between the proton and the electron in the hydrogen atom. The spectroscopic assignment of these states follows the Rydberg formula, named after the Swedish physicist Johannes Rydberg, and they are called Rydberg states of molecules. Rydberg series are associated with partially removing an electron from the ionic core. Each Rydberg series of energies converges on an ionization energy threshold associated with a particular ionic core configuration. These quantized Rydberg energy levels can be associated with the quasiclassical Bohr atomic picture. The closer the energy is to the ionization threshold, the higher the principal quantum number and the smaller the energy difference between near-threshold Rydberg states. As the electron is promoted to higher energy levels in a Rydberg series, the spatial excursion of the electron from the ionic core increases and the system becomes more like the Bohr quasiclassical picture. The Rydberg states of molecules with low principal quantum numbers can interact with the other excited electronic states of the molecule. This can cause shifts in energy. The assignment of molecular Rydberg states often involves following a Rydberg series from intermediate to high principal quantum numbers. The energy of Rydberg states can be refined by including a correction called the quantum defect in the Rydberg formula. The quantum defect correction can be associated with the presence of a distributed ionic core. The experimental study of molecular Rydberg states has been conducted with traditional methods for generations. However, the development of laser-based techniques such as Resonance Ionization Spectroscopy has allowed relatively easy access to these Rydberg molecules as intermediates. This is particularly true for Resonance Enhanced Multiphoton Ionization (REMPI) spectroscopy, since multiphoton processes involve different selection rules from single photon processes. The study of high principal quantum number Rydberg states has spawned a number of spectroscopic techniques. These "near threshold Rydberg states" can have long lifetimes, particularly for the higher orbital angular momentum states that do not interact strongly with the ionic core. Rydberg molecules can condense to form clusters of Rydberg matter, which has an extended lifetime against de-excitation. Dihelium (He2*) was the first known Rydberg molecule. == Other types == In 2009, a different kind of Rydberg molecule was created by researchers from the University of Stuttgart. There, the interaction between a Rydberg atom and a ground state atom leads to a novel bond type. Two rubidium atoms were used to create the molecule, which survived for 18 microseconds. In 2015, a 'trilobite' Rydberg molecule was observed by researchers from the University of Oklahoma. This molecule was theorized in 2000 and is characterized by an electron density distribution that resembles the shape of a trilobite when plotted in cylindrical coordinates. These molecules have lifetimes of tens of microseconds and electric dipole moments of up to 2000 debye.
In 2016, a butterfly Rydberg molecule was observed by a collaboration involving researchers from the Kaiserslautern University of Technology and Purdue University. A butterfly Rydberg molecule is a weak pairing of a Rydberg atom and a ground state atom that is enhanced by the presence of a shape resonance in the scattering between the Rydberg electron and the ground state atom. This new kind of atomic bond was theorized in 2002 and is characterized by an electron density distribution that resembles the shape of a butterfly. As a consequence of the unconventional binding mechanism, butterfly Rydberg molecules show peculiar properties such as multiple vibrational ground states at different bond lengths and giant dipole moments in excess of 500 debye. == See also == Rydberg atom Rydberg matter == References == == Further reading == Molecular Spectra and Molecular Structure, Vol. I, II and III Gerhard Herzberg, Krieger Pub. Co, revised ed. 1991. Atoms and Molecules: An Introduction for Students of Physical Chemistry, Martin Karplus and Richard N. Porter, Benjamin & Company, Inc., 1970.
Wikipedia/Rydberg_molecule
Periodic systems of molecules are charts of molecules similar to the periodic table of the elements. Construction of such charts was initiated in the early 20th century and is still ongoing. It is commonly believed that the periodic law, represented by the periodic chart, is echoed in the behavior of molecules, at least small molecules. For instance, if one replaces any one of the atoms in a triatomic molecule with a rare gas atom, there will be a drastic change in the molecule’s properties. Several goals could be accomplished by constructing an explicit representation of this periodic law as manifested in molecules: (1) a classification scheme for the vast number of molecules that exist, starting with small ones having just a few atoms, for use as a teaching aid and tool for archiving data, (2) forecasting data for molecular properties based on the classification scheme, and (3) a sort of unity with the periodic chart and the periodic system of fundamental particles. == Physical periodic systems of molecules == Periodic systems (or charts or tables) of molecules are the subjects of two reviews. The systems of diatomic molecules include those of (1) H. D. W. Clark, and (2) F.-A. Kong, which somewhat resemble the atomic chart. The system of R. Hefferlin et al. was developed from (3) a three-dimensional to (4) a four-dimensional system Kronecker product of the element chart with itself. A totally different kind of periodic system is (5) that of G. V. Zhuvikin, which is based on group dynamics. In all but the first of these cases, other researchers provided invaluable contributions and some of them are co-authors. The architectures of these systems have been adjusted by Kong and Hefferlin to include ionized species, and expanded by Kong, Hefferlin, and Zhuvikin and Hefferlin to the space of triatomic molecules. These architectures are mathematically related to the chart of the elements. They were first called “physical” periodic systems. == Chemical periodic systems of molecules == Other investigators have focused on building structures that address specific kinds of molecules such as alkanes (Morozov); benzenoids (Dias); functional groups containing fluorine, oxygen, nitrogen and sulfur (Haas); or a combination of core charge, number of shells, redox potentials, and acid-base tendencies (Gorski). These structures are not restricted to molecules with a given number of atoms and they bear little resemblance to the element chart; they are called “chemical” systems. Chemical systems do not start with the element chart, but instead start with, for example, formula enumerations (Dias), Grimm's hydride displacement law (Haas), reduced potential curves (Jenz), a set of molecular descriptors (Gorski), and similar strategies. == Hyperperiodicity == E. V. Babaev has erected a hyperperiodic system which in principle includes all of the systems described above except those of Dias, Gorski, and Jenz. == Bases of the element chart and periodic systems of molecules == The periodic chart of the elements, like a small stool, is supported by three legs: (a) the Bohr–Sommerfeld “solar system” atomic model (with electron spin and the Madelung principle), which provides the magic-number elements that end each row of the table and gives the number of elements in each row, (b) solutions to the Schrödinger equation, which provide the same information, and (c) data provided by experiment, by the solar system model, and by solutions to the Schroedinger equation. 
The Bohr–Sommerfeld model should not be ignored: it gave explanations for the wealth of spectroscopic data that were already in existence before the advent of wave mechanics. Each of the molecular systems listed above, and those not cited, is also supported by three legs: (a) physical and chemical data arranged in graphical or tabular patterns (which, for physical periodic systems at least, echo the appearance of the element chart), (b) group dynamic, valence-bond, molecular-orbital, and other fundamental theories, and (c) summing of atomic period and group numbers (Kong), the Kronecker product and exploitation of higher dimensions (Hefferlin), formula enumerations (Dias), the hydrogen-displacement principle (Haas), reduced potential curves (Jenz), and similar strategies. A chronological list of the contributions to this field contains almost thirty entries dated 1862, 1907, 1929, 1935, and 1936; then, after a pause, a higher level of activity beginning with the 100th anniversary of Mendeleev’s publication of his element chart, 1969. Many publications on periodic systems of molecules include some predictions of molecular properties, but starting at the turn of the Century there have been serious attempts to use periodic systems for the prediction of progressively more precise data for various numbers of molecules. Among these attempts are those of Kong, and Hefferlin == A collapsed-coordinate system for triatomic molecules == The collapsed-coordinate system has three independent variables instead of the six demanded by the Kronecker-product system. The reduction of independent variables makes use of three properties of gas-phase, ground-state, triatomic molecules. (1) In general, whatever the total number of constituent atomic valence electrons, data for isoelectronic molecules tend to be more similar than for adjacent molecules that have more or fewer valence electrons; for triatomic molecules, the electron count is the sum of the atomic group numbers (the sum of the column numbers 1 to 8 in the p-block of the periodic chart of the elements, C1+C2+C3). (2) Linear/bent triatomic molecules appear to be slightly more stable, other parameters being equal, if carbon is the central atom. (3) Most physical properties of diatomic molecules (especially spectroscopic constants) are closely monotonic with respect to the product of the two atomic period (or row) numbers, R1 and R2; for triatomic molecules, the monotonicity is close with respect to R1R2+R2R3 (which reduces to R1R2 for diatomic molecules). Therefore, the coordinates x, y, and z of the collapsed-coordinate system are C1+C2+C3, C2, and R1R2+R2R3. Multiple-regression predictions of four property values for molecules with tabulated data agree very well with the tabulated data (the error measures of the predictions include the tabulated data in all but a few cases). == See also == History of the periodic table Periodic table == References ==
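The collapsed-coordinate recipe above is simple enough to compute directly. The following Python sketch (an illustrative helper, not taken from the cited papers) evaluates x = C1+C2+C3, y = C2 and z = R1R2+R2R3 from ordinary main-group column numbers and period numbers; the molecules used are examples only.

    def collapsed_coordinates(atoms, table):
        """atoms = (A, B, C) with B the central atom; table maps symbol -> (group C, period R)."""
        (c1, r1), (c2, r2), (c3, r3) = (table[a] for a in atoms)
        x = c1 + c2 + c3          # sum of atomic group numbers (valence-electron count)
        y = c2                    # group number of the central atom
        z = r1 * r2 + r2 * r3     # reduces to R1*R2 if the third atom is absent (diatomic)
        return x, y, z

    # Main-group column numbers counted 1 to 8 (C -> 4, O -> 6, S -> 6) and period numbers.
    table = {"C": (4, 2), "O": (6, 2), "S": (6, 3)}

    print(collapsed_coordinates(("O", "C", "O"), table))   # CO2 -> (16, 4, 8)
    print(collapsed_coordinates(("O", "C", "S"), table))   # OCS -> (16, 4, 10)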
Wikipedia/Periodic_systems_of_small_molecules
In chemistry, orbital hybridisation (or hybridization) is the concept of mixing atomic orbitals to form new hybrid orbitals (with different energies, shapes, etc., than the component atomic orbitals) suitable for the pairing of electrons to form chemical bonds in valence bond theory. For example, in a carbon atom which forms four single bonds, the valence-shell s orbital combines with three valence-shell p orbitals to form four equivalent sp3 mixtures in a tetrahedral arrangement around the carbon to bond to four different atoms. Hybrid orbitals are useful in the explanation of molecular geometry and atomic bonding properties and are symmetrically disposed in space. Usually hybrid orbitals are formed by mixing atomic orbitals of comparable energies. == History and uses == Chemist Linus Pauling first developed the hybridisation theory in 1931 to explain the structure of simple molecules such as methane (CH4) using atomic orbitals. Pauling pointed out that a carbon atom forms four bonds by using one s and three p orbitals, so that "it might be inferred" that a carbon atom would form three bonds at right angles (using p orbitals) and a fourth weaker bond using the s orbital in some arbitrary direction. In reality, methane has four C–H bonds of equivalent strength. The angle between any two bonds is the tetrahedral bond angle of 109°28' (around 109.5°). Pauling supposed that in the presence of four hydrogen atoms, the s and p orbitals form four equivalent combinations which he called hybrid orbitals. Each hybrid is denoted sp3 to indicate its composition, and is directed along one of the four C–H bonds. This concept was developed for such simple chemical systems, but the approach was later applied more widely, and today it is considered an effective heuristic for rationalizing the structures of organic compounds. It gives a simple orbital picture equivalent to Lewis structures. Hybridisation theory is an integral part of organic chemistry, one of the most compelling examples being Baldwin's rules. For drawing reaction mechanisms sometimes a classical bonding picture is needed with two atoms sharing two electrons. Hybridisation theory explains bonding in alkenes and methane. The amount of p character or s character, which is decided mainly by orbital hybridisation, can be used to reliably predict molecular properties such as acidity or basicity. == Overview == Orbitals are a model representation of the behavior of electrons within molecules. In the case of simple hybridization, this approximation is based on atomic orbitals, similar to those obtained for the hydrogen atom, the only neutral atom for which the Schrödinger equation can be solved exactly. In heavier atoms, such as carbon, nitrogen, and oxygen, the atomic orbitals used are the 2s and 2p orbitals, similar to excited state orbitals for hydrogen. Hybrid orbitals are assumed to be mixtures of atomic orbitals, superimposed on each other in various proportions. For example, in methane, the C hybrid orbital which forms each carbon–hydrogen bond consists of 25% s character and 75% p character and is thus described as sp3 (read as s-p-three) hybridised. Quantum mechanics describes this hybrid as an sp3 wavefunction of the form N ( s + 3 p σ ) {\displaystyle N(s+{\sqrt {3}}p\sigma )} , where N is a normalisation constant (here 1/2) and pσ is a p orbital directed along the C-H axis to form a sigma bond. The ratio of coefficients (denoted λ in general) is 3 {\displaystyle \color {blue}{\sqrt {3}}} in this example. 
Since the electron density associated with an orbital is proportional to the square of the wavefunction, the ratio of p-character to s-character is λ2 = 3. The p character or the weight of the p component is N2λ2 = 3/4. == Types of hybridisation == === sp3 === Hybridisation describes the bonding of atoms from an atom's point of view. For a tetrahedrally coordinated carbon (e.g., methane CH4), the carbon should have 4 orbitals directed towards the 4 hydrogen atoms. Carbon's ground state configuration is 1s2 2s2 2p2. This configuration suggests that the carbon atom could use its two singly occupied p-type orbitals to form two covalent bonds with two hydrogen atoms in a methylene (CH2) molecule, with a hypothetical bond angle of 90° corresponding to the angle between two p orbitals on the same atom. However, the true H–C–H angle in singlet methylene is about 102°, which implies the presence of some orbital hybridisation. The carbon atom can also bond to four hydrogen atoms in methane by an excitation (or promotion) of an electron from the doubly occupied 2s orbital to the empty 2p orbital, producing four singly occupied orbitals. The energy released by the formation of two additional bonds more than compensates for the excitation energy required, energetically favoring the formation of four C-H bonds. According to quantum mechanics, the lowest energy is obtained if the four bonds are equivalent, which requires that they are formed from equivalent orbitals on the carbon. A set of four equivalent orbitals can be obtained that are linear combinations of the valence-shell (core orbitals are almost never involved in bonding) s and p wave functions, which are the four sp3 hybrids. In CH4, four sp3 hybrid orbitals are overlapped by the four hydrogens' 1s orbitals, yielding four σ (sigma) bonds (that is, four single covalent bonds) of equal length and strength. === sp2 === Other carbon compounds and other molecules may be explained in a similar way. For example, ethylene (C2H4) has a double bond between the carbons. For this molecule, carbon sp2 hybridises, because one π (pi) bond is required for the double bond between the carbons and only three σ bonds are formed per carbon atom. In sp2 hybridisation the 2s orbital is mixed with only two of the three available 2p orbitals, usually denoted 2px and 2py. The third 2p orbital (2pz) remains unhybridised, forming a total of three sp2 orbitals with one remaining p orbital. In ethylene, the two carbon atoms form a σ bond by overlapping one sp2 orbital from each carbon atom. The π bond between the carbon atoms perpendicular to the molecular plane is formed by 2p–2p overlap. Each carbon atom forms covalent C–H bonds with two hydrogens by s–sp2 overlap, all with 120° bond angles. The hydrogen–carbon bonds are all of equal strength and length, in agreement with experimental data. === sp === The chemical bonding in compounds such as alkynes with triple bonds is explained by sp hybridization. In this model, the 2s orbital is mixed with only one of the three p orbitals, resulting in two sp orbitals and two remaining p orbitals. The chemical bonding in acetylene (ethyne) (C2H2) consists of sp–sp overlap between the two carbon atoms forming a σ bond and two additional π bonds formed by p–p overlap. Each carbon also bonds to hydrogen in a σ s–sp overlap at 180° angles.
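As a small numerical check of the sp3 composition quoted above, the following Python sketch builds the four tetrahedral hybrids as the conventional combinations h = 1/2 (s ± px ± py ± pz) (this particular sign pattern is a standard choice, not something specified in the text) and verifies that they are orthonormal with 25% s character and 75% p character.

    import numpy as np

    # Four sp3 hybrids written in the orthonormal basis (s, px, py, pz).
    signs = [(+1, +1, +1), (+1, -1, -1), (-1, +1, -1), (-1, -1, +1)]
    hybrids = np.array([[0.5, 0.5 * a, 0.5 * b, 0.5 * c] for a, b, c in signs])

    # The hybrids are normalised and mutually orthogonal ...
    print(np.allclose(hybrids @ hybrids.T, np.eye(4)))       # True

    # ... and each has N^2 = 1/4 s character and N^2 * lambda^2 = 3/4 p character.
    print(hybrids[0, 0] ** 2, np.sum(hybrids[0, 1:] ** 2))   # 0.25 0.75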
== Hybridisation and molecule shape == Hybridisation helps to explain molecule shape, since the angles between bonds are approximately equal to the angles between hybrid orbitals. This is in contrast to valence shell electron-pair repulsion (VSEPR) theory, which can be used to predict molecular geometry based on empirical rules rather than on valence-bond or orbital theories. === spx hybridisation === As the valence orbitals of main group elements are the one s and three p orbitals with the corresponding octet rule, spx hybridization is used to model the shape of these molecules. === spxdy hybridisation === As the valence orbitals of transition metals are the five d, one s and three p orbitals with the corresponding 18-electron rule, spxdy hybridisation is used to model the shape of these molecules. These molecules tend to have multiple shapes corresponding to the same hybridization due to the different d-orbitals involved. A square planar complex has one unoccupied p-orbital and hence has 16 valence electrons. === sdx hybridisation === In certain transition metal complexes with a low d electron count, the p-orbitals are unoccupied and sdx hybridisation is used to model the shape of these molecules. == Hybridisation of hypervalent molecules == === Octet expansion === In some general chemistry textbooks, hybridization is presented for main group coordination number 5 and above using an "expanded octet" scheme with d-orbitals first proposed by Pauling. However, such a scheme is now considered to be incorrect in light of computational chemistry calculations. In 1990, Eric Alfred Magnusson of the University of New South Wales published a paper definitively excluding the role of d-orbital hybridisation in bonding in hypervalent compounds of second-row (period 3) elements, ending a point of contention and confusion. Part of the confusion originates from the fact that d-functions are essential in the basis sets used to describe these compounds (or else unreasonably high energies and distorted geometries result). Also, the contribution of the d-function to the molecular wavefunction is large. These facts were incorrectly interpreted to mean that d-orbitals must be involved in bonding. === Resonance === In light of computational chemistry, a better treatment would be to invoke sigma bond resonance in addition to hybridisation, which implies that each resonance structure has its own hybridisation scheme. All resonance structures must obey the octet rule. == Hybridisation in computational VB theory == While the simple model of orbital hybridisation is commonly used to explain molecular shape, hybridisation is used differently when computed in modern valence bond programs. Specifically, hybridisation is not determined a priori but is instead variationally optimized to find the lowest energy solution and then reported. This means that all artificial constraints, specifically two constraints, on orbital hybridisation are lifted: that hybridisation is restricted to integer values (isovalent hybridisation) that hybrid orbitals are orthogonal to one another (hybridisation defects) This means that in practice, hybrid orbitals do not conform to the simple ideas commonly taught and thus in scientific computational papers are simply referred to as spx, spxdy or sdx hybrids to express their nature instead of more specific integer values. === Isovalent hybridisation === Although ideal hybrid orbitals can be useful, in reality, most bonds require orbitals of intermediate character. 
This requires an extension to include flexible weightings of atomic orbitals of each type (s, p, d) and allows for a quantitative depiction of the bond formation when the molecular geometry deviates from ideal bond angles. The amount of p-character is not restricted to integer values; i.e., hybridizations like sp2.5 are also readily described. The hybridization of bond orbitals is determined by Bent's rule: "Atomic s character concentrates in orbitals directed towards electropositive substituents". For molecules with lone pairs, the bonding orbitals are isovalent spx hybrids. For example, the two bond-forming hybrid orbitals of oxygen in water can be described as sp4.0 to give the interorbital angle of 104.5°. This means that they have 20% s character and 80% p character and does not imply that a hybrid orbital is formed from one s and four p orbitals on oxygen since the 2p subshell of oxygen only contains three p orbitals. === Hybridisation defects === Hybridisation of s and p orbitals to form effective spx hybrids requires that they have comparable radial extent. While 2p orbitals are on average less than 10% larger than 2s, in part attributable to the lack of a radial node in 2p orbitals, 3p orbitals which have one radial node, exceed the 3s orbitals by 20–33%. The difference in extent of s and p orbitals increases further down a group. The hybridisation of atoms in chemical bonds can be analysed by considering localised molecular orbitals, for example using natural localised molecular orbitals in a natural bond orbital (NBO) scheme. In methane, CH4, the calculated p/s ratio is approximately 3 consistent with "ideal" sp3 hybridisation, whereas for silane, SiH4, the p/s ratio is closer to 2. A similar trend is seen for the other 2p elements. Substitution of fluorine for hydrogen further decreases the p/s ratio. The 2p elements exhibit near ideal hybridisation with orthogonal hybrid orbitals. For heavier p block elements this assumption of orthogonality cannot be justified. These deviations from the ideal hybridisation were termed hybridisation defects by Kutzelnigg. However, computational VB groups such as Gerratt, Cooper and Raimondi (SCVB) as well as Shaik and Hiberty (VBSCF) go a step further to argue that even for model molecules such as methane, ethylene and acetylene, the hybrid orbitals are already defective and nonorthogonal, with hybridisations such as sp1.76 instead of sp3 for methane. == Photoelectron spectra == One misconception concerning orbital hybridization is that it incorrectly predicts the ultraviolet photoelectron spectra of many molecules. While this is true if Koopmans' theorem is applied to localized hybrids, quantum mechanics requires that the (in this case ionized) wavefunction obey the symmetry of the molecule which implies resonance in valence bond theory. For example, in methane, the ionised states (CH4+) can be constructed out of four resonance structures attributing the ejected electron to each of the four sp3 orbitals. A linear combination of these four structures, conserving the number of structures, leads to a triply degenerate T2 state and an A1 state. The difference in energy between each ionized state and the ground state would be ionization energy, which yields two values in agreement with experimental results. 
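Returning to the isovalent example above, the spx label for two equivalent hybrids can be related to the angle between them through the standard orthogonality condition 1 + λ² cos θ = 0; this relation is assumed here for illustration (the text only quotes the resulting sp4.0 value for water). A minimal Python sketch:

    import math

    def hybridisation_from_angle(theta_deg):
        """lambda^2 and fractional s character for two equivalent sp^(lambda^2) hybrids at angle theta."""
        lam2 = -1.0 / math.cos(math.radians(theta_deg))
        s_character = 1.0 / (1.0 + lam2)
        return lam2, s_character

    lam2, s_frac = hybridisation_from_angle(104.5)   # H-O-H angle in water
    print(round(lam2, 2), round(s_frac, 2))          # ~3.99 ("sp4.0"), ~0.20 (20% s character)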
== Localized vs canonical molecular orbitals == Bonding orbitals formed from hybrid atomic orbitals may be considered as localized molecular orbitals, which can be formed from the delocalized orbitals of molecular orbital theory by an appropriate mathematical transformation. For molecules in the ground state, this transformation of the orbitals leaves the total many-electron wave function unchanged. The hybrid orbital description of the ground state is, therefore equivalent to the delocalized orbital description for ground state total energy and electron density, as well as the molecular geometry that corresponds to the minimum total energy value. === Two localized representations === Molecules with multiple bonds or multiple lone pairs can have orbitals represented in terms of sigma and pi symmetry or equivalent orbitals. Different valence bond methods use either of the two representations, which have mathematically equivalent total many-electron wave functions and are related by a unitary transformation of the set of occupied molecular orbitals. For multiple bonds, the sigma-pi representation is the predominant one compared to the equivalent orbital (bent bond) representation. In contrast, for multiple lone pairs, most textbooks use the equivalent orbital representation. However, the sigma-pi representation is also used, such as by Weinhold and Landis within the context of natural bond orbitals, a localized orbital theory containing modernized analogs of classical (valence bond/Lewis structure) bonding pairs and lone pairs. For the hydrogen fluoride molecule, for example, two F lone pairs are essentially unhybridized p orbitals, while the other is an spx hybrid orbital. An analogous consideration applies to water (one O lone pair is in a pure p orbital, another is in an spx hybrid orbital). == See also == Crystal field theory Isovalent hybridisation Ligand field theory Linear combination of atomic orbitals MO diagrams VALBOND == References == == External links == Covalent Bonds and Molecular Structure Hybridisation flash movie Hybrid orbital 3D preview program in OpenGL Understanding Concepts: Molecular Orbitals Archived 2013-04-11 at archive.today General Chemistry tutorial on orbital hybridization
Wikipedia/Hybridization_theory
In physics, a quantum (pl.: quanta) is the minimum amount of any physical entity (physical property) involved in an interaction. The fundamental notion that a property can be "quantized" is referred to as "the hypothesis of quantization". This means that the magnitude of the physical property can take on only discrete values consisting of integer multiples of one quantum. For example, a photon is a single quantum of light of a specific frequency (or of any other form of electromagnetic radiation). Similarly, the energy of an electron bound within an atom is quantized and can exist only in certain discrete values. Atoms and matter in general are stable because electrons can exist only at discrete energy levels within an atom. Quantization is one of the foundations of the much broader physics of quantum mechanics. Quantization of energy and its influence on how energy and matter interact (quantum electrodynamics) is part of the fundamental framework for understanding and describing nature. == Origin == The modern concept of the quantum in physics originates from December 14, 1900, when Max Planck reported his findings to the German Physical Society. He showed that modelling harmonic oscillators with discrete energy levels resolved a longstanding problem in the theory of blackbody radiation.: 15  In his report, Planck did not use the term quantum in the modern sense. Instead, he used the term Elementarquantum to refer to the "quantum of electricity", now known as the elementary charge. For the smallest unit of energy, he employed the term Energieelement, "energy element", rather than calling it a quantum. Shortly afterwards, in a paper published in Annalen der Physik, Planck introduced the constant h, which he termed the "quantum of action" (elementares Wirkungsquantum) in 1906. In this paper, Planck also reported more precise values for the elementary charge and the Avogadro–Loschmidt number, the number of molecules in one mole of substance. The constant h is now known as the Planck constant. After his theory was validated, Planck was awarded the Nobel Prize in Physics for his discovery in 1918. In 1905 Albert Einstein suggested that electromagnetic radiation exists in spatially localized packets which he called "quanta of light" (Lichtquanta). Einstein was able to use this hypothesis to recast Planck's treatment of the blackbody problem in a form that also explained the voltages observed in Philipp Lenard's experiments on the photoelectric effect.: 23  Shortly thereafter, the term "energy quantum" was introduced for the quantity hν. == Quantization == While quantization was first discovered in electromagnetic radiation, it describes a fundamental aspect of energy not just restricted to photons. In the attempt to bring theory into agreement with experiment, Max Planck postulated that electromagnetic energy is absorbed or emitted in discrete packets, or quanta. == See also == Introduction to quantum mechanics History of quantum mechanics == References == == Further reading == Hoffmann, Banesh (1959). The Strange story of the quantum: An account for the general reader of the growth of the ideas underlying our present atomic knowledge (2 ed.). New York: Dover. ISBN 978-0-486-20518-2. {{cite book}}: ISBN / Date incompatibility (help) Mehra, Jagdish; Rechenberg, Helmut; Mehra, Jagdish; Rechenberg, Helmut (2001). The historical development of quantum theory. 4: Pt.1, the fundamental equations of quantum mechanics, 1925-1926 (1. softcover print ed.). New York Heidelberg: Springer. 
ISBN 978-0-387-95178-2. M. Planck, A Survey of Physical Theory, transl. by R. Jones and D.H. Williams, Methuen & Co., Limited., London 1925 (Dover edition 17 May 2003, ISBN 978-0486678672) including the Nobel lecture. Rodney, Brooks (14 December 2010) Fields of Color: The theory that escaped Einstein. Allegra Print & Imaging. ISBN 979-8373308427
Wikipedia/quantum
Electric potential energy is a potential energy (measured in joules) that results from conservative Coulomb forces and is associated with the configuration of a particular set of point charges within a defined system. An object may be said to have electric potential energy by virtue of either its own electric charge or its relative position to other electrically charged objects. The term "electric potential energy" is used to describe the potential energy in systems with time-variant electric fields, while the term "electrostatic potential energy" is used to describe the potential energy in systems with time-invariant electric fields. == Definition == The electric potential energy of a system of point charges is defined as the work required to assemble this system of charges by bringing them close together, as in the system from an infinite distance. Alternatively, the electric potential energy of any given charge or system of charges is termed as the total work done by an external agent in bringing the charge or the system of charges from infinity to the present configuration without undergoing any acceleration. The electrostatic potential energy can also be defined from the electric potential as follows: == Units == The SI unit of electric potential energy is joule (named after the English physicist James Prescott Joule). In the CGS system the erg is the unit of energy, being equal to 10−7 Joules. Also electronvolts may be used, 1 eV = 1.602×10−19 Joules. == Electrostatic potential energy of one point charge == === One point charge q in the presence of another point charge Q === The electrostatic potential energy, UE, of one point charge q at position r in the presence of a point charge Q, taking an infinite separation between the charges as the reference position, is: U E ( r ) = 1 4 π ε 0 q Q r {\displaystyle U_{E}(\mathbf {r} )={\frac {1}{4\pi \varepsilon _{0}}}{\frac {qQ}{r}}} where r is the distance between the point charges q and Q, and q and Q are the charges (not the absolute values of the charges—i.e., an electron would have a negative value of charge when placed in the formula). The following outline of proof states the derivation from the definition of electric potential energy and Coulomb's law to this formula. === One point charge q in the presence of n point charges Qi === The electrostatic potential energy, UE, of one point charge q in the presence of n point charges Qi, taking an infinite separation between the charges as the reference position, is: U E ( r ) = q 4 π ε 0 ∑ i = 1 n Q i r i , {\displaystyle U_{E}(r)={\frac {q}{4\pi \varepsilon _{0}}}\sum _{i=1}^{n}{\frac {Q_{i}}{r_{i}}},} where ri is the distance between the point charges q and Qi, and q and Qi are the assigned values of the charges. == Electrostatic potential energy stored in a system of point charges == The electrostatic potential energy UE stored in a system of N charges q1, q2, …, qN at positions r1, r2, …, rN respectively, is: where, for each i value, V(ri) is the electrostatic potential due to all point charges except the one at ri, and is equal to: V ( r i ) = k e ∑ j ≠ i j = 1 N q j r i j , {\displaystyle V(\mathbf {r} _{i})=k_{e}\sum _{\stackrel {j=1}{j\neq i}}^{N}{\frac {q_{j}}{r_{ij}}},} where rij is the distance between qi and qj. 
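The formulas above are straightforward to evaluate numerically. The following Python sketch (illustrative charges and positions, SI units) sums k_e q_i q_j / r_ij over all distinct pairs of point charges, which reproduces the total stored energy of the system described above.

    import itertools

    K_E = 8.9875517923e9   # Coulomb constant k_e in N·m²/C²

    def electrostatic_energy(charges, positions):
        """U = k_e * sum over distinct pairs of q_i q_j / r_ij for point charges (SI units)."""
        U = 0.0
        for (qi, ri), (qj, rj) in itertools.combinations(zip(charges, positions), 2):
            r = sum((a - b) ** 2 for a, b in zip(ri, rj)) ** 0.5
            U += K_E * qi * qj / r
        return U

    # Two +1 nC charges 1 cm apart: U = k_e * (1e-9)**2 / 0.01, roughly 9.0e-7 J
    print(electrostatic_energy([1e-9, 1e-9], [(0.0, 0.0, 0.0), (0.01, 0.0, 0.0)]))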
=== Energy stored in a system of one point charge === The electrostatic potential energy of a system containing only one point charge is zero, as there are no other sources of electrostatic force against which an external agent must do work in moving the point charge from infinity to its final location. A common question arises concerning the interaction of a point charge with its own electrostatic potential. Since this interaction doesn't act to move the point charge itself, it doesn't contribute to the stored energy of the system. === Energy stored in a system of two point charges === Consider bringing a point charge, q, into its final position near a point charge, Q1. The electric potential V(r) due to Q1 is V ( r ) = k e Q 1 r {\displaystyle V(\mathbf {r} )=k_{e}{\frac {Q_{1}}{r}}} Hence we obtain, the electrostatic potential energy of q in the potential of Q1 as U E = 1 4 π ε 0 q Q 1 r 1 {\displaystyle U_{E}={\frac {1}{4\pi \varepsilon _{0}}}{\frac {qQ_{1}}{r_{1}}}} where r1 is the separation between the two point charges. === Energy stored in a system of three point charges === The electrostatic potential energy of a system of three charges should not be confused with the electrostatic potential energy of Q1 due to two charges Q2 and Q3, because the latter doesn't include the electrostatic potential energy of the system of the two charges Q2 and Q3. The electrostatic potential energy stored in the system of three charges is: U E = 1 4 π ε 0 [ Q 1 Q 2 r 12 + Q 1 Q 3 r 13 + Q 2 Q 3 r 23 ] {\displaystyle U_{\mathrm {E} }={\frac {1}{4\pi \varepsilon _{0}}}\left[{\frac {Q_{1}Q_{2}}{r_{12}}}+{\frac {Q_{1}Q_{3}}{r_{13}}}+{\frac {Q_{2}Q_{3}}{r_{23}}}\right]} == Energy stored in an electrostatic field distribution in vacuum == The energy density, or energy per unit volume, d U d V {\textstyle {\frac {dU}{dV}}} , of the electrostatic field of a continuous charge distribution is: u e = d U d V = 1 2 ε 0 | E | 2 . {\displaystyle u_{e}={\frac {dU}{dV}}={\frac {1}{2}}\varepsilon _{0}\left|{\mathbf {E} }\right|^{2}.} == Energy stored in electronic elements == Some elements in a circuit can convert energy from one form to another. For example, a resistor converts electrical energy to heat. This is known as the Joule effect. A capacitor stores it in its electric field. The total electrostatic potential energy stored in a capacitor is given by U E = 1 2 Q V = 1 2 C V 2 = Q 2 2 C {\displaystyle U_{E}={\frac {1}{2}}QV={\frac {1}{2}}CV^{2}={\frac {Q^{2}}{2C}}} where C is the capacitance, V is the electric potential difference, and Q the charge stored in the capacitor. The total electrostatic potential energy may also be expressed in terms of the electric field in the form U E = 1 2 ∫ V E ⋅ D d V {\displaystyle U_{E}={\frac {1}{2}}\int _{V}\mathrm {E} \cdot \mathrm {D} \,dV} where D {\displaystyle \mathrm {D} } is the electric displacement field within a dielectric material and integration is over the entire volume of the dielectric. The total electrostatic potential energy stored within a charged dielectric may also be expressed in terms of a continuous volume charge, ρ {\displaystyle \rho } , U E = 1 2 ∫ V ρ Φ d V {\displaystyle U_{E}={\frac {1}{2}}\int _{V}\rho \Phi \,dV} where integration is over the entire volume of the dielectric. These latter two expressions are valid only for cases when the smallest increment of charge is zero ( d q → 0 {\displaystyle dq\to 0} ) such as dielectrics in the presence of metallic electrodes or dielectrics containing many charges. 
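A quick numerical check of the equivalent capacitor-energy expressions above, using illustrative component values:

    # U = (1/2) Q V = (1/2) C V^2 = Q^2 / (2 C), with Q = C V.
    C = 100e-6        # a 100 µF capacitor (illustrative value)
    V = 12.0          # charged to 12 V
    Q = C * V         # stored charge in coulombs

    print(0.5 * Q * V)         # 0.0072 J
    print(0.5 * C * V ** 2)    # 0.0072 J
    print(Q ** 2 / (2 * C))    # 0.0072 J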
Note that a virtual experiment based on the energy transfer between capacitor plates reveals that an additional term should be taken into account when dealing with semiconductors for instance. While this extra energy cancels when dealing with insulators, the derivation predicts that it cannot be ignored as it may exceed the polarization energy. == Notes == == References == == External links == Media related to Electric potential energy at Wikimedia Commons
Wikipedia/Coulomb_potential_energy
In physics, an observable is a physical property or physical quantity that can be measured. In classical mechanics, an observable is a real-valued "function" on the set of all possible system states, e.g., position and momentum. In quantum mechanics, an observable is an operator, or gauge, where the property of the quantum state can be determined by some sequence of operations. For example, these operations might involve submitting the system to various electromagnetic fields and eventually reading a value. Physically meaningful observables must also satisfy transformation laws that relate observations performed by different observers in different frames of reference. These transformation laws are automorphisms of the state space, that is bijective transformations that preserve certain mathematical properties of the space in question. == Quantum mechanics == In quantum mechanics, observables manifest as self-adjoint operators on a separable complex Hilbert space representing the quantum state space. Observables assign values to outcomes of particular measurements, corresponding to the eigenvalue of the operator. If these outcomes represent physically allowable states (i.e. those that belong to the Hilbert space) the eigenvalues are real; however, the converse is not necessarily true. As a consequence, only certain measurements can determine the value of an observable for some state of a quantum system. In classical mechanics, any measurement can be made to determine the value of an observable. The relation between the state of a quantum system and the value of an observable requires some linear algebra for its description. In the mathematical formulation of quantum mechanics, up to a phase constant, pure states are given by non-zero vectors in a Hilbert space V. Two vectors v and w are considered to specify the same state if and only if w = c v {\displaystyle \mathbf {w} =c\mathbf {v} } for some non-zero c ∈ C {\displaystyle c\in \mathbb {C} } . Observables are given by self-adjoint operators on V. Not every self-adjoint operator corresponds to a physically meaningful observable. Also, not all physical observables are associated with non-trivial self-adjoint operators. For example, in quantum theory, mass appears as a parameter in the Hamiltonian, not as a non-trivial operator. In the case of transformation laws in quantum mechanics, the requisite automorphisms are unitary (or antiunitary) linear transformations of the Hilbert space V. Under Galilean relativity or special relativity, the mathematics of frames of reference is particularly simple, considerably restricting the set of physically meaningful observables. In quantum mechanics, measurement of observables exhibits some seemingly unintuitive properties. Specifically, if a system is in a state described by a vector in a Hilbert space, the measurement process affects the state in a non-deterministic but statistically predictable way. In particular, after a measurement is applied, the state description by a single vector may be destroyed, being replaced by a statistical ensemble. The irreversible nature of measurement operations in quantum physics is sometimes referred to as the measurement problem and is described mathematically by quantum operations. 
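As a minimal numerical illustration of these statements (the operator and state below are arbitrary examples, not taken from the article), a self-adjoint matrix has real eigenvalues, which play the role of the possible measurement outcomes, and the outcome probabilities for a given state are the squared overlaps with the corresponding eigenvectors (the Born rule discussed later):

    import numpy as np

    A = np.array([[0.0, 1.0],
                  [1.0, 0.0]])          # a self-adjoint operator (Pauli-X)
    phi = np.array([1.0, 0.0])          # a normalised state vector

    eigenvalues, eigenvectors = np.linalg.eigh(A)   # real eigenvalues: -1 and +1

    # Probability of observing each eigenvalue = squared overlap with its eigenvector.
    probs = np.abs(eigenvectors.conj().T @ phi) ** 2
    print(eigenvalues)   # [-1.  1.]
    print(probs)         # [0.5 0.5]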
By the structure of quantum operations, this description is mathematically equivalent to that offered by the relative state interpretation where the original system is regarded as a subsystem of a larger system and the state of the original system is given by the partial trace of the state of the larger system. In quantum mechanics, dynamical variables A {\displaystyle A} such as position, translational (linear) momentum, orbital angular momentum, spin, and total angular momentum are each associated with a self-adjoint operator A ^ {\displaystyle {\hat {A}}} that acts on the state of the quantum system. The eigenvalues of operator A ^ {\displaystyle {\hat {A}}} correspond to the possible values that the dynamical variable can be observed as having. For example, suppose | ψ a ⟩ {\displaystyle |\psi _{a}\rangle } is an eigenket (eigenvector) of the observable A ^ {\displaystyle {\hat {A}}} , with eigenvalue a {\displaystyle a} , and exists in a Hilbert space. Then A ^ | ψ a ⟩ = a | ψ a ⟩ . {\displaystyle {\hat {A}}|\psi _{a}\rangle =a|\psi _{a}\rangle .} This eigenket equation says that if a measurement of the observable A ^ {\displaystyle {\hat {A}}} is made while the system of interest is in the state | ψ a ⟩ {\displaystyle |\psi _{a}\rangle } , then the observed value of that particular measurement must return the eigenvalue a {\displaystyle a} with certainty. However, if the system of interest is in the general state | ϕ ⟩ ∈ H {\displaystyle |\phi \rangle \in {\mathcal {H}}} (and | ϕ ⟩ {\displaystyle |\phi \rangle } and | ψ a ⟩ {\displaystyle |\psi _{a}\rangle } are unit vectors, and the eigenspace of a {\displaystyle a} is one-dimensional), then the eigenvalue a {\displaystyle a} is returned with probability | ⟨ ψ a | ϕ ⟩ | 2 {\displaystyle |\langle \psi _{a}|\phi \rangle |^{2}} , by the Born rule. === Compatible and incompatible observables in quantum mechanics === A crucial difference between classical quantities and quantum mechanical observables is that some pairs of quantum observables may not be simultaneously measurable, a property referred to as complementarity. This is mathematically expressed by non-commutativity of their corresponding operators, to the effect that the commutator [ A ^ , B ^ ] := A ^ B ^ − B ^ A ^ ≠ 0 ^ . {\displaystyle \left[{\hat {A}},{\hat {B}}\right]:={\hat {A}}{\hat {B}}-{\hat {B}}{\hat {A}}\neq {\hat {0}}.} This inequality expresses a dependence of measurement results on the order in which measurements of observables A ^ {\displaystyle {\hat {A}}} and B ^ {\displaystyle {\hat {B}}} are performed. A measurement of A ^ {\displaystyle {\hat {A}}} alters the quantum state in a way that is incompatible with the subsequent measurement of B ^ {\displaystyle {\hat {B}}} and vice versa. Observables corresponding to commuting operators are called compatible observables. For example, momentum along say the x {\displaystyle x} and y {\displaystyle y} axes are compatible. Observables corresponding to non-commuting operators are called incompatible observables or complementary variables. For example, the position and momentum along the same axis are incompatible.: 155  Incompatible observables cannot have a complete set of common eigenfunctions. Note that there can be some simultaneous eigenvectors of A ^ {\displaystyle {\hat {A}}} and B ^ {\displaystyle {\hat {B}}} , but not enough in number to constitute a complete basis. == See also == == References == == Further reading == Auyang, Sunny Y. (1995). How is quantum field theory possible?. 
New York, N.Y.: Oxford University Press. ISBN 978-0195093452. Cohen-Tannoudji, Claude; Diu, Bernard; Laloë, Franck (2019). Quantum Mechanics, Volume 1. Weinheim: John Wiley & Sons. ISBN 978-3-527-34553-3. de la Madrid Modino, R. (2001). Quantum mechanics in rigged Hilbert space language (PhD thesis). Universidad de Valladolid. Teschl, G. (2014). Mathematical Methods in Quantum Mechanics. Providence (R.I): American Mathematical Soc. ISBN 978-1-4704-1704-8. von Neumann, John (1996). Mathematical foundations of quantum mechanics. Translated by Robert T. Beyer (12. print., 1. paperback print. ed.). Princeton, N.J.: Princeton Univ. Press. ISBN 978-0691028934. Varadarajan, V.S. (2007). Geometry of quantum theory (2nd ed.). New York: Springer. ISBN 9780387493862. Weyl, Hermann (2009). "Appendix C: Quantum physics and causality". Philosophy of mathematics and natural science. Revised and augmented English edition based on a translation by Olaf Helmer. Princeton, N.J.: Princeton University Press. pp. 253–265. ISBN 9780691141206. Moretti, Valter (2017). Spectral Theory and Quantum Mechanics: Mathematical Foundations of Quantum Theories, Symmetries and Introduction to the Algebraic Formulation (2 ed.). Springer. ISBN 978-3319707068. Moretti, Valter (2019). Fundamental Mathematical Structures of Quantum Theory: Spectral Theory, Foundational Issues, Symmetries, Algebraic Formulation. Springer. ISBN 978-3030183462.
Wikipedia/Observable_(physics)
In mathematics, the commutator gives an indication of the extent to which a certain binary operation fails to be commutative. There are different definitions used in group theory and ring theory. == Group theory == The commutator of two elements, g and h, of a group G, is the element [g, h] = g−1h−1gh. This element is equal to the group's identity if and only if g and h commute (that is, if and only if gh = hg). The set of all commutators of a group is not in general closed under the group operation, but the subgroup of G generated by all commutators is closed and is called the derived group or the commutator subgroup of G. Commutators are used to define nilpotent and solvable groups and the largest abelian quotient group. The definition of the commutator above is used throughout this article, but many group theorists define the commutator as [g, h] = ghg−1h−1. Using the first definition, this can be expressed as [g−1, h−1]. === Identities (group theory) === Commutator identities are an important tool in group theory. The expression ax denotes the conjugate of a by x, defined as x−1ax. x y = x − 1 [ x , y ] . {\displaystyle x^{y}=x^{-1}[x,y].} [ y , x ] = [ x , y ] − 1 . {\displaystyle [y,x]=[x,y]^{-1}.} [ x , z y ] = [ x , y ] ⋅ [ x , z ] y {\displaystyle [x,zy]=[x,y]\cdot [x,z]^{y}} and [ x z , y ] = [ x , y ] z ⋅ [ z , y ] . {\displaystyle [xz,y]=[x,y]^{z}\cdot [z,y].} [ x , y − 1 ] = [ y , x ] y − 1 {\displaystyle \left[x,y^{-1}\right]=[y,x]^{y^{-1}}} and [ x − 1 , y ] = [ y , x ] x − 1 . {\displaystyle \left[x^{-1},y\right]=[y,x]^{x^{-1}}.} [ [ x , y − 1 ] , z ] y ⋅ [ [ y , z − 1 ] , x ] z ⋅ [ [ z , x − 1 ] , y ] x = 1 {\displaystyle \left[\left[x,y^{-1}\right],z\right]^{y}\cdot \left[\left[y,z^{-1}\right],x\right]^{z}\cdot \left[\left[z,x^{-1}\right],y\right]^{x}=1} and [ [ x , y ] , z x ] ⋅ [ [ z , x ] , y z ] ⋅ [ [ y , z ] , x y ] = 1. {\displaystyle \left[\left[x,y\right],z^{x}\right]\cdot \left[[z,x],y^{z}\right]\cdot \left[[y,z],x^{y}\right]=1.} Identity (5) is also known as the Hall–Witt identity, after Philip Hall and Ernst Witt. It is a group-theoretic analogue of the Jacobi identity for the ring-theoretic commutator (see next section). N.B., the above definition of the conjugate of a by x is used by some group theorists. Many other group theorists define the conjugate of a by x as xax−1. This is often written x a {\displaystyle {}^{x}a} . Similar identities hold for these conventions. Many identities that are true modulo certain subgroups are also used. These can be particularly useful in the study of solvable groups and nilpotent groups. For instance, in any group, second powers behave well: ( x y ) 2 = x 2 y 2 [ y , x ] [ [ y , x ] , y ] . {\displaystyle (xy)^{2}=x^{2}y^{2}[y,x][[y,x],y].} If the derived subgroup is central, then ( x y ) n = x n y n [ y , x ] ( n 2 ) . {\displaystyle (xy)^{n}=x^{n}y^{n}[y,x]^{\binom {n}{2}}.} == Ring theory == Rings often do not support division. Thus, the commutator of two elements a and b of a ring (or any associative algebra) is defined differently by [ a , b ] = a b − b a . {\displaystyle [a,b]=ab-ba.} The commutator is zero if and only if a and b commute. In linear algebra, if two endomorphisms of a space are represented by commuting matrices in terms of one basis, then they are so represented in terms of every basis. By using the commutator as a Lie bracket, every associative algebra can be turned into a Lie algebra. The anticommutator of two elements a and b of a ring or associative algebra is defined by { a , b } = a b + b a . 
{\displaystyle \{a,b\}=ab+ba.} Sometimes [ a , b ] + {\displaystyle [a,b]_{+}} is used to denote anticommutator, while [ a , b ] − {\displaystyle [a,b]_{-}} is then used for commutator. The anticommutator is used less often, but can be used to define Clifford algebras and Jordan algebras and in the derivation of the Dirac equation in particle physics. The commutator of two operators acting on a Hilbert space is a central concept in quantum mechanics, since it quantifies how well the two observables described by these operators can be measured simultaneously. The uncertainty principle is ultimately a theorem about such commutators, by virtue of the Robertson–Schrödinger relation. In phase space, equivalent commutators of function star-products are called Moyal brackets and are completely isomorphic to the Hilbert space commutator structures mentioned. === Identities (ring theory) === The commutator has the following properties: ==== Lie-algebra identities ==== [ A + B , C ] = [ A , C ] + [ B , C ] {\displaystyle [A+B,C]=[A,C]+[B,C]} [ A , A ] = 0 {\displaystyle [A,A]=0} [ A , B ] = − [ B , A ] {\displaystyle [A,B]=-[B,A]} [ A , [ B , C ] ] + [ B , [ C , A ] ] + [ C , [ A , B ] ] = 0 {\displaystyle [A,[B,C]]+[B,[C,A]]+[C,[A,B]]=0} Relation (3) is called anticommutativity, while (4) is the Jacobi identity. ==== Additional identities ==== [ A , B C ] = [ A , B ] C + B [ A , C ] {\displaystyle [A,BC]=[A,B]C+B[A,C]} [ A , B C D ] = [ A , B ] C D + B [ A , C ] D + B C [ A , D ] {\displaystyle [A,BCD]=[A,B]CD+B[A,C]D+BC[A,D]} [ A , B C D E ] = [ A , B ] C D E + B [ A , C ] D E + B C [ A , D ] E + B C D [ A , E ] {\displaystyle [A,BCDE]=[A,B]CDE+B[A,C]DE+BC[A,D]E+BCD[A,E]} [ A B , C ] = A [ B , C ] + [ A , C ] B {\displaystyle [AB,C]=A[B,C]+[A,C]B} [ A B C , D ] = A B [ C , D ] + A [ B , D ] C + [ A , D ] B C {\displaystyle [ABC,D]=AB[C,D]+A[B,D]C+[A,D]BC} [ A B C D , E ] = A B C [ D , E ] + A B [ C , E ] D + A [ B , E ] C D + [ A , E ] B C D {\displaystyle [ABCD,E]=ABC[D,E]+AB[C,E]D+A[B,E]CD+[A,E]BCD} [ A , B + C ] = [ A , B ] + [ A , C ] {\displaystyle [A,B+C]=[A,B]+[A,C]} [ A + B , C + D ] = [ A , C ] + [ A , D ] + [ B , C ] + [ B , D ] {\displaystyle [A+B,C+D]=[A,C]+[A,D]+[B,C]+[B,D]} [ A B , C D ] = A [ B , C ] D + [ A , C ] B D + C A [ B , D ] + C [ A , D ] B = A [ B , C ] D + A C [ B , D ] + [ A , C ] D B + C [ A , D ] B {\displaystyle [AB,CD]=A[B,C]D+[A,C]BD+CA[B,D]+C[A,D]B=A[B,C]D+AC[B,D]+[A,C]DB+C[A,D]B} [ [ A , C ] , [ B , D ] ] = [ [ [ A , B ] , C ] , D ] + [ [ [ B , C ] , D ] , A ] + [ [ [ C , D ] , A ] , B ] + [ [ [ D , A ] , B ] , C ] {\displaystyle [[A,C],[B,D]]=[[[A,B],C],D]+[[[B,C],D],A]+[[[C,D],A],B]+[[[D,A],B],C]} If A is a fixed element of a ring R, identity (1) can be interpreted as a Leibniz rule for the map ad A : R → R {\displaystyle \operatorname {ad} _{A}:R\rightarrow R} given by ad A ⁡ ( B ) = [ A , B ] {\displaystyle \operatorname {ad} _{A}(B)=[A,B]} . In other words, the map adA defines a derivation on the ring R. Identities (2), (3) represent Leibniz rules for more than two factors, and are valid for any derivation. Identities (4)–(6) can also be interpreted as Leibniz rules. Identities (7), (8) express Z-bilinearity. 
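These definitions and identities are easy to spot-check numerically. The following Python sketch uses arbitrary random matrices (purely illustrative) to verify the Jacobi identity, the Leibniz-type identity [A, BC] = [A, B]C + B[A, C], and the symmetry of the anticommutator:

    import numpy as np

    rng = np.random.default_rng(0)
    A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

    comm = lambda X, Y: X @ Y - Y @ X     # ring commutator [X, Y] = XY - YX
    anti = lambda X, Y: X @ Y + Y @ X     # anticommutator {X, Y} = XY + YX

    jacobi = comm(A, comm(B, C)) + comm(B, comm(C, A)) + comm(C, comm(A, B))
    leibniz = comm(A, B @ C) - (comm(A, B) @ C + B @ comm(A, C))

    print(np.allclose(jacobi, 0))               # True  (Jacobi identity)
    print(np.allclose(leibniz, 0))              # True  ([A, BC] = [A, B]C + B[A, C])
    print(np.allclose(anti(A, B), anti(B, A)))  # True  (the anticommutator is symmetric)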
From identity (9), one finds that the commutator of integer powers of ring elements is: [ A N , B M ] = ∑ n = 0 N − 1 ∑ m = 0 M − 1 A n B m [ A , B ] B N − n − 1 A M − m − 1 = ∑ n = 0 N − 1 ∑ m = 0 M − 1 B n A m [ A , B ] A N − n − 1 B M − m − 1 {\displaystyle [A^{N},B^{M}]=\sum _{n=0}^{N-1}\sum _{m=0}^{M-1}A^{n}B^{m}[A,B]B^{N-n-1}A^{M-m-1}=\sum _{n=0}^{N-1}\sum _{m=0}^{M-1}B^{n}A^{m}[A,B]A^{N-n-1}B^{M-m-1}} Some of the above identities can be extended to the anticommutator using the above ± subscript notation. For example: [ A B , C ] ± = A [ B , C ] − + [ A , C ] ± B {\displaystyle [AB,C]_{\pm }=A[B,C]_{-}+[A,C]_{\pm }B} [ A B , C D ] ± = A [ B , C ] − D + A C [ B , D ] − + [ A , C ] − D B + C [ A , D ] ± B {\displaystyle [AB,CD]_{\pm }=A[B,C]_{-}D+AC[B,D]_{-}+[A,C]_{-}DB+C[A,D]_{\pm }B} [ [ A , B ] , [ C , D ] ] = [ [ [ B , C ] + , A ] + , D ] − [ [ [ B , D ] + , A ] + , C ] + [ [ [ A , D ] + , B ] + , C ] − [ [ [ A , C ] + , B ] + , D ] {\displaystyle [[A,B],[C,D]]=[[[B,C]_{+},A]_{+},D]-[[[B,D]_{+},A]_{+},C]+[[[A,D]_{+},B]_{+},C]-[[[A,C]_{+},B]_{+},D]} [ A , [ B , C ] ± ] + [ B , [ C , A ] ± ] + [ C , [ A , B ] ± ] = 0 {\displaystyle \left[A,[B,C]_{\pm }\right]+\left[B,[C,A]_{\pm }\right]+\left[C,[A,B]_{\pm }\right]=0} [ A , B C ] ± = [ A , B ] − C + B [ A , C ] ± = [ A , B ] ± C ∓ B [ A , C ] − {\displaystyle [A,BC]_{\pm }=[A,B]_{-}C+B[A,C]_{\pm }=[A,B]_{\pm }C\mp B[A,C]_{-}} [ A , B C ] = [ A , B ] ± C ∓ B [ A , C ] ± {\displaystyle [A,BC]=[A,B]_{\pm }C\mp B[A,C]_{\pm }} ==== Exponential identities ==== Consider a ring or algebra in which the exponential e A = exp ⁡ ( A ) = 1 + A + 1 2 ! A 2 + ⋯ {\displaystyle e^{A}=\exp(A)=1+A+{\tfrac {1}{2!}}A^{2}+\cdots } can be meaningfully defined, such as a Banach algebra or a ring of formal power series. In such a ring, Hadamard's lemma applied to nested commutators gives: e A B e − A = B + [ A , B ] + 1 2 ! [ A , [ A , B ] ] + 1 3 ! [ A , [ A , [ A , B ] ] ] + ⋯ = e ad A ( B ) . {\textstyle e^{A}Be^{-A}\ =\ B+[A,B]+{\frac {1}{2!}}[A,[A,B]]+{\frac {1}{3!}}[A,[A,[A,B]]]+\cdots \ =\ e^{\operatorname {ad} _{A}}(B).} (For the last expression, see Adjoint derivation below.) This formula underlies the Baker–Campbell–Hausdorff expansion of log(exp(A) exp(B)). A similar expansion expresses the group commutator of expressions e A {\displaystyle e^{A}} (analogous to elements of a Lie group) in terms of a series of nested commutators (Lie brackets), e A e B e − A e − B = exp ( [ A , B ] + 1 2 ! [ A + B , [ A , B ] ] + 1 3 ! ( 1 2 [ A , [ B , [ B , A ] ] ] + [ A + B , [ A + B , [ A , B ] ] ] ) + ⋯ ) . {\displaystyle e^{A}e^{B}e^{-A}e^{-B}=\exp \!\left([A,B]+{\frac {1}{2!}}[A{+}B,[A,B]]+{\frac {1}{3!}}\left({\frac {1}{2}}[A,[B,[B,A]]]+[A{+}B,[A{+}B,[A,B]]]\right)+\cdots \right).} == Graded rings and algebras == When dealing with graded algebras, the commutator is usually replaced by the graded commutator, defined in homogeneous components as [ ω , η ] g r := ω η − ( − 1 ) deg ⁡ ω deg ⁡ η η ω . {\displaystyle [\omega ,\eta ]_{gr}:=\omega \eta -(-1)^{\deg \omega \deg \eta }\eta \omega .} == Adjoint derivation == Especially if one deals with multiple commutators in a ring R, another notation turns out to be useful. For an element x ∈ R {\displaystyle x\in R} , we define the adjoint mapping a d x : R → R {\displaystyle \mathrm {ad} _{x}:R\to R} by: ad x ⁡ ( y ) = [ x , y ] = x y − y x . {\displaystyle \operatorname {ad} _{x}(y)=[x,y]=xy-yx.} This mapping is a derivation on the ring R: a d x ( y z ) = a d x ( y ) z + y a d x ( z ) . 
{\displaystyle \mathrm {ad} _{x}\!(yz)\ =\ \mathrm {ad} _{x}\!(y)\,z\,+\,y\,\mathrm {ad} _{x}\!(z).} By the Jacobi identity, it is also a derivation over the commutation operation: a d x [ y , z ] = [ a d x ( y ) , z ] + [ y , a d x ( z ) ] . {\displaystyle \mathrm {ad} _{x}[y,z]\ =\ [\mathrm {ad} _{x}\!(y),z]\,+\,[y,\mathrm {ad} _{x}\!(z)].} Composing such mappings, we get for example ad x ⁡ ad y ⁡ ( z ) = [ x , [ y , z ] ] {\displaystyle \operatorname {ad} _{x}\operatorname {ad} _{y}(z)=[x,[y,z]\,]} and ad x 2 ( z ) = ad x ( ad x ( z ) ) = [ x , [ x , z ] ] . {\displaystyle \operatorname {ad} _{x}^{2}\!(z)\ =\ \operatorname {ad} _{x}\!(\operatorname {ad} _{x}\!(z))\ =\ [x,[x,z]\,].} We may consider a d {\displaystyle \mathrm {ad} } itself as a mapping, a d : R → E n d ( R ) {\displaystyle \mathrm {ad} :R\to \mathrm {End} (R)} , where E n d ( R ) {\displaystyle \mathrm {End} (R)} is the ring of mappings from R to itself with composition as the multiplication operation. Then a d {\displaystyle \mathrm {ad} } is a Lie algebra homomorphism, preserving the commutator: ad [ x , y ] = [ ad x , ad y ] . {\displaystyle \operatorname {ad} _{[x,y]}=\left[\operatorname {ad} _{x},\operatorname {ad} _{y}\right].} By contrast, it is not always a ring homomorphism: usually ad x y ≠ ad x ⁡ ad y {\displaystyle \operatorname {ad} _{xy}\,\neq \,\operatorname {ad} _{x}\operatorname {ad} _{y}} . === General Leibniz rule === The general Leibniz rule, expanding repeated derivatives of a product, can be written abstractly using the adjoint representation: x n y = ∑ k = 0 n ( n k ) ad x k ( y ) x n − k . {\displaystyle x^{n}y=\sum _{k=0}^{n}{\binom {n}{k}}\operatorname {ad} _{x}^{k}\!(y)\,x^{n-k}.} Replacing x {\displaystyle x} by the differentiation operator ∂ {\displaystyle \partial } , and y {\displaystyle y} by the multiplication operator m f : g ↦ f g {\displaystyle m_{f}:g\mapsto fg} , we get ad ⁡ ( ∂ ) ( m f ) = m ∂ ( f ) {\displaystyle \operatorname {ad} (\partial )(m_{f})=m_{\partial (f)}} , and applying both sides to a function g, the identity becomes the usual Leibniz rule for the nth derivative ∂ n ( f g ) {\displaystyle \partial ^{n}\!(fg)} . == See also == Anticommutativity Associator Baker–Campbell–Hausdorff formula Canonical commutation relation Centralizer a.k.a. commutant Derivation (abstract algebra) Moyal bracket Pincherle derivative Poisson bracket Ternary commutator Three subgroups lemma == Notes == == References == Fraleigh, John B. (1976), A First Course In Abstract Algebra (2nd ed.), Reading: Addison-Wesley, ISBN 0-201-01984-1 Griffiths, David J. (2004), Introduction to Quantum Mechanics (2nd ed.), Prentice Hall, ISBN 0-13-805326-X Herstein, I. N. (1975), Topics In Algebra (2nd ed.), Wiley, ISBN 0471010901 Lavrov, P.M. (2014), "Jacobi -type identities in algebras and superalgebras", Theoretical and Mathematical Physics, 179 (2): 550–558, arXiv:1304.5050, Bibcode:2014TMP...179..550L, doi:10.1007/s11232-014-0161-2, S2CID 119175276 Liboff, Richard L. (2003), Introductory Quantum Mechanics (4th ed.), Addison-Wesley, ISBN 0-8053-8714-5 McKay, Susan (2000), Finite p-groups, Queen Mary Maths Notes, vol. 18, University of London, ISBN 978-0-902480-17-9, MR 1802994 McMahon, D. (2008), Quantum Field Theory, McGraw Hill, ISBN 978-0-07-154382-8 == Further reading == McKenzie, R.; Snow, J. (2005), "Congruence modular varieties: commutator theory", in Kudryavtsev, V. B.; Rosenberg, I. G. (eds.), Structural Theory of Automata, Semigroups, and Universal Algebra, NATO Science Series II, vol. 
207, Springer, pp. 273–329, doi:10.1007/1-4020-3817-8_11, ISBN 9781402038174 == External links == "Commutator", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Commutator_(physics)
Algorithmica is a monthly peer-reviewed scientific journal focusing on research and the application of computer science algorithms. The journal was established in 1986 and is published by Springer Science+Business Media. The editor-in-chief is Mohammad Hajiaghayi. Subject coverage includes sorting, searching, data structures, computational geometry, linear programming, VLSI, distributed computing, parallel processing, computer-aided design, robotics, graphics, database design, and software tools. == Abstracting and indexing == The journal is covered by a number of abstracting and indexing services. == See also == ACM Transactions on Algorithms Algorithms (journal) Discrete Mathematics & Theoretical Computer Science == References == "Journal Rankings". CORE: The Computing Research and Education Association of Australasia. July 2008. Archived from the original on 2014-01-25. Retrieved 2010-11-05. Algorithmica received the highest possible ranking "A*". == External links == Springer information
Wikipedia/Algorithmica
Quantum optimization algorithms are quantum algorithms that are used to solve optimization problems. Mathematical optimization deals with finding the best solution to a problem (according to some criteria) from a set of possible solutions. Mostly, the optimization problem is formulated as a minimization problem, where one tries to minimize an error which depends on the solution: the optimal solution has the minimal error. Different optimization techniques are applied in various fields such as mechanics, economics and engineering, and as the complexity and amount of data involved rise, more efficient ways of solving optimization problems are needed. Quantum computing may allow problems which are not practically feasible on classical computers to be solved, or suggest a considerable speed up with respect to the best known classical algorithm. == Quantum data fitting == Data fitting is a process of constructing a mathematical function that best fits a set of data points. The fit's quality is measured by some criteria, usually the distance between the function and the data points. === Quantum least squares fitting === One of the most common types of data fitting is solving the least squares problem, minimizing the sum of the squares of differences between the data points and the fitted function. The algorithm is given N {\displaystyle N} input data points ( x 1 , y 1 ) , ( x 2 , y 2 ) , . . . , ( x N , y N ) {\displaystyle (x_{1},y_{1}),(x_{2},y_{2}),...,(x_{N},y_{N})} and M {\displaystyle M} continuous functions f 1 , f 2 , . . . , f M {\displaystyle f_{1},f_{2},...,f_{M}} . The algorithm finds and gives as output a continuous function f λ → {\displaystyle f_{\vec {\lambda }}} that is a linear combination of f j {\displaystyle f_{j}} : f λ → ( x ) = ∑ j = 1 M f j ( x ) λ j {\displaystyle f_{\vec {\lambda }}(x)=\sum _{j=1}^{M}f_{j}(x)\lambda _{j}} In other words, the algorithm finds the complex coefficients λ j {\displaystyle \lambda _{j}} , and thus the vector λ → = ( λ 1 , λ 2 , . . . , λ M ) {\displaystyle {\vec {\lambda }}=(\lambda _{1},\lambda _{2},...,\lambda _{M})} . The algorithm is aimed at minimizing the error, which is given by: E = ∑ i = 1 N | f λ → ( x i ) − y i | 2 = ∑ i = 1 N | ∑ j = 1 M f j ( x i ) λ j − y i | 2 = | F λ → − y → | 2 {\displaystyle E=\sum _{i=1}^{N}\left\vert f_{\vec {\lambda }}(x_{i})-y_{i}\right\vert ^{2}=\sum _{i=1}^{N}\left\vert \sum _{j=1}^{M}f_{j}(x_{i})\lambda _{j}-y_{i}\right\vert ^{2}=\left\vert F{\vec {\lambda }}-{\vec {y}}\right\vert ^{2}} where F {\displaystyle F} is defined to be the following matrix: F = ( f 1 ( x 1 ) ⋯ f M ( x 1 ) f 1 ( x 2 ) ⋯ f M ( x 2 ) ⋮ ⋱ ⋮ f 1 ( x N ) ⋯ f M ( x N ) ) {\displaystyle {F}={\begin{pmatrix}f_{1}(x_{1})&\cdots &f_{M}(x_{1})\\f_{1}(x_{2})&\cdots &f_{M}(x_{2})\\\vdots &\ddots &\vdots \\f_{1}(x_{N})&\cdots &f_{M}(x_{N})\\\end{pmatrix}}} The quantum least-squares fitting algorithm makes use of a version of Harrow, Hassidim, and Lloyd's quantum algorithm for linear systems of equations (HHL), and outputs the coefficients λ j {\displaystyle \lambda _{j}} and the fit quality estimation E {\displaystyle E} . It consists of three subroutines: an algorithm for performing a pseudo-inverse operation, one routine for the fit quality estimation, and an algorithm for learning the fit parameters. 
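For intuition, the classical least-squares problem that this quantum algorithm targets can be written down directly. The sketch below (illustrative basis functions and synthetic data, with a plain NumPy pseudo-inverse in place of the HHL-based subroutines) builds the matrix F from the basis functions evaluated at the data points and minimises |Fλ − y|².

    import numpy as np

    basis = [lambda x: np.ones_like(x), lambda x: x, lambda x: x ** 2]   # f_1, f_2, f_3

    x = np.linspace(0.0, 1.0, 20)
    y = 1.0 + 2.0 * x - 3.0 * x ** 2 + 0.05 * np.random.default_rng(1).standard_normal(x.size)

    F = np.column_stack([f(x) for f in basis])      # the N x M matrix F of the text
    lam, *_ = np.linalg.lstsq(F, y, rcond=None)     # classical pseudo-inverse solution
    E = np.sum((F @ lam - y) ** 2)                  # fit-quality estimate |F lam - y|^2

    print(np.round(lam, 2))    # close to the true coefficients [1, 2, -3]
    print(float(E))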
Because the quantum algorithm is mainly based on the HHL algorithm, it suggests an exponential improvement in the case where F {\displaystyle F} is sparse and the condition number (namely, the ratio between the largest and the smallest eigenvalues) of both F F † {\displaystyle FF^{\dagger }} and F † F {\displaystyle F^{\dagger }F} is small. == Quantum semidefinite programming == Semidefinite programming (SDP) is an optimization subfield dealing with the optimization of a linear objective function (a user-specified function to be minimized or maximized), over the intersection of the cone of positive semidefinite matrices with an affine space. The objective function is an inner product of a matrix C {\displaystyle C} (given as an input) with the variable X {\displaystyle X} . Denote by S n {\displaystyle \mathbb {S} ^{n}} the space of all n × n {\displaystyle n\times n} symmetric matrices. The variable X {\displaystyle X} must lie in the (closed convex) cone of positive semidefinite symmetric matrices S + n {\displaystyle \mathbb {S} _{+}^{n}} . The inner product of two matrices is defined as: ⟨ A , B ⟩ S n = t r ( A T B ) = ∑ i = 1 , j = 1 n A i j B i j . {\displaystyle \langle A,B\rangle _{\mathbb {S} ^{n}}={\rm {tr}}(A^{T}B)=\sum _{i=1,j=1}^{n}A_{ij}B_{ij}.} The problem may have additional constraints (given as inputs), also usually formulated as inner products. Each constraint forces the inner product of the matrices A k {\displaystyle A_{k}} (given as an input) with the optimization variable X {\displaystyle X} to be smaller than a specified value b k {\displaystyle b_{k}} (given as an input). Finally, the SDP problem can be written as: min X ∈ S n ⟨ C , X ⟩ S n subject to ⟨ A k , X ⟩ S n ≤ b k , k = 1 , … , m X ⪰ 0 {\displaystyle {\begin{array}{rl}{\displaystyle \min _{X\in \mathbb {S} ^{n}}}&\langle C,X\rangle _{\mathbb {S} ^{n}}\\{\text{subject to}}&\langle A_{k},X\rangle _{\mathbb {S} ^{n}}\leq b_{k},\quad k=1,\ldots ,m\\&X\succeq 0\end{array}}} The best classical algorithm is not known to unconditionally run in polynomial time. The corresponding feasibility problem is known to either lie outside of the union of the complexity classes NP and co-NP, or in the intersection of NP and co-NP. === The quantum algorithm === The algorithm inputs are A 1 . . . A m , C , b 1 . . . b m {\displaystyle A_{1}...A_{m},C,b_{1}...b_{m}} and parameters regarding the solution's trace, precision and optimal value (the objective function's value at the optimal point). The quantum algorithm consists of several iterations. In each iteration, it solves a feasibility problem, namely, finds any solution satisfying the following conditions (giving a threshold t {\displaystyle t} ): ⟨ C , X ⟩ S n ≤ t ⟨ A k , X ⟩ S n ≤ b k , k = 1 , … , m X ⪰ 0 {\displaystyle {\begin{array}{lr}\langle C,X\rangle _{\mathbb {S} ^{n}}\leq t\\\langle A_{k},X\rangle _{\mathbb {S} ^{n}}\leq b_{k},\quad k=1,\ldots ,m\\X\succeq 0\end{array}}} In each iteration, a different threshold t {\displaystyle t} is chosen, and the algorithm outputs either a solution X {\displaystyle X} such that ⟨ C , X ⟩ S n ≤ t {\displaystyle \langle C,X\rangle _{\mathbb {S} ^{n}}\leq t} (and the other constraints are satisfied, too) or an indication that no such solution exists. The algorithm performs a binary search to find the minimal threshold t {\displaystyle t} for which a solution X {\displaystyle X} still exists: this gives the minimal solution to the SDP problem. 
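The binary search over the threshold t can be sketched independently of how the feasibility subproblem is solved. In the sketch below the quantum feasibility subroutine is replaced by a trivial classical stand-in for a 1×1 toy problem (minimise x subject to x ≥ 2 and x ≥ 0, so the optimum is 2); everything here is an illustrative assumption, not part of the quantum algorithm itself.

    def feasible(t):
        # Stand-in oracle: is there an x with x <= t, x >= 2 and x >= 0?
        return t >= 2.0

    def minimise_by_bisection(lo, hi, tol=1e-6):
        # Precondition: feasible(hi) is True and feasible(lo) is False.
        while hi - lo > tol:
            t = 0.5 * (lo + hi)
            if feasible(t):
                hi = t     # a solution with objective value <= t exists; tighten from above
            else:
                lo = t     # no solution below t; raise the lower bound
        return hi

    print(round(minimise_by_bisection(0.0, 10.0), 4))   # ~2.0, the optimal objective value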
The quantum algorithm provides a quadratic improvement over the best classical algorithm in the general case, and an exponential improvement when the input matrices are of low rank. == Quantum combinatorial optimization == The combinatorial optimization problem is aimed at finding an optimal object from a finite set of objects. The problem can be phrased as a maximization of an objective function which is a sum of Boolean functions. Each Boolean function C α : { 0 , 1 } n → { 0 , 1 } {\displaystyle \,C_{\alpha }\colon \lbrace {0,1\rbrace }^{n}\rightarrow \lbrace {0,1}\rbrace } gets as input the n {\displaystyle n} -bit string z = z 1 z 2 … z n {\displaystyle z=z_{1}z_{2}\ldots z_{n}} and gives as output one bit (0 or 1). The combinatorial optimization problem of n {\displaystyle n} bits and m {\displaystyle m} clauses is finding an n {\displaystyle n} -bit string z {\displaystyle z} that maximizes the function C ( z ) = ∑ α = 1 m C α ( z ) {\displaystyle C(z)=\sum _{\alpha =1}^{m}C_{\alpha }(z)} Approximate optimization is a way of finding an approximate solution to an optimization problem, which is often NP-hard. The approximated solution of the combinatorial optimization problem is a string z {\displaystyle z} that is close to maximizing C ( z ) {\displaystyle C(z)} . === Quantum approximate optimization algorithm === For combinatorial optimization, the quantum approximate optimization algorithm (QAOA) briefly had a better approximation ratio than any known polynomial time classical algorithm (for a certain problem), until a more effective classical algorithm was proposed. The relative speed-up of the quantum algorithm is an open research question. QAOA consists of the following steps: Defining a cost Hamiltonian H C {\displaystyle H_{C}} such that its ground state encodes the solution to the optimization problem. Defining a mixer Hamiltonian H M {\displaystyle H_{M}} . Defining the oracles U C ( γ ) = exp ⁡ ( − ı γ H C ) {\displaystyle U_{C}(\gamma )=\exp(-\imath \gamma H_{C})} and U M ( α ) = exp ⁡ ( − ı α H M ) {\displaystyle U_{M}(\alpha )=\exp(-\imath \alpha H_{M})} , with parameters γ {\displaystyle \gamma } and α. Repeated application of the oracles U C {\displaystyle U_{C}} and U M {\displaystyle U_{M}} , in the order: U ( γ , α ) = ∐ i = 1 N ( U C ( γ i ) U M ( α i ) ) {\displaystyle U({\boldsymbol {\gamma }},{\boldsymbol {\alpha }})=\coprod _{i=1}^{N}(U_{C}(\gamma _{i})U_{M}(\alpha _{i}))} Preparing an initial state, that is a superposition of all possible states and apply U ( γ , α ) {\displaystyle U({\boldsymbol {\gamma }},{\boldsymbol {\alpha }})} to the state. Using classical methods to optimize the parameters γ , α {\displaystyle {\boldsymbol {\gamma }},{\boldsymbol {\alpha }}} and measure the output state of the optimized circuit to obtain the approximate optimal solution to the cost Hamiltonian. An optimal solution will be one that maximizes the expectation value of the cost Hamiltonian H C {\displaystyle H_{C}} . The layout of the algorithm, viz, the use of cost and mixer Hamiltonians are inspired from the Quantum Adiabatic theorem, which states that starting in a ground state of a time-dependent Hamiltonian, if the Hamiltonian evolves slowly enough, the final state will be a ground state of the final Hamiltonian. Moreover, the adiabatic theorem can be generalized to any other eigenstate as long as there is no overlap (degeneracy) between different eigenstates across the evolution. 
Identifying the initial Hamiltonian with H M {\displaystyle H_{M}} and the final Hamiltonian with H C {\displaystyle H_{C}} , whose ground states encode the solution to the optimization problem of interest, one can approximate the optimization problem as the adiabatic evolution of the Hamiltonian from an initial to the final one, whose ground (eigen)state gives the optimal solution. In general, QAOA relies on the use of unitary operators dependent on 2 p {\displaystyle 2p} angles (parameters), where p > 1 {\displaystyle p>1} is an input integer, which can be identified the number of layers of the oracle U ( γ , α ) {\displaystyle U({\boldsymbol {\gamma }},{\boldsymbol {\alpha }})} . These operators are iteratively applied on a state that is an equal-weighted quantum superposition of all the possible states in the computational basis. In each iteration, the state is measured in the computational basis and the Boolean function C ( z ) {\displaystyle C(z)} is estimated. The angles are then updated classically to increase C ( z ) {\displaystyle C(z)} . After this procedure is repeated a sufficient number of times, the value of C ( z ) {\displaystyle C(z)} is almost optimal, and the state being measured is close to being optimal as well. A sample circuit that implements QAOA on a quantum computer is given in figure. This procedure is highlighted using the following example of finding the minimum vertex cover of a graph. === QAOA for finding the minimum vertex cover of a graph === The goal here is to find a minimum vertex cover of a graph: a collection of vertices such that each edge in the graph contains at least one of the vertices in the cover. Hence, these vertices “cover” all the edges. We wish to find a vertex cover that has the smallest possible number of vertices. Vertex covers can be represented by a bit string where each bit denotes whether the corresponding vertex is present in the cover. For example, the bit string 0101 represents a cover consisting of the second and fourth vertex in a graph with four vertices. Consider the graph given in the figure. It has four vertices and there are two minimum vertex cover for this graph: vertices 0 and 2, and the vertices 1 and 2. These can be respectively represented by the bit strings 1010 and 0110. The goal of the algorithm is to sample these bit strings with high probability. In this case, the cost Hamiltonian has two ground states, |1010⟩ and |0110⟩, coinciding with the solutions of the problem. The mixer Hamiltonian is the simple, non-commuting sum of Pauli-X operations on each node of the graph and they are given by: H C = − 0.25 Z 3 + 0.5 Z 0 + 0.5 Z 1 + 1.25 Z 2 + 0.75 ( Z 0 Z 1 + Z 0 Z 2 + Z 2 Z 3 + Z 1 Z 2 ) {\displaystyle H_{C}=-0.25Z_{3}+0.5Z_{0}+0.5Z_{1}+1.25Z_{2}+0.75(Z_{0}Z_{1}+Z_{0}Z_{2}+Z_{2}Z_{3}+Z_{1}Z_{2})} H M = X 0 + X 1 + X 2 + X 3 {\displaystyle H_{M}=X_{0}+X_{1}+X_{2}+X_{3}} Implementing QAOA algorithm for this four qubit circuit with two layers of the ansatz in qiskit (see figure) and optimizing leads to a probability distribution for the states given in the figure. This shows that the states |0110⟩ and |1010⟩ have the highest probabilities of being measured, just as expected. === Generalization of QAOA to constrained combinatorial optimisation === In principle the optimal value of C ( z ) {\displaystyle C(z)} can be reached up to arbitrary precision, this is guaranteed by the adiabatic theorem or alternatively by the universality of the QAOA unitaries. 
However, it is an open question whether this can be done in a feasible way. For example, it was shown that QAOA exhibits a strong dependence on the ratio of a problem's constraints to variables (problem density), which places a limiting restriction on the algorithm's capacity to minimize a corresponding objective function. It was soon recognized that a generalization of the QAOA process is essentially an alternating application of a continuous-time quantum walk on an underlying graph followed by a quality-dependent phase shift applied to each solution state. This generalized QAOA was termed QWOA (Quantum Walk-based Optimisation Algorithm). In the paper How many qubits are needed for quantum computational supremacy, submitted to arXiv, the authors conclude that a QAOA circuit with 420 qubits and 500 constraints would require at least one century to be simulated using a classical simulation algorithm running on state-of-the-art supercomputers; such a circuit would therefore be sufficient for quantum computational supremacy. A rigorous comparison of QAOA with classical algorithms can give estimates on the depth p {\displaystyle p} and number of qubits required for quantum advantage. A study of QAOA applied to MaxCut shows that p > 11 {\displaystyle p>11} is required for a scalable advantage. === Variations of QAOA === Several variations of the basic structure of QAOA have been proposed, including variations of its ansatz. The choice of ansatz typically depends on the problem type, such as combinatorial problems represented as graphs, or problems strongly influenced by hardware design. However, ansatz design must balance specificity and generality to avoid overfitting and maintain applicability to a wide range of problems. For this reason, designing optimal ansätze for QAOA is an active research topic. Some of the proposed variants are: Multi-angle QAOA, Expressive QAOA (XQAOA), QAOA+, Digitised counterdiabatic QAOA, and the Quantum alternating operator ansatz, which allows for constraints on the optimization problem. Another variation of QAOA focuses on techniques for parameter optimization, which aim to select a good set of initial parameters for a given problem and to avoid barren plateaus, i.e. regions of parameter space in which the energy landscape of the cost Hamiltonian becomes nearly flat. Finally, there has been significant research interest in leveraging specific hardware to enhance the performance of QAOA across various platforms, such as trapped ions, neutral atoms, superconducting qubits, and photonic quantum computers. The goals of these approaches include overcoming hardware connectivity limitations and mitigating noise-related issues to broaden the applicability of QAOA to a wide range of combinatorial optimization problems. == QAOA algorithm Qiskit implementation == A simple example of how the QAOA algorithm can be implemented in Python uses Qiskit, an open-source quantum computing software development framework by IBM; an illustrative sketch in this spirit is given below. == See also == Adiabatic quantum computation Quantum annealing == References == == External links == Implementation of the QAOA algorithm for the knapsack problem with Classiq
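The sketch below is illustrative only (it is not the article's figure or code). It first evaluates the diagonal cost Hamiltonian H_C of the vertex-cover example on all sixteen bit strings, confirming that 0110 and 1010 are its ground states, and then builds a single QAOA layer in Qiskit with placeholder angles; in practice the angles γ and α would be tuned by a classical optimizer and several layers would be stacked.

```python
from itertools import product
from qiskit import QuantumCircuit

# Diagonal cost Hamiltonian of the vertex-cover example: Z_i acts as (1 - 2*b_i) on |b_0 b_1 b_2 b_3>.
def cost(bits):                       # bits[i] = 1 if vertex i is in the cover (article's convention)
    z = [1 - 2 * b for b in bits]
    return (-0.25 * z[3] + 0.5 * z[0] + 0.5 * z[1] + 1.25 * z[2]
            + 0.75 * (z[0] * z[1] + z[0] * z[2] + z[2] * z[3] + z[1] * z[2]))

energies = {"".join(map(str, b)): cost(b) for b in product([0, 1], repeat=4)}
ground = min(energies.values())
print(sorted(b for b, e in energies.items() if e == ground))   # ['0110', '1010']

# One QAOA layer U_M(alpha) U_C(gamma) applied to the uniform superposition.
gamma, alpha = 0.8, 0.4               # placeholder angles, to be optimized classically
edges = [(0, 1), (0, 2), (2, 3), (1, 2)]
single_z = {0: 0.5, 1: 0.5, 2: 1.25, 3: -0.25}

qc = QuantumCircuit(4)
qc.h(range(4))                        # equal superposition of all 16 basis states
for q, w in single_z.items():         # exp(-i gamma w Z_q)      -> RZ(2 gamma w)
    qc.rz(2 * gamma * w, q)
for i, j in edges:                    # exp(-i gamma 0.75 Z_i Z_j) -> RZZ(2 gamma 0.75)
    qc.rzz(2 * gamma * 0.75, i, j)
for q in range(4):                    # mixer exp(-i alpha X_q)   -> RX(2 alpha)
    qc.rx(2 * alpha, q)
qc.measure_all()
# NB: the bit strings above follow the article's convention (leftmost character = vertex 0);
# Qiskit prints measurement bitstrings little-endian (qubit 0 is the rightmost character).
```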
Wikipedia/Quantum_approximate_optimization_algorithm
The Hadamard transform (also known as the Walsh–Hadamard transform, Hadamard–Rademacher–Walsh transform, Walsh transform, or Walsh–Fourier transform) is an example of a generalized class of Fourier transforms. It performs an orthogonal, symmetric, involutive, linear operation on 2m real numbers (or complex, or hypercomplex numbers, although the Hadamard matrices themselves are purely real). The Hadamard transform can be regarded as being built out of size-2 discrete Fourier transforms (DFTs), and is in fact equivalent to a multidimensional DFT of size 2 × 2 × ⋯ × 2 × 2. It decomposes an arbitrary input vector into a superposition of Walsh functions. The transform is named for the French mathematician Jacques Hadamard (French: [adamaʁ]), the German-American mathematician Hans Rademacher, and the American mathematician Joseph L. Walsh. == Definition == The Hadamard transform Hm is a 2m × 2m matrix, the Hadamard matrix (scaled by a normalization factor), that transforms 2m real numbers xn into 2m real numbers Xk. The Hadamard transform can be defined in two ways: recursively, or by using the binary (base-2) representation of the indices n and k. Recursively, we define the 1 × 1 Hadamard transform H0 by the identity H0 = 1, and then define Hm for m > 0 by: H m = 1 2 ( H m − 1 H m − 1 H m − 1 − H m − 1 ) {\displaystyle H_{m}={\frac {1}{\sqrt {2}}}{\begin{pmatrix}H_{m-1}&H_{m-1}\\H_{m-1}&-H_{m-1}\end{pmatrix}}} where the 1/√2 is a normalization that is sometimes omitted. For m > 1, we can also define Hm by: H m = H 1 ⊗ H m − 1 {\displaystyle H_{m}=H_{1}\otimes H_{m-1}} where ⊗ {\displaystyle \otimes } represents the Kronecker product. Thus, other than this normalization factor, the Hadamard matrices are made up entirely of 1 and −1. Equivalently, we can define the Hadamard matrix by its (k, n)-th entry by writing k = ∑ i = 0 m − 1 k i 2 i = k m − 1 2 m − 1 + k m − 2 2 m − 2 + ⋯ + k 1 2 + k 0 n = ∑ i = 0 m − 1 n i 2 i = n m − 1 2 m − 1 + n m − 2 2 m − 2 + ⋯ + n 1 2 + n 0 {\displaystyle {\begin{aligned}k&=\sum _{i=0}^{m-1}{k_{i}2^{i}}=k_{m-1}2^{m-1}+k_{m-2}2^{m-2}+\dots +k_{1}2+k_{0}\\n&=\sum _{i=0}^{m-1}{n_{i}2^{i}}=n_{m-1}2^{m-1}+n_{m-2}2^{m-2}+\dots +n_{1}2+n_{0}\end{aligned}}} where the kj and nj are the bit elements (0 or 1) of k and n, respectively. Note that for the element in the top left corner, we define: k = n = 0 {\displaystyle k=n=0} . In this case, we have: ( H m ) k , n = 1 2 m / 2 ( − 1 ) ∑ j k j n j {\displaystyle (H_{m})_{k,n}={\frac {1}{2^{m/2}}}(-1)^{\sum _{j}k_{j}n_{j}}} This is exactly the multidimensional 2 × 2 × ⋯ × 2 × 2 {\textstyle 2\times 2\times \cdots \times 2\times 2} DFT, normalized to be unitary, if the inputs and outputs are regarded as multidimensional arrays indexed by the nj and kj, respectively. Some examples of the Hadamard matrices follow. 
H 0 = + ( 1 ) H 1 = 1 2 ( 1 1 1 − 1 ) H 2 = 1 2 ( 1 1 1 1 1 − 1 1 − 1 1 1 − 1 − 1 1 − 1 − 1 1 ) H 3 = 1 2 3 / 2 ( 1 1 1 1 1 1 1 1 1 − 1 1 − 1 1 − 1 1 − 1 1 1 − 1 − 1 1 1 − 1 − 1 1 − 1 − 1 1 1 − 1 − 1 1 1 1 1 1 − 1 − 1 − 1 − 1 1 − 1 1 − 1 − 1 1 − 1 1 1 1 − 1 − 1 − 1 − 1 1 1 1 − 1 − 1 1 − 1 1 1 − 1 ) ( H n ) i , j = 1 2 n / 2 ( − 1 ) i ⋅ j {\displaystyle {\begin{aligned}H_{0}&=+{\begin{pmatrix}1\end{pmatrix}}\\[5pt]H_{1}&={\frac {1}{\sqrt {2}}}\left({\begin{array}{rr}1&1\\1&-1\end{array}}\right)\\[5pt]H_{2}&={\frac {1}{2}}\left({\begin{array}{rrrr}1&1&1&1\\1&-1&1&-1\\1&1&-1&-1\\1&-1&-1&1\end{array}}\right)\\[5pt]H_{3}&={\frac {1}{2^{3/2}}}\left({\begin{array}{rrrrrrrr}1&1&1&1&1&1&1&1\\1&-1&1&-1&1&-1&1&-1\\1&1&-1&-1&1&1&-1&-1\\1&-1&-1&1&1&-1&-1&1\\1&1&1&1&-1&-1&-1&-1\\1&-1&1&-1&-1&1&-1&1\\1&1&-1&-1&-1&-1&1&1\\1&-1&-1&1&-1&1&1&-1\end{array}}\right)\\[5pt](H_{n})_{i,j}&={\frac {1}{2^{n/2}}}(-1)^{i\cdot j}\end{aligned}}} where i ⋅ j {\displaystyle i\cdot j} is the bitwise dot product of the binary representations of the numbers i and j. For example, if n ≥ 2 {\textstyle n\;\geq \;2} , then ( H n ) 3 , 2 = ( − 1 ) 3 ⋅ 2 = ( − 1 ) ( 1 , 1 ) ⋅ ( 1 , 0 ) = ( − 1 ) 1 + 0 = ( − 1 ) 1 = − 1 {\displaystyle (H_{n})_{3,2}\;=\;(-1)^{3\cdot 2}\;=\;(-1)^{(1,1)\cdot (1,0)}\;=\;(-1)^{1+0}\;=\;(-1)^{1}\;=\;-1} , agreeing with the above (ignoring the overall constant). Note that the first row, first column element of the matrix is denoted by ( H n ) 0 , 0 {\textstyle (H_{n})_{0,0}} . H1 is precisely the size-2 DFT. It can also be regarded as the Fourier transform on the two-element additive group of Z/(2). The rows of the Hadamard matrices are the Walsh functions. == Advantages of the Walsh–Hadamard transform == === Real === According to the above definition, the entries of the matrix can be written as H[m,n]; the smallest non-trivial case is H [ m , n ] = ( 1 1 1 − 1 ) {\displaystyle H[m,n]={\begin{pmatrix}1&1\\1&-1\end{pmatrix}}} In the Walsh transform, only 1 and −1 appear in the matrix. Since 1 and −1 are real numbers, no complex-number arithmetic is needed. === No multiplication is required === The DFT needs irrational multiplication, while the Hadamard transform does not. Even rational multiplication is not needed, since sign flips are all that is required. === Some properties are similar to those of the DFT === In the Walsh transform matrix, all entries in the first row (and column) are equal to 1. When the rows are arranged in sequency order, as below, the number of sign changes per row increases by one from row to row, from 0 in the first row to 7 in the eighth row: H [ m , n ] = ( 1 1 1 1 1 1 1 1 1 1 1 1 − 1 − 1 − 1 − 1 1 1 − 1 − 1 − 1 − 1 1 1 1 1 − 1 − 1 1 1 − 1 − 1 1 − 1 − 1 1 1 − 1 − 1 1 1 − 1 − 1 1 − 1 1 1 − 1 1 − 1 1 − 1 − 1 1 − 1 1 1 − 1 1 − 1 1 − 1 1 − 1 ) {\displaystyle H[m,n]=\left({\begin{array}{rrrrrrrr}1&1&1&1&1&1&1&1\\1&1&1&1&-1&-1&-1&-1\\1&1&-1&-1&-1&-1&1&1\\1&1&-1&-1&1&1&-1&-1\\1&-1&-1&1&1&-1&-1&1\\1&-1&-1&1&-1&1&1&-1\\1&-1&1&-1&-1&1&-1&1\\1&-1&1&-1&1&-1&1&-1\end{array}}\right)} Compare this with the discrete Fourier transform, whose entries are e − j 2 π m n / N {\displaystyle e^{-j2\pi mn/N}} : for the DFT, the first row (m = 0) also consists entirely of 1s. In both matrices the rows can be read as sampled waveforms of increasing frequency: the first row is the constant (lowest-frequency) signal, and each subsequent row oscillates more rapidly, up to the last row.
If we calculate zero crossing: First row = 0 zero crossing Second row = 1 zero crossing Third row = 2 zero crossings ⋮ Eight row = 7 zero crossings == Relation to Fourier transform == The Hadamard transform is in fact equivalent to a multidimensional DFT of size 2 × 2 × ⋯ × 2 × 2. Another approach is to view the Hadamard transform as a Fourier transform on the Boolean group ( Z / 2 Z ) n {\displaystyle (\mathbb {Z} /2\mathbb {Z} )^{n}} . Using the Fourier transform on finite (abelian) groups, the Fourier transform of a function f : ( Z / 2 Z ) n → C {\displaystyle f\colon (\mathbb {Z} /2\mathbb {Z} )^{n}\to \mathbb {C} } is the function f ^ {\displaystyle {\widehat {f}}} defined by f ^ ( χ ) = ∑ a ∈ ( Z / 2 Z ) n f ( a ) χ ¯ ( a ) {\displaystyle {\widehat {f}}(\chi )=\sum _{a\in (\mathbb {Z} /2\mathbb {Z} )^{n}}f(a){\bar {\chi }}(a)} where χ {\displaystyle \chi } is a character of ( Z / 2 Z ) n {\displaystyle (\mathbb {Z} /2\mathbb {Z} )^{n}} . Each character has the form χ r ( a ) = ( − 1 ) a ⋅ r {\displaystyle \chi _{r}(a)=(-1)^{a\cdot r}} for some r ∈ ( Z / 2 Z ) n {\displaystyle r\in (\mathbb {Z} /2\mathbb {Z} )^{n}} , where the multiplication is the boolean dot product on bit strings, so we can identify the input to f ^ {\displaystyle {\widehat {f}}} with r ∈ ( Z / 2 Z ) n {\displaystyle r\in (\mathbb {Z} /2\mathbb {Z} )^{n}} (Pontryagin duality) and define f ^ : ( Z / 2 Z ) n → C {\displaystyle {\widehat {f}}\colon (\mathbb {Z} /2\mathbb {Z} )^{n}\to \mathbb {C} } by f ^ ( r ) = ∑ a ∈ ( Z / 2 Z ) n f ( a ) ( − 1 ) r ⋅ a {\displaystyle {\widehat {f}}(r)=\sum _{a\in (\mathbb {Z} /2\mathbb {Z} )^{n}}f(a)(-1)^{r\cdot a}} This is the Hadamard transform of f {\displaystyle f} , considering the input to f {\displaystyle f} and f ^ {\displaystyle {\widehat {f}}} as boolean strings. In terms of the above formulation where the Hadamard transform multiplies a vector of 2 n {\displaystyle 2^{n}} complex numbers v {\displaystyle v} on the left by the Hadamard matrix H n {\displaystyle H_{n}} the equivalence is seen by taking f {\displaystyle f} to take as input the bit string corresponding to the index of an element of v {\displaystyle v} , and having f {\displaystyle f} output the corresponding element of v {\displaystyle v} . Compare this to the usual discrete Fourier transform which when applied to a vector v {\displaystyle v} of 2 n {\displaystyle 2^{n}} complex numbers instead uses characters of the cyclic group Z / 2 n Z {\displaystyle \mathbb {Z} /2^{n}\mathbb {Z} } . == Computational complexity == In the classical domain, the Hadamard transform can be computed in n log ⁡ n {\displaystyle n\log n} operations ( n = 2 m {\displaystyle n=2^{m}} ), using the fast Hadamard transform algorithm. In the quantum domain, the Hadamard transform can be computed in O ( 1 ) {\displaystyle O(1)} time, as it is a quantum logic gate that can be parallelized. == Quantum computing applications == The Hadamard transform is used extensively in quantum computing. The 2 × 2 Hadamard transform H 1 {\displaystyle H_{1}} is the quantum logic gate known as the Hadamard gate, and the application of a Hadamard gate to each qubit of an n {\displaystyle n} -qubit register in parallel is equivalent to the Hadamard transform H n {\displaystyle H_{n}} . 
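To make the classical O(n log n) cost mentioned above concrete, here is a small illustrative sketch (not from the article) that builds H_m by the Sylvester recursion, checks it against the entry formula (H_m)_{k,n} = (−1)^{Σ_j k_j n_j} / 2^{m/2} quoted earlier, and applies the in-place fast Walsh–Hadamard butterfly that realizes the n log n operation count.

```python
import numpy as np

def hadamard_matrix(m):
    """Sylvester recursion H_m = H_1 (kron) H_{m-1}, with the 1/sqrt(2) normalization."""
    H = np.array([[1.0]])
    H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    for _ in range(m):
        H = np.kron(H1, H)
    return H

def fast_wht(x):
    """Unnormalized fast Walsh-Hadamard transform in O(n log n) additions, n = 2**m."""
    x = np.asarray(x, dtype=float).copy()
    n, h = len(x), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x

m = 3
n = 2 ** m
H = hadamard_matrix(m)
# Entry formula: (H_m)_{k,l} = (-1)**popcount(k & l) / 2**(m/2)
popcounts = np.array([[bin(k & l).count("1") for l in range(n)] for k in range(n)])
assert np.allclose(H, (-1.0) ** popcounts / 2 ** (m / 2))
# The butterfly agrees with the matrix, up to the normalization factor.
v = np.random.default_rng(0).standard_normal(n)
assert np.allclose(H @ v, fast_wht(v) / 2 ** (m / 2))
print("checks passed")
```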
=== Hadamard gate === In quantum computing, the Hadamard gate is a one-qubit rotation, mapping the qubit-basis states | 0 ⟩ {\displaystyle |0\rangle } and | 1 ⟩ {\displaystyle |1\rangle } to two superposition states with equal weight of the computational basis states | 0 ⟩ {\displaystyle |0\rangle } and | 1 ⟩ {\displaystyle |1\rangle } . Usually the phases are chosen so that H = | 0 ⟩ + | 1 ⟩ 2 ⟨ 0 | + | 0 ⟩ − | 1 ⟩ 2 ⟨ 1 | {\displaystyle H={\frac {|0\rangle +|1\rangle }{\sqrt {2}}}\langle 0|+{\frac {|0\rangle -|1\rangle }{\sqrt {2}}}\langle 1|} in Dirac notation. This corresponds to the transformation matrix H 1 = 1 2 ( 1 1 1 − 1 ) {\displaystyle H_{1}={\frac {1}{\sqrt {2}}}{\begin{pmatrix}1&1\\1&-1\end{pmatrix}}} in the | 0 ⟩ , | 1 ⟩ {\displaystyle |0\rangle ,|1\rangle } basis, also known as the computational basis. The states | 0 ⟩ + | 1 ⟩ 2 {\textstyle {\frac {\left|0\right\rangle +\left|1\right\rangle }{\sqrt {2}}}} and | 0 ⟩ − | 1 ⟩ 2 {\textstyle {\frac {\left|0\right\rangle -\left|1\right\rangle }{\sqrt {2}}}} are known as | + ⟩ {\displaystyle \left|{\boldsymbol {+}}\right\rangle } and | − ⟩ {\displaystyle \left|{\boldsymbol {-}}\right\rangle } respectively, and together constitute the polar basis in quantum computing. === Hadamard gate operations === H ( | 0 ⟩ ) = 1 2 | 0 ⟩ + 1 2 | 1 ⟩ =: | + ⟩ H ( | 1 ⟩ ) = 1 2 | 0 ⟩ − 1 2 | 1 ⟩ =: | − ⟩ H ( | + ⟩ ) = H ( 1 2 | 0 ⟩ + 1 2 | 1 ⟩ ) = 1 2 ( | 0 ⟩ + | 1 ⟩ ) + 1 2 ( | 0 ⟩ − | 1 ⟩ ) = | 0 ⟩ H ( | − ⟩ ) = H ( 1 2 | 0 ⟩ − 1 2 | 1 ⟩ ) = 1 2 ( | 0 ⟩ + | 1 ⟩ ) − 1 2 ( | 0 ⟩ − | 1 ⟩ ) = | 1 ⟩ {\displaystyle {\begin{aligned}H(|0\rangle )&={\frac {1}{\sqrt {2}}}|0\rangle +{\frac {1}{\sqrt {2}}}|1\rangle =:|+\rangle \\H(|1\rangle )&={\frac {1}{\sqrt {2}}}|0\rangle -{\frac {1}{\sqrt {2}}}|1\rangle =:|-\rangle \\H(|+\rangle )&=H\left({\frac {1}{\sqrt {2}}}|0\rangle +{\frac {1}{\sqrt {2}}}|1\rangle \right)={\frac {1}{2}}{\Big (}|0\rangle +|1\rangle {\Big )}+{\frac {1}{2}}{\Big (}|0\rangle -|1\rangle {\Big )}=|0\rangle \\H(|-\rangle )&=H\left({\frac {1}{\sqrt {2}}}|0\rangle -{\frac {1}{\sqrt {2}}}|1\rangle \right)={\frac {1}{2}}{\Big (}|0\rangle +|1\rangle {\Big )}-{\frac {1}{2}}{\Big (}|0\rangle -|1\rangle {\Big )}=|1\rangle \end{aligned}}} One application of the Hadamard gate to either a 0 or 1 qubit will produce a quantum state that, if observed, will be a 0 or 1 with equal probability (as seen in the first two operations). This is exactly like flipping a fair coin in the standard probabilistic model of computation. However, if the Hadamard gate is applied twice in succession (as is effectively being done in the last two operations), then the final state is always the same as the initial state. === Hadamard transform in quantum algorithms === Computing the quantum Hadamard transform is simply the application of a Hadamard gate to each qubit individually because of the tensor product structure of the Hadamard transform. This simple result means the quantum Hadamard transform requires log 2 ⁡ N {\displaystyle \log _{2}N} operations, compared to the classical case of N log 2 ⁡ N {\displaystyle N\log _{2}N} operations. For an n {\displaystyle n} -qubit system, Hadamard gates acting on each of the n {\displaystyle n} qubits (each initialized to the | 0 ⟩ {\displaystyle |0\rangle } ) can be used to prepare uniform quantum superposition states when N {\displaystyle N} is of the form N = 2 n {\displaystyle N=2^{n}} . 
In this case, with n {\displaystyle n} qubits, the combined Hadamard gate H n {\displaystyle H_{n}} is expressed as the tensor product of n {\displaystyle n} Hadamard gates: H n = H ⊗ H ⊗ … ⊗ H ⏟ n times {\displaystyle H_{n}=\underbrace {H\otimes H\otimes \ldots \otimes H} _{n{\text{ times}}}} The resulting uniform quantum superposition state is then: H n | 0 ⟩ ⊗ n = 1 2 n ∑ j = 0 2 n − 1 | j ⟩ {\displaystyle H_{n}|0\rangle ^{\otimes n}={\frac {1}{\sqrt {2^{n}}}}\sum _{j=0}^{2^{n}-1}|j\rangle } This generalizes the preparation of uniform quantum states using Hadamard gates for any N = 2 n {\displaystyle N=2^{n}} . Measurement of this uniform quantum state results in a random state between | 0 ⟩ {\displaystyle |0\rangle } and | N − 1 ⟩ {\displaystyle |N-1\rangle } . Many quantum algorithms use the Hadamard transform as an initial step, since, as explained earlier, it maps n qubits initialized with | 0 ⟩ {\displaystyle |0\rangle } to a superposition of all 2n orthogonal states in the | 0 ⟩ , | 1 ⟩ {\displaystyle |0\rangle ,|1\rangle } basis with equal weight. For example, this is used in the Deutsch–Jozsa algorithm, Simon's algorithm, the Bernstein–Vazirani algorithm, and in Grover's algorithm. Note that Shor's algorithm uses both an initial Hadamard transform and the quantum Fourier transform, which are both types of Fourier transforms on finite groups; the first on ( Z / 2 Z ) n {\displaystyle (\mathbb {Z} /2\mathbb {Z} )^{n}} and the second on Z / 2 n Z {\displaystyle \mathbb {Z} /2^{n}\mathbb {Z} } . Preparation of uniform quantum superposition states in the general case, when N {\displaystyle N} ≠ 2 n {\displaystyle 2^{n}} , is non-trivial and requires more work. An efficient and deterministic approach for preparing the superposition state | Ψ ⟩ = 1 N ∑ j = 0 N − 1 | j ⟩ {\displaystyle |\Psi \rangle ={\frac {1}{\sqrt {N}}}\sum _{j=0}^{N-1}|j\rangle } with a gate complexity and circuit depth of only O ( log 2 ⁡ N ) {\displaystyle O(\log _{2}N)} for all N {\displaystyle N} was recently presented. This approach requires only n = ⌈ log 2 ⁡ N ⌉ {\displaystyle n=\lceil \log _{2}N\rceil } qubits. Importantly, neither ancilla qubits nor any quantum gates with multiple controls are needed in this approach for creating the uniform superposition state | Ψ ⟩ {\displaystyle |\Psi \rangle } . == Molecular phylogenetics (evolutionary biology) applications == The Hadamard transform can be used to estimate phylogenetic trees from molecular data. Phylogenetics is the subfield of evolutionary biology focused on understanding the relationships among organisms. A Hadamard transform applied to a vector (or matrix) of site pattern frequencies obtained from a DNA multiple sequence alignment can be used to generate another vector that carries information about the tree topology. The invertible nature of the phylogenetic Hadamard transform also allows the calculation of site likelihoods from a tree topology vector, allowing one to use the Hadamard transform for maximum likelihood estimation of phylogenetic trees. However, the latter application is less useful than the transformation from the site pattern vector to the tree vector because there are other ways to calculate site likelihoods that are much more efficient. Nevertheless, the invertible nature of the phylogenetic Hadamard transform does provide an elegant tool for mathematical phylogenetics.
The mechanics of the phylogenetic Hadamard transform involve the calculation of a vector γ ( T ) {\displaystyle \gamma (T)} that provides information about the topology and branch lengths for tree T {\displaystyle T} using the site pattern vector or matrix s ( T ) {\displaystyle s(T)} . γ ( T ) = H − 1 ( ln ⁡ ( H s ( T ) ) ) {\displaystyle \gamma (T)=H^{-1}(\ln(Hs(T)))} where H {\displaystyle H} is the Hadamard matrix of the appropriate size. This equation can be rewritten as a series of three equations to simplify its interpretation: r = H s ( T ) ρ = ln ⁡ r γ ( T ) = H − 1 ρ {\displaystyle {\begin{aligned}r&=Hs(T)\\\rho &=\ln r\\\gamma (T)&=H^{-1}\rho \end{aligned}}} The invertible nature of this equation allows one to calculate an expected site pattern vector (or matrix) as follows: s ( T ) = H − 1 ( exp ⁡ ( H γ ( T ) ) ) {\displaystyle s(T)=H^{-1}(\exp(H\gamma (T)))} We can use the Cavender–Farris–Neyman (CFN) two-state substitution model for DNA by encoding the nucleotides as binary characters (the purines A and G are encoded as R and the pyrimidines C and T are encoded as Y). This makes it possible to encode the multiple sequence alignment as the site pattern vector s ( T ) {\displaystyle s(T)} that can be converted to a tree vector γ ( T ) {\displaystyle \gamma (T)} , as shown in the following example. The example shown in this table uses the simplified three-equation scheme and is for a four-taxon tree that can be written as ((A,B),(C,D)); in newick format. The site patterns are written in the order ABCD. This particular tree has two long terminal branches (0.2 transversion substitutions per site), two short terminal branches (0.025 transversion substitutions per site), and a short internal branch (0.025 transversion substitutions per site); thus, it would be written as ((A:0.025,B:0.2):0.025,(C:0.025,D:0.2)); in newick format. This tree will exhibit long branch attraction if the data are analyzed using the maximum parsimony criterion (assuming the sequence analyzed is long enough for the observed site pattern frequencies to be close to the expected frequencies shown in the s ( T ) = H − 1 ρ {\displaystyle s(T)=H^{-1}\rho } column). The long branch attraction reflects the fact that the expected number of site patterns with index 6 (which support the tree ((A,C),(B,D));) exceeds the expected number of site patterns that support the true tree (index 4). Obviously, the invertible nature of the phylogenetic Hadamard transform means that the tree vector γ ( T ) {\displaystyle \gamma (T)} corresponds to the correct tree. Parsimony analysis after the transformation is therefore statistically consistent, as would be a standard maximum likelihood analysis using the correct model (in this case the CFN model). Note that the site pattern with index 0 corresponds to the sites that have not changed (after encoding the nucleotides as purines or pyrimidines). The indices with asterisks (3, 5, and 6) are "parsimony-informative", and the remaining indices represent site patterns where a single taxon differs from the other three taxa (so they are the equivalent of terminal branch lengths in a standard maximum likelihood phylogenetic tree).
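A minimal numerical sketch of this conjugation is given below; it simply exercises the three-equation scheme and its inverse for the two-state (CFN) case on a four-taxon tree. The placement of values in the γ vector and the normalization of its first entry are assumptions made for illustration only and are not taken from the article's table.

```python
import numpy as np
from scipy.linalg import hadamard

# Hypothetical edge-length spectrum gamma(T) for a four-taxon CFN example (8 entries).
# The branch-length values echo the example above, but the index-to-branch assignment
# is a placeholder, chosen only to exercise the formulas.
gamma = np.array([0.0, 0.025, 0.2, 0.0, 0.025, 0.2, 0.025, 0.0])
gamma[0] = -gamma.sum()                  # common normalization choice (assumption): entries sum to 0

H = hadamard(8)                          # unnormalized +/-1 Hadamard matrix, H^-1 = H / 8
Hinv = H / 8.0

# Forward direction: expected site-pattern spectrum s(T) = H^-1 exp(H gamma(T)).
s = Hinv @ np.exp(H @ gamma)             # with gamma summing to 0, the entries of s sum to exp(0) = 1
# Inverse direction (the three-equation scheme): r = H s, rho = ln r, gamma = H^-1 rho.
r = H @ s
rho = np.log(r)
gamma_back = Hinv @ rho
assert np.allclose(gamma_back, gamma)    # the conjugation is exactly invertible
print(np.round(s, 4))
```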
If one wishes to use nucleotide data without recoding as R and Y (and ultimately as 0 and 1), it is possible to encode the site patterns as a matrix. If we consider a four-taxon tree, there are a total of 256 site patterns (four nucleotides to the 4th power). However, symmetries of the Kimura three-parameter (or K81) model allow us to reduce the 256 possible site patterns for DNA to 64 patterns, making it possible to encode the nucleotide data for a four-taxon tree as an 8 × 8 matrix in a manner similar to the vector of 8 elements used above for transversion (RY) site patterns. This is accomplished by recoding the data using the Klein four-group. As with RY data, site patterns are indexed relative to the base in the arbitrarily chosen first taxon, with the bases in the subsequent taxa encoded relative to that first base. Thus, the first taxon receives the bit pair (0,0). Using those bit pairs one can produce two vectors similar to the RY vectors and then populate the matrix using those vectors. This can be illustrated using the example from Hendy et al. (1994), which is based on a multiple sequence alignment of four primate hemoglobin pseudogenes. The much larger number of site patterns in column 0 reflects the fact that column 0 corresponds to transition differences, which accumulate more rapidly than transversion differences in virtually all comparisons of genomic regions (and definitely accumulate more rapidly in the hemoglobin pseudogenes used for this worked example). If we consider the site pattern AAGG, it would correspond to binary pattern 0000 for the second element of the Klein group bit pair and 0011 for the first element. In this case, the binary pattern based on the first element corresponds to index 3 (so row 3 in column 0; indicated with a single asterisk in the table). The site patterns GGAA, CCTT, and TTCC would be encoded in the exact same way. The site pattern AACT would be encoded with binary pattern 0011 based on the second element and 0001 based on the first element; this yields index 1 for the first element and index 3 for the second. The index based on the second Klein group bit pair is multiplied by 8 to yield the column index (in this case it would be column 24). The cell that would include the count of AACT site patterns is indicated with two asterisks; however, the absence of a number in the example indicates that the sequence alignment includes no AACT site patterns (likewise, CCAG, GGTC, and TTGA site patterns, which would be encoded in the same way, are absent). == Other applications == The Hadamard transform is also used in data encryption, as well as many signal processing and data compression algorithms, such as JPEG XR and MPEG-4 AVC. In video compression applications, it is usually used in the form of the sum of absolute transformed differences. It is also a crucial part of a significant number of algorithms in quantum computing. The Hadamard transform is also applied in experimental techniques such as NMR, mass spectrometry and crystallography. It is additionally used in some versions of locality-sensitive hashing, to obtain pseudo-random matrix rotations. == See also == Fast Walsh–Hadamard transform Pseudo-Hadamard transform Haar transform Generalized distributive law == External links == Ritter, Terry (August 1996). "Walsh–Hadamard Transforms: A Literature Survey". Akansu, Ali N.; Poluri, R. (July 2007). "Walsh-Like Nonlinear Phase Orthogonal Codes for Direct Sequence CDMA Communications" (PDF). IEEE Transactions on Signal Processing. 55 (7): 3800–6. Bibcode:2007ITSP...55.3800A. doi:10.1109/TSP.2007.894229. S2CID 6830633. Pan, Jeng-shyang Data Encryption Method Using Discrete Fractional Hadamard Transformation (May 28, 2009) Lachowicz, Dr. Pawel.
Walsh–Hadamard Transform and Tests for Randomness of Financial Return-Series (April 7, 2015) Beddard, Godfrey; Yorke, Briony A. (January 2011). "Pump-probe Spectroscopy using Hadamard Transforms" (PDF). Archived from the original (PDF) on 2014-10-18. Retrieved 2012-04-28. Yorke, Briony A.; Beddard, Godfrey; Owen, Robin L.; Pearson, Arwen R. (September 2014). "Time-resolved crystallography using the Hadamard transform". Nature Methods. 11 (11): 1131–1134. doi:10.1038/nmeth.3139. PMC 4216935. PMID 25282611. == References ==
Wikipedia/Hadamard_transform
In speculative fiction, a force field, sometimes known as an energy shield, force shield, energy bubble, or deflector shield, is a barrier produced by something like energy, negative energy, dark energy, electromagnetic fields, gravitational fields, electric fields, quantum fields, telekinetic fields, plasma, particles, radiation, solid light, magic, or pure force. It protects a person, area, or object from attacks or intrusions, or even deflects energy attacks back at the attacker. This fictional technology is created as a field of energy without matter that acts as a wall, so that objects affected by the particular force relating to the field are unable to pass through the field and reach the other side, instead being deflected or destroyed. Actual research in the 21st century has looked into the potential to deflect radiation or cosmic rays, as well as more extensive shielding. This concept has become a staple of many science-fiction works, so much so that authors frequently do not even bother to explain or justify them to their readers, treating them almost as established fact and attributing whatever capabilities the plot requires. The ability to create force fields has become a frequent superpower in superhero media. == History == The concept of a force field goes back at least as far as early 20th century. The Encyclopedia of Science Fiction suggests that the first use of the term in science fiction was in 1931, in Spacehounds of IPC by E.E. 'Doc' Smith. An early precursor of what is now called "force field" may be found in Eugenio Taquechel's Spanish historical-fiction novel "La Alhambra Romántica: Leyenda Morisca" published in Madrid in 1928, where in its 11th chapter it describes (translated) "... in front of his palace a wall as thin as a hair, strong and transparent as a diamond, had been raised which defended from ..." An earlier precursor is that of William Hope Hodgson's The Night Land (1912), where the Last Redoubt, the fortress of the remnants of a far-future humanity, is kept safe by "The Air Clog" generated by the burning "Earth-Current". An even earlier precursor is Florence Carpenter Dieudonné's 1887 novel Rondah, or Thirty-Three Years in a Star, where the far-off Sun Island is enclosed by a "wall in the air" that blocks access by land, sea and air, which is occasionally disabled. In Isaac Asimov's Foundation universe, personal shields have been developed by scientists specializing in the miniaturization of planet-based shields. As they are primarily used by Foundation Traders, most other inhabitants of the Galactic Empire do not know about this technology. In an unrelated short story Breeds There a Man...? by Asimov, scientists are working on a force field ("energy so channelled as to create a wall of matter-less inertia"), capable of protecting the population in case of a nuclear war. When activated by radiation, the force field becomes a solid hemisphere, completely opaque and reflective from both sides. Asimov explores the force field concept again in the short story Not Final!. The concept of force fields as a defensive measure from enemy attack or as a form of attack can be regularly found in films such as The War of the Worlds (1953, George Pál) and Independence Day, as well as modern video games. The ability to create a force field has been a common superpower in comic books and associated media. 
While only a few characters have the explicit ability to create force fields (for example, the Invisible Woman of the Fantastic Four and Violet Parr from The Incredibles), it has been emulated via other powers, such as Green Lantern's energy constructs, Jean Grey's telekinesis, and Magneto's manipulation of electromagnetic fields. Apart from this, its importance is also highlighted in Dr. Michio Kaku's books (such as Physics of the Impossible). == Fiction == Science fiction and fantasy avenues suggest a number of potential uses of force fields: A barrier allowing workers to function in areas exposed to the vacuum of space. The atmosphere inside would be habitable by humans, while at the same time allowing permissible objects to pass through the barrier A walkable surface between two points without the necessity of building a bridge. An emergency quarantine area to service those afflicted by harmful biological or chemical agents A fire extinguisher where oxygen is exhausted by the use of a space confined by a force field thereby starving the fire As a shield to protect against damage from natural forces or an enemy attack As a deflector to allow fast spaceships to traverse space without colliding with small particles or objects. A temporary habitable space in an area otherwise unsuitable for sustaining life As a security apparatus used to confine or contain a captive The capabilities and functionality of force fields vary; in some works of fiction (such as in the Star Trek universe), energy shields can nullify or mitigate the effects of both energy and particle (e.g., phasers) and conventional weapons. In many fictional scenarios, the shields function primarily as a defensive measure against weapons fired from other spacecraft. Force fields in these stories also generally prevent transporting. There are generally two kinds of force fields postulated: one in which energy is projected as a flat plane from emitters around the edges of a spacecraft and another where energy surrounds a ship like a bubble. The ability to create force fields has become a frequent superpower in superhero media. While sometimes an explicit power on their own, force fields have also been attributed to other fictional abilities. Marvel Comics' Jean Grey is able to use her telekinesis to create a barrier of telekinetic energy that acts as a force field by repelling objects. Similarly, Magneto is able to use his magnetism to manipulate magnetic fields into acting as shields. The most common superpower link seen with force fields is the power of invisibility. This is seen with Marvel Comics' Invisible Woman and Disney Pixar's Violet Parr. Force fields often vary in what they are made of, though are commonly made of energy. The 2017 series The Gifted featured character Lauren Strucker who had the ability to create shields by pushing molecules together. This resulted in her being able to construct force fields out of air and water particles rather than energy. == Research == In 2005, the NASA Institute for Advanced Concepts devised a way to protect from radiation by applying an electric field to spheres made of a thin, non-conductive material coated with a layer of gold with either positive or negative charges, which could be arranged to bend a stream of charged particles to protect from radiation. In 2006, a University of Washington group in Seattle, Washington, had been experimenting with using a bubble of charged plasma, contained by a fine mesh of superconducting wire, to surround a spacecraft. 
This would protect the spacecraft from interstellar radiation and some particles without needing physical shielding. The Rutherford Appleton Laboratory was in 2007 attempting to design an actual test satellite, which would orbit Earth with a charged plasma field around it. In 2008, Cosmos Magazine reported on research into creating an artificial replica of Earth's magnetic field around a spacecraft to protect astronauts from dangerous cosmic rays. British and Portuguese scientists used a mathematical simulation to prove that it would be possible to create a "mini-magnetosphere" bubble several hundred meters across, possibly generated by a small uncrewed vessel that could accompany a future crewed mission to Mars. In 2014, a group of students from the University of Leicester released a study describing functioning of spaceship plasma deflector shields. In 2015, Boeing was granted a patent on a force field system designed to protect against shock waves generated by explosions. It is not intended to protect against projectiles, radiation, or energy weapons such as lasers. The field purportedly works by using a combination of lasers, electricity and microwaves to rapidly heat the air creating a field of (ionised) superheated air-plasma which disrupts, or at least attenuates, the shock wave. As of March 2016, no working models were known to have been demonstrated. In 2016, Rice University scientists discovered that Tesla coils can generate force fields able to manipulate matter (process called teslaphoresis). == See also == Force field (physics) Li's field Magic circle Plasma window Stasis (fiction) Telekinesis Tractor beam == Notes == == Further reading == Andrews, Dana G. (2004-07-13). Things to do While Coasting Through Interstellar Space (PDF). 40th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit. Future Flight II. Fort Lauderdale, Florida. AIAA 2004-3706. Archived from the original (PDF) on 2013-04-20. Retrieved 2008-12-13. Stanway, Elizabeth (2025-06-01). "Protection or Prison?". Warwick University. Cosmic Stories Blog. Archived from the original on 2025-06-04. Retrieved 2025-06-04.
Wikipedia/Force_field_(technology)
In computational complexity theory and quantum computing, Simon's problem is a computational problem that is proven to be solved exponentially faster on a quantum computer than on a classical (that is, traditional) computer. The quantum algorithm solving Simon's problem, usually called Simon's algorithm, served as the inspiration for Shor's algorithm. Both problems are special cases of the abelian hidden subgroup problem, which is now known to have efficient quantum algorithms. The problem is set in the model of decision tree complexity or query complexity and was conceived by Daniel R. Simon in 1994. Simon exhibited a quantum algorithm that solves Simon's problem exponentially faster with exponentially fewer queries than the best probabilistic (or deterministic) classical algorithm. In particular, Simon's algorithm uses a linear number of queries and any classical probabilistic algorithm must use an exponential number of queries. This problem yields an oracle separation between the complexity classes BPP (bounded-error classical query complexity) and BQP (bounded-error quantum query complexity). This is the same separation that the Bernstein–Vazirani algorithm achieves, and different from the separation provided by the Deutsch–Jozsa algorithm, which separates P and EQP. Unlike the Bernstein–Vazirani algorithm, Simon's algorithm's separation is exponential. Because this problem assumes the existence of a highly-structured "black box" oracle to achieve its speedup, this problem has little practical value. However, without such an oracle, exponential speedups cannot easily be proven, since this would prove that P is different from PSPACE. == Problem description == Simon's problem considers access to a function f : { 0 , 1 } n → { 0 , 1 } m , m ≥ n {\displaystyle f:\{0,1\}^{n}\to \{0,1\}^{m},\;m\geq n} as implemented by a black box or an oracle. This function is promised to be either a one-to-one function, or a two-to-one function; if f {\displaystyle f} is two-to-one, it is furthermore promised that two inputs x {\displaystyle x} and x ′ {\displaystyle x'} evaluate to the same value if and only if x {\displaystyle x} and x ′ {\displaystyle x'} differ in a fixed set of bits. I.e., If f {\displaystyle f} is not one-to-one, it is promised that there exists a non-zero s {\displaystyle s} such that, for all x ≠ x ′ {\displaystyle x\neq x'} , f ( x ) = f ( x ′ ) {\displaystyle f(x)=f(x')} if and only if x ′ = x ⊕ s {\displaystyle x'=x\oplus s} where ⊕ {\displaystyle \oplus } denotes bitwise exclusive-or. Simon's problem asks, in its decision version, whether f {\displaystyle f} is one-to-one or two-to-one. In its non-decision version, Simon's problem asks whether f {\displaystyle f} is one-to-one or what is the value of s {\displaystyle s} (as defined above). The goal is to solve this task with the least number of queries (evaluations) of f {\displaystyle f} . Note that if x ′ = x {\displaystyle x'=x} , then f ( x ′ ) = f ( x ) {\displaystyle f(x')=f(x)} and x ′ = x ⊕ s {\displaystyle x'=x\oplus s} with s = 0 {\displaystyle s=0} . On the other hand (because a ⊕ b ⊕ b = a {\displaystyle a\oplus b\oplus b=a} for all a {\displaystyle a} and b {\displaystyle b} ), x ′ = x ⊕ s ⟺ x ′ ⊕ x = s {\displaystyle x'=x\oplus s\iff x'\oplus x=s} . 
Thus, Simon's problem may be restated in the following form: Given black-box or oracle access to f {\displaystyle f} , promised to satisfy, for some s {\displaystyle s} and all x , x ′ {\displaystyle x,x'} , f ( x ) = f ( x ′ ) {\displaystyle f(x)=f(x')} if and only if x ′ ⊕ x ∈ { 0 , s } {\displaystyle x'\oplus x\in \{0,s\}} , determine whether s ≠ 0 {\displaystyle s\neq 0} (decision version), or output s {\displaystyle s} (non-decision version). Note also that the promise on f {\displaystyle f} implies that if f {\displaystyle f} is two-to-one then it is a periodic function: f ( x ) = f ( y ) = f ( x ⊕ s ) . {\displaystyle f(x)=f(y)=f(x\oplus s).} === Example === The following function is an example of a function that satisfies the required property for n = 3 {\displaystyle n=3} : In this case, s = 110 {\displaystyle s=110} (i.e. the solution). Every output of f {\displaystyle f} occurs twice, and the two input strings corresponding to any one given output have bitwise XOR equal to s = 110 {\displaystyle s=110} . For example, the input strings 010 {\displaystyle 010} and 100 {\displaystyle 100} are both mapped (by f {\displaystyle f} ) to the same output string 000 {\displaystyle 000} . That is, f ( 010 ) = 000 {\displaystyle {\displaystyle f(010)=000}} and f ( 100 ) = 000 {\displaystyle {\displaystyle f(100)=000}} . Applying XOR to 010 and 100 obtains 110, that is 010 ⊕ 100 = 110 = s . {\displaystyle {\displaystyle 010\oplus 100=110=s}.} s = 110 {\displaystyle s=110} can also be verified using input strings 001 and 111 that are both mapped (by f) to the same output string 010. Applying XOR to 001 and 111 obtains 110, that is 001 ⊕ 111 = 110 = s {\displaystyle 001\oplus 111=110=s} . This gives the same solution s = 110 {\displaystyle s=110} as before. In this example the function f is indeed a two-to-one function where s ≠ 0 n {\displaystyle {\displaystyle s\neq 0^{n}}} . === Problem hardness === Intuitively, this is a hard problem to solve in a "classical" way, even if one uses randomness and accepts a small probability of error. The intuition behind the hardness is reasonably simple: if you want to solve the problem classically, you need to find two different inputs x {\displaystyle x} and y {\displaystyle y} for which f ( x ) = f ( y ) {\displaystyle f(x)=f(y)} . There is not necessarily any structure in the function f {\displaystyle f} that would help us to find two such inputs: more specifically, we can discover something about f {\displaystyle f} (or what it does) only when, for two different inputs, we obtain the same output. In any case, we would need to guess Ω ( 2 n ) {\displaystyle {\displaystyle \Omega ({\sqrt {2^{n}}})}} different inputs before being likely to find a pair on which f {\displaystyle f} takes the same output, as per the birthday problem. Since, classically to find s with a 100% certainty it would require checking Θ ( 2 n ) {\displaystyle {\displaystyle \Theta ({\sqrt {2^{n}}})}} inputs, Simon's problem seeks to find s using fewer queries than this classical method. == Simon's algorithm == The algorithm as a whole uses a subroutine to execute the following two steps: Run the quantum subroutine an expected O ( n ) {\displaystyle O(n)} times to get a list of linearly independent bitstrings y 1 , . . . , y n − 1 {\displaystyle y_{1},...,y_{n-1}} . Each y k {\displaystyle y_{k}} satisfies y k ⋅ s = 0 {\displaystyle y_{k}\cdot s=0} , so we can solve the system of equations this produces to get s {\displaystyle s} . 
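Before turning to the quantum subroutine, the promise can be made concrete with a small classical sketch (this is not the article's table): it builds one valid two-to-one function for n = 3 with hidden string s = 110, keeping the two pairs quoted above (f(010) = f(100) = 000 and f(001) = f(111) = 010) and filling the remaining pairs with arbitrary distinct outputs, then recovers s by the collision search described in the hardness discussion.

```python
n = 3
s = 0b110

# One output value per pair {x, x ^ s}; the first two match the values quoted above,
# the last two are arbitrary distinct choices that respect the promise.
pair_outputs = {0b010: 0b000,   # pair {010, 100} -> 000
                0b001: 0b010,   # pair {001, 111} -> 010
                0b000: 0b101,   # remaining pairs: arbitrary placeholders
                0b011: 0b110}
f = {}
for x, y in pair_outputs.items():
    f[x] = f[x ^ s] = y

# Promise check: f(x) == f(x') exactly when x' == x or x' == x XOR s.
assert all((f[x] == f[xp]) == (xp in (x, x ^ s))
           for x in range(2 ** n) for xp in range(2 ** n))

# Classical recovery by collision search (birthday-style): query until two different
# inputs map to the same output, then XOR them to reveal s.
seen = {}
for x in range(2 ** n):
    if f[x] in seen:
        print("collision gives s =", bin(seen[f[x]] ^ x))   # prints 0b110
        break
    seen[f[x]] = x
```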
=== Quantum subroutine === The quantum circuit (see the picture) is the implementation of the quantum part of Simon's algorithm. The quantum subroutine of the algorithm makes use of the Hadamard transform H ⊗ n | k ⟩ = 1 2 n ∑ j = 0 2 n − 1 ( − 1 ) k ⋅ j | j ⟩ {\displaystyle H^{\otimes n}|k\rangle ={\frac {1}{\sqrt {2^{n}}}}\sum _{j=0}^{2^{n}-1}(-1)^{k\cdot j}|j\rangle } where k ⋅ j = k 1 j 1 ⊕ … ⊕ k n j n {\displaystyle k\cdot j=k_{1}j_{1}\oplus \ldots \oplus k_{n}j_{n}} , where ⊕ {\displaystyle \oplus } denotes XOR. First, the algorithm starts with two registers, initialized to | 0 ⟩ ⊗ n | 0 ⟩ ⊗ n {\displaystyle |0\rangle ^{\otimes n}|0\rangle ^{\otimes n}} . Then, we apply the Hadamard transform to the first register, which gives the state 1 2 n ∑ k = 0 2 n − 1 | k ⟩ | 0 ⟩ ⊗ n . {\displaystyle {\frac {1}{\sqrt {2^{n}}}}\sum _{k=0}^{2^{n}-1}|k\rangle |0\rangle ^{\otimes n}.} Query the oracle U f {\displaystyle U_{f}} to get the state 1 2 n ∑ k = 0 2 n − 1 | k ⟩ | f ( k ) ⟩ {\displaystyle {\frac {1}{\sqrt {2^{n}}}}\sum _{k=0}^{2^{n}-1}|k\rangle |f(k)\rangle } . Apply another Hadamard transform to the first register. This will produce the state 1 2 n ∑ k = 0 2 n − 1 [ 1 2 n ∑ j = 0 2 n − 1 ( − 1 ) j ⋅ k | j ⟩ ] | f ( k ) ⟩ = ∑ j = 0 2 n − 1 | j ⟩ [ 1 2 n ∑ k = 0 2 n − 1 ( − 1 ) j ⋅ k | f ( k ) ⟩ ] . {\displaystyle {\frac {1}{\sqrt {2^{n}}}}\sum _{k=0}^{2^{n}-1}\left[{\frac {1}{\sqrt {2^{n}}}}\sum _{j=0}^{2^{n}-1}(-1)^{j\cdot k}|j\rangle \right]|f(k)\rangle =\sum _{j=0}^{2^{n}-1}|j\rangle \left[{\frac {1}{2^{n}}}\sum _{k=0}^{2^{n}-1}(-1)^{j\cdot k}|f(k)\rangle \right].} Finally, we measure the first register (the algorithm also works if the second register is measured before the first, but this is unnecessary). The probability of measuring a state | j ⟩ {\displaystyle |j\rangle } is | | 1 2 n ∑ k = 0 2 n − 1 ( − 1 ) j ⋅ k | f ( k ) ⟩ | | 2 {\displaystyle \left|\left|{\frac {1}{2^{n}}}\sum _{k=0}^{2^{n}-1}(-1)^{j\cdot k}|f(k)\rangle \right|\right|^{2}} This is due to the fact that taking the magnitude of this vector and squaring it sums up all the probabilities of all the possible measurements of the second register that must have the first register as | j ⟩ {\displaystyle |j\rangle } . There are two cases for our measurement: s = 0 n {\displaystyle s=0^{n}} and f {\displaystyle f} is one-to-one. s ≠ 0 n {\displaystyle s\neq 0^{n}} and f {\displaystyle f} is two-to-one. For the first case, | | 1 2 n ∑ k = 0 2 n − 1 ( − 1 ) j ⋅ k | f ( k ) ⟩ | | 2 = 1 2 n {\displaystyle \left|\left|{\frac {1}{2^{n}}}\sum _{k=0}^{2^{n}-1}(-1)^{j\cdot k}|f(k)\rangle \right|\right|^{2}={\frac {1}{2^{n}}}} since in this case, f {\displaystyle f} is one-to-one, implying that the range of f {\displaystyle f} is { 0 , 1 } n {\displaystyle \{0,1\}^{n}} , meaning that the summation is over every basis vector. For the second case, note that there exist two strings, x 1 {\displaystyle x_{1}} and x 2 {\displaystyle x_{2}} , such that f ( x 1 ) = f ( x 2 ) = z {\displaystyle f(x_{1})=f(x_{2})=z} , where z ∈ r a n g e ( f ) {\displaystyle z\in \mathrm {range} (f)} . 
Thus, | | 1 2 n ∑ k = 0 2 n − 1 ( − 1 ) j ⋅ k | f ( k ) ⟩ | | 2 = | | 1 2 n ∑ z ∈ r a n g e ( f ) ( ( − 1 ) j ⋅ x 1 + ( − 1 ) j ⋅ x 2 ) | z ⟩ | | 2 {\displaystyle \left|\left|{\frac {1}{2^{n}}}\sum _{k=0}^{2^{n}-1}(-1)^{j\cdot k}|f(k)\rangle \right|\right|^{2}=\left|\left|{\frac {1}{2^{n}}}\sum _{z\,\in \,\mathrm {range} (f)}((-1)^{j\cdot x_{1}}+(-1)^{j\cdot x_{2}})|z\rangle \right|\right|^{2}} Furthermore, since x 1 ⊕ x 2 = s {\displaystyle x_{1}\oplus x_{2}=s} , x 2 = x 1 ⊕ s {\displaystyle x_{2}=x_{1}\oplus s} , and so | | 1 2 n ∑ z ∈ r a n g e ( f ) ( ( − 1 ) j ⋅ x 1 + ( − 1 ) j ⋅ x 2 ) | z ⟩ | | 2 = | | 1 2 n ∑ z ∈ r a n g e ( f ) ( ( − 1 ) j ⋅ x 1 + ( − 1 ) j ⋅ ( x 1 ⊕ s ) ) | z ⟩ | | 2 = | | 1 2 n ∑ z ∈ r a n g e ( f ) ( ( − 1 ) j ⋅ x 1 + ( − 1 ) j ⋅ x 1 ⊕ j ⋅ s ) | z ⟩ | | 2 = | | 1 2 n ∑ z ∈ r a n g e ( f ) ( − 1 ) j ⋅ x 1 ( 1 + ( − 1 ) j ⋅ s ) | z ⟩ | | 2 {\displaystyle {\begin{aligned}\left|\left|{\frac {1}{2^{n}}}\sum _{z\,\in \,\mathrm {range} (f)}((-1)^{j\cdot x_{1}}+(-1)^{j\cdot x_{2}})|z\rangle \right|\right|^{2}&=\left|\left|{\frac {1}{2^{n}}}\sum _{z\,\in \,\mathrm {range} (f)}((-1)^{j\cdot x_{1}}+(-1)^{j\cdot (x_{1}\oplus s)})|z\rangle \right|\right|^{2}\\&=\left|\left|{\frac {1}{2^{n}}}\sum _{z\,\in \,\mathrm {range} (f)}((-1)^{j\cdot x_{1}}+(-1)^{j\cdot x_{1}\oplus j\cdot s})|z\rangle \right|\right|^{2}\\&=\left|\left|{\frac {1}{2^{n}}}\sum _{z\,\in \,\mathrm {range} (f)}(-1)^{j\cdot x_{1}}(1+(-1)^{j\cdot s})|z\rangle \right|\right|^{2}\end{aligned}}} This expression is now easy to evaluate. Recall that we are measuring j {\displaystyle j} . When j ⋅ s = 1 {\displaystyle j\cdot s=1} , then this expression will evaluate to 0 {\displaystyle 0} , and when j ⋅ s = 0 {\displaystyle j\cdot s=0} , then this expression will be 2 − n + 1 {\displaystyle 2^{-n+1}} . Thus, both when s = 0 n {\displaystyle s=0^{n}} and when s ≠ 0 n {\displaystyle s\neq 0^{n}} , our measured j {\displaystyle j} satisfies j ⋅ s = 0 {\displaystyle j\cdot s=0} . === Classical post-processing === We run the quantum part of the algorithm until we have a linearly independent list of bitstrings y 1 , … , y n − 1 {\displaystyle y_{1},\ldots ,y_{n-1}} , and each y k {\displaystyle y_{k}} satisfies y k ⋅ s = 0 {\displaystyle y_{k}\cdot s=0} . Thus, we can efficiently solve this system of equations classically to find s {\displaystyle s} . The probability that y 1 , y 2 , … , y n − 1 {\displaystyle y_{1},y_{2},\dots ,y_{n-1}} are linearly independent is at least ∏ k = 1 ∞ ( 1 − 1 2 k ) = 0.288788 … {\displaystyle \prod _{k=1}^{\infty }\left(1-{\frac {1}{2^{k}}}\right)=0.288788\dots } Once we solve the system of equations, and produce a solution s ′ {\displaystyle s'} , we can test if f ( 0 n ) = f ( s ′ ) {\displaystyle f(0^{n})=f(s')} . If this is true, then we know s ′ = s {\displaystyle s'=s} , since f ( 0 n ) = f ( 0 n ⊕ s ) = f ( s ) {\displaystyle f(0^{n})=f(0^{n}\oplus s)=f(s)} . If it is the case that f ( 0 n ) ≠ f ( s ′ ) {\displaystyle f(0^{n})\neq f(s')} , then that means that s = 0 n {\displaystyle s=0^{n}} , and f ( 0 n ) ≠ f ( s ′ ) {\displaystyle f(0^{n})\neq f(s')} since f {\displaystyle f} is one-to-one. We can repeat Simon's algorithm a constant number of times to increase the probability of success arbitrarily, while still having the same time complexity. == Explicit examples of Simon's algorithm for few qubits == === One qubit === Consider the simplest instance of the algorithm, with n = 1 {\displaystyle n=1} . 
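For small n the whole pipeline can be simulated classically. The sketch below is a plain state-vector simulation, purely illustrative (it is obviously not how the oracle would be realized on hardware): it prepares the state (1/√2^n) Σ_k |k⟩|f(k)⟩ for a two-to-one f with s = 110, applies the Hadamard transform to the first register, samples j, checks that every sampled j satisfies j · s = 0, and then recovers s from the sampled equations; for this small n the final linear-algebra step is done by brute force over GF(2) rather than by Gaussian elimination.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n, s = 3, 0b110

# A two-to-one oracle with hidden string s: each pair {x, x ^ s} gets its own output label.
f, nxt = {}, 0
for x in range(2 ** n):
    if x not in f:
        f[x] = f[x ^ s] = nxt
        nxt += 1

def sample_j():
    """One simulated run of the quantum subroutine; returns the measured j."""
    dim = 2 ** n
    # State after the oracle, stored as a (first register) x (second register) amplitude array.
    psi = np.zeros((dim, dim))
    for k in range(dim):
        psi[k, f[k]] = 1.0 / np.sqrt(dim)
    # Hadamard transform on the first register.
    H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    H = H1
    for _ in range(n - 1):
        H = np.kron(H, H1)
    psi = H @ psi
    probs = (psi ** 2).sum(axis=1)        # marginal distribution of the first register
    return rng.choice(dim, p=probs)

samples = [sample_j() for _ in range(20)]
assert all(bin(j & s).count("1") % 2 == 0 for j in samples)   # every sampled j satisfies j . s = 0

# Classical post-processing: find the nonzero vectors orthogonal (mod 2) to all sampled j's.
rows = [[(j >> (n - 1 - b)) & 1 for b in range(n)] for j in samples]
candidates = [v for v in product([0, 1], repeat=n)
              if any(v) and all(sum(r[i] * v[i] for i in range(n)) % 2 == 0 for r in rows)]
print(candidates)   # expected, with overwhelming probability: [(1, 1, 0)], i.e. s = 110
```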
In this case evolving the input state through an Hadamard gate and the oracle results in the state (up to renormalization): | 0 ⟩ | f ( 0 ) ⟩ + | 1 ⟩ | f ( 1 ) ⟩ . {\displaystyle |0\rangle |f(0)\rangle +|1\rangle |f(1)\rangle .} If s = 1 {\displaystyle s=1} , that is, f ( 0 ) = f ( 1 ) {\displaystyle f(0)=f(1)} , then measuring the second register always gives the outcome | f ( 0 ) ⟩ {\displaystyle |f(0)\rangle } , and always results in the first register collapsing to the state (up to renormalization): | 0 ⟩ + | 1 ⟩ . {\displaystyle |0\rangle +|1\rangle .} Thus applying an Hadamard and measuring the first register always gives the outcome | 0 ⟩ {\displaystyle |0\rangle } . On the other hand, if f {\displaystyle f} is one-to-one, that is, s = 0 {\displaystyle s=0} , then measuring the first register after the second Hadamard can result in both | 0 ⟩ {\displaystyle |0\rangle } and | 1 ⟩ {\displaystyle |1\rangle } , with equal probability. We recover s {\displaystyle s} from the measurement outcomes by looking at whether we measured always | 0 ⟩ {\displaystyle |0\rangle } , in which case s = 1 {\displaystyle s=1} , or we measured both | 0 ⟩ {\displaystyle |0\rangle } and | 1 ⟩ {\displaystyle |1\rangle } with equal probability, in which case we infer that s = 0 {\displaystyle s=0} . This scheme will fail if s = 0 {\displaystyle s=0} but we nonetheless always found the outcome | 0 ⟩ {\displaystyle |0\rangle } , but the probability of this event is 2 − N {\displaystyle 2^{-N}} with N {\displaystyle N} the number of performed measurements, and can thus be made exponentially small by increasing the statistics. === Two qubits === Consider now the case with n = 2 {\displaystyle n=2} . The initial part of the algorithm results in the state (up to renormalization): | 00 ⟩ | f ( 00 ) ⟩ + | 01 ⟩ | f ( 01 ) ⟩ + | 10 ⟩ | f ( 10 ) ⟩ + | 11 ⟩ | f ( 11 ) ⟩ . {\displaystyle |00\rangle |f(00)\rangle +|01\rangle |f(01)\rangle +|10\rangle |f(10)\rangle +|11\rangle |f(11)\rangle .} If s = ( 00 ) {\displaystyle s=(00)} , meaning f {\displaystyle f} is injective, then finding | f ( x ) ⟩ {\displaystyle |f(x)\rangle } on the second register always collapses the first register to | x ⟩ {\displaystyle |x\rangle } , for all x ∈ { 0 , 1 } 2 {\displaystyle x\in \{0,1\}^{2}} . In other words, applying Hadamard gates and measuring the first register the four outcomes 00 , 01 , 10 , 11 {\displaystyle 00,01,10,11} are thus found with equal probability. Suppose on the other hand s ≠ ( 00 ) {\displaystyle s\neq (00)} , for example, s = ( 01 ) {\displaystyle s=(01)} . Then measuring | f ( 00 ) ⟩ {\displaystyle |f(00)\rangle } on the second register collapses the first register to the state | 00 ⟩ + | 10 ⟩ {\displaystyle |00\rangle +|10\rangle } . And more generally, measuring | f ( x y ) ⟩ {\displaystyle |f(xy)\rangle } gives | x , y ⟩ + | x , y ⊕ 1 ⟩ = | x ⟩ ( | 0 ⟩ + | 1 ⟩ ) {\displaystyle |x,y\rangle +|x,y\oplus 1\rangle =|x\rangle (|0\rangle +|1\rangle )} on the first register. Applying Hadamard gates and measuring on the first register can thus result in the outcomes 00 {\displaystyle 00} and 10 {\displaystyle 10} with equal probabilities. Similar reasoning applies to the other cases: if s = ( 10 ) {\displaystyle s=(10)} then the possible outcomes are 00 {\displaystyle 00} and 01 {\displaystyle 01} , while if s = ( 11 ) {\displaystyle s=(11)} the possible outcomes are 00 {\displaystyle 00} and 11 {\displaystyle 11} , compatibly with the j ⋅ s = 0 {\displaystyle j\cdot s=0} rule discussed in the general case. 
To recover s {\displaystyle s} we thus only need to distinguish between these four cases, collecting enough statistics to ensure that the probability of mistaking one outcome probability distribution for another is sufficiently small. == Complexity == Simon's algorithm requires O ( n ) {\displaystyle O(n)} queries to the black box, whereas a classical algorithm would need at least Ω ( 2 n / 2 ) {\displaystyle \Omega (2^{n/2})} queries. It is also known that Simon's algorithm is optimal in the sense that any quantum algorithm to solve this problem requires Ω ( n ) {\displaystyle \Omega (n)} queries. == Simon's algorithm Qiskit implementation == A simple example shows how Simon's algorithm can be implemented in Python using Qiskit, an open-source quantum computing software development framework by IBM. == See also == Deutsch–Jozsa algorithm Shor's algorithm Bernstein–Vazirani algorithm == References ==
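The classical post-processing step described above amounts to solving a homogeneous linear system over GF(2). Below is a minimal Python sketch of that step only; it is not the published Qiskit example referred to in this article, and the function name, the bitstring representation, and the toy measurement list are illustrative assumptions.

```python
# Minimal sketch of Simon's classical post-processing: given linearly independent
# bitstrings y with y . s = 0 (mod 2), recover the nonzero candidate s by
# Gauss-Jordan elimination over GF(2).
def solve_simon(measurements, n):
    rows = [list(m) for m in measurements]
    pivot_cols, r = [], 0
    for col in range(n):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] == 1), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):            # eliminate this column from all other rows
            if i != r and rows[i][col] == 1:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        pivot_cols.append(col)
        r += 1
    free_cols = [c for c in range(n) if c not in pivot_cols]
    s = [0] * n
    if free_cols:                             # set one free variable to 1,
        f = free_cols[0]                      # then back-substitute mod 2
        s[f] = 1
        for row, c in zip(rows, pivot_cols):
            s[c] = row[f]
    return s

# Toy run for the two-qubit example above: the outcomes 00 and 10 give the single
# nontrivial constraint y = (1, 0), whose null space is spanned by s = (0, 1).
print(solve_simon([[1, 0]], 2))   # [0, 1]
```

In a complete run one would gather n − 1 linearly independent outcomes from the quantum subroutine, recover the candidate s′ in this way, and then compare f(0^n) with f(s′) as described in the classical post-processing section.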
Wikipedia/Simon's_algorithm
Higher-spin theory or higher-spin gravity is a common name for field theories that contain massless fields of spin greater than two. Usually, the spectrum of such theories contains the graviton as a massless spin-two field, which explains the second name. Massless fields are gauge fields and the theories should be (almost) completely fixed by these higher-spin symmetries. Higher-spin theories are supposed to be consistent quantum theories and, for this reason, to give examples of quantum gravity. Most of the interest in the topic is due to the AdS/CFT correspondence where there is a number of conjectures relating higher-spin theories to weakly coupled conformal field theories. It is important to note that only certain parts of these theories are known at present (in particular, standard action principles are not known) and not many examples have been worked out in detail except some specific toy models (such as the higher-spin extension of pure Chern–Simons, Jackiw–Teitelboim, selfdual (chiral) and Weyl gravity theories). == Free higher-spin fields == Systematic study of massless arbitrary spin fields was initiated by Christian Fronsdal. A free spin-s field can be represented by a tensor gauge field. δ Φ μ 1 μ 2 . . . μ s = ∂ μ 1 ξ μ 2 . . . μ s + permutations {\displaystyle \delta \Phi _{\mu _{1}\mu _{2}...\mu _{s}}=\partial _{\mu _{1}}\xi _{\mu _{2}...\mu _{s}}+{\text{permutations}}} This (linearised) gauge symmetry generalises that of massless spin-one (photon) δ A μ = ∂ μ ξ {\displaystyle \delta A_{\mu }=\partial _{\mu }\xi } and that of massless spin-two (graviton) δ h μ ν = ∂ μ ξ ν + ∂ ν ξ μ {\displaystyle \delta h_{\mu \nu }=\partial _{\mu }\xi _{\nu }+\partial _{\nu }\xi _{\mu }} . Fronsdal also found linear equations of motion and a quadratic action that is invariant under the symmetries above. For example, the equations are ◻ Φ μ 1 μ 2 . . . μ s − ( ∂ μ 1 ∂ ν Φ ν μ 2 . . . μ s + permutations ) + 1 2 ( ∂ μ 1 ∂ μ 2 Φ ν ν μ 3 . . . μ s + permutations ) = 0 {\displaystyle \square \Phi _{\mu _{1}\mu _{2}...\mu _{s}}-\left(\partial _{\mu _{1}}\partial ^{\nu }\Phi _{\nu \mu _{2}...\mu _{s}}+{\text{ permutations}}\right)+{\frac {1}{2}}\left(\partial _{\mu _{1}}\partial _{\mu _{2}}\Phi ^{\nu }{}_{\nu \mu _{3}...\mu _{s}}+{\text{permutations}}\right)=0} where in the first bracket one needs s − 1 {\displaystyle s-1} terms more to make the expression symmetric and in the second bracket one needs s ( s − 1 ) / 2 − 1 {\displaystyle s(s-1)/2-1} permutations. The equations are gauge invariant provided the field is double-traceless Φ ν ν λ λ μ 5 . . . μ s = 0 {\displaystyle \Phi ^{\nu }{}_{\nu }{}^{\lambda }{}_{\lambda \mu _{5}...\mu _{s}}=0} and the gauge parameter is traceless ξ ν ν μ 3 . . . μ s − 1 = 0 {\displaystyle \xi ^{\nu }{}_{\nu \mu _{3}...\mu _{s-1}}=0} . Essentially, the higher spin problem can be stated as a problem to find a nontrivial interacting theory with at least one massless higher-spin field (higher in this context usually means greater than two). A theory for massive arbitrary higher-spin fields is proposed by C. Hagen and L. Singh. This massive theory is important because, according to various conjectures, spontaneously broken gauges of higher-spins may contain an infinite tower of massive higher-spin particles on the top of the massless modes of lower spins s ≤ 2 like graviton similarly as in string theories. The linearized version of the higher-spin supergravity gives rise to dual graviton field in first order form. 
Interestingly, the Curtright field of such a dual gravity model is of a mixed symmetry, hence the dual gravity theory can also be massive. Also, the chiral and nonchiral actions can be obtained from the manifestly covariant Curtright action. == No-go theorems == Possible interactions of massless higher spin particles with themselves and with low spin particles are (over)constrained by the basic principles of quantum field theory like Lorentz invariance. Many results in the form of no-go theorems have been obtained to date. === Flat space === Most of the no-go theorems constrain interactions in flat space. One of the best known is the Weinberg low-energy theorem, which explains why there are no macroscopic fields corresponding to particles of spin 3 or higher. The Weinberg theorem can be interpreted in the following way: Lorentz invariance of the S-matrix is equivalent, for massless particles, to decoupling of longitudinal states. The latter is equivalent to gauge invariance under the linearised gauge symmetries above. These symmetries lead, for s > 2 {\displaystyle s>2} , to 'too many' conservation laws that trivialise scattering so that S = 1 {\displaystyle S=1} . Another well-known result is the Coleman–Mandula theorem, which states that, under certain assumptions, any symmetry group of the S-matrix is necessarily locally isomorphic to the direct product of an internal symmetry group and the Poincaré group. This means that there cannot be any symmetry generators transforming as tensors of the Lorentz group – the S-matrix cannot have symmetries that would be associated with higher spin charges. Massless higher spin particles also cannot consistently couple to nontrivial gravitational backgrounds. An attempt to simply replace partial derivatives with the covariant ones turns out to be inconsistent with gauge invariance. Nevertheless, a consistent gravitational coupling does exist in the light-cone gauge (to the lowest order). Other no-go results include a direct analysis of possible interactions and show, for example, that the gauge symmetries cannot be deformed in a consistent way so that they form an algebra. === Anti-de Sitter space === In anti-de Sitter space some of the flat space no-go results are still valid and some get slightly modified. In particular, it was shown by Fradkin and Vasiliev that one can consistently couple massless higher-spin fields to gravity at the first non-trivial order. The same result in flat space was obtained by Bengtsson, Bengtsson and Linden in the light-cone gauge the same year. The difference between the flat space result and the AdS one is that, unlike in the AdS case, the gravitational coupling of massless higher-spin fields cannot be written in a manifestly covariant form in flat space. An AdS analog of the Coleman–Mandula theorem was obtained by Maldacena and Zhiboedov. The AdS/CFT correspondence replaces the flat space S-matrix with holographic correlation functions. It can then be shown that the asymptotic higher-spin symmetry in anti-de Sitter space implies that the holographic correlation functions are those of the singlet sector of a free vector model conformal field theory (see also higher-spin AdS/CFT correspondence below). Note, however, that the n-point correlation functions do not all vanish, so this statement is not exactly the analogue of the triviality of the S-matrix. An important difference from the flat space results, e.g. 
Coleman–Mandula and Weinberg theorems, is that one can break higher-spin symmetry in a controllable way, which is called slightly broken higher-spin symmetry. In the latter case the holographic S-matrix corresponds to highly nontrivial Chern–Simons matter theories rather than to a free CFT. As in the flat space case, other no-go results include a direct analysis of possible interactions. Starting from the quartic order a generic higher-spin gravity (defined to be the dual of the free vector model, see also higher-spin AdS/CFT correspondence below) is plagued by non-localities, which is the same problem as in flat space. == Various approaches to higher-spin theories == The existence of many higher-spin theories is well-justified on the basis of AdS/correspondence, but none of these hypothetical theories is known in full detail. Most of the common approaches to the higher-spin problem are described below. === Chiral higher-spin gravity === Generic theories with massless higher-spin fields are obstructed by non-localities, see No-go theorems. Chiral higher-spin gravity is a unique higher-spin theory with propagating massless fields that is not plagued by non-localities. It is the smallest nontrivial extension of the graviton with massless higher-spin fields in four dimensions. It has a simple action in the light-cone gauge: S = ∫ d 4 x [ ∑ λ ≥ 0 Φ − λ ◻ Φ λ + ∑ λ 1 , 2 , 3 g l p λ 1 + λ 2 + λ 3 − 1 Γ ( λ 1 + λ 2 + λ 3 ) V λ 1 , λ 2 , λ 3 Φ λ 1 Φ λ 2 Φ λ 3 ] {\displaystyle {\mathcal {S}}=\int \mathrm {d} ^{4}x\left[\sum _{\lambda \geq 0}\Phi _{-\lambda }\square \Phi _{\lambda }+\sum _{\lambda _{1,2,3}}{\frac {g\,{\mathrm {l_{p}} }^{\lambda _{1}+\lambda _{2}+\lambda _{3}-1}}{\Gamma (\lambda _{1}+\lambda _{2}+\lambda _{3})}}V_{\lambda _{1},\lambda _{2},\lambda _{3}}\Phi _{\lambda _{1}}\Phi _{\lambda _{2}}\Phi _{\lambda _{3}}\right]} where Φ λ ( x ) {\displaystyle \Phi _{\lambda }(x)} represents two helicity eigen-states λ = ± s {\displaystyle \lambda =\pm s} of a massless spin- s {\displaystyle s} field in four dimensions (for low spins one finds Φ 0 {\displaystyle \Phi _{0}} representing a scalar field, where light-cone gauge makes no difference; one finds Φ ± 1 {\displaystyle \Phi _{\pm 1}} for photons and Φ ± 2 {\displaystyle \Phi _{\pm 2}} for gravitons). The action has two coupling constants: a dimensionless g {\displaystyle g} and a dimensionful l p {\displaystyle \mathrm {l} _{p}} which can be associated with the Planck length. Given three helicities λ 1 , 2 , 3 {\displaystyle \lambda _{1,2,3}} fixed there is a unique cubic interaction V λ 1 , λ 2 , λ 3 {\displaystyle V_{\lambda _{1},\lambda _{2},\lambda _{3}}} , which in the spinor-helicity base can be represented as [ 12 ] λ 1 + λ 2 − λ 3 [ 23 ] λ 2 + λ 3 − λ 1 [ 13 ] λ 1 + λ 3 − λ 2 {\displaystyle [12]^{\lambda _{1}+\lambda _{2}-\lambda _{3}}[23]^{\lambda _{2}+\lambda _{3}-\lambda _{1}}[13]^{\lambda _{1}+\lambda _{3}-\lambda _{2}}} for positive λ 1 + λ 2 + λ 3 {\displaystyle \lambda _{1}+\lambda _{2}+\lambda _{3}} . The main feature of chiral theory is the dependence of couplings on the helicities Γ ( λ 1 + λ 2 + λ 3 ) − 1 {\displaystyle \Gamma (\lambda _{1}+\lambda _{2}+\lambda _{3})^{-1}} , which forces the sum λ 1 + λ 2 + λ 3 {\displaystyle \lambda _{1}+\lambda _{2}+\lambda _{3}} to be positive (there exists an anti-chiral theory where the sum is negative). The theory is one-loop finite and its one-loop amplitudes are related to those of self-dual Yang-Mills theory. 
The theory can be thought of as a higher-spin extension of self-dual Yang–Mills theory. Chiral theory admits an extension to anti-de Sitter space, where it is a unique perturbatively local higher-spin theory with propagating massless higher-spin fields. === Conformal higher-spin gravity === Usual massless higher-spin symmetries generalise the action of the linearised diffeomorphisms from the metric tensor to higher-spin fields. In the context of gravity one may also be interested in conformal gravity that enlarges diffeomorphisms with Weyl transformations g μ ν → Ω 2 ( x ) g μ ν {\displaystyle g_{\mu \nu }\rightarrow \Omega ^{2}(x)g_{\mu \nu }} where Ω ( x ) {\displaystyle \Omega (x)} is an arbitrary function. The simplest example of a conformal gravity is in four dimensions S = ∫ d 4 x − g C μ ν λ ρ C μ ν λ ρ {\displaystyle {\mathcal {S}}=\int \mathrm {d} ^{4}x{\sqrt {-g}}C_{\mu \nu \lambda \rho }C^{\mu \nu \lambda \rho }} One can try to generalise this idea to higher-spin fields by postulating the linearised gauge transformations of the form δ Φ μ 1 μ 2 . . . μ s = ∂ μ 1 ξ μ 2 . . . μ s + g μ 1 μ 2 ζ μ 3 . . . μ s + permutations {\displaystyle \delta \Phi _{\mu _{1}\mu _{2}...\mu _{s}}=\partial _{\mu _{1}}\xi _{\mu _{2}...\mu _{s}}+g_{\mu _{1}\mu _{2}}\zeta _{\mu _{3}...\mu _{s}}+{\text{permutations}}} where ζ μ 1 . . . μ s − 2 {\displaystyle \zeta _{\mu _{1}...\mu _{s-2}}} is a higher-spin generalisation of the Weyl symmetry. As different from massless higher-spin fields, conformal higher-spin fields are much more tractable: they can propagate on nontrivial gravitational background and admit interactions in flat space. In particular, the action of conformal higher-spin theories is known to some extent – it can be obtained as an effective action for a free conformal field theory coupled to the conformal higher-spin background. === Collective dipole === The idea is conceptually similar to the reconstruction approach just described, but performs a complete reconstruction in some sense. One begins with the free O ( N ) {\displaystyle O(N)} model partition function and performs a change of variables by passing from the O ( N ) {\displaystyle O(N)} scalar fields ϕ i ( x ) {\displaystyle \phi ^{i}(x)} , i = 1 , . . . , N {\displaystyle i=1,...,N} to a new bi-local variable Ψ ( x , y ) = ∑ i ϕ i ( x ) ϕ i ( y ) {\displaystyle \Psi (x,y)=\sum _{i}\phi ^{i}(x)\phi ^{i}(y)} . In the limit of large N {\displaystyle N} this change of variables is well-defined, but has a nontrivial Jacobian. The same partition function can then be rewritten as a path integral over bi-local Ψ ( x , y ) {\displaystyle \Psi (x,y)} . It can also be shown that in the free approximation the bi-local variables describe free massless fields of all spins s = 0 , 1 , 2 , 3 , . . . . {\displaystyle s=0,1,2,3,....} in anti-de Sitter space. Therefore, the action in term of the bi-local Ψ ( x , y ) {\displaystyle \Psi (x,y)} is a candidate for the action of a higher-spin theory === Holographic RG flow === The idea is that the equations of the exact renormalization group can be reinterpreted as equations of motions with the RG energy scale playing the role of the radial coordinate in anti-de Sitter space. This idea can be applied to the conjectural duals of higher-spin theories, for example, to the free O ( N ) {\displaystyle O(N)} model. === Noether procedure === Noether procedure is a canonical perturbative method to introduce interactions. 
One begins with a sum of free (quadratic) actions S 2 {\displaystyle S_{2}} and linearised gauge symmetries δ 0 {\displaystyle \delta _{0}} , which are given by Fronsdal Lagrangian and by the gauge transformations above. The idea is to add all possible corrections that are cubic in the fields S 3 {\displaystyle S_{3}} and, at the same time, allow for field-dependent deformations δ 1 {\displaystyle \delta _{1}} of the gauge transformations. One then requires the full action to be gauge invariant 0 = δ S = δ 0 S 2 + δ 0 S 3 + δ 1 S 2 + . . . {\displaystyle 0=\delta S=\delta _{0}S_{2}+\delta _{0}S_{3}+\delta _{1}S_{2}+...} and solves this constraint at the first nontrivial order in the weak-field expansion (note that δ 0 S 2 = 0 {\displaystyle \delta _{0}S_{2}=0} because the free action is gauge invariant). Therefore, the first condition is δ 0 S 3 + δ 1 S 2 = 0 {\displaystyle \delta _{0}S_{3}+\delta _{1}S_{2}=0} . One has to mod out by the trivial solutions that result from nonlinear field redefinitions in the free action. The deformation procedure may not stop at this order and one may have to add quartic terms S 4 {\displaystyle S_{4}} and further corrections δ 2 {\displaystyle \delta _{2}} to the gauge transformations that are quadratic in the fields and so on. The systematic approach is via BV-BRST techniques. Unfortunately, the Noether procedure approach has not given yet any complete example of a higher-spin theory, the difficulties being not only in the technicalities but also in the conceptual understanding of locality in higher-spin theories. Unless locality is imposed one can always find a solution to the Noether procedure (for example, by inverting the kinetic operator in δ 0 S 3 + δ 1 S 2 = 0 {\displaystyle \delta _{0}S_{3}+\delta _{1}S_{2}=0} that results from the second term) or, the same time, by performing a suitable nonlocal redefinition one can remove any interaction. At present, it seems that higher-spin theories cannot be fully understood as field theories due to quite non-local interactions they have. === Reconstruction === The higher-spin AdS/CFT correspondence can be used in the reverse order – one can attempt to build the interaction vertices of the higher-spin theory in such a way that they reproduce the correlation functions of a given conjectural CFT dual. This approach takes advantage of the fact that the kinematics of AdS theories is, to some extent, equivalent to the kinematics of conformal field theories in one dimension lower – one has exactly the same number of independent structures on both sides. In particular, the cubic part of the action of the Type-A higher-spin theory was found by inverting the three-point functions of the higher-spin currents in the free scalar CFT. Some quartic vertices have been reconstructed too. === Three dimensions and Chern–Simons === In three dimensions neither gravity nor massless higher-spin fields have any propagating degrees of freedom. 
It is known that the Einstein–Hilbert action with negative cosmological constant can be rewritten in the Chern–Simons form for S L ( 2 , R ) ⊕ S L ( 2 , R ) {\displaystyle SL(2,\mathbb {R} )\oplus SL(2,\mathbb {R} )} S = S C S ( A ) − S C S ( A ¯ ) S C S ( A ) = k 4 π ∫ t r ( A ∧ d A + 2 3 A ∧ A ∧ A ) , {\displaystyle S=S_{CS}(A)-S_{CS}({\bar {A}})\qquad \qquad S_{CS}(A)={\frac {k}{4\pi }}\int \mathrm {tr} (A\wedge dA+{\frac {2}{3}}A\wedge A\wedge A)\,,} where there are two independent s l ( 2 , R ) {\displaystyle sl(2,\mathbb {R} )} -connections, A {\displaystyle A} and A ¯ {\displaystyle {\bar {A}}} . Due to isomorphisms s o ( 2 , 2 ) ∼ s l ( 2 , R ) ⊕ s l ( 2 , R ) {\displaystyle so(2,2)\sim sl(2,\mathbb {R} )\oplus sl(2,\mathbb {R} )} and s l ( 2 , R ) ∼ s o ( 2 , 1 ) {\displaystyle sl(2,\mathbb {R} )\sim so(2,1)} the algebra s l ( 2 , R ) {\displaystyle sl(2,\mathbb {R} )} can be understood as the Lorentz algebra in three dimensions. These two connections are related to vielbein e μ a {\displaystyle e_{\mu }^{a}} and spin-connection ω μ a , b {\displaystyle \omega _{\mu }^{a,b}} (Note that in three dimensions, the spin-connection, being anti-symmetric in a , b {\displaystyle a,b} is equivalent to an s o ( 2 , 1 ) {\displaystyle so(2,1)} vector via ω ~ μ a = ϵ a b c ω μ b , c {\displaystyle {\tilde {\omega }}_{\mu }^{a}=\epsilon ^{a}{}_{bc}\omega _{\mu }^{b,c}} , where ϵ a b c {\displaystyle \epsilon ^{abc}} is the totally anti-symmetric Levi-Civita symbol). Higher-spin extensions are straightforward to construct: instead of s l ( 2 , R ) ⊕ s l ( 2 , R ) {\displaystyle sl(2,\mathbb {R} )\oplus sl(2,\mathbb {R} )} connection one can take a connection of g ⊕ g {\displaystyle {\mathfrak {g}}\oplus {\mathfrak {g}}} , where g {\displaystyle {\mathfrak {g}}} is any Lie algebra containing the 'gravitational' s l ( 2 , R ) {\displaystyle sl(2,\mathbb {R} )} subalgebra. Such theories have been extensively studied due their relation to AdS/CFT and W-algebras as asymptotic symmetries. === Vasiliev equations === Vasiliev equations are formally consistent gauge invariant nonlinear equations whose linearization over a specific vacuum solution describes free massless higher-spin fields on anti-de Sitter space. The Vasiliev equations are classical equations and no Lagrangian is known that starts from canonical two-derivative Fronsdal Lagrangian and is completed by interactions terms. There is a number of variations of Vasiliev equations that work in three, four and arbitrary number of space-time dimensions. Vasiliev's equations admit supersymmetric extensions with any number of super-symmetries and allow for Yang–Mills gaugings. Vasiliev's equations are background independent, the simplest exact solution being anti-de Sitter space. However, locality has not been an assumption used in the derivation and, for this reason, some of the results obtained from the equations are inconsistent with higher-spin theories and AdS/CFT duality. Locality issues remain to be clarified. == Higher-spin AdS/CFT correspondence == Higher-spin theories are of interest as models of AdS/CFT correspondence. === Klebanov–Polyakov conjecture === In 2002, Klebanov and Polyakov put forward a conjecture that the free and critical O ( N ) {\displaystyle O(N)} vector models, as conformal field theories in three dimensions, should be dual to a theory in four-dimensional anti-de Sitter space with infinite number of massless higher-spin gauge fields. 
This conjecture was further extended and generalised to Gross–Neveu and super-symmetric models. The most general extension is to a class of Chern–Simons matter theories. The rationale for the conjectures is that there are some conformal field theories that, in addition to the stress-tensor, have an infinite number of conserved tensors ∂ c j c a 2 . . . a s = 0 {\displaystyle \partial ^{c}j_{ca_{2}...a_{s}}=0} , where spin runs over all positive integers s = 1 , 2 , 3 , . . . {\displaystyle s=1,2,3,...} (in the O ( N ) {\displaystyle O(N)} model the spin is even). The stress-tensor corresponds to the s = 2 {\displaystyle s=2} case. By the standard AdS/CFT lore, the fields that are dual to conserved currents have to be gauge fields. For example, the stress-tensor is dual to the spin-two graviton field. A generic example of a conformal field theory with higher-spin currents is any free CFT. For instance, the free O ( N ) {\displaystyle O(N)} model is defined by S = 1 2 ∫ d d x ∂ m ϕ i ∂ m ϕ j δ i j , {\displaystyle S={\frac {1}{2}}\int d^{d}x\,\partial _{m}\phi ^{i}\partial ^{m}\phi ^{j}\delta _{ij},} where i , j = 1 , . . . , N {\displaystyle i,j=1,...,N} . It can be shown that there exist an infinite number of quasi-primary operators j a 1 a 2 . . . a s = ∂ a 1 . . . ∂ a s ϕ i ϕ j δ i j + plus terms with different arrangement of derivatives and minus traces {\displaystyle j_{a_{1}a_{2}...a_{s}}=\partial _{a_{1}}...\partial _{a_{s}}\phi ^{i}\phi ^{j}\delta _{ij}+{\text{plus terms with different arrangement of derivatives and minus traces}}} that are conserved. Under certain assumptions it was shown by Maldacena and Zhiboedov that 3d conformal field theories with higher spin currents are free, which can be extended to any dimension greater than two. Therefore, higher-spin theories are generic duals of free conformal field theories. A theory that is dual to the free scalar CFT is called Type-A in the literature and the theory that is dual to the free fermion CFT is called Type-B. Another example is the critical vector model, which is a theory with action S = ∫ d 3 x 1 2 ∂ m ϕ i ∂ m ϕ j δ i j + λ 4 ( ϕ i ϕ j δ i j ) 2 {\displaystyle S=\int d^{3}x\,{\frac {1}{2}}\partial _{m}\phi ^{i}\partial ^{m}\phi ^{j}\delta _{ij}+{\frac {\lambda }{4}}(\phi ^{i}\phi ^{j}\delta _{ij})^{2}} taken at the fixed point. This theory is interacting and does not have conserved higher-spin currents. However, in the large N limit it can be shown to have 'almost' conserved higher-spin currents and the conservation is broken by 1 / N {\displaystyle 1/N} effects. More generally, free and critical vector models belong to the class of Chern–Simons matter theories that have slightly broken higher-spin symmetry. === Gaberdiel–Gopakumar conjecture === The conjecture put forward by Gaberdiel and Gopakumar is an extension of the Klebanov–Polyakov conjecture to A d S 3 / C F T 2 {\displaystyle AdS_{3}/CFT^{2}} . It states that the W N {\displaystyle W_{N}} minimal models in the large N {\displaystyle N} limit should be dual to theories with massless higher-spin fields and two scalar fields. Massless higher-spin fields do not propagate in three dimensions, but can be described, as is discussed above, by the Chern–Simons action. However, it is not known to extend this action as to include the matter fields required by the duality. == See also == Bargmann–Wigner equations Joos–Weinberg equation == References ==
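As a small consistency check of the gauge invariance stated for the Fronsdal equations in the free higher-spin fields section, the spin-two case can be verified symbolically: for s = 2 the trace conditions on the field and on the gauge parameter are empty, so a pure-gauge configuration ∂μξν + ∂νξμ must solve the free equation identically for arbitrary ξμ. The following SymPy sketch is not part of the article; the metric signature, the symbol names, and the restriction to s = 2 are choices made here.

```python
import sympy as sp

# Coordinates and the (-,+,+,+) Minkowski metric; for this diagonal metric the
# inverse metric has the same components, so eta is also used to raise indices.
coords = sp.symbols('t x y z')
eta = sp.diag(-1, 1, 1, 1)

def d(expr, mu):                       # partial derivative with a lower index
    return sp.diff(expr, coords[mu])

def du(expr, mu):                      # partial derivative with an upper index
    return sum(eta[mu, nu] * sp.diff(expr, coords[nu]) for nu in range(4))

def box(expr):                         # d'Alembertian, d^mu d_mu
    return sum(du(d(expr, mu), mu) for mu in range(4))

# Arbitrary gauge parameter xi_mu and the corresponding pure-gauge spin-2 field
xi = [sp.Function(f'xi{mu}')(*coords) for mu in range(4)]
h = [[d(xi[nu], mu) + d(xi[mu], nu) for nu in range(4)] for mu in range(4)]

div_h = [sum(du(h[lam][nu], lam) for lam in range(4)) for nu in range(4)]
trace_h = sum(eta[lam, rho] * h[lam][rho] for lam in range(4) for rho in range(4))

# Spin-2 Fronsdal tensor; gauge invariance means it vanishes on a pure-gauge field
fronsdal = [[box(h[mu][nu]) - d(div_h[nu], mu) - d(div_h[mu], nu) + d(d(trace_h, nu), mu)
             for nu in range(4)] for mu in range(4)]

print(all(sp.simplify(comp) == 0 for row in fronsdal for comp in row))   # expected: True
```

For higher spins the analogous check goes through only after imposing the traceless gauge parameter and double-traceless field conditions quoted in that section.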
Wikipedia/Higher-spin_theory
The Schrödinger equation is a partial differential equation that governs the wave function of a non-relativistic quantum-mechanical system.: 1–2  Its discovery was a significant landmark in the development of quantum mechanics. It is named after Erwin Schrödinger, an Austrian physicist, who postulated the equation in 1925 and published it in 1926, forming the basis for the work that resulted in his Nobel Prize in Physics in 1933. Conceptually, the Schrödinger equation is the quantum counterpart of Newton's second law in classical mechanics. Given a set of known initial conditions, Newton's second law makes a mathematical prediction as to what path a given physical system will take over time. The Schrödinger equation gives the evolution over time of the wave function, the quantum-mechanical characterization of an isolated physical system. The equation was postulated by Schrödinger based on a postulate of Louis de Broglie that all matter has an associated matter wave. The equation predicted bound states of the atom in agreement with experimental observations.: II:268  The Schrödinger equation is not the only way to study quantum mechanical systems and make predictions. Other formulations of quantum mechanics include matrix mechanics, introduced by Werner Heisenberg, and the path integral formulation, developed chiefly by Richard Feynman. When these approaches are compared, the use of the Schrödinger equation is sometimes called "wave mechanics". The equation given by Schrödinger is nonrelativistic because it contains a first derivative in time and a second derivative in space, and therefore space and time are not on equal footing. Paul Dirac incorporated special relativity and quantum mechanics into a single formulation that simplifies to the Schrödinger equation in the non-relativistic limit. This is the Dirac equation, which contains a single derivative in both space and time. Another partial differential equation, the Klein–Gordon equation, led to a problem with probability density even though it was a relativistic wave equation. The probability density could be negative, which is physically unviable. This was fixed by Dirac by taking the so-called square root of the Klein–Gordon operator and in turn introducing Dirac matrices. In a modern context, the Klein–Gordon equation describes spin-less particles, while the Dirac equation describes spin-1/2 particles. == Definition == === Preliminaries === Introductory courses on physics or chemistry typically introduce the Schrödinger equation in a way that can be appreciated knowing only the concepts and notations of basic calculus, particularly derivatives with respect to space and time. A special case of the Schrödinger equation that admits a statement in those terms is the position-space Schrödinger equation for a single nonrelativistic particle in one dimension: i ℏ ∂ ∂ t Ψ ( x , t ) = [ − ℏ 2 2 m ∂ 2 ∂ x 2 + V ( x , t ) ] Ψ ( x , t ) . {\displaystyle i\hbar {\frac {\partial }{\partial t}}\Psi (x,t)=\left[-{\frac {\hbar ^{2}}{2m}}{\frac {\partial ^{2}}{\partial x^{2}}}+V(x,t)\right]\Psi (x,t).} Here, Ψ ( x , t ) {\displaystyle \Psi (x,t)} is a wave function, a function that assigns a complex number to each point x {\displaystyle x} at each time t {\displaystyle t} . 
The parameter m {\displaystyle m} is the mass of the particle, and V ( x , t ) {\displaystyle V(x,t)} is the potential that represents the environment in which the particle exists.: 74  The constant i {\displaystyle i} is the imaginary unit, and ℏ {\displaystyle \hbar } is the reduced Planck constant, which has units of action (energy multiplied by time).: 10  Broadening beyond this simple case, the mathematical formulation of quantum mechanics developed by Paul Dirac, David Hilbert, John von Neumann, and Hermann Weyl defines the state of a quantum mechanical system to be a vector | ψ ⟩ {\displaystyle |\psi \rangle } belonging to a separable complex Hilbert space H {\displaystyle {\mathcal {H}}} . This vector is postulated to be normalized under the Hilbert space's inner product, that is, in Dirac notation it obeys ⟨ ψ | ψ ⟩ = 1 {\displaystyle \langle \psi |\psi \rangle =1} . The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of square-integrable functions L 2 {\displaystyle L^{2}} , while the Hilbert space for the spin of a single proton is the two-dimensional complex vector space C 2 {\displaystyle \mathbb {C} ^{2}} with the usual inner product.: 322  Physical quantities of interest – position, momentum, energy, spin – are represented by observables, which are self-adjoint operators acting on the Hilbert space. A wave function can be an eigenvector of an observable, in which case it is called an eigenstate, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as a quantum superposition. When an observable is measured, the result will be one of its eigenvalues with probability given by the Born rule: in the simplest case the eigenvalue λ {\displaystyle \lambda } is non-degenerate and the probability is given by | ⟨ λ | ψ ⟩ | 2 {\displaystyle |\langle \lambda |\psi \rangle |^{2}} , where | λ ⟩ {\displaystyle |\lambda \rangle } is its associated eigenvector. More generally, the eigenvalue is degenerate and the probability is given by ⟨ ψ | P λ | ψ ⟩ {\displaystyle \langle \psi |P_{\lambda }|\psi \rangle } , where P λ {\displaystyle P_{\lambda }} is the projector onto its associated eigenspace. A momentum eigenstate would be a perfectly monochromatic wave of infinite extent, which is not square-integrable. Likewise a position eigenstate would be a Dirac delta distribution, not square-integrable and technically not a function at all. Consequently, neither can belong to the particle's Hilbert space. Physicists sometimes regard these eigenstates, composed of elements outside the Hilbert space, as "generalized eigenvectors". These are used for calculational convenience and do not represent physical states.: 100–105  Thus, a position-space wave function Ψ ( x , t ) {\displaystyle \Psi (x,t)} as used above can be written as the inner product of a time-dependent state vector | Ψ ( t ) ⟩ {\displaystyle |\Psi (t)\rangle } with unphysical but convenient "position eigenstates" | x ⟩ {\displaystyle |x\rangle } : Ψ ( x , t ) = ⟨ x | Ψ ( t ) ⟩ . {\displaystyle \Psi (x,t)=\langle x|\Psi (t)\rangle .} === Time-dependent equation === The form of the Schrödinger equation depends on the physical situation. 
The most general form is the time-dependent Schrödinger equation, which gives a description of a system evolving with time:: 143  i ℏ d d t | Ψ ( t ) ⟩ = H ^ | Ψ ( t ) ⟩ {\displaystyle i\hbar {\frac {d}{dt}}\vert \Psi (t)\rangle ={\hat {H}}\vert \Psi (t)\rangle } where t {\displaystyle t} is time, | Ψ ( t ) ⟩ {\displaystyle \vert \Psi (t)\rangle } is the state vector of the quantum system ( Ψ {\displaystyle \Psi } being the Greek letter psi), and H ^ {\displaystyle {\hat {H}}} is an observable, the Hamiltonian operator. The term "Schrödinger equation" can refer to both the general equation and the specific nonrelativistic version. The general equation is indeed quite general, used throughout quantum mechanics, for everything from the Dirac equation to quantum field theory, by plugging in diverse expressions for the Hamiltonian. The specific nonrelativistic version is an approximation that yields accurate results in many situations, but only to a certain extent (see relativistic quantum mechanics and relativistic quantum field theory). To apply the Schrödinger equation, write down the Hamiltonian for the system, accounting for the kinetic and potential energies of the particles constituting the system, then insert it into the Schrödinger equation. The resulting partial differential equation is solved for the wave function, which contains information about the system. In practice, the square of the absolute value of the wave function at each point is taken to define a probability density function.: 78  For example, given a wave function in position space Ψ ( x , t ) {\displaystyle \Psi (x,t)} as above, we have Pr ( x , t ) = | Ψ ( x , t ) | 2 . {\displaystyle \Pr(x,t)=|\Psi (x,t)|^{2}.} === Time-independent equation === The time-dependent Schrödinger equation described above predicts that wave functions can form standing waves, called stationary states. These states are particularly important as their individual study later simplifies the task of solving the time-dependent Schrödinger equation for any state. Stationary states can also be described by a simpler form of the Schrödinger equation, the time-independent Schrödinger equation: H ^ | Ψ ⟩ = E | Ψ ⟩ {\displaystyle {\hat {H}}\vert \Psi \rangle =E\vert \Psi \rangle } where E {\displaystyle E} is the energy of the system.: 134  This is only used when the Hamiltonian itself is not dependent on time explicitly. However, even in this case the total wave function is dependent on time as explained in the section on linearity below. In the language of linear algebra, this equation is an eigenvalue equation. Therefore, the wave function is an eigenfunction of the Hamiltonian operator with corresponding eigenvalue(s) E {\displaystyle E} . == Properties == === Linearity === The Schrödinger equation is a linear differential equation, meaning that if two state vectors | ψ 1 ⟩ {\displaystyle |\psi _{1}\rangle } and | ψ 2 ⟩ {\displaystyle |\psi _{2}\rangle } are solutions, then so is any linear combination | ψ ⟩ = a | ψ 1 ⟩ + b | ψ 2 ⟩ {\displaystyle |\psi \rangle =a|\psi _{1}\rangle +b|\psi _{2}\rangle } of the two state vectors where a and b are any complex numbers.: 25  Moreover, the sum can be extended for any number of state vectors. This property allows superpositions of quantum states to be solutions of the Schrödinger equation. Even more generally, it holds that a general solution to the Schrödinger equation can be found by taking a weighted sum over a basis of states. A choice often employed is the basis of energy eigenstates, which are solutions of the time-independent Schrödinger equation. 
In this basis, a time-dependent state vector | Ψ ( t ) ⟩ {\displaystyle |\Psi (t)\rangle } can be written as the linear combination | Ψ ( t ) ⟩ = ∑ n A n e − i E n t / ℏ | ψ E n ⟩ , {\displaystyle |\Psi (t)\rangle =\sum _{n}A_{n}e^{{-iE_{n}t}/\hbar }|\psi _{E_{n}}\rangle ,} where A n {\displaystyle A_{n}} are complex numbers and the vectors | ψ E n ⟩ {\displaystyle |\psi _{E_{n}}\rangle } are solutions of the time-independent equation H ^ | ψ E n ⟩ = E n | ψ E n ⟩ {\displaystyle {\hat {H}}|\psi _{E_{n}}\rangle =E_{n}|\psi _{E_{n}}\rangle } . === Unitarity === Holding the Hamiltonian H ^ {\displaystyle {\hat {H}}} constant, the Schrödinger equation has the solution | Ψ ( t ) ⟩ = e − i H ^ t / ℏ | Ψ ( 0 ) ⟩ . {\displaystyle |\Psi (t)\rangle =e^{-i{\hat {H}}t/\hbar }|\Psi (0)\rangle .} The operator U ^ ( t ) = e − i H ^ t / ℏ {\displaystyle {\hat {U}}(t)=e^{-i{\hat {H}}t/\hbar }} is known as the time-evolution operator, and it is unitary: it preserves the inner product between vectors in the Hilbert space. Unitarity is a general feature of time evolution under the Schrödinger equation. If the initial state is | Ψ ( 0 ) ⟩ {\displaystyle |\Psi (0)\rangle } , then the state at a later time t {\displaystyle t} will be given by | Ψ ( t ) ⟩ = U ^ ( t ) | Ψ ( 0 ) ⟩ {\displaystyle |\Psi (t)\rangle ={\hat {U}}(t)|\Psi (0)\rangle } for some unitary operator U ^ ( t ) {\displaystyle {\hat {U}}(t)} . Conversely, suppose that U ^ ( t ) {\displaystyle {\hat {U}}(t)} is a continuous family of unitary operators parameterized by t {\displaystyle t} . Without loss of generality, the parameterization can be chosen so that U ^ ( 0 ) {\displaystyle {\hat {U}}(0)} is the identity operator and that U ^ ( t / N ) N = U ^ ( t ) {\displaystyle {\hat {U}}(t/N)^{N}={\hat {U}}(t)} for any N > 0 {\displaystyle N>0} . Then U ^ ( t ) {\displaystyle {\hat {U}}(t)} depends upon the parameter t {\displaystyle t} in such a way that U ^ ( t ) = e − i G ^ t {\displaystyle {\hat {U}}(t)=e^{-i{\hat {G}}t}} for some self-adjoint operator G ^ {\displaystyle {\hat {G}}} , called the generator of the family U ^ ( t ) {\displaystyle {\hat {U}}(t)} . A Hamiltonian is just such a generator (up to the factor of the Planck constant that would be set to 1 in natural units). To see that the generator is Hermitian, note that with U ^ ( δ t ) ≈ U ^ ( 0 ) − i G ^ δ t {\displaystyle {\hat {U}}(\delta t)\approx {\hat {U}}(0)-i{\hat {G}}\delta t} , we have U ^ ( δ t ) † U ^ ( δ t ) ≈ ( U ^ ( 0 ) † + i G ^ † δ t ) ( U ^ ( 0 ) − i G ^ δ t ) = I + i δ t ( G ^ † − G ^ ) + O ( δ t 2 ) , {\displaystyle {\hat {U}}(\delta t)^{\dagger }{\hat {U}}(\delta t)\approx ({\hat {U}}(0)^{\dagger }+i{\hat {G}}^{\dagger }\delta t)({\hat {U}}(0)-i{\hat {G}}\delta t)=I+i\delta t({\hat {G}}^{\dagger }-{\hat {G}})+O(\delta t^{2}),} so U ^ ( t ) {\displaystyle {\hat {U}}(t)} is unitary only if, to first order, its derivative is Hermitian. === Changes of basis === The Schrödinger equation is often presented using quantities varying as functions of position, but as a vector-operator equation it has a valid representation in any arbitrary complete basis of kets in Hilbert space. As mentioned above, "bases" that lie outside the physical Hilbert space are also employed for calculational purposes. 
This is illustrated by the position-space and momentum-space Schrödinger equations for a nonrelativistic, spinless particle.: 182  The Hilbert space for such a particle is the space of complex square-integrable functions on three-dimensional Euclidean space, and its Hamiltonian is the sum of a kinetic-energy term that is quadratic in the momentum operator and a potential-energy term: i ℏ d d t | Ψ ( t ) ⟩ = ( 1 2 m p ^ 2 + V ^ ) | Ψ ( t ) ⟩ . {\displaystyle i\hbar {\frac {d}{dt}}|\Psi (t)\rangle =\left({\frac {1}{2m}}{\hat {p}}^{2}+{\hat {V}}\right)|\Psi (t)\rangle .} Writing r {\displaystyle \mathbf {r} } for a three-dimensional position vector and p {\displaystyle \mathbf {p} } for a three-dimensional momentum vector, the position-space Schrödinger equation is i ℏ ∂ ∂ t Ψ ( r , t ) = − ℏ 2 2 m ∇ 2 Ψ ( r , t ) + V ( r ) Ψ ( r , t ) . {\displaystyle i\hbar {\frac {\partial }{\partial t}}\Psi (\mathbf {r} ,t)=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\Psi (\mathbf {r} ,t)+V(\mathbf {r} )\Psi (\mathbf {r} ,t).} The momentum-space counterpart involves the Fourier transforms of the wave function and the potential: i ℏ ∂ ∂ t Ψ ~ ( p , t ) = p 2 2 m Ψ ~ ( p , t ) + ( 2 π ℏ ) − 3 / 2 ∫ d 3 p ′ V ~ ( p − p ′ ) Ψ ~ ( p ′ , t ) . {\displaystyle i\hbar {\frac {\partial }{\partial t}}{\tilde {\Psi }}(\mathbf {p} ,t)={\frac {\mathbf {p} ^{2}}{2m}}{\tilde {\Psi }}(\mathbf {p} ,t)+(2\pi \hbar )^{-3/2}\int d^{3}\mathbf {p} '\,{\tilde {V}}(\mathbf {p} -\mathbf {p} '){\tilde {\Psi }}(\mathbf {p} ',t).} The functions Ψ ( r , t ) {\displaystyle \Psi (\mathbf {r} ,t)} and Ψ ~ ( p , t ) {\displaystyle {\tilde {\Psi }}(\mathbf {p} ,t)} are derived from | Ψ ( t ) ⟩ {\displaystyle |\Psi (t)\rangle } by Ψ ( r , t ) = ⟨ r | Ψ ( t ) ⟩ , {\displaystyle \Psi (\mathbf {r} ,t)=\langle \mathbf {r} |\Psi (t)\rangle ,} Ψ ~ ( p , t ) = ⟨ p | Ψ ( t ) ⟩ , {\displaystyle {\tilde {\Psi }}(\mathbf {p} ,t)=\langle \mathbf {p} |\Psi (t)\rangle ,} where | r ⟩ {\displaystyle |\mathbf {r} \rangle } and | p ⟩ {\displaystyle |\mathbf {p} \rangle } do not belong to the Hilbert space itself, but have well-defined inner products with all elements of that space. When restricted from three dimensions to one, the position-space equation is just the first form of the Schrödinger equation given above. The relation between position and momentum in quantum mechanics can be appreciated in a single dimension. In canonical quantization, the classical variables x {\displaystyle x} and p {\displaystyle p} are promoted to self-adjoint operators x ^ {\displaystyle {\hat {x}}} and p ^ {\displaystyle {\hat {p}}} that satisfy the canonical commutation relation [ x ^ , p ^ ] = i ℏ . {\displaystyle [{\hat {x}},{\hat {p}}]=i\hbar .} This implies that: 190  ⟨ x | p ^ | Ψ ⟩ = − i ℏ d d x Ψ ( x ) , {\displaystyle \langle x|{\hat {p}}|\Psi \rangle =-i\hbar {\frac {d}{dx}}\Psi (x),} so the action of the momentum operator p ^ {\displaystyle {\hat {p}}} in the position-space representation is − i ℏ d d x {\textstyle -i\hbar {\frac {d}{dx}}} . Thus, p ^ 2 {\displaystyle {\hat {p}}^{2}} becomes a second derivative, and in three dimensions, the second derivative becomes the Laplacian ∇ 2 {\displaystyle \nabla ^{2}} . The canonical commutation relation also implies that the position and momentum operators are Fourier conjugates of each other. 
Consequently, functions originally defined in terms of their position dependence can be converted to functions of momentum using the Fourier transform.: 103–104  In solid-state physics, the Schrödinger equation is often written for functions of momentum, as Bloch's theorem ensures the periodic crystal lattice potential couples Ψ ~ ( p ) {\displaystyle {\tilde {\Psi }}(p)} with Ψ ~ ( p + ℏ K ) {\displaystyle {\tilde {\Psi }}(p+\hbar K)} for only discrete reciprocal lattice vectors K {\displaystyle K} . This makes it convenient to solve the momentum-space Schrödinger equation at each point in the Brillouin zone independently of the other points in the Brillouin zone.: 138  === Probability current === The Schrödinger equation is consistent with local probability conservation.: 238  It also ensures that a normalized wavefunction remains normalized after time evolution. In matrix mechanics, this means that the time evolution operator is a unitary operator. In contrast to, for example, the Klein Gordon equation, although a redefined inner product of a wavefunction can be time independent, the total volume integral of modulus square of the wavefunction need not be time independent. The continuity equation for probability in non relativistic quantum mechanics is stated as: ∂ ∂ t ρ ( r , t ) + ∇ ⋅ j = 0 , {\displaystyle {\frac {\partial }{\partial t}}\rho \left(\mathbf {r} ,t\right)+\nabla \cdot \mathbf {j} =0,} where j = 1 2 m ( Ψ ∗ p ^ Ψ − Ψ p ^ Ψ ∗ ) = − i ℏ 2 m ( ψ ∗ ∇ ψ − ψ ∇ ψ ∗ ) = ℏ m Im ⁡ ( ψ ∗ ∇ ψ ) {\displaystyle \mathbf {j} ={\frac {1}{2m}}\left(\Psi ^{*}{\hat {\mathbf {p} }}\Psi -\Psi {\hat {\mathbf {p} }}\Psi ^{*}\right)=-{\frac {i\hbar }{2m}}(\psi ^{*}\nabla \psi -\psi \nabla \psi ^{*})={\frac {\hbar }{m}}\operatorname {Im} (\psi ^{*}\nabla \psi )} is the probability current or probability flux (flow per unit area). If the wavefunction is represented as ψ ( x , t ) = ρ ( x , t ) exp ⁡ ( i S ( x , t ) ℏ ) , {\textstyle \psi ({\bf {x}},t)={\sqrt {\rho ({\bf {x}},t)}}\exp \left({\frac {iS({\bf {x}},t)}{\hbar }}\right),} where S ( x , t ) {\displaystyle S(\mathbf {x} ,t)} is a real function which represents the complex phase of the wavefunction, then the probability flux is calculated as: j = ρ ∇ S m {\displaystyle \mathbf {j} ={\frac {\rho \nabla S}{m}}} Hence, the spatial variation of the phase of a wavefunction is said to characterize the probability flux of the wavefunction. Although the ∇ S m {\textstyle {\frac {\nabla S}{m}}} term appears to play the role of velocity, it does not represent velocity at a point since simultaneous measurement of position and velocity violates uncertainty principle. === Separation of variables === If the Hamiltonian is not an explicit function of time, Schrödinger's equation reads: i ℏ ∂ ∂ t Ψ ( r , t ) = [ − ℏ 2 2 m ∇ 2 + V ( r ) ] Ψ ( r , t ) . {\displaystyle i\hbar {\frac {\partial }{\partial t}}\Psi (\mathbf {r} ,t)=\left[-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}+V(\mathbf {r} )\right]\Psi (\mathbf {r} ,t).} The operator on the left side depends only on time; the one on the right side depends only on space. Solving the equation by separation of variables means seeking a solution of the form of a product of spatial and temporal parts Ψ ( r , t ) = ψ ( r ) τ ( t ) , {\displaystyle \Psi (\mathbf {r} ,t)=\psi (\mathbf {r} )\tau (t),} where ψ ( r ) {\displaystyle \psi (\mathbf {r} )} is a function of all the spatial coordinate(s) of the particle(s) constituting the system only, and τ ( t ) {\displaystyle \tau (t)} is a function of time only. 
Substituting this expression for Ψ {\displaystyle \Psi } into the time dependent left hand side shows that τ ( t ) {\displaystyle \tau (t)} is a phase factor: Ψ ( r , t ) = ψ ( r ) e − i E t / ℏ . {\displaystyle \Psi (\mathbf {r} ,t)=\psi (\mathbf {r} )e^{-i{Et/\hbar }}.} A solution of this type is called stationary, since the only time dependence is a phase factor that cancels when the probability density is calculated via the Born rule.: 143ff  The spatial part of the full wave function solves the equation ∇ 2 ψ ( r ) + 2 m ℏ 2 [ E − V ( r ) ] ψ ( r ) = 0 , {\displaystyle \nabla ^{2}\psi (\mathbf {r} )+{\frac {2m}{\hbar ^{2}}}\left[E-V(\mathbf {r} )\right]\psi (\mathbf {r} )=0,} where the energy E {\displaystyle E} appears in the phase factor. This generalizes to any number of particles in any number of dimensions (in a time-independent potential): the standing wave solutions of the time-independent equation are the states with definite energy, instead of a probability distribution of different energies. In physics, these standing waves are called "stationary states" or "energy eigenstates"; in chemistry they are called "atomic orbitals" or "molecular orbitals". Superpositions of energy eigenstates change their properties according to the relative phases between the energy levels. The energy eigenstates form a basis: any wave function may be written as a sum over the discrete energy states or an integral over continuous energy states, or more generally as an integral over a measure. This is an example of the spectral theorem, and in a finite-dimensional state space it is just a statement of the completeness of the eigenvectors of a Hermitian matrix. Separation of variables can also be a useful method for the time-independent Schrödinger equation. For example, depending on the symmetry of the problem, the Cartesian axes might be separated, as in ψ ( r ) = ψ x ( x ) ψ y ( y ) ψ z ( z ) , {\displaystyle \psi (\mathbf {r} )=\psi _{x}(x)\psi _{y}(y)\psi _{z}(z),} or radial and angular coordinates might be separated: ψ ( r ) = ψ r ( r ) ψ θ ( θ ) ψ ϕ ( ϕ ) . {\displaystyle \psi (\mathbf {r} )=\psi _{r}(r)\psi _{\theta }(\theta )\psi _{\phi }(\phi ).} == Examples == === Particle in a box === The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy inside a certain region and infinite potential energy outside.: 77–78  For the one-dimensional case in the x {\displaystyle x} direction, the time-independent Schrödinger equation may be written − ℏ 2 2 m d 2 ψ d x 2 = E ψ . {\displaystyle -{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}\psi }{dx^{2}}}=E\psi .} With the differential operator defined by p ^ x = − i ℏ d d x {\displaystyle {\hat {p}}_{x}=-i\hbar {\frac {d}{dx}}} the previous equation is evocative of the classic kinetic energy analogue, 1 2 m p ^ x 2 = E , {\displaystyle {\frac {1}{2m}}{\hat {p}}_{x}^{2}=E,} with state ψ {\displaystyle \psi } in this case having energy E {\displaystyle E} coincident with the kinetic energy of the particle. The general solutions of the Schrödinger equation for the particle in a box are ψ ( x ) = A e i k x + B e − i k x E = ℏ 2 k 2 2 m {\displaystyle \psi (x)=Ae^{ikx}+Be^{-ikx}\qquad \qquad E={\frac {\hbar ^{2}k^{2}}{2m}}} or, from Euler's formula, ψ ( x ) = C sin ⁡ ( k x ) + D cos ⁡ ( k x ) . 
{\displaystyle \psi (x)=C\sin(kx)+D\cos(kx).} The infinite potential walls of the box determine the values of C , D , {\displaystyle C,D,} and k {\displaystyle k} at x = 0 {\displaystyle x=0} and x = L {\displaystyle x=L} where ψ {\displaystyle \psi } must be zero. Thus, at x = 0 {\displaystyle x=0} , ψ ( 0 ) = 0 = C sin ⁡ ( 0 ) + D cos ⁡ ( 0 ) = D {\displaystyle \psi (0)=0=C\sin(0)+D\cos(0)=D} and D = 0 {\displaystyle D=0} . At x = L {\displaystyle x=L} , ψ ( L ) = 0 = C sin ⁡ ( k L ) , {\displaystyle \psi (L)=0=C\sin(kL),} in which C {\displaystyle C} cannot be zero as this would conflict with the postulate that ψ {\displaystyle \psi } has norm 1. Therefore, since sin ⁡ ( k L ) = 0 {\displaystyle \sin(kL)=0} , k L {\displaystyle kL} must be an integer multiple of π {\displaystyle \pi } , k = n π L n = 1 , 2 , 3 , … . {\displaystyle k={\frac {n\pi }{L}}\qquad \qquad n=1,2,3,\ldots .} This constraint on k {\displaystyle k} implies a constraint on the energy levels, yielding E n = ℏ 2 π 2 n 2 2 m L 2 = n 2 h 2 8 m L 2 . {\displaystyle E_{n}={\frac {\hbar ^{2}\pi ^{2}n^{2}}{2mL^{2}}}={\frac {n^{2}h^{2}}{8mL^{2}}}.} A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well. Another related problem is that of the rectangular potential barrier, which furnishes a model for the quantum tunneling effect that plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy. === Harmonic oscillator === The Schrödinger equation for this situation is E ψ = − ℏ 2 2 m d 2 d x 2 ψ + 1 2 m ω 2 x 2 ψ , {\displaystyle E\psi =-{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}}{dx^{2}}}\psi +{\frac {1}{2}}m\omega ^{2}x^{2}\psi ,} where x {\displaystyle x} is the displacement and ω {\displaystyle \omega } the angular frequency. Furthermore, it can be used to describe approximately a wide variety of other systems, including vibrating atoms, molecules, and atoms or ions in lattices, and approximating other potentials near equilibrium points. It is also the basis of perturbation methods in quantum mechanics. The solutions in position space are ψ n ( x ) = 1 2 n n ! ( m ω π ℏ ) 1 / 4 e − m ω x 2 2 ℏ H n ( m ω ℏ x ) , {\displaystyle \psi _{n}(x)={\sqrt {\frac {1}{2^{n}\,n!}}}\ \left({\frac {m\omega }{\pi \hbar }}\right)^{1/4}\ e^{-{\frac {m\omega x^{2}}{2\hbar }}}\ {\mathcal {H}}_{n}\left({\sqrt {\frac {m\omega }{\hbar }}}x\right),} where n ∈ { 0 , 1 , 2 , … } {\displaystyle n\in \{0,1,2,\ldots \}} , and the functions H n {\displaystyle {\mathcal {H}}_{n}} are the Hermite polynomials of order n {\displaystyle n} . The solution set may be generated by ψ n ( x ) = 1 n ! ( m ω 2 ℏ ) n ( x − ℏ m ω d d x ) n ( m ω π ℏ ) 1 4 e − m ω x 2 2 ℏ . {\displaystyle \psi _{n}(x)={\frac {1}{\sqrt {n!}}}\left({\sqrt {\frac {m\omega }{2\hbar }}}\right)^{n}\left(x-{\frac {\hbar }{m\omega }}{\frac {d}{dx}}\right)^{n}\left({\frac {m\omega }{\pi \hbar }}\right)^{\frac {1}{4}}e^{\frac {-m\omega x^{2}}{2\hbar }}.} The eigenvalues are E n = ( n + 1 2 ) ℏ ω . 
{\displaystyle E_{n}=\left(n+{\frac {1}{2}}\right)\hbar \omega .} The case n = 0 {\displaystyle n=0} is called the ground state, its energy is called the zero-point energy, and the wave function is a Gaussian. The harmonic oscillator, like the particle in a box, illustrates the generic feature of the Schrödinger equation that the energies of bound eigenstates are discretized.: 352  === Hydrogen atom === The Schrödinger equation for the electron in a hydrogen atom (or a hydrogen-like atom) is E ψ = − ℏ 2 2 μ ∇ 2 ψ − q 2 4 π ε 0 r ψ {\displaystyle E\psi =-{\frac {\hbar ^{2}}{2\mu }}\nabla ^{2}\psi -{\frac {q^{2}}{4\pi \varepsilon _{0}r}}\psi } where q {\displaystyle q} is the electron charge, r {\displaystyle \mathbf {r} } is the position of the electron relative to the nucleus, r = | r | {\displaystyle r=|\mathbf {r} |} is the magnitude of the relative position, the potential term is due to the Coulomb interaction, wherein ε 0 {\displaystyle \varepsilon _{0}} is the permittivity of free space and μ = m q m p m q + m p {\displaystyle \mu ={\frac {m_{q}m_{p}}{m_{q}+m_{p}}}} is the 2-body reduced mass of the hydrogen nucleus (just a proton) of mass m p {\displaystyle m_{p}} and the electron of mass m q {\displaystyle m_{q}} . The negative sign arises in the potential term since the proton and electron are oppositely charged. The reduced mass in place of the electron mass is used since the electron and proton together orbit each other about a common center of mass, and constitute a two-body problem to solve. The motion of the electron is of principal interest here, so the equivalent one-body problem is the motion of the electron using the reduced mass. The Schrödinger equation for a hydrogen atom can be solved by separation of variables. In this case, spherical polar coordinates are the most convenient. Thus, ψ ( r , θ , φ ) = R ( r ) Y ℓ m ( θ , φ ) = R ( r ) Θ ( θ ) Φ ( φ ) , {\displaystyle \psi (r,\theta ,\varphi )=R(r)Y_{\ell }^{m}(\theta ,\varphi )=R(r)\Theta (\theta )\Phi (\varphi ),} where R are radial functions and Y l m ( θ , φ ) {\displaystyle Y_{l}^{m}(\theta ,\varphi )} are spherical harmonics of degree ℓ {\displaystyle \ell } and order m {\displaystyle m} . This is the only atom for which the Schrödinger equation has been solved for exactly. Multi-electron atoms require approximate methods. The family of solutions are: ψ n ℓ m ( r , θ , φ ) = ( 2 n a 0 ) 3 ( n − ℓ − 1 ) ! 2 n [ ( n + ℓ ) ! ] e − r / n a 0 ( 2 r n a 0 ) ℓ L n − ℓ − 1 2 ℓ + 1 ( 2 r n a 0 ) ⋅ Y ℓ m ( θ , φ ) {\displaystyle \psi _{n\ell m}(r,\theta ,\varphi )={\sqrt {\left({\frac {2}{na_{0}}}\right)^{3}{\frac {(n-\ell -1)!}{2n[(n+\ell )!]}}}}e^{-r/na_{0}}\left({\frac {2r}{na_{0}}}\right)^{\ell }L_{n-\ell -1}^{2\ell +1}\left({\frac {2r}{na_{0}}}\right)\cdot Y_{\ell }^{m}(\theta ,\varphi )} where a 0 = 4 π ε 0 ℏ 2 m q q 2 {\displaystyle a_{0}={\frac {4\pi \varepsilon _{0}\hbar ^{2}}{m_{q}q^{2}}}} is the Bohr radius, L n − ℓ − 1 2 ℓ + 1 ( ⋯ ) {\displaystyle L_{n-\ell -1}^{2\ell +1}(\cdots )} are the generalized Laguerre polynomials of degree n − ℓ − 1 {\displaystyle n-\ell -1} , n , ℓ , m {\displaystyle n,\ell ,m} are the principal, azimuthal, and magnetic quantum numbers respectively, which take the values n = 1 , 2 , 3 , … , {\displaystyle n=1,2,3,\dots ,} ℓ = 0 , 1 , 2 , … , n − 1 , {\displaystyle \ell =0,1,2,\dots ,n-1,} m = − ℓ , … , ℓ . {\displaystyle m=-\ell ,\dots ,\ell .} === Approximate solutions === It is typically not possible to solve the Schrödinger equation exactly for situations of physical interest. 
Accordingly, approximate solutions are obtained using techniques like variational methods and WKB approximation. It is also common to treat a problem of interest as a small modification to a problem that can be solved exactly, a method known as perturbation theory. == Semiclassical limit == One simple way to compare classical to quantum mechanics is to consider the time-evolution of the expected position and expected momentum, which can then be compared to the time-evolution of the ordinary position and momentum in classical mechanics.: 302  The quantum expectation values satisfy the Ehrenfest theorem. For a one-dimensional quantum particle moving in a potential V {\displaystyle V} , the Ehrenfest theorem says m d d t ⟨ x ⟩ = ⟨ p ⟩ ; d d t ⟨ p ⟩ = − ⟨ V ′ ( X ) ⟩ . {\displaystyle m{\frac {d}{dt}}\langle x\rangle =\langle p\rangle ;\quad {\frac {d}{dt}}\langle p\rangle =-\left\langle V'(X)\right\rangle .} Although the first of these equations is consistent with the classical behavior, the second is not: If the pair ( ⟨ X ⟩ , ⟨ P ⟩ ) {\displaystyle (\langle X\rangle ,\langle P\rangle )} were to satisfy Newton's second law, the right-hand side of the second equation would have to be − V ′ ( ⟨ X ⟩ ) {\displaystyle -V'\left(\left\langle X\right\rangle \right)} which is typically not the same as − ⟨ V ′ ( X ) ⟩ {\displaystyle -\left\langle V'(X)\right\rangle } . For a general V ′ {\displaystyle V'} , therefore, quantum mechanics can lead to predictions where expectation values do not mimic the classical behavior. In the case of the quantum harmonic oscillator, however, V ′ {\displaystyle V'} is linear and this distinction disappears, so that in this very special case, the expected position and expected momentum do exactly follow the classical trajectories. For general systems, the best we can hope for is that the expected position and momentum will approximately follow the classical trajectories. If the wave function is highly concentrated around a point x 0 {\displaystyle x_{0}} , then V ′ ( ⟨ X ⟩ ) {\displaystyle V'\left(\left\langle X\right\rangle \right)} and ⟨ V ′ ( X ) ⟩ {\displaystyle \left\langle V'(X)\right\rangle } will be almost the same, since both will be approximately equal to V ′ ( x 0 ) {\displaystyle V'(x_{0})} . In that case, the expected position and expected momentum will remain very close to the classical trajectories, at least for as long as the wave function remains highly localized in position. The Schrödinger equation in its general form i ℏ ∂ ∂ t Ψ ( r , t ) = H ^ Ψ ( r , t ) {\displaystyle i\hbar {\frac {\partial }{\partial t}}\Psi \left(\mathbf {r} ,t\right)={\hat {H}}\Psi \left(\mathbf {r} ,t\right)} is closely related to the Hamilton–Jacobi equation (HJE) − ∂ ∂ t S ( q i , t ) = H ( q i , ∂ S ∂ q i , t ) {\displaystyle -{\frac {\partial }{\partial t}}S(q_{i},t)=H\left(q_{i},{\frac {\partial S}{\partial q_{i}}},t\right)} where S {\displaystyle S} is the classical action and H {\displaystyle H} is the Hamiltonian function (not operator).: 308  Here the generalized coordinates q i {\displaystyle q_{i}} for i = 1 , 2 , 3 {\displaystyle i=1,2,3} (used in the context of the HJE) can be set to the position in Cartesian coordinates as r = ( q 1 , q 2 , q 3 ) = ( x , y , z ) {\displaystyle \mathbf {r} =(q_{1},q_{2},q_{3})=(x,y,z)} . 
Substituting Ψ = ρ ( r , t ) e i S ( r , t ) / ℏ {\displaystyle \Psi ={\sqrt {\rho (\mathbf {r} ,t)}}e^{iS(\mathbf {r} ,t)/\hbar }} where ρ {\displaystyle \rho } is the probability density, into the Schrödinger equation and then taking the limit ℏ → 0 {\displaystyle \hbar \to 0} in the resulting equation yield the Hamilton–Jacobi equation. == Density matrices == Wave functions are not always the most convenient way to describe quantum systems and their behavior. When the preparation of a system is only imperfectly known, or when the system under investigation is a part of a larger whole, density matrices may be used instead.: 74  A density matrix is a positive semi-definite operator whose trace is equal to 1. (The term "density operator" is also used, particularly when the underlying Hilbert space is infinite-dimensional.) The set of all density matrices is convex, and the extreme points are the operators that project onto vectors in the Hilbert space. These are the density-matrix representations of wave functions; in Dirac notation, they are written ρ ^ = | Ψ ⟩ ⟨ Ψ | . {\displaystyle {\hat {\rho }}=|\Psi \rangle \langle \Psi |.} The density-matrix analogue of the Schrödinger equation for wave functions is i ℏ ∂ ρ ^ ∂ t = [ H ^ , ρ ^ ] , {\displaystyle i\hbar {\frac {\partial {\hat {\rho }}}{\partial t}}=[{\hat {H}},{\hat {\rho }}],} where the brackets denote a commutator. This is variously known as the von Neumann equation, the Liouville–von Neumann equation, or just the Schrödinger equation for density matrices.: 312  If the Hamiltonian is time-independent, this equation can be easily solved to yield ρ ^ ( t ) = e − i H ^ t / ℏ ρ ^ ( 0 ) e i H ^ t / ℏ . {\displaystyle {\hat {\rho }}(t)=e^{-i{\hat {H}}t/\hbar }{\hat {\rho }}(0)e^{i{\hat {H}}t/\hbar }.} More generally, if the unitary operator U ^ ( t ) {\displaystyle {\hat {U}}(t)} describes wave function evolution over some time interval, then the time evolution of a density matrix over that same interval is given by ρ ^ ( t ) = U ^ ( t ) ρ ^ ( 0 ) U ^ ( t ) † . {\displaystyle {\hat {\rho }}(t)={\hat {U}}(t){\hat {\rho }}(0){\hat {U}}(t)^{\dagger }.} Unitary evolution of a density matrix conserves its von Neumann entropy.: 267  == Relativistic quantum physics and quantum field theory == The one-particle Schrödinger equation described above is valid essentially in the nonrelativistic domain. For one reason, it is essentially invariant under Galilean transformations, which form the symmetry group of Newtonian dynamics. Moreover, processes that change particle number are natural in relativity, and so an equation for one particle (or any fixed number thereof) can only be of limited use. A more general form of the Schrödinger equation that also applies in relativistic situations can be formulated within quantum field theory (QFT), a framework that allows the combination of quantum mechanics with special relativity. The region in which both simultaneously apply may be described by relativistic quantum mechanics. Such descriptions may use time evolution generated by a Hamiltonian operator, as in the Schrödinger functional method. === Klein–Gordon and Dirac equations === Attempts to combine quantum physics with special relativity began with building relativistic wave equations from the relativistic energy–momentum relation E 2 = ( p c ) 2 + ( m 0 c 2 ) 2 , {\displaystyle E^{2}=(pc)^{2}+\left(m_{0}c^{2}\right)^{2},} instead of nonrelativistic energy equations. The Klein–Gordon equation and the Dirac equation are two such equations. 
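Returning briefly to the density-matrix formalism above, the von Neumann evolution can be checked numerically for a small system. The sketch below is a hedged illustration (the 2×2 Hamiltonian and the initial state are arbitrary choices): it propagates a pure state with ρ̂(t) = Û(t) ρ̂(0) Û(t)†, confirms that the trace and the purity are preserved, and compares a finite-difference derivative of ρ̂(t) with the commutator form of the equation.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.3], [0.3, -1.0]])       # an arbitrary Hermitian Hamiltonian
psi = np.array([1.0, 1.0j]) / np.sqrt(2)
rho0 = np.outer(psi, psi.conj())              # pure-state density matrix |psi><psi|

t = 0.7
U = expm(-1j * H * t / hbar)
rho_t = U @ rho0 @ U.conj().T                 # rho(t) = U rho(0) U^dagger

print(np.trace(rho_t).real)                                      # 1.0 (trace preserved)
print(np.trace(rho0 @ rho0).real, np.trace(rho_t @ rho_t).real)  # purity preserved (both 1.0)

# Consistency with the von Neumann equation i*hbar d(rho)/dt = [H, rho]
dt = 1e-6
rho_dt = expm(-1j * H * (t + dt) / hbar) @ rho0 @ expm(1j * H * (t + dt) / hbar)
lhs = 1j * hbar * (rho_dt - rho_t) / dt
rhs = H @ rho_t - rho_t @ H
print(np.allclose(lhs, rhs, atol=1e-4))                          # True
```

The Klein–Gordon and Dirac equations named above are taken up next.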
The Klein–Gordon equation, − 1 c 2 ∂ 2 ∂ t 2 ψ + ∇ 2 ψ = m 2 c 2 ℏ 2 ψ , {\displaystyle -{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}\psi +\nabla ^{2}\psi ={\frac {m^{2}c^{2}}{\hbar ^{2}}}\psi ,} was the first such equation to be obtained, even before the nonrelativistic one-particle Schrödinger equation, and applies to massive spinless particles. Historically, Dirac obtained the Dirac equation by seeking a differential equation that would be first-order in both time and space, a desirable property for a relativistic theory. Taking the "square root" of the left-hand side of the Klein–Gordon equation in this way required factorizing it into a product of two operators, which Dirac wrote using 4 × 4 matrices α 1 , α 2 , α 3 , β {\displaystyle \alpha _{1},\alpha _{2},\alpha _{3},\beta } . Consequently, the wave function also became a four-component function, governed by the Dirac equation that, in free space, read ( β m c 2 + c ( ∑ n = ⁡ 1 3 α n p n ) ) ψ = i ℏ ∂ ψ ∂ t . {\displaystyle \left(\beta mc^{2}+c\left(\sum _{n\mathop {=} 1}^{3}\alpha _{n}p_{n}\right)\right)\psi =i\hbar {\frac {\partial \psi }{\partial t}}.} This has again the form of the Schrödinger equation, with the time derivative of the wave function being given by a Hamiltonian operator acting upon the wave function. Including influences upon the particle requires modifying the Hamiltonian operator. For example, the Dirac Hamiltonian for a particle of mass m and electric charge q in an electromagnetic field (described by the electromagnetic potentials φ and A) is: H ^ Dirac = γ 0 [ c γ ⋅ ( p ^ − q A ) + m c 2 + γ 0 q φ ] , {\displaystyle {\hat {H}}_{\text{Dirac}}=\gamma ^{0}\left[c{\boldsymbol {\gamma }}\cdot \left({\hat {\mathbf {p} }}-q\mathbf {A} \right)+mc^{2}+\gamma ^{0}q\varphi \right],} in which the γ = (γ1, γ2, γ3) and γ0 are the Dirac gamma matrices related to the spin of the particle. The Dirac equation is true for all spin-1⁄2 particles, and the solutions to the equation are 4-component spinor fields with two components corresponding to the particle and the other two for the antiparticle. For the Klein–Gordon equation, the general form of the Schrödinger equation is inconvenient to use, and in practice the Hamiltonian is not expressed in an analogous way to the Dirac Hamiltonian. The equations for relativistic quantum fields, of which the Klein–Gordon and Dirac equations are two examples, can be obtained in other ways, such as starting from a Lagrangian density and using the Euler–Lagrange equations for fields, or using the representation theory of the Lorentz group in which certain representations can be used to fix the equation for a free particle of given spin (and mass). In general, the Hamiltonian to be substituted in the general Schrödinger equation is not just a function of the position and momentum operators (and possibly time), but also of spin matrices. Also, the solutions to a relativistic wave equation, for a massive particle of spin s, are complex-valued 2(2s + 1)-component spinor fields. === Fock space === As originally formulated, the Dirac equation is an equation for a single quantum particle, just like the single-particle Schrödinger equation with wave function Ψ ( x , t ) {\displaystyle \Psi (x,t)} . This is of limited use in relativistic quantum mechanics, where particle number is not fixed. Heuristically, this complication can be motivated by noting that mass–energy equivalence implies material particles can be created from energy. 
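Returning to the matrix structure of the free Dirac equation above, the "square root" property can be verified directly with explicit matrices. The sketch below uses the standard Dirac representation built from Pauli matrices, which is one conventional choice rather than something dictated by the text: it checks the anticommutation relations of α1, α2, α3 and β, and confirms that squaring the free Dirac Hamiltonian reproduces the relativistic energy–momentum relation.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def block(a, b, c, d):
    return np.block([[a, b], [c, d]])

# Dirac representation: alpha_i = [[0, sigma_i], [sigma_i, 0]], beta = diag(I2, -I2)
alphas = [block(np.zeros((2, 2)), s, s, np.zeros((2, 2))) for s in (sx, sy, sz)]
beta = block(I2, np.zeros((2, 2)), np.zeros((2, 2)), -I2)

I4 = np.eye(4)
anti = lambda A, B: A @ B + B @ A

# Anticommutation relations that make the Dirac operator a "square root" of E^2 = (pc)^2 + (mc^2)^2
for i in range(3):
    assert np.allclose(anti(alphas[i], beta), 0)
    for j in range(3):
        assert np.allclose(anti(alphas[i], alphas[j]), 2 * (i == j) * I4)
assert np.allclose(beta @ beta, I4)

# Consequently H^2 = ((pc)^2 + (mc^2)^2) * identity for any momentum p
c, m = 1.0, 1.0
p = np.array([0.3, -0.2, 0.5])
H = c * sum(p[i] * alphas[i] for i in range(3)) + m * c**2 * beta
assert np.allclose(H @ H, (c**2 * p @ p + (m * c**2)**2) * I4)
print("Dirac algebra verified")
```

With that algebra in hand, the question of variable particle number raised just above is addressed next.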
A common way to address this in QFT is to introduce a Hilbert space where the basis states are labeled by particle number, a so-called Fock space. The Schrödinger equation can then be formulated for quantum states on this Hilbert space. However, because the Schrödinger equation picks out a preferred time axis, the Lorentz invariance of the theory is no longer manifest, and accordingly, the theory is often formulated in other ways. == History == Following Max Planck's quantization of light (see black-body radiation), Albert Einstein interpreted Planck's quanta to be photons, particles of light, and proposed that the energy of a photon is proportional to its frequency, one of the first signs of wave–particle duality. Since energy and momentum are related in the same way as frequency and wave number in special relativity, it followed that the momentum p {\displaystyle p} of a photon is inversely proportional to its wavelength λ {\displaystyle \lambda } , or proportional to its wave number k {\displaystyle k} : p = h λ = ℏ k , {\displaystyle p={\frac {h}{\lambda }}=\hbar k,} where h {\displaystyle h} is the Planck constant and ℏ = h / 2 π {\displaystyle \hbar ={h}/{2\pi }} is the reduced Planck constant. Louis de Broglie hypothesized that this is true for all particles, even particles which have mass such as electrons. He showed that, assuming that the matter waves propagate along with their particle counterparts, electrons form standing waves, meaning that only certain discrete rotational frequencies about the nucleus of an atom are allowed. These quantized orbits correspond to discrete energy levels, and de Broglie reproduced the Bohr model formula for the energy levels. The Bohr model was based on the assumed quantization of angular momentum L {\displaystyle L} according to L = n h 2 π = n ℏ . {\displaystyle L=n{\frac {h}{2\pi }}=n\hbar .} According to de Broglie, the electron is described by a wave, and a whole number of wavelengths must fit along the circumference of the electron's orbit: n λ = 2 π r . {\displaystyle n\lambda =2\pi r.} This approach essentially confined the electron wave in one dimension, along a circular orbit of radius r {\displaystyle r} . In 1921, prior to de Broglie, Arthur C. Lunn at the University of Chicago had used the same argument based on the completion of the relativistic energy–momentum 4-vector to derive what we now call the de Broglie relation. Unlike de Broglie, Lunn went on to formulate the differential equation now known as the Schrödinger equation and solve for its energy eigenvalues for the hydrogen atom; the paper was rejected by the Physical Review, according to Kamen. Following up on de Broglie's ideas, physicist Peter Debye made an offhand comment that if particles behaved as waves, they should satisfy some sort of wave equation. Inspired by Debye's remark, Schrödinger decided to find a proper 3-dimensional wave equation for the electron. He was guided by William Rowan Hamilton's analogy between mechanics and optics, encoded in the observation that the zero-wavelength limit of optics resembles a mechanical system—the trajectories of light rays become sharp tracks that obey Fermat's principle, an analog of the principle of least action. The equation he found is i ℏ ∂ ∂ t Ψ ( r , t ) = − ℏ 2 2 m ∇ 2 Ψ ( r , t ) + V ( r ) Ψ ( r , t ) . 
{\displaystyle i\hbar {\frac {\partial }{\partial t}}\Psi (\mathbf {r} ,t)=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\Psi (\mathbf {r} ,t)+V(\mathbf {r} )\Psi (\mathbf {r} ,t).} By that time Arnold Sommerfeld had refined the Bohr model with relativistic corrections. Schrödinger used the relativistic energy–momentum relation to find what is now known as the Klein–Gordon equation in a Coulomb potential (in natural units): ( E + e 2 r ) 2 ψ ( x ) = − ∇ 2 ψ ( x ) + m 2 ψ ( x ) . {\displaystyle \left(E+{\frac {e^{2}}{r}}\right)^{2}\psi (x)=-\nabla ^{2}\psi (x)+m^{2}\psi (x).} He found the standing waves of this relativistic equation, but the relativistic corrections disagreed with Sommerfeld's formula. Discouraged, he put away his calculations and secluded himself with a mistress in a mountain cabin in December 1925. While at the cabin, Schrödinger decided that his earlier nonrelativistic calculations were novel enough to publish and decided to leave off the problem of relativistic corrections for the future. Despite the difficulties in solving the differential equation for hydrogen (he had sought help from his friend the mathematician Hermann Weyl: 3 ) Schrödinger showed that his nonrelativistic version of the wave equation produced the correct spectral energies of hydrogen in a paper published in 1926.: 1  Schrödinger computed the hydrogen spectral series by treating a hydrogen atom's electron as a wave Ψ ( x , t ) {\displaystyle \Psi (\mathbf {x} ,t)} , moving in a potential well V {\displaystyle V} , created by the proton. This computation accurately reproduced the energy levels of the Bohr model. The Schrödinger equation details the behavior of Ψ {\displaystyle \Psi } but says nothing of its nature. Schrödinger tried to interpret the real part of Ψ ∂ Ψ ∗ ∂ t {\displaystyle \Psi {\frac {\partial \Psi ^{*}}{\partial t}}} as a charge density, and then revised this proposal, saying in his next paper that the modulus squared of Ψ {\displaystyle \Psi } is a charge density. This approach was, however, unsuccessful. In 1926, just a few days after this paper was published, Max Born successfully interpreted Ψ {\displaystyle \Psi } as the probability amplitude, whose modulus squared is equal to probability density.: 220  Later, Schrödinger himself explained this interpretation as follows: The already ... mentioned psi-function.... is now the means for predicting probability of measurement results. In it is embodied the momentarily attained sum of theoretically based future expectation, somewhat as laid down in a catalog. == Interpretation == The Schrödinger equation provides a way to calculate the wave function of a system and how it changes dynamically in time. However, the Schrödinger equation does not directly say what, exactly, the wave function is. The meaning of the Schrödinger equation and how the mathematical entities in it relate to physical reality depends upon the interpretation of quantum mechanics that one adopts. In the views often grouped together as the Copenhagen interpretation, a system's wave function is a collection of statistical information about that system. The Schrödinger equation relates information about the system at one time to information about it at another. While the time-evolution process represented by the Schrödinger equation is continuous and deterministic, in that knowing the wave function at one instant is in principle sufficient to calculate it for all future times, wave functions can also change discontinuously and stochastically during a measurement. 
The wave function changes, according to this school of thought, because new information is available. The post-measurement wave function generally cannot be known prior to the measurement, but the probabilities for the different possibilities can be calculated using the Born rule. Other, more recent interpretations of quantum mechanics, such as relational quantum mechanics and QBism also give the Schrödinger equation a status of this sort. Schrödinger himself suggested in 1952 that the different terms of a superposition evolving under the Schrödinger equation are "not alternatives but all really happen simultaneously". This has been interpreted as an early version of Everett's many-worlds interpretation. This interpretation, formulated independently in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes. This interpretation removes the axiom of wave function collapse, leaving only continuous evolution under the Schrödinger equation, and so all possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical quantum superposition. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we do not observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate. Why should we assign probabilities at all to outcomes that are certain to occur in some worlds, and why should the probabilities be given by the Born rule? Several ways to answer these questions in the many-worlds framework have been proposed, but there is no consensus on whether they are successful. Bohmian mechanics reformulates quantum mechanics to make it deterministic, at the price of adding a force due to a "quantum potential". It attributes to each physical system not only a wave function but in addition a real position that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation. == See also == == Notes == == References == == External links == "Schrödinger equation". Encyclopedia of Mathematics. EMS Press. 2001 [1994]. Quantum Cook Book (PDF) and PHYS 201: Fundamentals of Physics II by Ramamurti Shankar, Yale OpenCourseware The Modern Revolution in Physics – an online textbook. Quantum Physics I at MIT OpenCourseWare
Wikipedia/Time-independent_Schrödinger_equation
In mathematics, a basis function is an element of a particular basis for a function space. Every function in the function space can be represented as a linear combination of basis functions, just as every vector in a vector space can be represented as a linear combination of basis vectors. In numerical analysis and approximation theory, basis functions are also called blending functions, because of their use in interpolation: In this application, a mixture of the basis functions provides an interpolating function (with the "blend" depending on the evaluation of the basis functions at the data points). == Examples == === Monomial basis for Cω === The monomial basis for the vector space of analytic functions is given by { x n ∣ n ∈ N } . {\displaystyle \{x^{n}\mid n\in \mathbb {N} \}.} This basis is used in Taylor series, amongst others. === Monomial basis for polynomials === The monomial basis also forms a basis for the vector space of polynomials. After all, every polynomial can be written as a 0 + a 1 x 1 + a 2 x 2 + ⋯ + a n x n {\displaystyle a_{0}+a_{1}x^{1}+a_{2}x^{2}+\cdots +a_{n}x^{n}} for some n ∈ N {\displaystyle n\in \mathbb {N} } , which is a linear combination of monomials. === Fourier basis for L2[0,1] === Sines and cosines form an (orthonormal) Schauder basis for square-integrable functions on a bounded domain. As a particular example, the collection { 2 sin ⁡ ( 2 π n x ) ∣ n ∈ N } ∪ { 2 cos ⁡ ( 2 π n x ) ∣ n ∈ N } ∪ { 1 } {\displaystyle \{{\sqrt {2}}\sin(2\pi nx)\mid n\in \mathbb {N} \}\cup \{{\sqrt {2}}\cos(2\pi nx)\mid n\in \mathbb {N} \}\cup \{1\}} forms a basis for L2[0,1]. == See also == == References == Itô, Kiyosi (1993). Encyclopedic Dictionary of Mathematics (2nd ed.). MIT Press. p. 1141. ISBN 0-262-59020-4.
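As a concrete illustration of the bases above, the following minimal sketch (assuming NumPy; the sample function x(1 - x) is an arbitrary choice) expands a function in the orthonormal Fourier basis {1, √2 sin(2πnx), √2 cos(2πnx)} of L2[0,1] and reconstructs it from finitely many coefficients, the "blend" mentioned in the interpolation remark above.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]
f = x * (1.0 - x)                        # a sample function in L2[0,1]

inner = lambda g, h: np.sum(g * h) * dx  # approximate L2[0,1] inner product

# Orthonormal Fourier basis elements: 1, sqrt(2)*sin(2*pi*n*x), sqrt(2)*cos(2*pi*n*x)
basis = [np.ones_like(x)]
for n in range(1, 6):
    basis.append(np.sqrt(2.0) * np.sin(2 * np.pi * n * x))
    basis.append(np.sqrt(2.0) * np.cos(2 * np.pi * n * x))

coeffs = [inner(f, b) for b in basis]                 # projection onto each basis function
approx = sum(c * b for c, b in zip(coeffs, basis))    # linear combination of basis functions

print(np.max(np.abs(f - approx)))        # small, and it shrinks as more terms are kept
```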
Wikipedia/Basis_function
A mathematical model is an abstract description of a concrete system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modeling. Mathematical models are used in applied mathematics and in the natural sciences (such as physics, biology, earth science, chemistry) and engineering disciplines (such as computer science, electrical engineering), as well as in non-physical systems such as the social sciences (such as economics, psychology, sociology, political science). It can also be taught as a subject in its own right. The use of mathematical models to solve problems in business or military operations is a large part of the field of operations research. Mathematical models are also used in music, linguistics, and philosophy (for example, intensively in analytic philosophy). A model may help to explain a system and to study the effects of different components, and to make predictions about behavior. == Elements of a mathematical model == Mathematical models can take many forms, including dynamical systems, statistical models, differential equations, or game theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures. In general, mathematical models may include logical models. In many cases, the quality of a scientific field depends on how well the mathematical models developed on the theoretical side agree with results of repeatable experiments. Lack of agreement between theoretical mathematical models and experimental measurements often leads to important advances as better theories are developed. In the physical sciences, a traditional mathematical model contains most of the following elements: Governing equations Supplementary sub-models Defining equations Constitutive equations Assumptions and constraints Initial and boundary conditions Classical constraints and kinematic equations == Classifications == Mathematical models are of different types: Linear vs. nonlinear. If all the operators in a mathematical model exhibit linearity, the resulting mathematical model is defined as linear. A model is considered to be nonlinear otherwise. The definition of linearity and nonlinearity is dependent on context, and linear models may have nonlinear expressions in them. For example, in a statistical linear model, it is assumed that a relationship is linear in the parameters, but it may be nonlinear in the predictor variables. Similarly, a differential equation is said to be linear if it can be written with linear differential operators, but it can still have nonlinear expressions in it. In a mathematical programming model, if the objective functions and constraints are represented entirely by linear equations, then the model is regarded as a linear model. If one or more of the objective functions or constraints are represented with a nonlinear equation, then the model is known as a nonlinear model.Linear structure implies that a problem can be decomposed into simpler parts that can be treated independently and/or analyzed at a different scale and the results obtained will remain valid for the initial problem when recomposed and rescaled.Nonlinearity, even in fairly simple systems, is often associated with phenomena such as chaos and irreversibility. Although there are exceptions, nonlinear systems and models tend to be more difficult to study than linear ones. 
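The distinction drawn above between linearity in the parameters and linearity in the predictor variables is easy to see in a small fitting sketch (illustrative only; the quadratic example and the noise level are assumptions). The model y = a + b x + c x² is nonlinear in x, yet it can be fitted by ordinary linear least squares because it is linear in a, b and c.

```python
import numpy as np

# A model that is linear in the parameters (a, b, c) but nonlinear in the predictor x.
rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 50)
y = 1.0 + 0.5 * x - 2.0 * x**2 + rng.normal(scale=0.1, size=x.size)

X = np.column_stack([np.ones_like(x), x, x**2])   # design matrix
params, *_ = np.linalg.lstsq(X, y, rcond=None)    # ordinary least squares
print(params)                                     # approximately [1.0, 0.5, -2.0]
```

Linearization of genuinely nonlinear models, discussed next, is a different matter.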
A common approach to nonlinear problems is linearization, but this can be problematic if one is trying to study aspects such as irreversibility, which are strongly tied to nonlinearity. Static vs. dynamic. A dynamic model accounts for time-dependent changes in the state of the system, while a static (or steady-state) model calculates the system in equilibrium, and thus is time-invariant. Dynamic models typically are represented by differential equations or difference equations. Explicit vs. implicit. If all of the input parameters of the overall model are known, and the output parameters can be calculated by a finite series of computations, the model is said to be explicit. But sometimes it is the output parameters which are known, and the corresponding inputs must be solved for by an iterative procedure, such as Newton's method or Broyden's method. In such a case the model is said to be implicit. For example, a jet engine's physical properties such as turbine and nozzle throat areas can be explicitly calculated given a design thermodynamic cycle (air and fuel flow rates, pressures, and temperatures) at a specific flight condition and power setting, but the engine's operating cycles at other flight conditions and power settings cannot be explicitly calculated from the constant physical properties. Discrete vs. continuous. A discrete model treats objects as discrete, such as the particles in a molecular model or the states in a statistical model; while a continuous model represents the objects in a continuous manner, such as the velocity field of fluid in pipe flows, temperatures and stresses in a solid, and electric field that applies continuously over the entire model due to a point charge. Deterministic vs. probabilistic (stochastic). A deterministic model is one in which every set of variable states is uniquely determined by parameters in the model and by sets of previous states of these variables; therefore, a deterministic model always performs the same way for a given set of initial conditions. Conversely, in a stochastic model—usually called a "statistical model"—randomness is present, and variable states are not described by unique values, but rather by probability distributions. Deductive, inductive, or floating. A deductive model is a logical structure based on a theory. An inductive model arises from empirical findings and generalization from them. The floating model rests on neither theory nor observation, but is merely the invocation of expected structure. Application of mathematics in social sciences outside of economics has been criticized for unfounded models. Application of catastrophe theory in science has been characterized as a floating model. Strategic vs. non-strategic. Models used in game theory are different in a sense that they model agents with incompatible incentives, such as competing species or bidders in an auction. Strategic models assume that players are autonomous decision makers who rationally choose actions that maximize their objective function. A key challenge of using strategic models is defining and computing solution concepts such as Nash equilibrium. An interesting property of strategic models is that they separate reasoning about rules of the game from reasoning about behavior of the players. == Construction == In business and engineering, mathematical models may be used to maximize a certain output. The system under consideration will require certain inputs. 
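The explicit/implicit distinction above maps directly onto how a model is used in code. In the minimal sketch below the forward model is a hypothetical engine curve (not a real jet-engine model); evaluating it is explicit, while recovering the input that produces a desired output is implicit and needs an iterative root-finder such as Newton's method.

```python
from scipy.optimize import newton

# Explicit use: the output is a computable function of the input.
def thrust(throttle):
    return 250.0 * throttle**1.5 + 20.0 * throttle   # hypothetical engine curve

# Implicit use: the desired output is known and the input must be solved for iteratively.
target = 150.0
throttle_needed = newton(lambda u: thrust(u) - target, x0=0.5)
print(throttle_needed, thrust(throttle_needed))      # thrust(throttle_needed) ~= 150.0
```

How the variables of such models are organized is described next.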
The system relating inputs to outputs depends on other variables too: decision variables, state variables, exogenous variables, and random variables. Decision variables are sometimes known as independent variables. Exogenous variables are sometimes known as parameters or constants. The variables are not independent of each other as the state variables are dependent on the decision, input, random, and exogenous variables. Furthermore, the output variables are dependent on the state of the system (represented by the state variables). Objectives and constraints of the system and its users can be represented as functions of the output variables or state variables. The objective functions will depend on the perspective of the model's user. Depending on the context, an objective function is also known as an index of performance, as it is some measure of interest to the user. Although there is no limit to the number of objective functions and constraints a model can have, using or optimizing the model becomes more involved (computationally) as the number increases. For example, economists often apply linear algebra when using input–output models. Complicated mathematical models that have many variables may be consolidated by use of vectors where one symbol represents several variables. === A priori information === Mathematical modeling problems are often classified into black box or white box models, according to how much a priori information on the system is available. A black-box model is a system of which there is no a priori information available. A white-box model (also called glass box or clear box) is a system where all necessary information is available. Practically all systems are somewhere between the black-box and white-box models, so this concept is useful only as an intuitive guide for deciding which approach to take. Usually, it is preferable to use as much a priori information as possible to make the model more accurate. Therefore, the white-box models are usually considered easier, because if you have used the information correctly, then the model will behave correctly. Often the a priori information comes in the form of knowing the types of functions relating different variables. For example, if we make a model of how a medicine works in a human system, we know that usually the amount of medicine in the blood is an exponentially decaying function, but we are still left with several unknown parameters: how rapidly does the medicine amount decay, and what is the initial amount of medicine in the blood? This example is therefore not a completely white-box model. These parameters have to be estimated through some means before one can use the model. In black-box models, one tries to estimate both the functional form of the relations between variables and the numerical parameters in those functions. Using a priori information, we could end up, for example, with a set of functions that probably could describe the system adequately. If there is no a priori information, we would try to use functions as general as possible to cover all different models. An often-used approach for black-box models is neural networks, which usually do not make assumptions about incoming data. Alternatively, the NARMAX (Nonlinear AutoRegressive Moving Average model with eXogenous inputs) algorithms, which were developed as part of nonlinear system identification, can be used to select the model terms, determine the model structure, and estimate the unknown parameters in the presence of correlated and nonlinear noise.
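The medicine example above is the typical case where the functional form is known a priori and only its parameters must be estimated. A minimal sketch (synthetic data and a hypothetical decay constant, assuming SciPy's curve_fit):

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed functional form: exponential decay of the drug concentration in the blood.
def concentration(t, C0, k):
    return C0 * np.exp(-k * t)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 12.0, 13)                    # hours after the dose
data = concentration(t, 5.0, 0.35) + rng.normal(scale=0.05, size=t.size)

(C0_hat, k_hat), _ = curve_fit(concentration, t, data, p0=(1.0, 0.1))
print(C0_hat, k_hat)                              # close to the true values 5.0 and 0.35
```

For black-box approaches such as NARMAX or neural networks, by contrast, the functional form itself must be identified from the data.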
The advantage of NARMAX models compared to neural networks is that NARMAX produces models that can be written down and related to the underlying process, whereas neural networks produce an approximation that is opaque. ==== Subjective information ==== Sometimes it is useful to incorporate subjective information into a mathematical model. This can be done based on intuition, experience, or expert opinion, or based on convenience of mathematical form. Bayesian statistics provides a theoretical framework for incorporating such subjectivity into a rigorous analysis: we specify a prior probability distribution (which can be subjective), and then update this distribution based on empirical data. An example of when such approach would be necessary is a situation in which an experimenter bends a coin slightly and tosses it once, recording whether it comes up heads, and is then given the task of predicting the probability that the next flip comes up heads. After bending the coin, the true probability that the coin will come up heads is unknown; so the experimenter would need to make a decision (perhaps by looking at the shape of the coin) about what prior distribution to use. Incorporation of such subjective information might be important to get an accurate estimate of the probability. === Complexity === In general, model complexity involves a trade-off between simplicity and accuracy of the model. Occam's razor is a principle particularly relevant to modeling, its essential idea being that among models with roughly equal predictive power, the simplest one is the most desirable. While added complexity usually improves the realism of a model, it can make the model difficult to understand and analyze, and can also pose computational problems, including numerical instability. Thomas Kuhn argues that as science progresses, explanations tend to become more complex before a paradigm shift offers radical simplification. For example, when modeling the flight of an aircraft, we could embed each mechanical part of the aircraft into our model and would thus acquire an almost white-box model of the system. However, the computational cost of adding such a huge amount of detail would effectively inhibit the usage of such a model. Additionally, the uncertainty would increase due to an overly complex system, because each separate part induces some amount of variance into the model. It is therefore usually appropriate to make some approximations to reduce the model to a sensible size. Engineers often can accept some approximations in order to get a more robust and simple model. For example, Newton's classical mechanics is an approximated model of the real world. Still, Newton's model is quite sufficient for most ordinary-life situations, that is, as long as particle speeds are well below the speed of light, and we study macro-particles only. Note that better accuracy does not necessarily mean a better model. Statistical models are prone to overfitting which means that a model is fitted to data too much and it has lost its ability to generalize to new events that were not observed before. === Training, tuning, and fitting === Any model which is not pure white-box contains some parameters that can be used to fit the model to the system it is intended to describe. If the modeling is done by an artificial neural network or other machine learning, the optimization of parameters is called training, while the optimization of model hyperparameters is called tuning and often uses cross-validation. 
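For the bent-coin example above, the Bayesian updating step can be written out explicitly. In the sketch below the Beta(4, 2) prior is a purely hypothetical stand-in for the experimenter's subjective judgement; a single observed head updates it, and the posterior mean gives the predictive probability for the next toss.

```python
from scipy import stats

a, b = 4.0, 2.0                      # hypothetical prior Beta(4, 2): bending is believed to favour heads
prior_mean = a / (a + b)

heads = 1                            # the single recorded toss comes up heads
a_post, b_post = a + heads, b + (1 - heads)
posterior = stats.beta(a_post, b_post)

# Predictive probability that the next toss is heads = posterior mean
print(prior_mean, posterior.mean())  # 0.667 -> 0.714
```

Parameter estimation by curve fitting, discussed next, plays the analogous role when the model is given by an explicit function.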
In more conventional modeling through explicitly given mathematical functions, parameters are often determined by curve fitting. === Evaluation and assessment === A crucial part of the modeling process is the evaluation of whether or not a given mathematical model describes a system accurately. This question can be difficult to answer as it involves several different types of evaluation. ==== Prediction of empirical data ==== Usually, the easiest part of model evaluation is checking whether a model predicts experimental measurements or other empirical data not used in the model development. In models with parameters, a common approach is to split the data into two disjoint subsets: training data and verification data. The training data are used to estimate the model parameters. An accurate model will closely match the verification data even though these data were not used to set the model's parameters. This practice is referred to as cross-validation in statistics. Defining a metric to measure distances between observed and predicted data is a useful tool for assessing model fit. In statistics, decision theory, and some economic models, a loss function plays a similar role. While it is rather straightforward to test the appropriateness of parameters, it can be more difficult to test the validity of the general mathematical form of a model. In general, more mathematical tools have been developed to test the fit of statistical models than models involving differential equations. Tools from nonparametric statistics can sometimes be used to evaluate how well the data fit a known distribution or to come up with a general model that makes only minimal assumptions about the model's mathematical form. ==== Scope of the model ==== Assessing the scope of a model, that is, determining what situations the model is applicable to, can be less straightforward. If the model was constructed based on a set of data, one must determine for which systems or situations the known data is a "typical" set of data. The question of whether the model describes well the properties of the system between data points is called interpolation, and the same question for events or data points outside the observed data is called extrapolation. As an example of the typical limitations of the scope of a model, in evaluating Newtonian classical mechanics, we can note that Newton made his measurements without advanced equipment, so he could not measure properties of particles traveling at speeds close to the speed of light. Likewise, he did not measure the movements of molecules and other small particles, but macro particles only. It is then not surprising that his model does not extrapolate well into these domains, even though his model is quite sufficient for ordinary life physics. ==== Philosophical considerations ==== Many types of modeling implicitly involve claims about causality. This is usually (but not always) true of models involving differential equations. As the purpose of modeling is to increase our understanding of the world, the validity of a model rests not only on its fit to empirical observations, but also on its ability to extrapolate to situations or data beyond those originally described in the model. One can think of this as the differentiation between qualitative and quantitative predictions. One can also argue that a model is worthless unless it provides some insight which goes beyond what is already known from direct investigation of the phenomenon being studied. 
An example of such criticism is the argument that the mathematical models of optimal foraging theory do not offer insight that goes beyond the common-sense conclusions of evolution and other basic principles of ecology. It should also be noted that while mathematical modeling uses mathematical concepts and language, it is not itself a branch of mathematics and does not necessarily conform to any mathematical logic, but is typically a branch of some science or other technical subject, with corresponding concepts and standards of argumentation. == Significance in the natural sciences == Mathematical models are of great importance in the natural sciences, particularly in physics. Physical theories are almost invariably expressed using mathematical models. Throughout history, more and more accurate mathematical models have been developed. Newton's laws accurately describe many everyday phenomena, but at certain limits the theory of relativity and quantum mechanics must be used. It is common to use idealized models in physics to simplify things. Massless ropes, point particles, ideal gases and the particle in a box are among the many simplified models used in physics. The laws of physics are represented with simple equations such as Newton's laws, Maxwell's equations and the Schrödinger equation. These laws are a basis for making mathematical models of real situations. Many real situations are very complex and are thus modeled approximately on a computer: a model that is computationally feasible to compute is made from the basic laws or from approximate models derived from the basic laws. For example, molecules can be modeled by molecular orbital models that are approximate solutions to the Schrödinger equation. In engineering, physics models are often made by mathematical methods such as finite element analysis. Different mathematical models use different geometries that are not necessarily accurate descriptions of the geometry of the universe. Euclidean geometry is much used in classical physics, while special relativity and general relativity are examples of theories that use geometries which are not Euclidean. == Some applications == Often when engineers analyze a system to be controlled or optimized, they use a mathematical model. In analysis, engineers can build a descriptive model of the system as a hypothesis of how the system could work, or try to estimate how an unforeseeable event could affect the system. Similarly, in control of a system, engineers can try out different control approaches in simulations. A mathematical model usually describes a system by a set of variables and a set of equations that establish relationships between the variables. Variables may be of many types: real or integer numbers, Boolean values or strings, for example. The variables represent some properties of the system, for example, the measured system outputs often in the form of signals, timing data, counters, and event occurrence. The actual model is the set of functions that describe the relations between the different variables. == Examples == One of the popular examples in computer science is the mathematical modeling of various machines; an example is the deterministic finite automaton (DFA), which is defined as an abstract mathematical concept but, due to its deterministic nature, is implementable in hardware and software for solving various specific problems, as the short sketch below illustrates.
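Because a DFA's behavior is fully determined by its transition table, it translates line-for-line into code. The sketch below implements the even-number-of-0s machine that is defined formally just after it; the explicit transitions used here (a 0 toggles between the two states, a 1 leaves the state unchanged) are written out as an assumption consistent with that description.

```python
# Transition function delta of the even-number-of-0s DFA (assumed, per the description below)
delta = {
    ("S1", "0"): "S2", ("S1", "1"): "S1",
    ("S2", "0"): "S1", ("S2", "1"): "S2",
}
start, accepting = "S1", {"S1"}

def accepts(word: str) -> bool:
    state = start
    for symbol in word:
        state = delta[(state, symbol)]
    return state in accepting

print(accepts("1010101"))   # False: three 0s (odd)
print(accepts("0110"))      # True: two 0s (even)
print(accepts(""))          # True: zero 0s (even)
```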
For example, the following is a DFA M with a binary alphabet, which requires that the input contains an even number of 0s: M = ( Q , Σ , δ , q 0 , F ) {\displaystyle M=(Q,\Sigma ,\delta ,q_{0},F)} where Q = { S 1 , S 2 } , {\displaystyle Q=\{S_{1},S_{2}\},} Σ = { 0 , 1 } , {\displaystyle \Sigma =\{0,1\},} q 0 = S 1 , {\displaystyle q_{0}=S_{1},} F = { S 1 } , {\displaystyle F=\{S_{1}\},} and δ {\displaystyle \delta } is defined by the following state-transition table: The state S 1 {\displaystyle S_{1}} represents that there has been an even number of 0s in the input so far, while S 2 {\displaystyle S_{2}} signifies an odd number. A 1 in the input does not change the state of the automaton. When the input ends, the state will show whether the input contained an even number of 0s or not. If the input did contain an even number of 0s, M {\displaystyle M} will finish in state S 1 , {\displaystyle S_{1},} an accepting state, so the input string will be accepted. The language recognized by M {\displaystyle M} is the regular language given by the regular expression 1*( 0 (1*) 0 (1*) )*, where "*" is the Kleene star, e.g., 1* denotes any non-negative number (possibly zero) of symbols "1". Many everyday activities carried out without a thought are uses of mathematical models. A geographical map projection of a region of the earth onto a small, plane surface is a model which can be used for many purposes such as planning travel. Another simple activity is predicting the position of a vehicle from its initial position, direction and speed of travel, using the equation that distance traveled is the product of time and speed. This is known as dead reckoning when used more formally. Mathematical modeling in this way does not necessarily require formal mathematics; animals have been shown to use dead reckoning. Population Growth. A simple (though approximate) model of population growth is the Malthusian growth model. A slightly more realistic and largely used population growth model is the logistic function, and its extensions. Model of a particle in a potential-field. In this model we consider a particle as being a point of mass which describes a trajectory in space which is modeled by a function giving its coordinates in space as a function of time. The potential field is given by a function V : R 3 → R {\displaystyle V\!:\mathbb {R} ^{3}\!\to \mathbb {R} } and the trajectory, that is a function r : R → R 3 , {\displaystyle \mathbf {r} \!:\mathbb {R} \to \mathbb {R} ^{3},} is the solution of the differential equation: − d 2 r ( t ) d t 2 m = ∂ V [ r ( t ) ] ∂ x x ^ + ∂ V [ r ( t ) ] ∂ y y ^ + ∂ V [ r ( t ) ] ∂ z z ^ , {\displaystyle -{\frac {\mathrm {d} ^{2}\mathbf {r} (t)}{\mathrm {d} t^{2}}}m={\frac {\partial V[\mathbf {r} (t)]}{\partial x}}\mathbf {\hat {x}} +{\frac {\partial V[\mathbf {r} (t)]}{\partial y}}\mathbf {\hat {y}} +{\frac {\partial V[\mathbf {r} (t)]}{\partial z}}\mathbf {\hat {z}} ,} that can be written also as m d 2 r ( t ) d t 2 = − ∇ V [ r ( t ) ] . {\displaystyle m{\frac {\mathrm {d} ^{2}\mathbf {r} (t)}{\mathrm {d} t^{2}}}=-\nabla V[\mathbf {r} (t)].} Note this model assumes the particle is a point mass, which is certainly known to be false in many cases in which we use this model; for example, as a model of planetary motion. Model of rational behavior for a consumer. In this model we assume a consumer faces a choice of n {\displaystyle n} commodities labeled 1 , 2 , … , n {\displaystyle 1,2,\dots ,n} each with a market price p 1 , p 2 , … , p n . 
{\displaystyle p_{1},p_{2},\dots ,p_{n}.} The consumer is assumed to have an ordinal utility function U {\displaystyle U} (ordinal in the sense that only the sign of the differences between two utilities, and not the level of each utility, is meaningful), depending on the amounts of commodities x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\dots ,x_{n}} consumed. The model further assumes that the consumer has a budget M {\displaystyle M} which is used to purchase a vector x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\dots ,x_{n}} in such a way as to maximize U ( x 1 , x 2 , … , x n ) . {\displaystyle U(x_{1},x_{2},\dots ,x_{n}).} The problem of rational behavior in this model then becomes a mathematical optimization problem, that is: max U ( x 1 , x 2 , … , x n ) {\displaystyle \max \,U(x_{1},x_{2},\ldots ,x_{n})} subject to: ∑ i = 1 n p i x i ≤ M , {\displaystyle \sum _{i=1}^{n}p_{i}x_{i}\leq M,} x i ≥ 0 for all i = 1 , 2 , … , n . {\displaystyle x_{i}\geq 0\;\;\;{\text{ for all }}i=1,2,\dots ,n.} This model has been used in a wide variety of economic contexts, such as in general equilibrium theory to show existence and Pareto efficiency of economic equilibria. Neighbour-sensing model is a model that explains the mushroom formation from the initially chaotic fungal network. In computer science, mathematical models may be used to simulate computer networks. In mechanics, mathematical models may be used to analyze the movement of a rocket model. == See also == == References == == Further reading == === Books === Aris, Rutherford [ 1978 ] ( 1994 ). Mathematical Modelling Techniques, New York: Dover. ISBN 0-486-68131-9 Bender, E.A. [ 1978 ] ( 2000 ). An Introduction to Mathematical Modeling, New York: Dover. ISBN 0-486-41180-X Gary Chartrand (1977) Graphs as Mathematical Models, Prindle, Webber & Schmidt ISBN 0871502364 Dubois, G. (2018) "Modeling and Simulation", Taylor & Francis, CRC Press. Gershenfeld, N. (1998) The Nature of Mathematical Modeling, Cambridge University Press ISBN 0-521-57095-6 . Lin, C.C. & Segel, L.A. ( 1988 ). Mathematics Applied to Deterministic Problems in the Natural Sciences, Philadelphia: SIAM. ISBN 0-89871-229-7 Models as Mediators: Perspectives on Natural and Social Science edited by Mary S. Morgan and Margaret Morrison, 1999. Mary S. Morgan The World in the Model: How Economists Work and Think, 2012. === Specific applications === Papadimitriou, Fivos. (2010). Mathematical Modelling of Spatial-Ecological Complex Systems: an Evaluation. Geography, Environment, Sustainability 1(3), 67–80. doi:10.24057/2071-9388-2010-3-1-67-80 Peierls, R. (1980). "Model-making in physics". Contemporary Physics. 21: 3–17. Bibcode:1980ConPh..21....3P. doi:10.1080/00107518008210938. An Introduction to Infectious Disease Modelling by Emilia Vynnycky and Richard G White. == External links == General reference Patrone, F. Introduction to modeling via differential equations, with critical remarks. Plus teacher and student package: Mathematical Modelling. Brings together all articles on mathematical modeling from Plus Magazine, the online mathematics magazine produced by the Millennium Mathematics Project at the University of Cambridge. Philosophical Frigg, R. and S. Hartmann, Models in Science, in: The Stanford Encyclopedia of Philosophy, (Spring 2006 Edition) Griffiths, E. C. (2010) What is a model?
Wikipedia/Mathematical_models
In molecular physics and chemistry, the van der Waals force (sometimes van der Waals' force) is a distance-dependent interaction between atoms or molecules. Unlike ionic or covalent bonds, these attractions do not result from a chemical electronic bond; they are comparatively weak and therefore more susceptible to disturbance. The van der Waals force quickly vanishes at longer distances between interacting molecules. Named after Dutch physicist Johannes Diderik van der Waals, the van der Waals force plays a fundamental role in fields as diverse as supramolecular chemistry, structural biology, polymer science, nanotechnology, surface science, and condensed matter physics. It also underlies many properties of organic compounds and molecular solids, including their solubility in polar and non-polar media. If no other force is present, the distance between atoms at which the force becomes repulsive rather than attractive as the atoms approach one another is called the van der Waals contact distance; this phenomenon results from the mutual repulsion between the atoms' electron clouds. The van der Waals forces are usually described as a combination of the London dispersion forces between "instantaneously induced dipoles", Debye forces between permanent dipoles and induced dipoles, and the Keesom force between permanent molecular dipoles whose rotational orientations are dynamically averaged over time. == Definition == Van der Waals forces include attraction and repulsions between atoms, molecules, as well as other intermolecular forces. They differ from covalent and ionic bonding in that they are caused by correlations in the fluctuating polarizations of nearby particles (a consequence of quantum dynamics). The force results from a transient shift in electron density. Specifically, the electron density may temporarily shift to be greater on one side of the nucleus. This shift generates a transient charge which a nearby atom can be attracted to or repelled by. The force is repulsive at very short distances, reaches zero at an equilibrium distance characteristic for each atom, or molecule, and becomes attractive for distances larger than the equilibrium distance. For individual atoms, the equilibrium distance is between 0.3 nm and 0.5 nm, depending on the atomic-specific diameter. When the interatomic distance is greater than 1.0 nm the force is not strong enough to be easily observed as it decreases as a function of distance r approximately with the 7th power (~r−7). Van der Waals forces are often among the weakest chemical forces. For example, the pairwise attractive van der Waals interaction energy between H (hydrogen) atoms in different H2 molecules equals 0.06 kJ/mol (0.6 meV) and the pairwise attractive interaction energy between O (oxygen) atoms in different O2 molecules equals 0.44 kJ/mol (4.6 meV). The corresponding vaporization energies of H2 and O2 molecular liquids, which result as a sum of all van der Waals interactions per molecule in the molecular liquids, amount to 0.90 kJ/mol (9.3 meV) and 6.82 kJ/mol (70.7 meV), respectively, and thus approximately 15 times the value of the individual pairwise interatomic interactions (excluding covalent bonds). The strength of van der Waals bonds increases with higher polarizability of the participating atoms. 
For example, the pairwise van der Waals interaction energy for more polarizable atoms such as S (sulfur) atoms in H2S and sulfides exceeds 1 kJ/mol (10 meV), and the pairwise interaction energy between even larger, more polarizable Xe (xenon) atoms is 2.35 kJ/mol (24.3 meV). These van der Waals interactions are up to 40 times stronger than in H2, which has only one valence electron, and they are still not strong enough to achieve an aggregate state other than gas for Xe under standard conditions. The interactions between atoms in metals can also be effectively described as van der Waals interactions and account for the observed solid aggregate state with bonding strengths comparable to covalent and ionic interactions. The strength of pairwise van der Waals type interactions is on the order of 12 kJ/mol (120 meV) for low-melting Pb (lead) and on the order of 32 kJ/mol (330 meV) for high-melting Pt (platinum), which is about one order of magnitude stronger than in Xe due to the presence of a highly polarizable free electron gas. Accordingly, van der Waals forces can range from weak to strong interactions, and support integral structural loads when multitudes of such interactions are present. === Force contributions === More broadly, intermolecular forces have several possible contributions. They are ordered from strongest to weakest: A repulsive component resulting from the Pauli exclusion principle that prevents close contact of atoms, or the collapse of molecules. Attractive or repulsive electrostatic interactions between permanent charges (in the case of molecular ions), dipoles (in the case of molecules without inversion centre), quadrupoles (all molecules with symmetry lower than cubic), and in general between permanent multipoles. These interactions also include hydrogen bonds, cation-pi, and pi-stacking interactions. Orientation-averaged contributions from electrostatic interactions are sometimes called the Keesom interaction or Keesom force after Willem Hendrik Keesom. Induction (also known as polarization), which is the attractive interaction between a permanent multipole on one molecule with an induced multipole on another. This interaction is sometimes called Debye force after Peter J. W. Debye. The interactions (2) and (3) are labelled polar Interactions. Dispersion (usually named London dispersion interactions after Fritz London), which is the attractive interaction between any pair of molecules, including non-polar atoms, arising from the interactions of instantaneous multipoles. When to apply the term "van der Waals" force depends on the text. The broadest definitions include all intermolecular forces which are electrostatic in origin, namely (2), (3) and (4). Some authors, whether or not they consider other forces to be of van der Waals type, focus on (3) and (4) as these are the components which act over the longest range. All intermolecular/van der Waals forces are anisotropic (except those between two noble gas atoms), which means that they depend on the relative orientation of the molecules. The induction and dispersion interactions are always attractive, irrespective of orientation, but the electrostatic interaction changes sign upon rotation of the molecules. That is, the electrostatic force can be attractive or repulsive, depending on the mutual orientation of the molecules. 
When molecules are in thermal motion, as they are in the gas and liquid phase, the electrostatic force is averaged out to a large extent because the molecules thermally rotate and thus probe both repulsive and attractive parts of the electrostatic force. Random thermal motion can disrupt or overcome the electrostatic component of the van der Waals force but the averaging effect is much less pronounced for the attractive induction and dispersion forces. The Lennard-Jones potential is often used as an approximate model for the isotropic part of a total (repulsion plus attraction) van der Waals force as a function of distance. Van der Waals forces are responsible for certain cases of pressure broadening (van der Waals broadening) of spectral lines and the formation of van der Waals molecules. The London–van der Waals forces are related to the Casimir effect for dielectric media, the former being the microscopic description of the latter bulk property. The first detailed calculations of this were done in 1955 by E. M. Lifshitz. A more general theory of van der Waals forces has also been developed. The main characteristics of van der Waals forces are: They are weaker than normal covalent and ionic bonds. The van der Waals forces are additive in nature, consisting of several individual interactions, and cannot be saturated. They have no directional characteristic. They are all short-range forces and hence only interactions between the nearest particles need to be considered (instead of all the particles). Van der Waals attraction is greater if the molecules are closer. Van der Waals forces are independent of temperature except for dipole-dipole interactions. In low molecular weight alcohols, the hydrogen-bonding properties of their polar hydroxyl group dominate other weaker van der Waals interactions. In higher molecular weight alcohols, the properties of the nonpolar hydrocarbon chain(s) dominate and determine their solubility. Van der Waals forces are also responsible for the weak hydrogen bond interactions between unpolarized dipoles particularly in acid-base aqueous solution and between biological molecules. == London dispersion force == London dispersion forces, named after the German-American physicist Fritz London, are weak intermolecular forces that arise from the interactive forces between instantaneous multipoles in molecules without permanent multipole moments. In and between organic molecules the multitude of contacts can lead to larger contribution of dispersive attraction, particularly in the presence of heteroatoms. London dispersion forces are also known as 'dispersion forces', 'London forces', or 'instantaneous dipole–induced dipole forces'. The strength of London dispersion forces is proportional to the polarizability of the molecule, which in turn depends on the total number of electrons and the area over which they are spread. Hydrocarbons display small dispersive contributions, the presence of heteroatoms lead to increased LD forces as function of their polarizability, e.g. in the sequence RI>RBr>RCl>RF. In absence of solvents weakly polarizable hydrocarbons form crystals due to dispersive forces; their sublimation heat is a measure of the dispersive interaction. == Van der Waals forces between macroscopic objects == For macroscopic bodies with known volumes and numbers of atoms or molecules per unit volume, the total van der Waals force is often computed based on the "microscopic theory" as the sum over all interacting pairs. 
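The Lennard-Jones model mentioned above can be sketched in a few lines (reduced units; the parameter values are arbitrary). The 12-6 potential reproduces the qualitative behavior described earlier: a repulsive force at short range, an equilibrium separation at which the force vanishes, and attraction beyond it.

```python
import numpy as np

eps, sigma = 1.0, 1.0            # well depth and zero-crossing distance (reduced units)

def V(r):
    """Lennard-Jones 12-6 potential."""
    return 4.0 * eps * ((sigma / r)**12 - (sigma / r)**6)

def F(r, h=1e-6):
    """Force along r, F = -dV/dr, by a central finite difference."""
    return -(V(r + h) - V(r - h)) / (2.0 * h)

r_eq = 2.0**(1.0 / 6.0) * sigma  # equilibrium separation, where the force vanishes
print(V(r_eq))                   # -eps, the depth of the attractive well
print(F(0.9 * r_eq), F(r_eq), F(1.5 * r_eq))   # repulsive (+), ~0, attractive (-)
```

The integration of this pairwise picture over macroscopic bodies is taken up next.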
It is necessary to integrate over the total volume of the object, which makes the calculation dependent on the objects' shapes. For example, the van der Waals interaction energy between spherical bodies of radii R1 and R2 and with smooth surfaces was approximated in 1937 by Hamaker (using London's famous 1937 equation for the dispersion interaction energy between atoms/molecules as the starting point) by: where A is the Hamaker coefficient, which is a constant (~10−19 − 10−20 J) that depends on the material properties (it can be positive or negative in sign depending on the intervening medium), and z is the center-to-center distance; i.e., the sum of R1, R2, and r (the distance between the surfaces): z = R 1 + R 2 + r {\displaystyle \ z=R_{1}+R_{2}+r} . The van der Waals force between two spheres of constant radii (R1 and R2 are treated as parameters) is then a function of separation since the force on an object is the negative of the derivative of the potential energy function, F V d W ( z ) = − d d z U ( z ) {\displaystyle \ F_{\rm {VdW}}(z)=-{\frac {d}{dz}}U(z)} . This yields: In the limit of close-approach, the spheres are sufficiently large compared to the distance between them; i.e., r ≪ R 1 {\displaystyle \ r\ll R_{1}} or R 2 {\displaystyle R_{2}} , so that equation (1) for the potential energy function simplifies to: with the force: The van der Waals forces between objects with other geometries using the Hamaker model have been published in the literature. From the expression above, it is seen that the van der Waals force decreases with decreasing size of bodies (R). Nevertheless, the strength of inertial forces, such as gravity and drag/lift, decrease to a greater extent. Consequently, the van der Waals forces become dominant for collections of very small particles such as very fine-grained dry powders (where there are no capillary forces present) even though the force of attraction is smaller in magnitude than it is for larger particles of the same substance. Such powders are said to be cohesive, meaning they are not as easily fluidized or pneumatically conveyed as their more coarse-grained counterparts. Generally, free-flow occurs with particles greater than about 250 μm. The van der Waals force of adhesion is also dependent on the surface topography. If there are surface asperities, or protuberances, that result in a greater total area of contact between two particles or between a particle and a wall, this increases the van der Waals force of attraction as well as the tendency for mechanical interlocking. The microscopic theory assumes pairwise additivity. It neglects many-body interactions and retardation. A more rigorous approach accounting for these effects, called the "macroscopic theory", was developed by Lifshitz in 1956. Langbein derived a much more cumbersome "exact" expression in 1970 for spherical bodies within the framework of the Lifshitz theory while a simpler macroscopic model approximation had been made by Derjaguin as early as 1934. Expressions for the van der Waals forces for many different geometries using the Lifshitz theory have likewise been published. == Use by geckos and arthropods == The ability of geckos – which can hang on a glass surface using only one toe – to climb on sheer surfaces has been for many years mainly attributed to the van der Waals forces between these surfaces and the spatulae, or microscopic projections, which cover the hair-like setae found on their footpads. 
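Before continuing with the gecko example, the sphere-sphere result described above can be put into numbers. The sketch below uses the close-approach (r ≪ R1, R2) Hamaker forms U(r) ≈ −A R1 R2 / (6 r (R1 + R2)) and F(r) = −dU/dr; these closed-form expressions and the parameter values are supplied here as assumptions rather than quoted from the surrounding text.

```python
# Close-approach van der Waals interaction between two spheres (assumed Hamaker limiting forms)
A = 1.0e-19          # Hamaker coefficient, J (typical order of magnitude)
R1 = R2 = 1.0e-6     # sphere radii, m (micrometre-sized particles)
r = 1.0e-9           # surface-to-surface separation, m

U = -A * R1 * R2 / (6.0 * r * (R1 + R2))          # ~ -8e-18 J
F = -A * R1 * R2 / (6.0 * r**2 * (R1 + R2))       # ~ -8e-9 N, negative meaning attractive
print(f"U = {U:.3e} J, F = {F:.3e} N")

# For comparison, the weight of such a particle (density ~2000 kg/m^3) is ~8e-14 N,
# several orders of magnitude smaller than |F|, which is why fine powders cohere.
```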
There were efforts in 2008 to create a dry glue that exploits the effect, and success was achieved in 2011 to create an adhesive tape on similar grounds (i.e. based on van der Waals forces). In 2011, a paper was published relating the effect to both velcro-like hairs and the presence of lipids in gecko footprints. A later study suggested that capillary adhesion might play a role, but that hypothesis has been rejected by more recent studies. A 2014 study has shown that gecko adhesion to smooth Teflon and polydimethylsiloxane surfaces is mainly determined by electrostatic interaction (caused by contact electrification), not van der Waals or capillary forces. Among the arthropods, some spiders have similar setae on their scopulae or scopula pads, enabling them to climb or hang upside-down from extremely smooth surfaces such as glass or porcelain. == See also == == References == == Further reading == Brevik, Iver; Marachevsky, V. N.; Milton, Kimball A. (1999). "Identity of the van der Waals Force and the Casimir Effect and the Irrelevance of These Phenomena to Sonoluminescence". Physical Review Letters. 82 (20): 3948–3951. arXiv:hep-th/9810062. Bibcode:1999PhRvL..82.3948B. doi:10.1103/PhysRevLett.82.3948. S2CID 14762105. Dzyaloshinskii, I. D.; Lifshitz, E. M.; Pitaevskii, Lev P. (1961). "Общая теория ван-дер-ваальсовых сил" [General theory of van der Waals forces] (PDF). Uspekhi Fizicheskikh Nauk (in Russian). 73 (381). English translation: Dzyaloshinskii, I. D.; Lifshitz, E. M.; Pitaevskii, L. P. (1961). "General theory of van der Waalsforces". Soviet Physics Uspekhi. 4 (2): 153. Bibcode:1961SvPhU...4..153D. doi:10.1070/PU1961v004n02ABEH003330. Landau, L. D.; Lifshitz, E. M. (1960). Electrodynamics of Continuous Media. Oxford: Pergamon. pp. 368–376. Langbein, Dieter (1974). Theory of Van der Waals Attraction. Springer Tracts in Modern Physics. Vol. 72. New York, Heidelberg: Springer-Verlag. Lefers, Mark. "Van der Waals dispersion force". Life Science Glossary. Holmgren Lab. Archived from the original on 24 July 2019. Retrieved 2 October 2017. Lifshitz, E. M. (1955). "Russian title is missing" [The Theory of Molecular Attractive Forces between Solids]. Zhurnal Éksperimental'noĭ i Teoreticheskoĭ Fiziki (in Russian). 29 (1): 94. English translation: Lifshitz, E. M. (January 1956). "The Theory of Molecular Attractive Forces between Solids" (PDF). Soviet Physics. 2 (1): 73. Archived from the original (PDF) on 13 July 2019. Retrieved 8 August 2020. "London force animation". Intermolecular Forces. Western Oregon University. Lyklema, J. Fundamentals of Interface and Colloid Science. p. 4.43. Israelachvili, Jacob N. (1992). Intermolecular and Surface Forces. Academic Press. ISBN 9780123751812. == External links == Senese, Fred (1999). "What are van der Waals forces?". Frostburg State University. Retrieved 1 March 2010. An introductory description of the van der Waals force (as a sum of attractive components only) "Robert Full: Learning from the gecko's tail". TED. 1 February 2009. Retrieved 5 October 2016. TED Talk on biomimicry, including applications of van der Waals force. Wolff, J. O.; Gorb, S. N. (18 May 2011). "The influence of humidity on the attachment ability of the spider Philodromus dispar (Araneae, Philodromidae)". Proceedings of the Royal Society B: Biological Sciences. 279 (1726): 139–143. doi:10.1098/rspb.2011.0505. PMC 3223641. PMID 21593034.
Wikipedia/Van_der_Waals_forces
In quantum mechanics, an energy level is degenerate if it corresponds to two or more different measurable states of a quantum system. Conversely, two or more different states of a quantum mechanical system are said to be degenerate if they give the same value of energy upon measurement. The number of different states corresponding to a particular energy level is known as the degree of degeneracy (or simply the degeneracy) of the level. It is represented mathematically by the Hamiltonian for the system having more than one linearly independent eigenstate with the same energy eigenvalue.: 48  When this is the case, energy alone is not enough to characterize what state the system is in, and other quantum numbers are needed to characterize the exact state when distinction is desired. In classical mechanics, this can be understood in terms of different possible trajectories corresponding to the same energy. Degeneracy plays a fundamental role in quantum statistical mechanics. For an N-particle system in three dimensions, a single energy level may correspond to several different wave functions or energy states. These degenerate states at the same level all have an equal probability of being filled. The number of such states gives the degeneracy of a particular energy level. == Mathematics == The possible states of a quantum mechanical system may be treated mathematically as abstract vectors in a separable, complex Hilbert space, while the observables may be represented by linear Hermitian operators acting upon them. By selecting a suitable basis, the components of these vectors and the matrix elements of the operators in that basis may be determined. If A is a N × N matrix, X a non-zero vector, and λ is a scalar, such that A X = λ X {\displaystyle AX=\lambda X} , then the scalar λ is said to be an eigenvalue of A and the vector X is said to be the eigenvector corresponding to λ. Together with the zero vector, the set of all eigenvectors corresponding to a given eigenvalue λ form a subspace of Cn, which is called the eigenspace of λ. An eigenvalue λ which corresponds to two or more different linearly independent eigenvectors is said to be degenerate, i.e., A X 1 = λ X 1 {\displaystyle AX_{1}=\lambda X_{1}} and A X 2 = λ X 2 {\displaystyle AX_{2}=\lambda X_{2}} , where X 1 {\displaystyle X_{1}} and X 2 {\displaystyle X_{2}} are linearly independent eigenvectors. The dimension of the eigenspace corresponding to that eigenvalue is known as its degree of degeneracy, which can be finite or infinite. An eigenvalue is said to be non-degenerate if its eigenspace is one-dimensional. The eigenvalues of the matrices representing physical observables in quantum mechanics give the measurable values of these observables while the eigenstates corresponding to these eigenvalues give the possible states in which the system may be found, upon measurement. The measurable values of the energy of a quantum system are given by the eigenvalues of the Hamiltonian operator, while its eigenstates give the possible energy states of the system. A value of energy is said to be degenerate if there exist at least two linearly independent energy states associated with it. Moreover, any linear combination of two or more degenerate eigenstates is also an eigenstate of the Hamiltonian operator corresponding to the same energy eigenvalue. 
This clearly follows from the fact that the eigenspace of the energy eigenvalue λ is a subspace (being the kernel of the Hamiltonian minus λ times the identity), hence is closed under linear combinations. == Effect of degeneracy on the measurement of energy == In the absence of degeneracy, if a measured value of energy of a quantum system is determined, the corresponding state of the system is assumed to be known, since only one eigenstate corresponds to each energy eigenvalue. However, if the Hamiltonian H ^ {\displaystyle {\hat {H}}} has a degenerate eigenvalue E n {\displaystyle E_{n}} of degree gn, the eigenstates associated with it form a vector subspace of dimension gn. In such a case, several final states can possibly be associated with the same result E n {\displaystyle E_{n}} , all of which are linear combinations of the gn orthonormal eigenvectors | E n , i ⟩ {\displaystyle |E_{n,i}\rangle } . In this case, the probability that the energy value measured for a system in the state | ψ ⟩ {\displaystyle |\psi \rangle } will yield the value E n {\displaystyle E_{n}} is given by the sum of the probabilities of finding the system in each of the states in this basis, i.e., P ( E n ) = ∑ i = 1 g n | ⟨ E n , i | ψ ⟩ | 2 {\displaystyle P(E_{n})=\sum _{i=1}^{g_{n}}|\langle E_{n,i}|\psi \rangle |^{2}} == Degeneracy in different dimensions == This section intends to illustrate the existence of degenerate energy levels in quantum systems studied in different dimensions. The study of one and two-dimensional systems aids the conceptual understanding of more complex systems. === Degeneracy in one dimension === In several cases, analytic results can be obtained more easily in the study of one-dimensional systems. For a quantum particle with a wave function | ψ ⟩ {\displaystyle |\psi \rangle } moving in a one-dimensional potential V ( x ) {\displaystyle V(x)} , the time-independent Schrödinger equation can be written as − ℏ 2 2 m d 2 ψ d x 2 + V ψ = E ψ {\displaystyle -{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}\psi }{dx^{2}}}+V\psi =E\psi } Since this is an ordinary differential equation, there are at most two independent eigenfunctions for a given energy E {\displaystyle E} , so that the degree of degeneracy never exceeds two. It can be proven that in one dimension, there are no degenerate bound states for normalizable wave functions. A sufficient condition on a piecewise continuous potential V {\displaystyle V} and the energy E {\displaystyle E} is the existence of two real numbers M , x 0 {\displaystyle M,x_{0}} with M ≠ 0 {\displaystyle M\neq 0} such that ∀ x > x 0 {\displaystyle \forall x>x_{0}} we have V ( x ) − E ≥ M 2 {\displaystyle V(x)-E\geq M^{2}} . In particular, V {\displaystyle V} is bounded below in this criterion. === Degeneracy in two-dimensional quantum systems === Two-dimensional quantum systems exist in all three states of matter and much of the variety seen in three dimensional matter can be created in two dimensions. Real two-dimensional materials are made of monoatomic layers on the surface of solids. Some examples of two-dimensional electron systems achieved experimentally include MOSFET, two-dimensional superlattices of helium, neon, argon, xenon, etc. and the surface of liquid helium. The presence of degenerate energy levels is studied in the cases of a particle in a box and a two-dimensional harmonic oscillator, which act as useful mathematical models for several real world systems.
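The notions above – a degenerate eigenvalue, the dimension of its eigenspace, and the measurement probability P(E_n) summed over an orthonormal basis of that eigenspace – can be illustrated numerically. The small Hermitian matrix and the random state used below are arbitrary assumptions made for the sketch.

```python
import numpy as np

# Arbitrary Hermitian "Hamiltonian" with a deliberately two-fold degenerate eigenvalue,
# hidden by an orthogonal change of basis (illustrative, not taken from the text).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))      # random orthogonal matrix
H = Q @ np.diag([1.0, 1.0, 2.0, 3.5]) @ Q.T

evals, evecs = np.linalg.eigh(H)                  # orthonormal eigenvectors as columns

# Group numerically equal (sorted) eigenvalues to obtain each level and its degeneracy.
levels = []
for E, v in zip(evals, evecs.T):
    if levels and abs(levels[-1]["E"] - E) < 1e-9:
        levels[-1]["vectors"].append(v)
    else:
        levels.append({"E": E, "vectors": [v]})

psi = rng.normal(size=4)
psi /= np.linalg.norm(psi)                        # normalised state |psi>

total = 0.0
for level in levels:
    g = len(level["vectors"])                     # degree of degeneracy g_n
    # P(E_n) = sum_i |<E_n,i|psi>|^2 over an orthonormal basis of the eigenspace
    P = sum(abs(v @ psi) ** 2 for v in level["vectors"])
    total += P
    print(f"E = {level['E']:+.4f}  degeneracy = {g}  P(E) = {P:.4f}")
print("sum of probabilities:", round(total, 12))  # equals 1 for a normalised state
```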
=== Particle in a rectangular plane === Consider a free particle in a plane of dimensions L x {\displaystyle L_{x}} and L y {\displaystyle L_{y}} bounded by impenetrable walls. The time-independent Schrödinger equation for this system with wave function | ψ ⟩ {\displaystyle |\psi \rangle } can be written as − ℏ 2 2 m ( ∂ 2 ψ ∂ x 2 + ∂ 2 ψ ∂ y 2 ) = E ψ {\displaystyle -{\frac {\hbar ^{2}}{2m}}\left({\frac {\partial ^{2}\psi }{{\partial x}^{2}}}+{\frac {\partial ^{2}\psi }{{\partial y}^{2}}}\right)=E\psi } The permitted energy values are E n x , n y = π 2 ℏ 2 2 m ( n x 2 L x 2 + n y 2 L y 2 ) {\displaystyle E_{n_{x},n_{y}}={\frac {\pi ^{2}\hbar ^{2}}{2m}}\left({\frac {n_{x}^{2}}{L_{x}^{2}}}+{\frac {n_{y}^{2}}{L_{y}^{2}}}\right)} The normalized wave function is ψ n x , n y ( x , y ) = 2 L x L y sin ⁡ ( n x π x L x ) sin ⁡ ( n y π y L y ) {\displaystyle \psi _{n_{x},n_{y}}(x,y)={\frac {2}{\sqrt {L_{x}L_{y}}}}\sin \left({\frac {n_{x}\pi x}{L_{x}}}\right)\sin \left({\frac {n_{y}\pi y}{L_{y}}}\right)} where n x , n y = 1 , 2 , 3 , … {\displaystyle n_{x},n_{y}=1,2,3,\dots } So, the quantum numbers n x {\displaystyle n_{x}} and n y {\displaystyle n_{y}} are required to describe the energy eigenvalues and the lowest energy of the system is given by E 1 , 1 = π 2 ℏ 2 2 m ( 1 L x 2 + 1 L y 2 ) {\displaystyle E_{1,1}=\pi ^{2}{\frac {\hbar ^{2}}{2m}}\left({\frac {1}{L_{x}^{2}}}+{\frac {1}{L_{y}^{2}}}\right)} For some commensurate ratios of the two lengths L x {\displaystyle L_{x}} and L y {\displaystyle L_{y}} , certain pairs of states are degenerate. If L x / L y = p / q {\displaystyle L_{x}/L_{y}=p/q} , where p and q are integers, the states ( n x , n y ) {\displaystyle (n_{x},n_{y})} and ( p n y / q , q n x / p ) {\displaystyle (pn_{y}/q,qn_{x}/p)} have the same energy and so are degenerate to each other. === Particle in a square box === In this case, the dimensions of the box L x = L y = L {\displaystyle L_{x}=L_{y}=L} and the energy eigenvalues are given by E n x , n y = π 2 ℏ 2 2 m L 2 ( n x 2 + n y 2 ) {\displaystyle E_{n_{x},n_{y}}={\frac {\pi ^{2}\hbar ^{2}}{2mL^{2}}}(n_{x}^{2}+n_{y}^{2})} Since n x {\displaystyle n_{x}} and n y {\displaystyle n_{y}} can be interchanged without changing the energy, each energy level has a degeneracy of at least two when n x {\displaystyle n_{x}} and n y {\displaystyle n_{y}} are different. Degenerate states are also obtained when the sums of the squares of the quantum numbers corresponding to different states are the same. For example, the three states (nx = 7, ny = 1), (nx = 1, ny = 7) and (nx = ny = 5) all have E = 50 π 2 ℏ 2 2 m L 2 {\displaystyle E=50{\frac {\pi ^{2}\hbar ^{2}}{2mL^{2}}}} and constitute a degenerate set. Degrees of degeneracy of different energy levels for a particle in a square box: === Particle in a cubic box === In this case, the dimensions of the box L x = L y = L z = L {\displaystyle L_{x}=L_{y}=L_{z}=L} and the energy eigenvalues depend on three quantum numbers. E n x , n y , n z = π 2 ℏ 2 2 m L 2 ( n x 2 + n y 2 + n z 2 ) {\displaystyle E_{n_{x},n_{y},n_{z}}={\frac {\pi ^{2}\hbar ^{2}}{2mL^{2}}}(n_{x}^{2}+n_{y}^{2}+n_{z}^{2})} Since n x {\displaystyle n_{x}} , n y {\displaystyle n_{y}} and n z {\displaystyle n_{z}} can be interchanged without changing the energy, each energy level has a degeneracy of at least three when the three quantum numbers are not all equal.
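A brute-force enumeration reproduces the degeneracy pattern just described for the square box, including the degenerate set (7,1), (1,7), (5,5) at nx² + ny² = 50 in units of π²ℏ²/(2mL²). The cutoff on the quantum numbers is an arbitrary choice made for this sketch.

```python
from collections import defaultdict

# Energy of state (nx, ny) in a square box, in units of pi^2*hbar^2/(2*m*L^2):
# E = nx^2 + ny^2 with nx, ny = 1, 2, 3, ...
levels = defaultdict(list)
N_MAX = 12                       # enumerate quantum numbers up to this cutoff
for nx in range(1, N_MAX + 1):
    for ny in range(1, N_MAX + 1):
        levels[nx**2 + ny**2].append((nx, ny))

# Print the lowest few levels with their degrees of degeneracy.
for E in sorted(levels)[:12]:
    states = levels[E]
    print(f"E = {E:3d} (x pi^2 hbar^2 / 2mL^2): degeneracy {len(states)}  states {states}")
# E = 50 should list (1, 7), (5, 5) and (7, 1), as noted in the text.
```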
== Finding a unique eigenbasis in case of degeneracy == If two operators A ^ {\displaystyle {\hat {A}}} and B ^ {\displaystyle {\hat {B}}} commute, i.e., [ A ^ , B ^ ] = 0 {\displaystyle [{\hat {A}},{\hat {B}}]=0} , then for every eigenvector | ψ ⟩ {\displaystyle |\psi \rangle } of A ^ {\displaystyle {\hat {A}}} , B ^ | ψ ⟩ {\displaystyle {\hat {B}}|\psi \rangle } is also an eigenvector of A ^ {\displaystyle {\hat {A}}} with the same eigenvalue. However, if this eigenvalue, say λ {\displaystyle \lambda } , is degenerate, it can only be said that B ^ | ψ ⟩ {\displaystyle {\hat {B}}|\psi \rangle } belongs to the eigenspace E λ {\displaystyle E_{\lambda }} of A ^ {\displaystyle {\hat {A}}} , which is said to be globally invariant under the action of B ^ {\displaystyle {\hat {B}}} . For two commuting observables A and B, one can construct an orthonormal basis of the state space with eigenvectors common to the two operators. However, if λ {\displaystyle \lambda } is a degenerate eigenvalue of A ^ {\displaystyle {\hat {A}}} , then its eigenspace is merely invariant under the action of B ^ {\displaystyle {\hat {B}}} , so the representation of B ^ {\displaystyle {\hat {B}}} in the eigenbasis of A ^ {\displaystyle {\hat {A}}} is not diagonal but block diagonal, i.e. the degenerate eigenvectors of A ^ {\displaystyle {\hat {A}}} are not, in general, eigenvectors of B ^ {\displaystyle {\hat {B}}} . However, it is always possible to choose, in every degenerate eigensubspace of A ^ {\displaystyle {\hat {A}}} , a basis of eigenvectors common to A ^ {\displaystyle {\hat {A}}} and B ^ {\displaystyle {\hat {B}}} . === Choosing a complete set of commuting observables === If a given observable A is non-degenerate, there exists a unique basis formed by its eigenvectors. On the other hand, if one or several eigenvalues of A ^ {\displaystyle {\hat {A}}} are degenerate, specifying an eigenvalue is not sufficient to characterize a basis vector. If, by choosing an observable B ^ {\displaystyle {\hat {B}}} , which commutes with A ^ {\displaystyle {\hat {A}}} , it is possible to construct an orthonormal basis of eigenvectors common to A ^ {\displaystyle {\hat {A}}} and B ^ {\displaystyle {\hat {B}}} , which is unique for each of the possible pairs of eigenvalues {a,b}, then A ^ {\displaystyle {\hat {A}}} and B ^ {\displaystyle {\hat {B}}} are said to form a complete set of commuting observables. However, if a unique set of eigenvectors can still not be specified for at least one of the pairs of eigenvalues, a third observable C ^ {\displaystyle {\hat {C}}} , which commutes with both A ^ {\displaystyle {\hat {A}}} and B ^ {\displaystyle {\hat {B}}} , can be found such that the three form a complete set of commuting observables. It follows that the eigenfunctions of the Hamiltonian of a quantum system with a common energy value must be labelled by giving some additional information, which can be done by choosing an operator that commutes with the Hamiltonian. These additional labels are needed to name a unique energy eigenfunction and are usually related to the constants of motion of the system. === Degenerate energy eigenstates and the parity operator === The parity operator is defined by its action in the | r ⟩ {\displaystyle |r\rangle } representation of changing r to −r, i.e.
⟨ r | P | ψ ⟩ = ψ ( − r ) {\displaystyle \langle r|P|\psi \rangle =\psi (-r)} The eigenvalues of P can be shown to be limited to ± 1 {\displaystyle \pm 1} , which are both degenerate eigenvalues in an infinite-dimensional state space. An eigenvector of P with eigenvalue +1 is said to be even, while that with eigenvalue −1 is said to be odd. Now, an even operator A ^ {\displaystyle {\hat {A}}} is one that satisfies A ^ = P A ^ P {\displaystyle {\hat {A}}=P{\hat {A}}P} , or equivalently [ P , A ^ ] = 0 {\displaystyle [P,{\hat {A}}]=0} , while an odd operator B ^ {\displaystyle {\hat {B}}} is one that satisfies P B ^ + B ^ P = 0 {\displaystyle P{\hat {B}}+{\hat {B}}P=0} Since the square of the momentum operator p ^ 2 {\displaystyle {\hat {p}}^{2}} is even, if the potential V(r) is even, the Hamiltonian H ^ {\displaystyle {\hat {H}}} is said to be an even operator. In that case, if each of its eigenvalues is non-degenerate, each eigenvector is necessarily an eigenstate of P, and therefore it is possible to look for the eigenstates of H ^ {\displaystyle {\hat {H}}} among even and odd states. However, if one of the energy eigenstates has no definite parity, it can be asserted that the corresponding eigenvalue is degenerate, and P | ψ ⟩ {\displaystyle P|\psi \rangle } is an eigenvector of H ^ {\displaystyle {\hat {H}}} with the same eigenvalue as | ψ ⟩ {\displaystyle |\psi \rangle } . == Degeneracy and symmetry == The physical origin of degeneracy in a quantum-mechanical system is often the presence of some symmetry in the system. Studying the symmetry of a quantum system can, in some cases, enable us to find the energy levels and degeneracies without solving the Schrödinger equation, hence reducing effort. Mathematically, the relation of degeneracy with symmetry can be clarified as follows. Consider a symmetry operation associated with a unitary operator S. Under such an operation, the new Hamiltonian is related to the original Hamiltonian by a similarity transformation generated by the operator S, such that H ′ = S H S − 1 = S H S † {\displaystyle H'=SHS^{-1}=SHS^{\dagger }} , since S is unitary. If the Hamiltonian remains unchanged under the transformation operation S, we have S H S † = H S H S − 1 = H S H = H S [ S , H ] = 0 {\displaystyle {\begin{aligned}SHS^{\dagger }&=H\\[1ex]SHS^{-1}&=H\\[1ex]SH&=HS\\[1ex][S,H]&=0\end{aligned}}} Now, if | α ⟩ {\displaystyle |\alpha \rangle } is an energy eigenstate, H | α ⟩ = E | α ⟩ {\displaystyle H|\alpha \rangle =E|\alpha \rangle } where E is the corresponding energy eigenvalue. H S | α ⟩ = S H | α ⟩ = S E | α ⟩ = E S | α ⟩ {\displaystyle HS|\alpha \rangle =SH|\alpha \rangle =SE|\alpha \rangle =ES|\alpha \rangle } which means that S | α ⟩ {\displaystyle S|\alpha \rangle } is also an energy eigenstate with the same eigenvalue E. If the two states | α ⟩ {\displaystyle |\alpha \rangle } and S | α ⟩ {\displaystyle S|\alpha \rangle } are linearly independent (i.e. physically distinct), they are therefore degenerate. In cases where S is characterized by a continuous parameter ϵ {\displaystyle \epsilon } , all states of the form S ( ϵ ) | α ⟩ {\displaystyle S(\epsilon )|\alpha \rangle } have the same energy eigenvalue. === Symmetry group of the Hamiltonian === The set of all operators which commute with the Hamiltonian of a quantum system is said to form the symmetry group of the Hamiltonian. The commutators of the generators of this group determine the algebra of the group.
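The parity discussion above can be made concrete with a small numerical sketch: discretize a one-dimensional Hamiltonian with an even potential on a symmetric grid, check that it commutes with the parity operator, and verify that its (non-degenerate) low-lying eigenstates have definite parity. The harmonic potential, grid parameters and units (ħ = m = 1) are assumptions made for the example, not taken from the text.

```python
import numpy as np

# Finite-difference 1D Hamiltonian H = -1/2 d^2/dx^2 + V(x) with hbar = m = 1.
N, L = 201, 10.0
x = np.linspace(-L / 2, L / 2, N)        # grid symmetric about x = 0
dx = x[1] - x[0]
V = 0.5 * x**2                           # even potential: V(-x) = V(x)

lap = (np.diag(np.full(N - 1, 1.0), -1) - 2 * np.eye(N)
       + np.diag(np.full(N - 1, 1.0), 1)) / dx**2
H = -0.5 * lap + np.diag(V)

P = np.fliplr(np.eye(N))                 # parity on this grid: (P psi)(x) = psi(-x)

print("||[P, H]|| =", np.linalg.norm(P @ H - H @ P))   # ~0: H is an even operator

evals, evecs = np.linalg.eigh(H)
for n in range(5):
    psi = evecs[:, n]
    parity = psi @ (P @ psi)             # expectation of P: +1 for even, -1 for odd states
    print(f"E_{n} = {evals[n]:.4f}, <P> = {parity:+.3f}")
```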
An n-dimensional representation of the symmetry group preserves the multiplication table of the symmetry operators. The possible degeneracies of the Hamiltonian with a particular symmetry group are given by the dimensionalities of the irreducible representations of the group. The eigenfunctions corresponding to an n-fold degenerate eigenvalue form a basis for an n-dimensional irreducible representation of the symmetry group of the Hamiltonian. == Types of degeneracy == Degeneracies in a quantum system can be systematic or accidental in nature. === Systematic or essential degeneracy === This is also called a geometrical or normal degeneracy and arises due to the presence of some kind of symmetry in the system under consideration, i.e. the invariance of the Hamiltonian under a certain operation, as described above. The representation obtained from a normal degeneracy is irreducible and the corresponding eigenfunctions form a basis for this representation. === Accidental degeneracy === It is a type of degeneracy resulting from some special features of the system or the functional form of the potential under consideration, and is possibly related to a hidden dynamical symmetry in the system. It also results in conserved quantities, which are often not easy to identify. Accidental symmetries lead to these additional degeneracies in the discrete energy spectrum. An accidental degeneracy can be due to the fact that the group of the Hamiltonian is not complete. These degeneracies are connected to the existence of bound orbits in classical physics. ==== Examples: Coulomb and Harmonic Oscillator potentials ==== For a particle in a central 1/r potential, the Laplace–Runge–Lenz vector is a conserved quantity resulting from an accidental degeneracy, in addition to the conservation of angular momentum due to rotational invariance. For a particle moving on a cone under the influence of 1/r and r2 potentials, centred at the tip of the cone, the conserved quantities corresponding to accidental symmetry will be two components of an equivalent of the Runge-Lenz vector, in addition to one component of the angular momentum vector. These quantities generate SU(2) symmetry for both potentials. ==== Example: Particle in a constant magnetic field ==== A particle moving under the influence of a constant magnetic field, undergoing cyclotron motion on a circular orbit, is another important example of an accidental symmetry. The symmetry multiplets in this case are the Landau levels which are infinitely degenerate. == Examples == === The hydrogen atom === In atomic physics, the bound states of an electron in a hydrogen atom provide useful examples of degeneracy. In this case, the Hamiltonian commutes with the total orbital angular momentum L ^ 2 {\displaystyle {\hat {L}}^{2}} , its component along the z-direction, L ^ z {\displaystyle {\hat {L}}_{z}} , total spin angular momentum S ^ 2 {\displaystyle {\hat {S}}^{2}} and its z-component S ^ z {\displaystyle {\hat {S}}_{z}} . The quantum numbers corresponding to these operators are ℓ {\displaystyle \ell } , m ℓ {\displaystyle m_{\ell }} , s {\displaystyle s} (always 1/2 for an electron) and m s {\displaystyle m_{s}} respectively. The energy levels in the hydrogen atom depend only on the principal quantum number n. For a given n, all the states corresponding to ℓ = 0 , … , n − 1 {\displaystyle \ell =0,\ldots ,n-1} have the same energy and are degenerate.
Similarly for given values of n and ℓ, the ( 2 ℓ + 1 ) {\displaystyle (2\ell +1)} states with m ℓ = − ℓ , … , ℓ {\displaystyle m_{\ell }=-\ell ,\ldots ,\ell } are degenerate. The degree of degeneracy of the energy level En is therefore ∑ ℓ = 0 n − 1 ( 2 ℓ + 1 ) = n 2 , {\displaystyle \sum _{\ell \mathop {=} 0}^{n-1}(2\ell +1)=n^{2},} which is doubled if the spin degeneracy is included.: 267f  The degeneracy with respect to m ℓ {\displaystyle m_{\ell }} is an essential degeneracy which is present for any central potential, and arises from the absence of a preferred spatial direction. The degeneracy with respect to ℓ {\displaystyle \ell } is often described as an accidental degeneracy, but it can be explained in terms of special symmetries of the Schrödinger equation which are only valid for the hydrogen atom in which the potential energy is given by Coulomb's law.: 267f  === Isotropic three-dimensional harmonic oscillator === This is a spinless particle of mass m moving in three-dimensional space, subject to a central force whose absolute value is proportional to the distance of the particle from the centre of force. F = − k r {\displaystyle F=-kr} It is said to be isotropic since the potential V ( r ) {\displaystyle V(r)} acting on it is rotationally invariant, i.e., V ( r ) = 1 2 m ω 2 r 2 {\displaystyle V(r)={\tfrac {1}{2}}m\omega ^{2}r^{2}} where ω {\displaystyle \omega } is the angular frequency given by k / m {\textstyle {\sqrt {k/m}}} . Since the state space of such a particle is the tensor product of the state spaces associated with the individual one-dimensional wave functions, the time-independent Schrödinger equation for such a system is given by − ℏ 2 2 m ( ∂ 2 ψ ∂ x 2 + ∂ 2 ψ ∂ y 2 + ∂ 2 ψ ∂ z 2 ) + 1 2 m ω 2 ( x 2 + y 2 + z 2 ) ψ = E ψ {\displaystyle -{\frac {\hbar ^{2}}{2m}}\left({\frac {\partial ^{2}\psi }{\partial x^{2}}}+{\frac {\partial ^{2}\psi }{\partial y^{2}}}+{\frac {\partial ^{2}\psi }{\partial z^{2}}}\right)+{\frac {1}{2}}{m\omega ^{2}\left(x^{2}+y^{2}+z^{2}\right)\psi }=E\psi } So, the energy eigenvalues are E n x , n y , n z = ( n x + n y + n z + 3 2 ) ℏ ω {\displaystyle E_{n_{x},n_{y},n_{z}}=\left(n_{x}+n_{y}+n_{z}+{\tfrac {3}{2}}\right)\hbar \omega } or, E n = ( n + 3 2 ) ℏ ω {\displaystyle E_{n}=\left(n+{\tfrac {3}{2}}\right)\hbar \omega } where n is a non-negative integer. So, the energy levels are degenerate and the degree of degeneracy is equal to the number of different sets { n x , n y , n z } {\displaystyle \{n_{x},n_{y},n_{z}\}} satisfying n x + n y + n z = n {\displaystyle n_{x}+n_{y}+n_{z}=n} The degeneracy of the n {\displaystyle n} -th state can be found by considering the distribution of n {\displaystyle n} quanta across n x {\displaystyle n_{x}} , n y {\displaystyle n_{y}} and n z {\displaystyle n_{z}} . Having 0 quanta in n x {\displaystyle n_{x}} gives n + 1 {\displaystyle n+1} possibilities for distributing the quanta across n y {\displaystyle n_{y}} and n z {\displaystyle n_{z}} . Having 1 quantum in n x {\displaystyle n_{x}} gives n {\displaystyle n} possibilities across n y {\displaystyle n_{y}} and n z {\displaystyle n_{z}} , and so on. This leads to the general result of n − n x + 1 {\displaystyle n-n_{x}+1} and summing over all n x {\displaystyle n_{x}} leads to the degeneracy of the n {\displaystyle n} -th state, ∑ n x = 0 n ( n − n x + 1 ) = ( n + 1 ) ( n + 2 ) 2 {\displaystyle \sum _{n_{x}=0}^{n}(n-n_{x}+1)={\frac {(n+1)(n+2)}{2}}} For the ground state n = 0 {\displaystyle n=0} , the degeneracy is 1 {\displaystyle 1} so the state is non-degenerate.
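The two counting results just derived – (n+1)(n+2)/2 for the isotropic three-dimensional oscillator and n² (or 2n² with spin) for hydrogen – can be cross-checked by brute-force enumeration; this is only a sanity check, not a derivation.

```python
# Cross-check of the counting arguments above.

def oscillator_degeneracy(n):
    """Number of (nx, ny, nz) with nx + ny + nz = n, counted by brute force."""
    return sum(1 for nx in range(n + 1) for ny in range(n + 1)
               for nz in range(n + 1) if nx + ny + nz == n)

for n in range(6):
    formula = (n + 1) * (n + 2) // 2
    assert oscillator_degeneracy(n) == formula
    print(f"3D oscillator, n = {n}: degeneracy {formula}")

def hydrogen_degeneracy(n):
    """Sum of (2*l + 1) for l = 0 .. n-1 (spin excluded)."""
    return sum(2 * l + 1 for l in range(n))

for n in range(1, 6):
    assert hydrogen_degeneracy(n) == n * n
    print(f"hydrogen, n = {n}: degeneracy {n * n} (doubled to {2 * n * n} with spin)")
```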
For all higher states, the degeneracy is greater than 1 so the state is degenerate. == Removing degeneracy == The degeneracy in a quantum mechanical system may be removed if the underlying symmetry is broken by an external perturbation. This causes splitting in the degenerate energy levels. This is essentially a splitting of the original irreducible representations into lower-dimensional irreducible representations of the perturbed system. Mathematically, the splitting due to the application of a small perturbation potential can be calculated using time-independent degenerate perturbation theory. This is an approximation scheme that can be applied to find the solution to the eigenvalue equation for the Hamiltonian H of a quantum system with an applied perturbation, given the solution for the Hamiltonian H0 for the unperturbed system. It involves expanding the eigenvalues and eigenkets of the Hamiltonian H in a perturbation series. The degenerate eigenstates with a given energy eigenvalue form a vector subspace, but not every basis of eigenstates of this space is a good starting point for perturbation theory, because typically there would not be any eigenstates of the perturbed system near them. The correct basis to choose is one that diagonalizes the perturbation Hamiltonian within the degenerate subspace. === Physical examples of removal of degeneracy by a perturbation === Some important examples of physical situations where degenerate energy levels of a quantum system are split by the application of an external perturbation are given below. === Symmetry breaking in two-level systems === A two-level system essentially refers to a physical system having two states whose energies are close together and very different from those of the other states of the system. All calculations for such a system are performed on a two-dimensional subspace of the state space. If the ground state of a physical system is two-fold degenerate, any coupling between the two corresponding states lowers the energy of the ground state of the system, and makes it more stable. If E 1 {\displaystyle E_{1}} and E 2 {\displaystyle E_{2}} are the energy levels of the system, such that E 1 = E 2 = E {\displaystyle E_{1}=E_{2}=E} , and the perturbation W {\displaystyle W} is represented in the two-dimensional subspace as the following 2×2 matrix W = [ 0 W 12 W 12 ∗ 0 ] . {\displaystyle \mathbf {W} ={\begin{bmatrix}0&W_{12}\\[1ex]W_{12}^{*}&0\end{bmatrix}}.} then the perturbed energies are E + = E + | W 12 | E − = E − | W 12 | {\displaystyle {\begin{aligned}E_{+}&=E+|W_{12}|\\E_{-}&=E-|W_{12}|\end{aligned}}} Examples of two-state systems in which the degeneracy in energy states is broken by the presence of off-diagonal terms in the Hamiltonian resulting from an internal interaction due to an inherent property of the system include: Benzene, with two possible dispositions of the three double bonds between neighbouring carbon atoms. The ammonia molecule, where the nitrogen atom can be either above or below the plane defined by the three hydrogen atoms. The H2+ molecule, in which the electron may be localized around either of the two nuclei. === Fine-structure splitting === The corrections to the Coulomb interaction between the electron and the proton in a hydrogen atom due to relativistic motion and spin–orbit coupling result in breaking the degeneracy in energy levels for different values of l corresponding to a single principal quantum number n.
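Returning to the two-level splitting quoted above, the result E± = E ± |W12| is easy to verify by diagonalizing the corresponding 2×2 matrix numerically; the values of E and W12 below are arbitrary illustrative numbers.

```python
import numpy as np

# Two-fold degenerate level E coupled by an off-diagonal perturbation W12.
E, W12 = 1.0, 0.2 + 0.1j                     # illustrative values (assumptions)
H = np.array([[E, W12],
              [np.conj(W12), E]])            # H = E*I + W in the degenerate subspace

evals = np.linalg.eigvalsh(H)                # ascending: E - |W12|, E + |W12|
print("numerical eigenvalues:", evals)
print("formula E -/+ |W12|:  ", E - abs(W12), E + abs(W12))
```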
The perturbation Hamiltonian due to relativistic correction is given by H r = − p 4 / 8 m 3 c 2 {\displaystyle H_{r}=-p^{4}/8m^{3}c^{2}} where p {\displaystyle p} is the momentum operator and m {\displaystyle m} is the mass of the electron. The first-order relativistic energy correction in the | n l m ⟩ {\displaystyle |nlm\rangle } basis is given by E r = ( − 1 / 8 m 3 c 2 ) ⟨ n ℓ m | p 4 | n ℓ m ⟩ {\displaystyle E_{r}=\left(-1/8m^{3}c^{2}\right)\left\langle n\ell m\right|p^{4}\left|n\ell m\right\rangle } Now p 4 = 4 m 2 ( H 0 + e 2 / r ) 2 {\displaystyle p^{4}=4m^{2}(H^{0}+e^{2}/r)^{2}} E r = − 1 2 m c 2 [ E n 2 + 2 E n e 2 ⟨ 1 r ⟩ + e 4 ⟨ 1 r 2 ⟩ ] = − 1 2 m c 2 α 4 [ − 3 / ( 4 n 4 ) + 1 / n 3 ( ℓ + 1 / 2 ) ] {\displaystyle {\begin{aligned}E_{r}&=-{\frac {1}{2mc^{2}}}\left[E_{n}^{2}+2E_{n}e^{2}\left\langle {\frac {1}{r}}\right\rangle +e^{4}\left\langle {\frac {1}{r^{2}}}\right\rangle \right]\\&=-{\frac {1}{2}}mc^{2}\alpha ^{4}\left[-3/(4n^{4})+1/{n^{3}(\ell +1/2)}\right]\end{aligned}}} where α {\displaystyle \alpha } is the fine structure constant. The spin–orbit interaction refers to the interaction between the intrinsic magnetic moment of the electron with the magnetic field experienced by it due to the relative motion with the proton. The interaction Hamiltonian is H s o = − e m c m ⋅ L r 3 = e 2 m 2 c 2 r 3 S ⋅ L {\displaystyle H_{so}=-{\frac {e}{mc}}{\frac {\mathbf {m} \cdot \mathbf {L} }{r^{3}}}={\frac {e^{2}}{m^{2}c^{2}r^{3}}}\mathbf {S} \cdot \mathbf {L} } which may be written as H s o = e 2 4 m 2 c 2 r 3 [ J 2 − L 2 − S 2 ] {\displaystyle H_{so}={\frac {e^{2}}{4m^{2}c^{2}r^{3}}}\left[J^{2}-L^{2}-S^{2}\right]} The first order energy correction in the | j , m , ℓ , 1 / 2 ⟩ {\displaystyle |j,m,\ell ,1/2\rangle } basis where the perturbation Hamiltonian is diagonal, is given by E s o = ℏ 2 e 2 4 m 2 c 2 j ( j + 1 ) − ℓ ( ℓ + 1 ) − 3 4 a 0 3 n 3 ℓ ( ℓ + 1 2 ) ( ℓ + 1 ) {\displaystyle E_{so}={\frac {\hbar ^{2}e^{2}}{4m^{2}c^{2}}}{\frac {j(j+1)-\ell (\ell +1)-{\frac {3}{4}}}{a_{0}^{3}n^{3}\ell (\ell +{\frac {1}{2}})(\ell +1)}}} where a 0 {\displaystyle a_{0}} is the Bohr radius. The total fine-structure energy shift is given by E f s = − m c 2 α 4 2 n 3 [ 1 / ( j + 1 / 2 ) − 3 / 4 n ] {\displaystyle E_{fs}=-{\frac {mc^{2}\alpha ^{4}}{2n^{3}}}\left[1/(j+1/2)-3/4n\right]} for j = ℓ ± 1 2 {\textstyle j=\ell \pm {\tfrac {1}{2}}} . === Zeeman effect === The splitting of the energy levels of an atom when placed in an external magnetic field because of the interaction of the magnetic moment m → {\displaystyle {\vec {m}}} of the atom with the applied field is known as the Zeeman effect. Taking into consideration the orbital and spin angular momenta, L {\displaystyle \mathbf {L} } and S {\displaystyle \mathbf {S} } , respectively, of a single electron in the Hydrogen atom, the perturbation Hamiltonian is given by V ^ = − ( m ℓ + m s ) ⋅ B {\displaystyle {\hat {V}}=-(\mathbf {m} _{\ell }+\mathbf {m} _{s})\cdot \mathbf {B} } where m ℓ = − e L / 2 m {\displaystyle \mathbf {m} _{\ell }=-e\mathbf {L} /2m} and m s = − e S / m {\displaystyle \mathbf {m} _{s}=-e\mathbf {S} /m} . Thus, V ^ = e 2 m ( L + 2 S ) ⋅ B {\displaystyle {\hat {V}}={\frac {e}{2m}}(\mathbf {L} +2\mathbf {S} )\cdot \mathbf {B} } Now, in case of the weak-field Zeeman effect, when the applied field is weak compared to the internal field, the spin–orbit coupling dominates and L {\textstyle \mathbf {L} } and S {\textstyle \mathbf {S} } are not separately conserved. 
The good quantum numbers are n, ℓ, j and mj, and in this basis, the first order energy correction can be shown to be given by E z = − μ B g j B m j , {\displaystyle E_{z}=-\mu _{B}g_{j}Bm_{j},} where μ B = e ℏ / 2 m {\displaystyle \mu _{B}={e\hbar }/2m} is called the Bohr Magneton. Thus, depending on the value of m j {\displaystyle m_{j}} , each degenerate energy level splits into several levels. In case of the strong-field Zeeman effect, when the applied field is strong enough, so that the orbital and spin angular momenta decouple, the good quantum numbers are now n, l, ml, and ms. Here, Lz and Sz are conserved, so the perturbation Hamiltonian is given by- V ^ = e B ( L z + 2 S z ) / 2 m {\displaystyle {\hat {V}}=eB(L_{z}+2S_{z})/2m} assuming the magnetic field to be along the z-direction. So, V ^ = e B ( m ℓ + 2 m s ) / 2 m {\displaystyle {\hat {V}}=eB(m_{\ell }+2m_{s})/2m} For each value of mℓ, there are two possible values of ms, ± 1 / 2 {\displaystyle \pm 1/2} . === Stark effect === The splitting of the energy levels of an atom or molecule when subjected to an external electric field is known as the Stark effect. For the hydrogen atom, the perturbation Hamiltonian is H ^ s = − | e | E z {\displaystyle {\hat {H}}_{s}=-|e|Ez} if the electric field is chosen along the z-direction. The energy corrections due to the applied field are given by the expectation value of H ^ s {\displaystyle {\hat {H}}_{s}} in the | n ℓ m ⟩ {\displaystyle |n\ell m\rangle } basis. It can be shown by the selection rules that ⟨ n ℓ m ℓ | z | n 1 ℓ 1 m ℓ 1 ⟩ ≠ 0 {\displaystyle \langle n\ell m_{\ell }|z|n_{1}\ell _{1}m_{\ell 1}\rangle \neq 0} when ℓ = ℓ 1 ± 1 {\displaystyle \ell =\ell _{1}\pm 1} and m ℓ = m ℓ 1 {\displaystyle m_{\ell }=m_{\ell 1}} . The degeneracy is lifted only for certain states obeying the selection rules, in the first order. The first-order splitting in the energy levels for the degenerate states | 2 , 0 , 0 ⟩ {\displaystyle |2,0,0\rangle } and | 2 , 1 , 0 ⟩ {\displaystyle |2,1,0\rangle } , both corresponding to n = 2, is given by Δ E 2 , 1 , m ℓ = ± | e | ℏ 2 / ( m e e 2 ) E {\displaystyle \Delta E_{2,1,m_{\ell }}=\pm |e|\hbar ^{2}/(m_{e}e^{2})E} . == See also == Density of states == References == == Further reading == Cohen-Tannoudji, Claude; Diu, Bernard; Laloë, Franck. Quantum Mechanics. Vol. 1. Hermann. ISBN 978-2-7056-8392-4. Shankar, Ramamurti (2013). Principles of Quantum Mechanics. Springer. ISBN 978-1-4615-7675-4. Larson, Ron; Falvo, David C. (30 March 2009). Elementary Linear Algebra, Enhanced Edition. Cengage Learning. pp. 8–. ISBN 978-1-305-17240-1. Hobson; Riley (27 August 2004). Mathematical Methods For Physics And Engineering (Clpe) 2Ed. Cambridge University Press. ISBN 978-0-521-61296-8. Hemmer (2005). Kvantemekanikk: P.C. Hemmer. Tapir akademisk forlag. Tillegg 3: supplement to sections 3.1, 3.3, and 3.5. ISBN 978-82-519-2028-5. Quantum degeneracy in two dimensional systems, Debnarayan Jana, Dept. of Physics, University College of Science and Technology Al-Hashimi, Munir (2008). Accidental Symmetry in Quantum Physics.
Wikipedia/Degenerate_energy_level
The Quantum Vacuum: An Introduction to Quantum Electrodynamics is a physics textbook authored by Peter W. Milonni in 1993. The book provides a careful and thorough treatment of zero-point energy, spontaneous emission, the Casimir, van der Waals forces, Lamb shift and anomalous magnetic moment of the electron at a level of detail not found in other introductory texts to quantum electrodynamics. The first chapter, Zero‐Point Energy in Early Quantum Theory, was originally published in 1991 in the American Journal of Physics. In 2008 Milonni received the Max Born Award "For exceptional contributions to the fields of theoretical optics, laser physics and quantum mechanics, and for dissemination of scientific knowledge through authorship of a series of outstanding books". == References == Milonni, P. W.; Shih, M.-L. (1991). "Zero-point energy in early quantum theory". American Journal of Physics. 59 (8): 684–698. Bibcode:1991AmJPh..59..684M. doi:10.1119/1.16772. ISSN 0002-9505. Milonni, Peter W.; Eberlein, Claudia (1994). "The Quantum Vacuum: An Introduction to Quantum Electrodynamics". American Journal of Physics. 62 (12): 1154. Bibcode:1994AmJPh..62.1154M. doi:10.1119/1.17618. ISSN 0002-9505. Milonni, Peter W. (1994). The Quantum Vacuum: An Introduction to Quantum Electrodynamics. Boston: Academic Press. ISBN 0-12-498080-5. LCCN 93029780. OCLC 422797902.
Wikipedia/The_Quantum_Vacuum:_An_Introduction_to_Quantum_Electrodynamics
Embodied energy is the sum of all the energy required to produce any goods or services, considered as if that energy were incorporated or 'embodied' in the product itself. The concept can help determine the effectiveness of energy-producing or energy-saving devices, or the "real" replacement cost of a building, and, because energy-inputs usually entail greenhouse gas emissions, in deciding whether a product contributes to or mitigates global warming. One fundamental purpose for measuring this quantity is to compare the amount of energy produced or saved by the product in question to the amount of energy consumed in producing it. Embodied energy is an accounting method that aims to find the sum total of the energy necessary for an entire product lifecycle. Determining what constitutes this lifecycle includes assessing the relevance and extent of energy in raw material extraction, transport, manufacture, assembly, installation, disassembly, deconstruction and/or decomposition, as well as human and secondary resources. == History == The history of constructing a system of accounts that records the energy flows through an environment can be traced back to the origins of accounting. As a distinct method, it is often associated with the Physiocrats' "substance" theory of value, and later the agricultural energetics of Sergei Podolinsky, a Russian physician, and the ecological energetics of Vladimir Stanchinsky. The main methods of embodied energy accounting that are used today grew out of Wassily Leontief's input-output model and are called Input-Output Embodied Energy analysis. Leontief's input-output model was in turn an adaptation of the neo-classical theory of general equilibrium with application to "the empirical study of the quantitative interdependence between interrelated economic activities". According to Tennenbaum, Leontief's input-output method was adapted to embodied energy analysis by Hannon to describe ecosystem energy flows. Hannon's adaptation tabulated the total direct and indirect energy requirements (the energy intensity) for each output made by the system. The total amount of energy, direct and indirect, for the entire amount of production was called the embodied energy. == Methodologies == Embodied energy analysis is interested in what energy goes to supporting a consumer, and so all energy depreciation is assigned to the final demand of the consumer. Different methodologies use different scales of data to calculate energy embodied in products and services of nature and human civilization. International consensus on the appropriateness of data scales and methodologies is pending. This difficulty can give a wide range in embodied energy values for any given material. In the absence of a comprehensive global embodied energy public dynamic database, embodied energy calculations may omit important data on, for example, the rural road/highway construction and maintenance needed to move a product, marketing, advertising, catering services, non-human services and the like. Such omissions can be a source of significant methodological error in embodied energy estimations. Without an estimation and declaration of the embodied energy error, it is difficult to calibrate the sustainability index, and so the value of any given material, process or service to environmental and economic processes.
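The input-output approach described above can be sketched in a few lines: if A is the matrix of technical coefficients and e is the vector of direct energy use per unit of output, the total (direct plus indirect) energy intensities ε satisfy ε = e + εA, so ε = e(I − A)⁻¹. The three-sector numbers below are invented purely for illustration.

```python
import numpy as np

# Input-output embodied energy sketch (illustrative 3-sector economy, made-up numbers).
# A[i, j] = output of sector i required per unit of output of sector j.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.30],
              [0.05, 0.10, 0.10]])
# Direct energy used by each sector per unit of its output (e.g. MJ per unit).
e_direct = np.array([2.0, 10.0, 1.5])

# Total (direct + indirect) energy intensity per unit of final output:
# epsilon = e_direct + epsilon @ A  =>  epsilon = e_direct @ inv(I - A)
epsilon = e_direct @ np.linalg.inv(np.eye(3) - A)
print("direct intensities:  ", e_direct)
print("embodied intensities:", np.round(epsilon, 2))

# Embodied energy of a final-demand bundle y (units of each sector's output):
y = np.array([1.0, 0.2, 0.5])
print("embodied energy of bundle:", round(float(epsilon @ y), 2), "MJ")
```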
=== Standards === The SBTool, UK Code for Sustainable Homes was, and USA LEED still is, a method in which the embodied energy of a product or material is rated, along with other factors, to assess a building's environmental impact. Embodied energy is a concept for which scientists have not yet agreed absolute universal values because there are many variables to take into account, but most agree that products can be compared to each other to see which has more and which has less embodied energy. Comparative lists (for an example, see the University of Bath Embodied Energy & Carbon Material Inventory) contain average absolute values, and explain the factors which have been taken into account when compiling the lists. Typical embodied energy units used are MJ/kg (megajoules of energy needed to make a kilogram of product), tCO2 (tonnes of carbon dioxide created by the energy needed to make a kilogram of product). Converting MJ to tCO2 is not straightforward because different types of energy (oil, wind, solar, nuclear and so on) emit different amounts of carbon dioxide, so the actual amount of carbon dioxide emitted when a product is made will be dependent on the type of energy used in the manufacturing process. For example, the Australian Government gives a global average of 0.098 tCO2 = 1 GJ. This is the same as 1 MJ = 0.098 kgCO2 = 98 gCO2 or 1 kgCO2 = 10.204 MJ. === Related methodologies === In the 2000s, drought conditions in Australia generated interest in the application of embodied energy analysis methods to water. This has led to the use of the concept of embodied water. == Data == Many databases exist to quantify the embodied energy of goods and services, including materials and products. These are based on various data sources, with geographic and temporal relevance variations and system boundary completeness. One such database is the Environmental Performance in Construction (EPiC) Database developed at The University of Melbourne, which includes embodied energy data for over 250 mainly construction materials. This database also includes values for embodied water and greenhouse gas emissions. The main reason for the differences in embodied energy data between databases is the source of data and methodology used in their compilation. Bottom-up 'process' data is typically sourced from product manufacturers and suppliers. While this data is generally more reliable and specific to particular products, the methodology used to collect process data typically results in much of the embodied energy of a product being excluded, mainly due to the time, costs and complexity of data collection. Based on national statistics, top-down environmentally-extended input-output (EEIO) data can be used to fill these data gaps. While EEIO analysis of products can be useful on its own for initial scoping of embodied energy, it is generally much less reliable than process data and rarely relevant for a specific product or material. Hence, hybrid methods for quantifying embodied energy have been developed, using available process data and filling any data gaps with EEIO data. Databases that rely on this hybrid approach, such as The University of Melbourne's EPiC Database, provide a more comprehensive assessment of the embodied energy of products and materials. 
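The unit conversion quoted above (a global average of 0.098 tCO2 per GJ, equivalently 0.098 kgCO2 per MJ) can be applied directly; the embodied-energy intensities used in the example below are rough illustrative figures, not values taken from the cited inventories, and the emission factor itself depends on the energy mix.

```python
# Conversion quoted above: global average of 0.098 tCO2 per GJ of energy,
# i.e. 0.098 kgCO2 per MJ (the factor varies with the type of energy used).
T_CO2_PER_GJ = 0.098

def embodied_co2_kg(embodied_energy_mj_per_kg, mass_kg, t_co2_per_gj=T_CO2_PER_GJ):
    """CO2 (kg) embodied in `mass_kg` of a material with the given MJ/kg intensity."""
    energy_mj = embodied_energy_mj_per_kg * mass_kg
    return energy_mj * t_co2_per_gj          # kgCO2/MJ is numerically equal to tCO2/GJ

# Illustrative (assumed) embodied-energy intensities in MJ/kg, for comparison only.
for material, intensity in [("concrete", 0.9), ("steel", 20.0), ("aluminium", 155.0)]:
    print(f"{material:9s}: 1000 kg -> {embodied_co2_kg(intensity, 1000):8.0f} kg CO2")
print("check: 1 kgCO2 =", round(1 / T_CO2_PER_GJ, 3), "MJ")   # ~10.204 MJ, as in the text
```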
== In common materials == Selected data from the Inventory of Carbon and Energy ('ICE') prepared by the University of Bath (UK) == In transportation == Theoretically, embodied energy is the energy used to extract materials from mines, manufacture vehicles, assemble, transport, maintain, transform them to transport energy, and ultimately recycle these vehicles. Besides, the energy needed to build and sustain transport networks, whether road or rail, should also be considered. The process to be implemented is so complex that no one dares to put forward a figure. According to the Institut du développement durable et des relations internationales, in the field of transportation, "it is striking to note that we consume more embodied energy in our transportation expenditures than direct energy", and "we consume less energy to move around in our personal vehicles than we consume the energy we need to produce, sell and transport the cars, trains or buses we use". Jean-Marc Jancovici advocates a carbon footprint analysis of any transportation infrastructure project, before its construction. === In automobiles === ==== Manufacturing ==== According to Volkswagen, the embodied energy contents of a Golf A3 with a petrol engine amounts to 18 000 kWh (i.e. 12% of 545 GJ as shown in the report). A Golf A4 (equipped with a turbocharged direct injection) will show an embodied energy amounting to 22 000 kWh (i.e. 15% of 545 GJ as shown in the report). According to the French energy and environment agency ADEME a motor car has an embodied energy contents of 20 800 kWh whereas an electric vehicle shows an embodied energy contents amounting to 34 700 kWh. An electric car has a higher embodied energy than a combustion engine one, owing to the battery and electronics. According to Science & Vie, the embodied energy of batteries is so high that rechargeable hybrid cars constitute the most appropriate solution, with their batteries smaller than those of an all-electric car. ==== Fuel ==== As regards energy itself, the factor energy returned on energy invested (EROEI) of fuel can be estimated at 8, which means that to some amount of useful energy provided by fuel should be added 1/7 of that amount in embodied energy of the fuel. In other words, the fuel consumption should be augmented by 14.3% due to the fuel EROEI. According to some authors, to produce 6 liters of petrol requires 42 kWh of embodied energy (which corresponds to approximately 4.2 liters of diesel in terms of energy content). ==== Road construction ==== We have to work here with figures, which prove still more difficult to obtain. In the case of road construction, the embodied energy would amount to 1/18 of the fuel consumption (i.e. 6%). ==== Other figures available ==== Treloar, et al. have estimated the embodied energy in an average automobile in Australia as 0.27 terajoules (i.e. 75 000 kWh) as one component in an overall analysis of the energy involved in road transportation. == In buildings == Although most of the focus for improving energy efficiency in buildings has been on their operational emissions, it is estimated that about 30% of all energy consumed throughout the lifetime of a building can be in its embodied energy (this percentage varies based on factors such as age of building, climate, and materials). 
In the past, this percentage was much lower, but as much focus has been placed on reducing operational emissions (such as efficiency improvements in heating and cooling systems), the embodied energy contribution has come much more into play. Examples of embodied energy include: the energy used to extract raw resources, process materials, assemble product components, transport between each step, construction, maintenance and repair, deconstruction and disposal. As such, it is important to employ a whole-life carbon accounting framework to analyze the carbon emissions in buildings. Studies have also shown the need to go beyond the building scale and to take into account the energy associated with mobility of occupants and the embodied energy of infrastructure requirements, in order to avoid shifting energy needs across scales of the built environment. == In the energy field == === EROEI === EROEI (Energy Returned On Energy Invested) provides a basis for evaluating the embodied energy due to energy. Final energy has to be multiplied by 1 EROEI-1 {\displaystyle {\frac {\hbox{1}}{\hbox{EROEI-1}}}} in order to get the embodied energy. Given an EROEI of eight, for example, a seventh of the final energy corresponds to the embodied energy. Not only that, but embodied energy due to the construction and maintenance of power plants should also be taken into account to really obtain overall embodied energy. Here, figures are badly needed. === Electricity === In the BP Statistical Review of World Energy June 2018, toe are converted into kWh "on the basis of thermal equivalence assuming 38% conversion efficiency in a modern thermal power station". In France, by convention, the ratio between primary energy and final energy in electricity amounts to 2.58, corresponding to an efficiency of 38.8%. In Germany, on the contrary, because of the swift development of the renewable energies, the ratio between primary energy and final energy in electricity amounts to only 1.8, corresponding to an efficiency of 55.5%. According to EcoPassenger, overall electricity efficiency would amount to 34% in the UK, 36% in Germany and 29% in France. == Data processing == According to association négaWatt, embodied energy related to digital services amounted to 3.5 TWh/a for networks and 10.0 TWh/a for data centres (half for the servers per se, i. e. 5 TWh/a, and the other half for the buildings in which they are housed, i. e. 5 TWh/a), figures valid in France, in 2015. The organization is optimistic about the evolution of the energy consumption in the digital field, underlining the technical progress being made. The Shift Project, chaired by Jean-Marc Jancovici, contradicts the optimistic vision of the association négaWatt, and notes that the digital energy footprint is growing at 9% per year. == See also == == References == == Bibliography == Clark, D.H.; Treloar, G.J.; Blair, R. (2003). "Estimating the increasing cost of commercial buildings in Australia due to greenhouse emissions trading". In Yang, J.; Brandon, P.S.; Sidwell, A.C. (eds.). Proceedings of the CIB 2003 International Conference on Smart and Sustainable Built Environment, Brisbane, Australia. hdl:10536/DRO/DU:30009596. ISBN 978-1741070415. OCLC 224896901. Costanza, R. (1979). Embodied Energy Basis for Economic-Ecologic Systems (Ph.D.). University of Florida. OCLC 05720193. UF00089540:00001. Crawford, R.H. (2005). "Validation of the Use of Input-Output Data for Embodied Energy Analysis of the Australian Construction Industry". Journal of Construction Research. 
6 (1): 71–90. doi:10.1142/S1609945105000250. Crawford, R.H.; Stephan, A.; Prideaux, F. (2019). Environmental Performance in Construction (EPiC) Database. Melbourne, Victoria, Australia: The University of Melbourne. doi:10.26188/5dc1e272cbedc. Lenzen, M. (2001). "Errors in conventional and input-output-based life-cycle inventories". Journal of Industrial Ecology. 4 (4): 127–148. doi:10.1162/10881980052541981. S2CID 154022052. Lenzen, M.; Treloar, G.J. (February 2002). "Embodied energy in buildings: wood versus concrete-reply to Börjesson and Gustavsson". Energy Policy. 30 (3): 249–255. Bibcode:2002EnPol..30..249L. doi:10.1016/S0301-4215(01)00142-2. Treloar, G.J. (1997). "Extracting Embodied Energy Paths from Input-Output Tables: Towards an Input-Output-based Hybrid Energy Analysis Method". Economic Systems Research. 9 (4): 375–391. doi:10.1080/09535319700000032. Treloar, Graham J. (1998). A comprehensive embodied energy analysis framework (Ph.D.). Deakin University. hdl:10536/DRO/DU:30023444. Treloar, G.J.; Owen, C.; Fay, R. (2001). "Environmental assessment of rammed earth construction systems" (PDF). Structural Survey. 19 (2): 99–105. doi:10.1108/02630800110393680. Treloar, G.J.; Love, P.E.D.; Holt, G.D. (2001). "Using national input-output data for embodied energy analysis of individual residential buildings". Construction Management and Economics. 19 (1): 49–61. doi:10.1080/014461901452076. S2CID 110124981. == External links == Embodied energy data and research at The University of Melbourne Research on embodied energy at the University of Sydney, Australia Australian Greenhouse Office, Department of the Environment and Heritage University of Bath (UK), Inventory of Carbon & Energy (ICE) Material Inventory
Wikipedia/Virtual_energy
In mathematics and science, a nonlinear system (or a non-linear system) is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, biologists, physicists, mathematicians, and many other scientists since most systems are inherently nonlinear in nature. Nonlinear dynamical systems, describing changes in variables over time, may appear chaotic, unpredictable, or counterintuitive, contrasting with much simpler linear systems. Typically, the behavior of a nonlinear system is described in mathematics by a nonlinear system of equations, which is a set of simultaneous equations in which the unknowns (or the unknown functions in the case of differential equations) appear as variables of a polynomial of degree higher than one or in the argument of a function which is not a polynomial of degree one. In other words, in a nonlinear system of equations, the equation(s) to be solved cannot be written as a linear combination of the unknown variables or functions that appear in them. Systems can be defined as nonlinear, regardless of whether known linear functions appear in the equations. In particular, a differential equation is linear if it is linear in terms of the unknown function and its derivatives, even if nonlinear in terms of the other variables appearing in it. As nonlinear dynamical equations are difficult to solve, nonlinear systems are commonly approximated by linear equations (linearization). This works well up to some accuracy and some range for the input values, but some interesting phenomena such as solitons, chaos, and singularities are hidden by linearization. It follows that some aspects of the dynamic behavior of a nonlinear system can appear to be counterintuitive, unpredictable or even chaotic. Although such chaotic behavior may resemble random behavior, it is in fact not random. For example, some aspects of the weather are seen to be chaotic, where simple changes in one part of the system produce complex effects throughout. This nonlinearity is one of the reasons why accurate long-term forecasts are impossible with current technology. Some authors use the term nonlinear science for the study of nonlinear systems. This term is disputed by others: Using a term like nonlinear science is like referring to the bulk of zoology as the study of non-elephant animals. == Definition == In mathematics, a linear map (or linear function) f ( x ) {\displaystyle f(x)} is one which satisfies both of the following properties: Additivity or superposition principle: f ( x + y ) = f ( x ) + f ( y ) ; {\displaystyle \textstyle f(x+y)=f(x)+f(y);} Homogeneity: f ( α x ) = α f ( x ) . {\displaystyle \textstyle f(\alpha x)=\alpha f(x).} Additivity implies homogeneity for any rational α, and, for continuous functions, for any real α. For a complex α, homogeneity does not follow from additivity. For example, an antilinear map is additive but not homogeneous. The conditions of additivity and homogeneity are often combined in the superposition principle f ( α x + β y ) = α f ( x ) + β f ( y ) {\displaystyle f(\alpha x+\beta y)=\alpha f(x)+\beta f(y)} An equation written as f ( x ) = C {\displaystyle f(x)=C} is called linear if f ( x ) {\displaystyle f(x)} is a linear map (as defined above) and nonlinear otherwise. The equation is called homogeneous if C = 0 {\displaystyle C=0} and f ( x ) {\displaystyle f(x)} is a homogeneous function. 
The definition f ( x ) = C {\displaystyle f(x)=C} is very general in that x {\displaystyle x} can be any sensible mathematical object (number, vector, function, etc.), and the function f ( x ) {\displaystyle f(x)} can literally be any mapping, including integration or differentiation with associated constraints (such as boundary values). If f ( x ) {\displaystyle f(x)} contains differentiation with respect to x {\displaystyle x} , the result will be a differential equation. == Nonlinear systems of equations == A nonlinear system of equations consists of a set of equations in several variables such that at least one of them is not a linear equation. For a single equation of the form f ( x ) = 0 , {\displaystyle f(x)=0,} many methods have been designed; see Root-finding algorithm. In the case where f is a polynomial, one has a polynomial equation such as x 2 + x − 1 = 0. {\displaystyle x^{2}+x-1=0.} The general root-finding algorithms apply to polynomial roots, but, in general, they do not find all of the roots, and when they fail to find a root, this does not imply that no root exists. Specific methods for polynomials allow finding all roots or the real roots; see real-root isolation. Solving systems of polynomial equations, that is, finding the common zeros of a set of several polynomials in several variables, is a difficult problem for which elaborate algorithms have been designed, such as Gröbner base algorithms. For the general case of a system of equations formed by equating to zero several differentiable functions, the main method is Newton's method and its variants. In general, these may provide a solution but give no information about the number of solutions.
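To make the role of Newton's method concrete, here is a minimal sketch (the two-equation system x² + y² = 1, y = x³ is an arbitrary illustrative choice, not taken from the text) with a hand-rolled Newton iteration; consistent with the remarks above, it converges to one root near the starting guess and says nothing about how many roots exist.

```python
import numpy as np

def F(v):
    x, y = v
    return np.array([x**2 + y**2 - 1.0,   # circle
                     y - x**3])           # cubic

def J(v):
    x, y = v
    return np.array([[2.0 * x,      2.0 * y],
                     [-3.0 * x**2,  1.0]])

def newton(F, J, v0, tol=1e-12, max_iter=50):
    """Basic Newton iteration: solve J(v) * step = -F(v) and update."""
    v = np.asarray(v0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(J(v), -F(v))
        v += step
        if np.linalg.norm(step) < tol:
            break
    return v

root = newton(F, J, v0=[1.0, 1.0])
print(root, F(root))   # residual ~0; a different v0 may converge to a different root
```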
=== Ordinary differential equations === First order ordinary differential equations are often exactly solvable by separation of variables, especially for autonomous equations. For example, the nonlinear equation d u d x = − u 2 {\displaystyle {\frac {du}{dx}}=-u^{2}} has u = 1 x + C {\displaystyle u={\frac {1}{x+C}}} as a general solution (and also the special solution u = 0 , {\displaystyle u=0,} corresponding to the limit of the general solution when C tends to infinity). The equation is nonlinear because it may be written as d u d x + u 2 = 0 {\displaystyle {\frac {du}{dx}}+u^{2}=0} and the left-hand side of the equation is not a linear function of u {\displaystyle u} and its derivatives. Note that if the u 2 {\displaystyle u^{2}} term were replaced with u {\displaystyle u} , the problem would be linear (the exponential decay problem). Second and higher order ordinary differential equations (more generally, systems of nonlinear equations) rarely yield closed-form solutions, though implicit solutions and solutions involving nonelementary integrals are encountered. Common methods for the qualitative analysis of nonlinear ordinary differential equations include: Examination of any conserved quantities, especially in Hamiltonian systems Examination of dissipative quantities (see Lyapunov function) analogous to conserved quantities Linearization via Taylor expansion Change of variables into something easier to study Bifurcation theory Perturbation methods (can be applied to algebraic equations too) Existence of solutions of Finite-Duration, which can happen under specific conditions for some non-linear ordinary differential equations. === Partial differential equations === The most common basic approach to studying nonlinear partial differential equations is to change the variables (or otherwise transform the problem) so that the resulting problem is simpler (possibly linear). Sometimes, the equation may be transformed into one or more ordinary differential equations, as seen in separation of variables, which is always useful whether or not the resulting ordinary differential equation(s) is solvable. Another common (though less mathematical) tactic, often exploited in fluid and heat mechanics, is to use scale analysis to simplify a general, natural equation in a certain specific boundary value problem. For example, the (very) nonlinear Navier-Stokes equations can be simplified into one linear partial differential equation in the case of transient, laminar, one dimensional flow in a circular pipe; the scale analysis provides conditions under which the flow is laminar and one dimensional and also yields the simplified equation. Other methods include examining the characteristics and using the methods outlined above for ordinary differential equations. === Pendula === A classic, extensively studied nonlinear problem is the dynamics of a frictionless pendulum under the influence of gravity. Using Lagrangian mechanics, it may be shown that the motion of a pendulum can be described by the dimensionless nonlinear equation d 2 θ d t 2 + sin ⁡ ( θ ) = 0 {\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+\sin(\theta )=0} where gravity points "downwards" and θ {\displaystyle \theta } is the angle the pendulum forms with its rest position, as shown in the figure at right. 
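A short numerical sketch (release angles chosen arbitrarily for illustration) that integrates this dimensionless pendulum equation shows the nonlinearity directly: the oscillation period depends on the amplitude, unlike the constant period 2π of the linearized oscillator discussed next.

```python
import numpy as np
from scipy.integrate import solve_ivp

def pendulum(t, y):
    theta, omega = y
    return [omega, -np.sin(theta)]            # d^2(theta)/dt^2 + sin(theta) = 0

def period(theta0):
    """Full oscillation period for release from rest at angle theta0 (radians)."""
    crossing = lambda t, y: y[0]              # event: theta passes through zero
    crossing.direction = -1.0                 # only downward crossings
    sol = solve_ivp(pendulum, (0.0, 50.0), [theta0, 0.0],
                    events=crossing, rtol=1e-10, atol=1e-12)
    return 4.0 * sol.t_events[0][0]           # quarter period by symmetry

for theta0 in (0.1, 1.0, 2.0, 3.0):
    print(f"amplitude {theta0:.1f} rad: period = {period(theta0):.4f}"
          f"  (small-angle value: {2 * np.pi:.4f})")
```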
One approach to "solving" this equation is to use d θ / d t {\displaystyle d\theta /dt} as an integrating factor, which would eventually yield ∫ d θ C 0 + 2 cos ⁡ ( θ ) = t + C 1 {\displaystyle \int {\frac {d\theta }{\sqrt {C_{0}+2\cos(\theta )}}}=t+C_{1}} which is an implicit solution involving an elliptic integral. This "solution" generally does not have many uses because most of the nature of the solution is hidden in the nonelementary integral (nonelementary unless C 0 = 2 {\displaystyle C_{0}=2} ). Another way to approach the problem is to linearize any nonlinearity (the sine function term in this case) at the various points of interest through Taylor expansions. For example, the linearization at θ = 0 {\displaystyle \theta =0} , called the small angle approximation, is d 2 θ d t 2 + θ = 0 {\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+\theta =0} since sin ⁡ ( θ ) ≈ θ {\displaystyle \sin(\theta )\approx \theta } for θ ≈ 0 {\displaystyle \theta \approx 0} . This is a simple harmonic oscillator corresponding to oscillations of the pendulum near the bottom of its path. Another linearization would be at θ = π {\displaystyle \theta =\pi } , corresponding to the pendulum being straight up: d 2 θ d t 2 + π − θ = 0 {\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+\pi -\theta =0} since sin ⁡ ( θ ) ≈ π − θ {\displaystyle \sin(\theta )\approx \pi -\theta } for θ ≈ π {\displaystyle \theta \approx \pi } . The solution to this problem involves hyperbolic sinusoids; note that, unlike the small angle approximation, this approximation is unstable, meaning that | θ | {\displaystyle |\theta |} will usually grow without limit, though bounded solutions are possible. This corresponds to the difficulty of balancing a pendulum upright; it is literally an unstable state. One more interesting linearization is possible around θ = π / 2 {\displaystyle \theta =\pi /2} , around which sin ⁡ ( θ ) ≈ 1 {\displaystyle \sin(\theta )\approx 1} : d 2 θ d t 2 + 1 = 0. {\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+1=0.} This corresponds to a free fall problem. A very useful qualitative picture of the pendulum's dynamics may be obtained by piecing together such linearizations, as seen in the figure at right. Other techniques may be used to find (exact) phase portraits and approximate periods. == Types of nonlinear dynamic behaviors == Amplitude death – any oscillations present in the system cease due to some kind of interaction with another system or feedback by the same system Chaos – values of a system cannot be predicted indefinitely far into the future, and fluctuations are aperiodic Multistability – the presence of two or more stable states Solitons – self-reinforcing solitary waves Limit cycles – asymptotic periodic orbits to which destabilized fixed points are attracted. Self-oscillations – feedback oscillations taking place in open dissipative physical systems. == Examples of nonlinear equations == == See also == == References == == Further reading == == External links == Command and Control Research Program (CCRP) New England Complex Systems Institute: Concepts in Complex Systems Nonlinear Dynamics I: Chaos at MIT's OpenCourseWare Nonlinear Model Library – (in MATLAB) a Database of Physical Systems The Center for Nonlinear Studies at Los Alamos National Laboratory
Wikipedia/Nonlinear_equation
In quantum optics, the Jaynes–Cummings model (sometimes abbreviated JCM) is a theoretical model that describes the system of a two-level atom interacting with a quantized mode of an optical cavity (or a bosonic field), with or without the presence of light (in the form of a bath of electromagnetic radiation that can cause spontaneous emission and absorption). It was originally developed to study the interaction of atoms with the quantized electromagnetic field in order to investigate the phenomena of spontaneous emission and absorption of photons in a cavity. It is named after Edwin Thompson Jaynes and Fred Cummings, who developed it in the 1960s; its predictions were confirmed experimentally in 1987. The Jaynes–Cummings model is of great interest to atomic physics, quantum optics, solid-state physics and quantum information circuits, both experimentally and theoretically. Journal special issues have commemorated the 50th anniversary (with numerous relevant articles, including two editorials, one by Cummings) and the 60th anniversary. It also has applications in coherent control and quantum information processing. == History == === 1963: Jaynes and Cummings === The model was originally developed in a 1963 article by Edwin Jaynes and Fred Cummings to elucidate the effects of giving a fully quantum mechanical treatment to the behavior of atoms interacting with an electromagnetic field. In order to simplify the math and allow for a tractable calculation, Jaynes and Cummings restricted their attention to the interaction of an atom with a single mode of the quantized electromagnetic field. (See below for further mathematical details.) This approach is in contrast to the earlier semi-classical method, in which only the dynamics of the atom are treated quantum mechanically, while the field with which it interacts is assumed to behave according to classical electromagnetic theory. The quantum mechanical treatment of the field in the Jaynes–Cummings model reveals a number of novel features, including: The existence of Rabi oscillations between the states of the two-level system as it interacts with the quantum field. This was originally believed to be a purely quantum mechanical effect, although a semi-classical explanation for it was later provided in terms of linear dispersion and absorption. A ladder of quantized energy levels, called the Jaynes–Cummings ladder, that scales in energy non-linearly as n {\displaystyle {\sqrt {n}}} where n {\displaystyle n} is the total number of quanta in the coupled system. This quantization of energies and non-linear scaling is purely quantum mechanical in nature. The collapse and subsequent revivals of the probability to detect the two-level system in a given state when the field is initially in a coherent state. While the collapse has a simple classical explanation, the revivals can only be explained by the discreteness of the energy spectrum due to the quantum nature of the field. To realize the dynamics predicted by the Jaynes–Cummings model experimentally requires a quantum mechanical resonator with a very high quality factor so that the transitions between the states in the two-level system (typically two energy sub-levels in an atom) are coupled very strongly by the interaction of the atom with the field mode. This simultaneously suppresses any coupling between other sub-levels in the atom and coupling to other modes of the field, and thus makes any losses small enough to observe the dynamics predicted by the Jaynes–Cummings model.
Because of the difficulty in realizing such an apparatus, the model remained a mathematical curiosity for quite some time. In 1985, several groups using Rydberg atoms along with a maser in a microwave cavity demonstrated the predicted Rabi oscillations. However, as noted before, this effect was later found to have a semi-classical explanation. === 1987: Rempe, Walther and Klein === It was not until 1987 that Gerhard Rempe, Herbert Walther, and Norbert Klein were finally able to use a single-atom maser to demonstrate the revivals of probabilities predicted by the model. Before that time, research groups were unable to build experimental setups capable of enhancing the coupling of an atom with a single field mode, simultaneously suppressing other modes. Experimentally, the quality factor of the cavity must be high enough to consider the dynamics of the system as equivalent to the dynamics of a single mode field. This successful demonstration of dynamics that could only be explained by a quantum mechanical model of the field spurred further development of high quality cavities for use in this research. With the advent of one-atom masers it was possible to study the interaction of a single atom (usually a Rydberg atom) with a single resonant mode of the electromagnetic field in a cavity from an experimental point of view, and study different aspects of the Jaynes–Cummings model. It was found that an hourglass geometry could be used to maximize the volume occupied by the mode, while simultaneously maintaining a high quality factor in order to maximize coupling strength, and thus better approximate the parameters of the model. To observe strong atom-field coupling in visible light frequencies, hour-glass-type optical modes can be helpful because of their large mode volume that eventually coincides with a strong field inside the cavity. A quantum dot inside a photonic crystal nano-cavity is also a promising system for observing collapse and revival of Rabi cycles in the visible light frequencies. === Further developments === Many recent experiments have focused on the application of the model to systems with potential applications in quantum information processing and coherent control. Various experiments have demonstrated the dynamics of the Jaynes–Cummings model in the coupling of a quantum dot to the modes of a micro-cavity, potentially allowing it to be applied in a physical system of much smaller size. Other experiments have focused on demonstrating the non-linear nature of the Jaynes–Cummings ladder of energy levels by direct spectroscopic observation. These experiments have found direct evidence for the non-linear behavior predicted from the quantum nature of the field in both superconducting circuits containing an artificial atom coupled to a very high quality oscillator in the form of a superconducting RLC circuit, and in a collection of Rydberg atoms coupled via their spins. In the latter case, the presence or absence of a collective Rydberg excitation in the ensemble serves the role of the two level system, while the role of the bosonic field mode is played by the total number of spin flips that take place. Theoretical work has extended the original model to include the effects of dissipation and damping, typically via a phenomenological approach. Proposed extensions have also incorporated the inclusion of multiple modes of the quantum field, allowing for coupling to additional energy levels within the atom, or the presence of multiple atoms interacting with the same field. 
Some attempt has also been made to go beyond the so-called rotating-wave approximation that is usually employed (see the mathematical derivation below). The coupling of a single quantum field mode with multiple ( N > 1 {\displaystyle N>1} ) two-state subsystems (equivalent to spins higher than 1/2) is known as the Dicke model or the Tavis–Cummings model. For example, it applies to a high quality resonant cavity containing multiple identical atoms with transitions near the cavity resonance, or a resonator coupled to multiple quantum dots on a superconducting circuit. It reduces to the Jaynes–Cummings model for the case N = 1 {\displaystyle N=1} . The model provides the possibility to realize several exotic theoretical possibilities in an experimental setting. For example, it was realized that during the periods of collapsed Rabi oscillations, the atom-cavity system exists in a quantum superposition state on a macroscopic scale. Such a state is sometimes referred to as a Schrödinger cat, since it allows the exploration of the counter intuitive effects of how quantum entanglement manifests in macroscopic systems. It can also be used to model how quantum information is transferred in a quantum field. == Mathematical formulation 1 == The Hamiltonian that describes the full system, H ^ = H ^ field + H ^ atom + H ^ int {\displaystyle {\hat {H}}={\hat {H}}_{\text{field}}+{\hat {H}}_{\text{atom}}+{\hat {H}}_{\text{int}}} consists of the free field Hamiltonian, the atomic excitation Hamiltonian, and the Jaynes–Cummings interaction Hamiltonian: H ^ field = ℏ ω c a ^ † a ^ H ^ atom = ℏ ω a σ ^ z 2 H ^ int = ℏ Ω 2 E ^ S ^ . {\displaystyle {\begin{aligned}{\hat {H}}_{\text{field}}&=\hbar \omega _{c}{\hat {a}}^{\dagger }{\hat {a}}\\{\hat {H}}_{\text{atom}}&=\hbar \omega _{a}{\frac {{\hat {\sigma }}_{z}}{2}}\\{\hat {H}}_{\text{int}}&={\frac {\hbar \Omega }{2}}{\hat {E}}{\hat {S}}.\end{aligned}}} Here, for convenience, the vacuum field energy is set to 0 {\displaystyle 0} . For deriving the JCM interaction Hamiltonian the quantized radiation field is taken to consist of a single bosonic mode with the field operator E ^ = E ZPF ( a ^ + a ^ † ) {\displaystyle {\hat {E}}=E_{\text{ZPF}}\left({\hat {a}}+{\hat {a}}^{\dagger }\right)} , where the operators a ^ † {\displaystyle {\hat {a}}^{\dagger }} and a ^ {\displaystyle {\hat {a}}} are the bosonic creation and annihilation operators and ω c {\displaystyle \omega _{c}} is the angular frequency of the mode. On the other hand, the two-level atom is equivalent to a spin-half whose state can be described using a three-dimensional Bloch vector. (It should be understood that "two-level atom" here is not an actual atom with spin, but rather a generic two-level quantum system whose Hilbert space is isomorphic to a spin-half.) The atom is coupled to the field through its polarization operator S ^ = σ ^ + + σ ^ − {\displaystyle {\hat {S}}={\hat {\sigma }}_{+}+{\hat {\sigma }}_{-}} . The operators σ ^ + = | e ⟩ ⟨ g | {\displaystyle {\hat {\sigma }}_{+}=|e\rangle \langle g|} and σ ^ − = | g ⟩ ⟨ e | {\displaystyle {\hat {\sigma }}_{-}=|g\rangle \langle e|} are the raising and lowering operators of the atom. The operator σ ^ z = | e ⟩ ⟨ e | − | g ⟩ ⟨ g | {\displaystyle {\hat {\sigma }}_{z}=|e\rangle \langle e|-|g\rangle \langle g|} is the atomic inversion operator, and ω a {\displaystyle \omega _{a}} is the atomic transition frequency. === Jaynes–Cummings Hamiltonian 1 === Moving from the Schrödinger picture into the interaction picture (a.k.a. 
rotating frame) defined by the choice H ^ 0 = H ^ field + H ^ atom {\displaystyle {\hat {H}}_{0}={\hat {H}}_{\text{field}}+{\hat {H}}_{\text{atom}}} , we obtain H ^ int ( t ) = ℏ Ω 2 ( a ^ σ ^ − e − i ( ω c + ω a ) t + a ^ † σ ^ + e i ( ω c + ω a ) t + a ^ σ ^ + e i ( − ω c + ω a ) t + a ^ † σ ^ − e − i ( − ω c + ω a ) t ) . {\displaystyle {\hat {H}}_{\text{int}}(t)={\frac {\hbar \Omega }{2}}\left({\hat {a}}{\hat {\sigma }}_{-}e^{-i(\omega _{c}+\omega _{a})t}+{\hat {a}}^{\dagger }{\hat {\sigma }}_{+}e^{i(\omega _{c}+\omega _{a})t}+{\hat {a}}{\hat {\sigma }}_{+}e^{i(-\omega _{c}+\omega _{a})t}+{\hat {a}}^{\dagger }{\hat {\sigma }}_{-}e^{-i(-\omega _{c}+\omega _{a})t}\right).} This Hamiltonian contains both quickly ( ω c + ω a ) {\displaystyle (\omega _{c}+\omega _{a})} and slowly ( ω c − ω a ) {\displaystyle (\omega _{c}-\omega _{a})} oscillating components. To get a solvable model, the quickly oscillating "counter-rotating" terms, ( ω c + ω a ) {\displaystyle (\omega _{c}+\omega _{a})} , are ignored. This is referred to as the rotating wave approximation, and it is valid since the fast oscillating term couples states of comparatively large energy difference: When the difference in energy is much larger than the coupling, the mixing of these states will be small, or put differently, the coupling is responsible for very little population transfer between the states. Transforming back into the Schrödinger picture the JCM Hamiltonian is thus written as H ^ JC = ℏ ω c a ^ † a ^ + ℏ ω a σ ^ z 2 + ℏ Ω 2 ( a ^ σ ^ + + a ^ † σ ^ − ) . {\displaystyle {\hat {H}}_{\text{JC}}=\hbar \omega _{c}{\hat {a}}^{\dagger }{\hat {a}}+\hbar \omega _{a}{\frac {{\hat {\sigma }}_{z}}{2}}+{\frac {\hbar \Omega }{2}}\left({\hat {a}}{\hat {\sigma }}_{+}+{\hat {a}}^{\dagger }{\hat {\sigma }}_{-}\right).} === Eigenstates === It is possible, and often very helpful, to write the Hamiltonian of the full system as a sum of two commuting parts: H ^ JC = H ^ I + H ^ II , {\displaystyle {\hat {H}}_{\text{JC}}={\hat {H}}_{\text{I}}+{\hat {H}}_{\text{II}},} where H ^ I = ℏ ω c ( a ^ † a ^ + σ ^ z 2 ) H ^ II = ℏ δ σ ^ z 2 + ℏ Ω 2 ( a ^ σ ^ + + a ^ † σ ^ − ) {\displaystyle {\begin{aligned}{\hat {H}}_{\text{I}}&=\hbar \omega _{c}\left({\hat {a}}^{\dagger }{\hat {a}}+{\frac {{\hat {\sigma }}_{z}}{2}}\right)\\{\hat {H}}_{\text{II}}&=\hbar \delta {\frac {{\hat {\sigma }}_{z}}{2}}+{\frac {\hbar \Omega }{2}}\left({\hat {a}}{\hat {\sigma }}_{+}+{\hat {a}}^{\dagger }{\hat {\sigma }}_{-}\right)\end{aligned}}} with δ = ω a − ω c {\displaystyle \delta =\omega _{a}-\omega _{c}} called the detuning (frequency) between the field and the two-level system. The eigenstates of H ^ I {\displaystyle {\hat {H}}_{I}} , being of tensor product form, are easily solved and denoted by | n + 1 , g ⟩ , | n , e ⟩ {\displaystyle |n+1,g\rangle ,|n,e\rangle } , where n ∈ N {\displaystyle n\in \mathbb {N} } denotes the number of radiation quanta in the mode. As the states | ψ 1 n ⟩ := | n , e ⟩ {\displaystyle |\psi _{1n}\rangle :=|n,e\rangle } and | ψ 2 n ⟩ := | n + 1 , g ⟩ {\displaystyle |\psi _{2n}\rangle :=|n+1,g\rangle } are degenerate with respect to H ^ I {\displaystyle {\hat {H}}_{I}} for all n {\displaystyle n} , it is enough to diagonalize H ^ JC {\displaystyle {\hat {H}}_{\text{JC}}} in the subspaces span ⁡ { | ψ 1 n ⟩ , | ψ 2 n ⟩ } {\displaystyle \operatorname {span} \{|\psi _{1n}\rangle ,|\psi _{2n}\rangle \}} . 
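As a quick numerical illustration of this block structure, the sketch below (a minimal example assuming ħ = 1, an arbitrary photon-number cutoff, and illustrative parameter values, none of which come from the text) builds Ĥ_JC as a matrix in the |n⟩ ⊗ {|e⟩, |g⟩} basis, checks that a given |n, e⟩ couples only within its own two-dimensional subspace, and compares the eigenvalues of that 2×2 block with the closed-form expression worked out in the next paragraph.

```python
import numpy as np

hbar = 1.0
wc, wa, Omega = 1.0, 1.2, 0.1                   # cavity frequency, atomic frequency, coupling (illustrative)
N = 10                                          # photon-number cutoff of the truncated Fock space

a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # annihilation operator, N x N
I_f, I_a = np.eye(N), np.eye(2)
sp = np.array([[0.0, 1.0], [0.0, 0.0]])         # sigma_+ = |e><g|, with |e> = (1, 0), |g> = (0, 1)
sm = sp.T                                       # sigma_- = |g><e|
sz = np.diag([1.0, -1.0])                       # |e><e| - |g><g|

H = (hbar * wc * np.kron(a.T @ a, I_a)
     + 0.5 * hbar * wa * np.kron(I_f, sz)
     + 0.5 * hbar * Omega * (np.kron(a, sp) + np.kron(a.T, sm)))

n = 3
idx = [2 * n, 2 * (n + 1) + 1]                  # indices of |n, e> and |n+1, g>
row = H[2 * n].copy()
row[idx] = 0.0
print(np.allclose(row, 0.0))                    # True: |n, e> couples only inside its 2x2 block

block = H[np.ix_(idx, idx)]
delta = wa - wc
Omega_n = np.sqrt(delta**2 + Omega**2 * (n + 1))
E_minus = hbar * wc * (n + 0.5) - 0.5 * hbar * Omega_n
E_plus = hbar * wc * (n + 0.5) + 0.5 * hbar * Omega_n
print(np.linalg.eigvalsh(block))                # numerical eigenvalues, ascending
print(E_minus, E_plus)                          # closed-form values from the next paragraph
```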
The matrix elements of H ^ JC {\displaystyle {\hat {H}}_{\text{JC}}} in this subspace, H i j ( n ) := ⟨ ψ i n | H ^ JC | ψ j n ⟩ , {\displaystyle {H}_{ij}^{(n)}:=\langle \psi _{in}|{\hat {H}}_{\text{JC}}|\psi _{jn}\rangle ,} read H ( n ) = ℏ ( n ω c + ω a 2 Ω 2 n + 1 Ω 2 n + 1 ( n + 1 ) ω c − ω a 2 ) {\displaystyle H^{(n)}=\hbar {\begin{pmatrix}n\omega _{c}+{\frac {\omega _{a}}{2}}&{\frac {\Omega }{2}}{\sqrt {n+1}}\\[8pt]{\frac {\Omega }{2}}{\sqrt {n+1}}&(n+1)\omega _{c}-{\frac {\omega _{a}}{2}}\end{pmatrix}}} For a given n {\displaystyle n} , the energy eigenvalues of H ( n ) {\displaystyle H^{(n)}} are E ± ( n ) = ℏ ω c ( n + 1 2 ) ± 1 2 ℏ Ω n ( δ ) , {\displaystyle E_{\pm }(n)=\hbar \omega _{c}\left(n+{\frac {1}{2}}\right)\pm {\frac {1}{2}}\hbar \Omega _{n}(\delta ),} where Ω n ( δ ) = δ 2 + Ω 2 ( n + 1 ) {\textstyle \Omega _{n}(\delta )={\sqrt {\delta ^{2}+\Omega ^{2}(n+1)}}} is the Rabi frequency for the specific detuning parameter. The eigenstates | n , ± ⟩ {\displaystyle |n,\pm \rangle } associated with the energy eigenvalues are given by | n , + ⟩ = cos ⁡ ( α n 2 ) | ψ 1 n ⟩ + sin ⁡ ( α n 2 ) | ψ 2 n ⟩ {\displaystyle |n,+\rangle =\cos \left({\frac {\alpha _{n}}{2}}\right)|\psi _{1n}\rangle +\sin \left({\frac {\alpha _{n}}{2}}\right)|\psi _{2n}\rangle } | n , − ⟩ = sin ⁡ ( α n 2 ) | ψ 1 n ⟩ − cos ⁡ ( α n 2 ) | ψ 2 n ⟩ {\displaystyle |n,-\rangle =\sin \left({\frac {\alpha _{n}}{2}}\right)|\psi _{1n}\rangle -\cos \left({\frac {\alpha _{n}}{2}}\right)|\psi _{2n}\rangle } where the angle α n {\displaystyle \alpha _{n}} is defined through α n := tan − 1 ⁡ ( Ω n + 1 δ ) . {\displaystyle \alpha _{n}:=\tan ^{-1}\left({\frac {\Omega {\sqrt {n+1}}}{\delta }}\right).} === Schrödinger picture dynamics === It is now possible to obtain the dynamics of a general state by expanding it on to the noted eigenstates. We consider a superposition of number states as the initial state for the field, | ψ field ( 0 ) ⟩ = ∑ n C n | n ⟩ {\textstyle |\psi _{\text{field}}(0)\rangle =\sum _{n}{C_{n}|n\rangle }} , and assume an atom in the excited state is injected into the field. The initial state of the system is | ψ tot ( 0 ) ⟩ = ∑ n C n | n , e ⟩ = ∑ n C n [ cos ⁡ ( α n 2 ) | n , + ⟩ + sin ⁡ ( α n 2 ) | n , − ⟩ ] . {\displaystyle |\psi _{\text{tot}}(0)\rangle =\sum _{n}{C_{n}|n,e\rangle }=\sum _{n}C_{n}\left[\cos \left({\frac {\alpha _{n}}{2}}\right)|n,+\rangle +\sin \left({\frac {\alpha _{n}}{2}}\right)|n,-\rangle \right].} Since the | n , ± ⟩ {\displaystyle |n,\pm \rangle } are stationary states of the field-atom system, then the state vector for times t > 0 {\displaystyle t>0} is just given by | ψ tot ( t ) ⟩ = e − i H ^ JC t / ℏ | ψ tot ( 0 ) ⟩ = ∑ n C n [ cos ⁡ ( α n 2 ) | n , + ⟩ e − i E + ( n ) t / ℏ + sin ⁡ ( α n 2 ) | n , − ⟩ e − i E − ( n ) t / ℏ ] . {\displaystyle |\psi _{\text{tot}}(t)\rangle =e^{-i{\hat {H}}_{\text{JC}}t/\hbar }|\psi _{\text{tot}}(0)\rangle =\sum _{n}C_{n}\left[\cos \left({\frac {\alpha _{n}}{2}}\right)|n,+\rangle e^{-iE_{+}(n)t/\hbar }+\sin \left({\frac {\alpha _{n}}{2}}\right)|n,-\rangle e^{-iE_{-}(n)t/\hbar }\right].} The Rabi oscillations can readily be seen in the sin and cos functions in the state vector. Different periods occur for different number states of photons. What is observed in experiment is the sum of many periodic functions that can be very widely oscillating and destructively sum to zero at some moment of time, but will be non-zero again at later moments. Finiteness of this moment results just from discreteness of the periodicity arguments. 
If the field amplitude were continuous, the revival would have never happened at finite time. === Heisenberg picture dynamics === It is possible in the Heisenberg notation to directly determine the unitary evolution operator from the Hamiltonian: U ^ ( t ) = e − i H ^ JC t / ℏ = ( e − i ω c t ( a ^ † a ^ + 1 2 ) ( cos ⁡ t φ ^ + g 2 − i δ / 2 sin ⁡ t φ ^ + g 2 φ ^ + g 2 ) − i g e − i ω c t ( a ^ † a ^ + 1 2 ) sin ⁡ t φ ^ + g 2 φ ^ + g 2 a ^ − i g e − i ω c t ( a ^ † a ^ − 1 2 ) sin ⁡ t φ ^ φ ^ a ^ † e − i ω c t ( a ^ † a ^ − 1 2 ) ( cos ⁡ t φ ^ + i δ / 2 sin ⁡ t φ ^ φ ^ ) ) {\displaystyle {\begin{matrix}{\begin{aligned}{\hat {U}}(t)&=e^{-i{\hat {H}}_{\text{JC}}t/\hbar }\\&={\begin{pmatrix}e^{-i\omega _{c}t\left({\hat {a}}^{\dagger }{\hat {a}}+{\frac {1}{2}}\right)}\left(\cos t{\sqrt {{\hat {\varphi }}+g^{2}}}-i\delta /2{\frac {\sin t{\sqrt {{\hat {\varphi }}+g^{2}}}}{\sqrt {{\hat {\varphi }}+g^{2}}}}\right)&-ige^{-i\omega _{c}t\left({\hat {a}}^{\dagger }{\hat {a}}+{\frac {1}{2}}\right)}{\frac {\sin t{\sqrt {{\hat {\varphi }}+g^{2}}}}{\sqrt {{\hat {\varphi }}+g^{2}}}}\,{\hat {a}}\\-ige^{-i\omega _{c}t\left({\hat {a}}^{\dagger }{\hat {a}}-{\frac {1}{2}}\right)}{\frac {\sin t{\sqrt {\hat {\varphi }}}}{\sqrt {\hat {\varphi }}}}{\hat {a}}^{\dagger }&e^{-i\omega _{c}t\left({\hat {a}}^{\dagger }{\hat {a}}-{\frac {1}{2}}\right)}\left(\cos t{\sqrt {\hat {\varphi }}}+i\delta /2{\frac {\sin t{\sqrt {\hat {\varphi }}}}{\sqrt {\hat {\varphi }}}}\right)\end{pmatrix}}\end{aligned}}\end{matrix}}} where the operator φ ^ {\displaystyle {\hat {\varphi }}} is defined as φ ^ = g 2 a ^ † a ^ + δ 2 / 4 {\displaystyle {\hat {\varphi }}=g^{2}{\hat {a}}^{\dagger }{\hat {a}}+\delta ^{2}/4} and g {\displaystyle g} is given by g = Ω ℏ {\displaystyle g={\frac {\Omega }{\hbar }}} The unitarity of U ^ {\displaystyle {\hat {U}}} is guaranteed by the identities sin ⁡ t φ ^ + g 2 φ ^ + g 2 a ^ = a ^ sin ⁡ t φ ^ φ ^ , cos ⁡ t φ ^ + g 2 a ^ = a ^ cos ⁡ t φ ^ , {\displaystyle {\begin{aligned}{\frac {\sin t\,{\sqrt {{\hat {\varphi }}+g^{2}}}}{\sqrt {{\hat {\varphi }}+g^{2}}}}\;{\hat {a}}&={\hat {a}}\;{\frac {\sin t\,{\sqrt {\hat {\varphi }}}}{\sqrt {\hat {\varphi }}}},\\\cos t\,{\sqrt {{\hat {\varphi }}+g^{2}}}\;{\hat {a}}&={\hat {a}}\;\cos t{\sqrt {\hat {\varphi }}},\end{aligned}}} and their Hermitian conjugates. By the unitary evolution operator one can calculate the time evolution of the state of the system described by its density matrix ρ ^ ( t ) {\displaystyle {\hat {\rho }}(t)} , and from there the expectation value of any observable, given the initial state: ρ ^ ( t ) = U ^ † ( t ) ρ ^ ( 0 ) U ^ ( t ) {\displaystyle {\hat {\rho }}(t)={\hat {U}}^{\dagger }(t){\hat {\rho }}(0){\hat {U}}(t)} ⟨ Θ ^ ⟩ t = Tr [ ρ ^ ( t ) Θ ^ ] {\displaystyle \langle {\hat {\Theta }}\rangle _{t}={\text{Tr}}[{\hat {\rho }}(t){\hat {\Theta }}]} The initial state of the system is denoted by ρ ^ ( 0 ) {\displaystyle {\hat {\rho }}(0)} and Θ ^ {\displaystyle {\hat {\Theta }}} is an operator denoting the observable. == Mathematical formulation 2 == For ease of illustration, consider the interaction of two energy sub-levels of an atom with a quantized electromagnetic field. The behavior of any other two-state system coupled to a bosonic field will be isomorphic to these dynamics. 
In that case, the Hamiltonian for the atom-field system is: H ^ = H ^ A + H ^ F + H ^ A F {\displaystyle {\hat {H}}={\hat {H}}_{A}+{\hat {H}}_{F}+{\hat {H}}_{AF}} Where we have made the following definitions: H ^ A = E g | g ⟩ ⟨ g | + E e | e ⟩ ⟨ e | {\displaystyle {\hat {H}}_{A}=E_{g}|g\rangle \langle g|+E_{e}|e\rangle \langle e|} is the Hamiltonian of the atom, where the letters e , g {\displaystyle e,g} are used to denote the excited and ground state respectively. Setting the zero of energy to the ground state energy of the atom simplifies this to H ^ A = E e | e ⟩ ⟨ e | = ℏ ω e g | e ⟩ ⟨ e | {\displaystyle {\hat {H}}_{A}=E_{e}|e\rangle \langle e|=\hbar \omega _{eg}|e\rangle \langle e|} where ω e g {\displaystyle \omega _{eg}} is the resonance frequency of transitions between the sub-levels of the atom. H ^ F = ∑ k , λ ℏ ω k ( a ^ k , λ † a ^ k , λ + 1 2 ) {\displaystyle {\hat {H}}_{F}=\sum _{\mathbf {k} ,\lambda }\hbar \omega _{\mathbf {k} }\left({\hat {a}}_{\mathbf {k} ,\lambda }^{\dagger }{\hat {a}}_{\mathbf {k} ,\lambda }+{\frac {1}{2}}\right)} is the Hamiltonian of the quantized electromagnetic field. Note the infinite sum over all possible wave-vectors k {\displaystyle \mathbf {k} } and two possible orthogonal polarization states λ {\displaystyle \lambda } . The operators a ^ k , λ † {\displaystyle {\hat {a}}_{\mathbf {k} ,\lambda }^{\dagger }} and a ^ k , λ {\displaystyle {\hat {a}}_{\mathbf {k} ,\lambda }} are the photon creation and annihilation operators for each indexed mode of the field. The simplicity of the Jaynes–Cummings model comes from suppressing this general sum by considering only a single mode of the field, allowing us to write H ^ F = ℏ ω c ( a ^ c † a ^ c + 1 2 ) {\textstyle {\hat {H}}_{F}=\hbar \omega _{c}\left({\hat {a}}_{c}^{\dagger }{\hat {a}}_{c}+{\frac {1}{2}}\right)} where the subscript c {\displaystyle c} indicates that we are considering only the resonant mode of the cavity. H ^ A F = − d ^ ⋅ E ^ ( R ) {\displaystyle {\hat {H}}_{AF}=-{\hat {\mathbf {d} }}\cdot {\hat {\mathbf {E} }}(\mathbf {R} )} is the dipole atom-field interaction Hamiltonian (here R {\displaystyle \mathbf {R} } is the position of the atom). Electric field operator of a quantized electromagnetic field is given by E ^ ( R ) = i ∑ k , λ 2 π ℏ ω k V u k , λ ( a ^ k , λ e i k ⋅ R − a ^ k , λ † e − i k ⋅ R ) {\displaystyle {\hat {\mathbf {E} }}(\mathbf {R} )=i\sum _{\mathbf {k} ,\lambda }{\sqrt {\frac {2\pi \hbar \omega _{\mathbf {k} }}{V}}}\mathbf {u} _{\mathbf {k} ,\lambda }\left({\hat {a}}_{\mathbf {k} ,\lambda }e^{i\mathbf {k} \cdot \mathbf {R} }-{\hat {a}}_{\mathbf {k} ,\lambda }^{\dagger }e^{-i\mathbf {k} \cdot \mathbf {R} }\right)} and dipole operator is given by d ^ = σ ^ + ⟨ e | d ^ | g ⟩ + σ ^ − ⟨ g | d ^ | e ⟩ {\displaystyle {\hat {\mathbf {d} }}={\hat {\sigma }}_{+}\langle e|{\hat {\mathbf {d} }}|g\rangle +{\hat {\sigma }}_{-}\langle g|{\hat {\mathbf {d} }}|e\rangle } . 
Setting R = 0 {\displaystyle \mathbf {R} =\mathbf {0} } and making the definition ℏ g k , λ = i 2 π ℏ ω k V ⟨ e | d ^ | g ⟩ ⋅ u k , λ , {\displaystyle \hbar g_{\mathbf {k} ,\lambda }=i{\sqrt {\frac {2\pi \hbar \omega _{\mathbf {k} }}{V}}}\langle e|{\hat {\mathbf {d} }}|g\rangle \cdot \mathbf {u} _{\mathbf {k} ,\lambda },} where the u k , λ {\displaystyle \mathbf {u} _{\mathbf {k} ,\lambda }} s are the orthonormal field modes, we may write H ^ A F = − ∑ k , λ ℏ ( g k , λ σ ^ + a ^ k , λ − g k , λ ∗ σ ^ − a ^ k , λ † − g k , λ σ ^ + a ^ k , λ † + g k , λ ∗ σ ^ − a ^ k , λ ) , {\displaystyle {\hat {H}}_{AF}=-\sum _{\mathbf {k} ,\lambda }\hbar \left(g_{\mathbf {k} ,\lambda }{\hat {\sigma }}_{+}{\hat {a}}_{\mathbf {k} ,\lambda }-g_{\mathbf {k} ,\lambda }^{*}{\hat {\sigma }}_{-}{\hat {a}}_{\mathbf {k} ,\lambda }^{\dagger }-g_{\mathbf {k} ,\lambda }{\hat {\sigma }}_{+}{\hat {a}}_{\mathbf {k} ,\lambda }^{\dagger }+g_{\mathbf {k} ,\lambda }^{*}{\hat {\sigma }}_{-}{\hat {a}}_{\mathbf {k} ,\lambda }\right),} where σ ^ + = | e ⟩ ⟨ g | {\displaystyle {\hat {\sigma }}_{+}=|e\rangle \langle g|} and σ ^ − = | g ⟩ ⟨ e | {\displaystyle {\hat {\sigma }}_{-}=|g\rangle \langle e|} are the raising and lowering operators acting in the { | e ⟩ , | g ⟩ } {\displaystyle \{|e\rangle ,|g\rangle \}} subspace of the atom. The application of the Jaynes–Cummings model allows suppression of this sum, and restrict the attention to a single mode of the field. Thus the atom-field Hamiltonian becomes: H ^ A F = ℏ [ ( g c σ ^ + a ^ c − g c ∗ σ ^ − a ^ c † ) + ( − g c σ ^ + a ^ c † + g c ∗ σ ^ − a ^ c ) ] {\displaystyle {\hat {H}}_{AF}=\hbar \left[\left(g_{c}{\hat {\sigma }}_{+}{\hat {a}}_{c}-g_{c}^{*}{\hat {\sigma }}_{-}{\hat {a}}_{c}^{\dagger }\right)+\left(-g_{c}{\hat {\sigma }}_{+}{\hat {a}}_{c}^{\dagger }+g_{c}^{*}{\hat {\sigma }}_{-}{\hat {a}}_{c}\right)\right]} . === Rotating frame and rotating-wave approximation === Next, the analysis may be simplified by performing a passive transformation into the so-called "co-rotating" frame. To do this, we use the interaction picture. Take H ^ 0 = H ^ A + H ^ F {\displaystyle {\hat {H}}_{0}={\hat {H}}_{A}+{\hat {H}}_{F}} . Then the interaction Hamiltonian becomes: H ^ A F ( t ) = e i H ^ 0 t / ℏ H ^ A F e − i H ^ 0 t / ℏ = ℏ ( g c σ ^ + a ^ c † e i ( ω c + ω e g ) t + g c ∗ σ ^ − a ^ c e − i ( ω c + ω e g ) t − g c ∗ σ ^ − a ^ c † e − i ( ω e g − ω c ) t − g c σ ^ + a ^ c e i ( ω e g − ω c ) t ) {\displaystyle {\hat {H}}_{AF}(t)=e^{i{\hat {H}}_{0}t/\hbar }{\hat {H}}_{AF}e^{-i{\hat {H}}_{0}t/\hbar }=\hbar \left(g_{c}{\hat {\sigma }}_{+}{\hat {a}}_{c}^{\dagger }e^{i(\omega _{c}+\omega _{eg})t}+g_{c}^{*}{\hat {\sigma }}_{-}{\hat {a}}_{c}e^{-i(\omega _{c}+\omega _{eg})t}-g_{c}^{*}{\hat {\sigma }}_{-}{\hat {a}}_{c}^{\dagger }e^{-i(\omega _{eg}-\omega _{c})t}-g_{c}{\hat {\sigma }}_{+}{\hat {a}}_{c}e^{i(\omega _{eg}-\omega _{c})t}\right)} We now assume that the resonance frequency of the cavity is near the transition frequency of the atom, that is, we assume | ω e g − ω c | ≪ ω e g + ω c {\displaystyle |\omega _{eg}-\omega _{c}|\ll \omega _{eg}+\omega _{c}} . Under this condition, the exponential terms oscillating at ω e g − ω c ≃ 0 {\displaystyle \omega _{eg}-\omega _{c}\simeq 0} are nearly resonant, while the other exponential terms oscillating at ω e g + ω c ≃ 2 ω c {\displaystyle \omega _{eg}+\omega _{c}\simeq 2\omega _{c}} are nearly anti-resonant. 
In the time τ = 2 π Δ , Δ ≡ ω e g − ω c {\displaystyle \tau ={\frac {2\pi }{\Delta }},\Delta \equiv \omega _{eg}-\omega _{c}} that it takes for the resonant terms to complete one full oscillation, the anti-resonant terms will complete many full cycles. Since over each full cycle 2 π 2 ω c ≪ τ {\displaystyle {\frac {2\pi }{2\omega _{c}}}\ll \tau } of anti-resonant oscillation, the net effect of the quickly oscillating anti-resonant terms tends to average to 0 for the timescales over which we wish to analyze resonant behavior. We may thus neglect the anti-resonant terms altogether, since their value is negligible compared to that of the nearly resonant terms. This approximation is known as the rotating wave approximation, and it accords with the intuition that energy must be conserved. Then the interaction Hamiltonian (taking g c {\displaystyle g_{c}} to be real for simplicity) is: H ^ A F ( t ) = − ℏ g c ( σ ^ + a ^ c e i ( ω e g − ω c ) t + σ ^ − a ^ c † e − i ( ω e g − ω c ) t ) {\displaystyle {\hat {H}}_{AF}(t)=-\hbar g_{c}\left({\hat {\sigma }}_{+}{\hat {a}}_{c}e^{i(\omega _{eg}-\omega _{c})t}+{\hat {\sigma }}_{-}{\hat {a}}_{c}^{\dagger }e^{-i(\omega _{eg}-\omega _{c})t}\right)} With this approximation in hand (and absorbing the negative sign into g c {\displaystyle g_{c}} ), we may transform back to the Schrödinger picture: H ^ A F = e − i H ^ 0 t / ℏ H ^ A F ( t ) e i H ^ 0 t / ℏ = ℏ g c ( σ ^ + a ^ c + σ ^ − a ^ c † ) {\displaystyle {\hat {H}}_{AF}=e^{-i{\hat {H}}_{0}t/\hbar }{\hat {H}}_{AF}(t)e^{i{\hat {H}}_{0}t/\hbar }=\hbar g_{c}\left({\hat {\sigma }}_{+}{\hat {a}}_{c}+{\hat {\sigma }}_{-}{\hat {a}}_{c}^{\dagger }\right)} === Jaynes–Cummings Hamiltonian 2 === Using the results gathered in the last two sections, we may now write down the full Jaynes–Cummings Hamiltonian: H ^ J C = ℏ ω c ( a ^ c † a ^ c + 1 2 ) + ℏ ω e g | e ⟩ ⟨ e | + ℏ g c ( σ ^ + a ^ c + σ ^ − a ^ c † ) {\displaystyle {\hat {H}}_{JC}=\hbar \omega _{c}\left({\hat {a}}_{c}^{\dagger }{\hat {a}}_{c}+{\frac {1}{2}}\right)+\hbar \omega _{eg}|e\rangle \langle e|+\hbar g_{c}\left({\hat {\sigma }}_{+}{\hat {a}}_{c}+{\hat {\sigma }}_{-}{\hat {a}}_{c}^{\dagger }\right)} The constant term 1 2 ℏ ω c {\displaystyle {\frac {1}{2}}\hbar \omega _{c}} represents the zero-point energy of the field. It will not contribute to the dynamics, so it may be neglected, giving: H ^ J C = ℏ ω c a ^ c † a ^ c + ℏ ω e g | e ⟩ ⟨ e | + ℏ g c ( σ ^ + a ^ c + σ ^ − a ^ c † ) {\displaystyle {\hat {H}}_{JC}=\hbar \omega _{c}{\hat {a}}_{c}^{\dagger }{\hat {a}}_{c}+\hbar \omega _{eg}|e\rangle \langle e|+\hbar g_{c}\left({\hat {\sigma }}_{+}{\hat {a}}_{c}+{\hat {\sigma }}_{-}{\hat {a}}_{c}^{\dagger }\right)} Next, define the so-called number operator by: N ^ = | e ⟩ ⟨ e | + a ^ c † a ^ c {\displaystyle {\hat {N}}=|e\rangle \langle e|+{\hat {a}}_{c}^{\dagger }{\hat {a}}_{c}} . 
Consider the commutator of this operator with the atom-field Hamiltonian: [ H ^ A F , N ^ ] = ℏ g c ( [ a ^ c σ ^ + , | e ⟩ ⟨ e | + a ^ c † a ^ c ] + [ a ^ c † σ ^ − , | e ⟩ ⟨ e | + a ^ c † a ^ c ] ) = ℏ g c ( a ^ c [ σ ^ + , | e ⟩ ⟨ e | ] + [ a ^ c , a ^ c † a ^ c ] σ ^ + + a ^ c † [ σ ^ − , | e ⟩ ⟨ e | ] + [ a ^ c † , a ^ c † a ^ c ] σ ^ − ) = ℏ g c ( − a ^ c σ ^ + + a ^ c σ ^ + + a ^ c † σ ^ − − a ^ c † σ ^ − ) = 0 {\displaystyle {\begin{aligned}\left[{\hat {H}}_{AF},{\hat {N}}\right]&=\hbar g_{c}\left(\left[{\hat {a}}_{c}{\hat {\sigma }}_{+},|e\rangle \langle e|+{\hat {a}}_{c}^{\dagger }{\hat {a}}_{c}\right]+\left[{\hat {a}}_{c}^{\dagger }{\hat {\sigma }}_{-},|e\rangle \langle e|+{\hat {a}}_{c}^{\dagger }{\hat {a}}_{c}\right]\right)\\&=\hbar g_{c}\left({\hat {a}}_{c}\left[{\hat {\sigma }}_{+},|e\rangle \langle e|\right]+\left[{\hat {a}}_{c},{\hat {a}}_{c}^{\dagger }{\hat {a}}_{c}\right]{\hat {\sigma }}_{+}+{\hat {a}}_{c}^{\dagger }\left[{\hat {\sigma }}_{-},|e\rangle \langle e|\right]+\left[{\hat {a}}_{c}^{\dagger },{\hat {a}}_{c}^{\dagger }{\hat {a}}_{c}\right]{\hat {\sigma }}_{-}\right)\\&=\hbar g_{c}\left(-{\hat {a}}_{c}{\hat {\sigma }}_{+}+{\hat {a}}_{c}{\hat {\sigma }}_{+}+{\hat {a}}_{c}^{\dagger }{\hat {\sigma }}_{-}-{\hat {a}}_{c}^{\dagger }{\hat {\sigma }}_{-}\right)\\&=0\end{aligned}}} Thus the number operator commutes with the atom-field Hamiltonian. The eigenstates of the number operator are the basis of tensor product states { | g , 0 ⟩ ; | e , 0 ⟩ , | g , 1 ⟩ ; ⋯ ; | e , n − 1 ⟩ , | g , n ⟩ } {\displaystyle \left\{|g,0\rangle ;|e,0\rangle ,|g,1\rangle ;\cdots ;|e,n-1\rangle ,|g,n\rangle \right\}} where the states { | n ⟩ } {\displaystyle \left\{|n\rangle \right\}} of the field are those with a definite number n {\displaystyle n} of photons. The number operator N ^ {\displaystyle {\hat {N}}} counts the total number n {\displaystyle n} of quanta in the atom-field system. 
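This conservation law is easy to check numerically. The following sketch (again a truncated Fock space with ħ = 1 and an arbitrary coupling g_c, chosen purely for illustration) builds Ĥ_AF and N̂ as matrices and confirms that their commutator vanishes; because Ĥ_AF only connects states with the same total excitation number, the commutator is zero even in the truncated space.

```python
import numpy as np

gc = 0.05                                        # illustrative coupling strength (hbar = 1)
N_max = 12                                       # photon-number cutoff

a = np.diag(np.sqrt(np.arange(1, N_max)), k=1)   # annihilation operator
I_f, I_a = np.eye(N_max), np.eye(2)
sp = np.array([[0.0, 1.0], [0.0, 0.0]])          # sigma_+ = |e><g|, with |e> = (1, 0)
sm = sp.T                                        # sigma_- = |g><e|
proj_e = np.diag([1.0, 0.0])                     # |e><e|

H_AF = gc * (np.kron(a, sp) + np.kron(a.T, sm))  # hbar g_c (sigma_+ a + sigma_- a^dagger)
N_op = np.kron(I_f, proj_e) + np.kron(a.T @ a, I_a)

comm = H_AF @ N_op - N_op @ H_AF
print(np.max(np.abs(comm)))                      # 0.0: the number operator is conserved
```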
In this basis of eigenstates of N ^ {\displaystyle {\hat {N}}} (total number states), the Hamiltonian takes on a block diagonal structure: H ^ J C = [ H 0 0 0 0 ⋯ ⋯ ⋯ 0 H ^ 1 0 0 ⋱ ⋱ ⋱ 0 0 H ^ 2 0 ⋱ ⋱ ⋱ ⋮ ⋱ ⋱ ⋱ ⋱ ⋱ ⋱ ⋮ ⋱ ⋱ 0 H ^ n 0 ⋱ ⋮ ⋱ ⋱ ⋱ ⋱ ⋱ ⋱ ] {\displaystyle {\hat {H}}_{JC}={\begin{bmatrix}H_{0}&0&0&0&\cdots &\cdots &\cdots \\0&{\hat {H}}_{1}&0&0&\ddots &\ddots &\ddots \\0&0&{\hat {H}}_{2}&0&\ddots &\ddots &\ddots \\\vdots &\ddots &\ddots &\ddots &\ddots &\ddots &\ddots \\\vdots &\ddots &\ddots &0&{\hat {H}}_{n}&0&\ddots \\\vdots &\ddots &\ddots &\ddots &\ddots &\ddots &\ddots \\\end{bmatrix}}} With the exception of the scalar H 0 {\displaystyle H_{0}} , each H ^ n {\displaystyle {\hat {H}}_{n}} on the diagonal is itself a 2 × 2 {\displaystyle 2\times 2} matrix of the form; H ^ n = [ ℏ ω c ( n − 1 ) + ℏ ω e g ⟨ e , n − 1 | H ^ J C | g , n ⟩ ⟨ g , n | H ^ J C | e , n − 1 ⟩ n ℏ ω c ] {\displaystyle {\hat {H}}_{n}={\begin{bmatrix}\hbar \omega _{c}(n-1)+\hbar \omega _{eg}&\langle e,n-1|{\hat {H}}_{JC}|g,n\rangle \\\langle g,n|{\hat {H}}_{JC}|e,n-1\rangle &n\hbar \omega _{c}\\\end{bmatrix}}} Now, using the relation: ⟨ g , n | H ^ J C | e , n − 1 ⟩ = ℏ g c ⟨ g , n | a ^ c † σ ^ − | e , n − 1 ⟩ + ℏ g c ⟨ g , n | a ^ c σ ^ + | e , n − 1 ⟩ = n ℏ g c {\displaystyle \langle g,n|{\hat {H}}_{JC}|e,n-1\rangle =\hbar g_{c}\langle g,n|{\hat {a}}_{c}^{\dagger }{\hat {\sigma }}_{-}|e,n-1\rangle +\hbar g_{c}\langle g,n|{\hat {a}}_{c}{\hat {\sigma }}_{+}|e,n-1\rangle ={\sqrt {n}}\hbar g_{c}} We obtain the portion of the Hamiltonian that acts in the nth subspace as: H ^ n = [ n ℏ ω c − ℏ Δ n ℏ Ω 2 n ℏ Ω 2 n ℏ ω c ] {\displaystyle {\hat {H}}_{n}={\begin{bmatrix}n\hbar \omega _{c}-\hbar \Delta &{\frac {{\sqrt {n}}\hbar \Omega }{2}}\\{\frac {{\sqrt {n}}\hbar \Omega }{2}}&n\hbar \omega _{c}\\\end{bmatrix}}} By shifting the energy from | e ⟩ {\displaystyle |e\rangle } to | g ⟩ {\displaystyle |g\rangle } with the amount of 1 2 ℏ Δ {\displaystyle {\frac {1}{2}}\hbar \Delta } , we can get H ^ n = [ n ℏ ω c − 1 2 ℏ Δ n ℏ Ω 2 n ℏ Ω 2 n ℏ ω c + 1 2 ℏ Δ ] = n ℏ ω c I ^ ( n ) − ℏ Δ 2 σ ^ z ( n ) + 1 2 n ℏ Ω σ ^ x ( n ) {\displaystyle {\hat {H}}_{n}={\begin{bmatrix}n\hbar \omega _{c}-{\frac {1}{2}}\hbar \Delta &{\frac {{\sqrt {n}}\hbar \Omega }{2}}\\{\frac {{\sqrt {n}}\hbar \Omega }{2}}&n\hbar \omega _{c}+{\frac {1}{2}}\hbar \Delta \\\end{bmatrix}}=n\hbar \omega _{c}{\hat {I}}^{(n)}-{\frac {\hbar \Delta }{2}}{\hat {\sigma }}_{z}^{(n)}+{\frac {1}{2}}{\sqrt {n}}\hbar \Omega {\hat {\sigma }}_{x}^{(n)}} where we have identified 2 g c = Ω {\displaystyle 2g_{c}=\Omega } as the Rabi frequency of the system, and Δ = ω c − ω e g {\displaystyle \Delta =\omega _{c}-\omega _{eg}} is the so-called "detuning" between the frequencies of the cavity and atomic transition. We have also defined the operators: I ^ ( n ) = | e , n − 1 ⟩ ⟨ e , n − 1 | + | g , n ⟩ ⟨ g , n | σ ^ z ( n ) = | e , n − 1 ⟩ ⟨ e , n − 1 | − | g , n ⟩ ⟨ g , n | σ ^ x ( n ) = | e , n − 1 ⟩ ⟨ g , n | + | g , n ⟩ ⟨ e , n − 1 | . 
{\displaystyle {\begin{aligned}{\hat {I}}^{(n)}&=\left|e,n-1\right\rangle \left\langle e,n-1\right|+\left|g,n\right\rangle \left\langle g,n\right|\\[1ex]{\hat {\sigma }}_{z}^{(n)}&=\left|e,n-1\right\rangle \left\langle e,n-1\right|-\left|g,n\right\rangle \left\langle g,n\right|\\[1ex]{\hat {\sigma }}_{x}^{(n)}&=\left|e,n-1\right\rangle \left\langle g,n\right|+\left|g,n\right\rangle \left\langle e,n-1\right|.\\[-1ex]\,\end{aligned}}} to be the identity operator and Pauli x and z operators in the Hilbert space of the nth energy level of the atom-field system. This simple 2 × 2 {\displaystyle 2\times 2} Hamiltonian is of the same form as what would be found in the Rabi problem. Diagonalization gives the energy eigenvalues and eigenstates to be: E n , ± = ( n ℏ ω c − 1 2 ℏ Δ ) ± 1 2 ℏ Δ 2 + n Ω 2 | n , + ⟩ = cos ⁡ ( θ n 2 ) | e , n − 1 ⟩ + sin ⁡ ( θ n 2 ) | g , n ⟩ | n , − ⟩ = cos ⁡ ( θ n 2 ) | g , n ⟩ − sin ⁡ ( θ n 2 ) | e , n − 1 ⟩ {\displaystyle {\begin{aligned}E_{n,\pm }&=\left(n\hbar \omega _{c}-{\frac {1}{2}}\hbar \Delta \right)\pm {\frac {1}{2}}\hbar {\sqrt {\Delta ^{2}+n\Omega ^{2}}}\\|n,+\rangle &=\cos \left({\frac {\theta _{n}}{2}}\right)|e,n-1\rangle +\sin \left({\frac {\theta _{n}}{2}}\right)|g,n\rangle \\|n,-\rangle &=\cos \left({\frac {\theta _{n}}{2}}\right)|g,n\rangle -\sin \left({\frac {\theta _{n}}{2}}\right)|e,n-1\rangle \\\end{aligned}}} Where the angle θ n {\displaystyle \theta _{n}} is defined by the relation tan ⁡ θ n = − n Ω Δ {\displaystyle \tan \theta _{n}=-{\frac {{\sqrt {n}}\Omega }{\Delta }}} . === Vacuum Rabi oscillations === Consider an atom entering the cavity initially in its excited state, while the cavity is initially in the vacuum. Moreover, one assumes that the angular frequency of the mode can be approximated to the atomic transition frequency, involving Δ ≈ 0 {\displaystyle \Delta \approx 0} . Then the state of the atom-field system as a function of time is: | ψ ( t ) ⟩ = cos ⁡ ( Ω t 2 ) | e , 0 ⟩ − i sin ⁡ ( Ω t 2 ) | g , 1 ⟩ {\displaystyle |\psi (t)\rangle =\cos \left({\frac {\Omega t}{2}}\right)|e,0\rangle -i\sin \left({\frac {\Omega t}{2}}\right)|g,1\rangle } So the probabilities to find the system in the ground or excited states after interacting with the cavity for a time t {\displaystyle t} are: P e ( t ) = | ⟨ e , 0 | ψ ( t ) ⟩ | 2 = cos 2 ⁡ ( Ω t 2 ) P g ( t ) = | ⟨ g , 1 | ψ ( t ) ⟩ | 2 = sin 2 ⁡ ( Ω t 2 ) {\displaystyle {\begin{aligned}P_{e}(t)&=|\langle e,0|\psi (t)\rangle |^{2}=\cos ^{2}\left({\frac {\Omega t}{2}}\right)\\P_{g}(t)&=|\langle g,1|\psi (t)\rangle |^{2}=\sin ^{2}\left({\frac {\Omega t}{2}}\right)\\\end{aligned}}} Thus the probability amplitude to find the atom in either state oscillates. This is the quantum mechanical explanation for the phenomenon of vacuum Rabi oscillation. In this case, there was only a single quantum in the atom-field system, carried in by the initially excited atom. In general, the Rabi oscillation associated with an atom-field system of n {\displaystyle n} quanta will have frequency Ω n = n Ω 2 {\displaystyle \Omega _{n}={\frac {{\sqrt {n}}\Omega }{2}}} . As explained below, this discrete spectrum of frequencies is the underlying reason for the collapses and subsequent revivals probabilities in the model. 
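A minimal numerical check of this result (assuming ħ = 1 and an arbitrary illustrative value of Ω): evolving the initial state |e, 0⟩ under the resonant 2×2 Hamiltonian of the one-excitation subspace reproduces P_e(t) = cos²(Ωt/2); the overall phase from the n ħω_c term drops out of the populations.

```python
import numpy as np
from scipy.linalg import expm

Omega, wc = 0.2, 1.0                                  # illustrative values (hbar = 1), resonant case
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
H1 = wc * np.eye(2) + 0.5 * Omega * sigma_x           # n = 1 block in the {|e,0>, |g,1>} basis

psi0 = np.array([1.0, 0.0], dtype=complex)            # initial state |e, 0>
for t in np.linspace(0.0, 2.0 * np.pi / Omega, 7):
    psi_t = expm(-1j * H1 * t) @ psi0
    P_e = abs(psi_t[0]) ** 2
    print(f"t = {t:6.2f}   P_e = {P_e:.6f}   cos^2(Omega t / 2) = {np.cos(Omega * t / 2) ** 2:.6f}")
```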
=== Jaynes–Cummings ladder === As shown in the previous subsection, if the initial state of the atom-cavity system is | e , n − 1 ⟩ {\displaystyle |e,n-1\rangle } or | g , n ⟩ {\displaystyle |g,n\rangle } , as is the case for an atom initially in a definite state (ground or excited) entering a cavity containing a known number of photons, then the state of the atom-cavity system at later times becomes a superposition of the new eigenstates of the atom-cavity system: | n , + ⟩ = cos ⁡ ( θ n 2 ) | e , n − 1 ⟩ + sin ⁡ ( θ n 2 ) | g , n ⟩ | n , − ⟩ = cos ⁡ ( θ n 2 ) | g , n ⟩ − sin ⁡ ( θ n 2 ) | e , n − 1 ⟩ {\displaystyle {\begin{aligned}|n,+\rangle &=\cos \left({\frac {\theta _{n}}{2}}\right)|e,n-1\rangle +\sin \left({\frac {\theta _{n}}{2}}\right)|g,n\rangle \\|n,-\rangle &=\cos \left({\frac {\theta _{n}}{2}}\right)|g,n\rangle -\sin \left({\frac {\theta _{n}}{2}}\right)|e,n-1\rangle \\\end{aligned}}} This change in eigenstates due to the alteration of the Hamiltonian caused by the atom-field interaction is sometimes called "dressing" the atom, and the new eigenstates are referred to as the dressed states. The energy difference between the dressed states is: δ E = E + − E − = ℏ Δ 2 + n Ω 2 {\displaystyle \delta E=E_{+}-E_{-}=\hbar {\sqrt {\Delta ^{2}+n\Omega ^{2}}}} Of particular interest is the case where the cavity frequency is perfectly resonant with the transition frequency of the atom, so ω e g = ω c ⟹ Δ = 0 {\displaystyle \omega _{eg}=\omega _{c}\implies \Delta =0} . In the resonant case, the dressed states are: | n , ± ⟩ = 1 2 ( | g , n ⟩ ∓ | e , n − 1 ⟩ ) {\displaystyle |n,\pm \rangle ={\frac {1}{\sqrt {2}}}\left(|g,n\rangle \mp |e,n-1\rangle \right)} With energy difference δ E = n ℏ Ω {\displaystyle \delta E={\sqrt {n}}\hbar \Omega } . Thus the interaction of the atom with the field splits the degeneracy of the states | e , n − 1 ⟩ {\displaystyle |e,n-1\rangle } and | g , n ⟩ {\displaystyle |g,n\rangle } by n ℏ Ω {\displaystyle {\sqrt {n}}\hbar \Omega } . This non-linear hierarchy of energy levels scaling as n {\displaystyle {\sqrt {n}}} is known as the Jaynes–Cummings ladder. This non-linear splitting effect is purely quantum mechanical, and cannot be explained by any semi-classical model. === Collapse and revival of probabilities === Consider an atom initially in the ground state interacting with a field mode initially prepared in a coherent state, so the initial state of the atom-field system is: | ψ ( 0 ) ⟩ = | g , α ⟩ = ∑ n = 0 ∞ e − | α | 2 / 2 α n n ! | g , n ⟩ {\displaystyle |\psi (0)\rangle =|g,\alpha \rangle =\sum _{n=0}^{\infty }e^{-|\alpha |^{2}/2}{\frac {\alpha ^{n}}{\sqrt {n!}}}|g,n\rangle } For simplicity, take the resonant case ( Δ = 0 {\displaystyle \Delta =0} ), then the Hamiltonian for the nth number subspace is: H ^ n = ( n + 1 2 ) I ^ ( n ) + ℏ n Ω 2 σ ^ x ( n ) {\displaystyle {\hat {H}}_{n}=\left(n+{\frac {1}{2}}\right){\hat {I}}^{(n)}+{\frac {\hbar {\sqrt {n}}\Omega }{2}}{\hat {\sigma }}_{x}^{(n)}} Using this, the time evolution of the atom-field system will be: | ψ ( t ) ⟩ = e − i H ^ n t / ℏ | ψ ( 0 ) ⟩ = e − | α | 2 / 2 | g , 0 ⟩ + ∑ n = 1 ∞ e − | α | 2 / 2 α n n ! e − i n ω c t ( cos ⁡ ( n Ω t / 2 ) I ^ ( n ) − i sin ⁡ ( n Ω t / 2 ) σ ^ x ( n ) ) | g , n ⟩ = e − | α | 2 / 2 | g , 0 ⟩ + ∑ n = 1 ∞ e − | α | 2 / 2 α n n ! 
e − i n ω c t ( cos ⁡ ( n Ω t / 2 ) | g , n ⟩ − i sin ⁡ ( n Ω t / 2 ) | e , n − 1 ⟩ ) {\displaystyle {\begin{aligned}|\psi (t)\rangle &=e^{-i{\hat {H}}_{n}t/\hbar }|\psi (0)\rangle \\&=e^{-|\alpha |^{2}/2}|g,0\rangle +\sum _{n=1}^{\infty }e^{-|\alpha |^{2}/2}{\frac {\alpha ^{n}}{\sqrt {n!}}}e^{-in\omega _{c}t}\left(\cos {({\sqrt {n}}\Omega t/2)}{\hat {I}}^{(n)}-i\sin {({\sqrt {n}}\Omega t/2)}{\hat {\sigma }}_{x}^{(n)}\right)|g,n\rangle \\&=e^{-|\alpha |^{2}/2}|g,0\rangle +\sum _{n=1}^{\infty }e^{-|\alpha |^{2}/2}{\frac {\alpha ^{n}}{\sqrt {n!}}}e^{-in\omega _{c}t}\left(\cos {({\sqrt {n}}\Omega t/2)}|g,n\rangle -i\sin {({\sqrt {n}}\Omega t/2)}|e,n-1\rangle \right)\end{aligned}}} Note neither of the constant factors ℏ ω c 2 I ^ ( n ) {\displaystyle {\frac {\hbar \omega _{c}}{2}}{\hat {I}}^{(n)}} nor H ^ 0 {\displaystyle {\hat {H}}_{0}} contribute to the dynamics beyond an overall phase, since they represent the zero-point energy. In this case, the probability to find the atom having flipped to the excited state at a later time t {\displaystyle t} is: P e ( t ) = | ⟨ e | ψ ( t ) ⟩ | 2 = ∑ n = 1 ∞ e − | α | 2 n ! | α | 2 n sin 2 ⁡ ( 1 2 n Ω t ) = ∑ n = 1 ∞ e − ⟨ n ⟩ ⟨ n ⟩ n n ! sin 2 ⁡ ( 1 2 n Ω t ) = ∑ n = 1 ∞ e − ⟨ n ⟩ ⟨ n ⟩ n n ! sin 2 ⁡ ( Ω n t ) {\displaystyle {\begin{aligned}P_{e}(t)=\left|\langle e|\psi (t)\rangle \right|^{2}&=\sum _{n=1}^{\infty }{\frac {e^{-|\alpha |^{2}}}{n!}}|\alpha |^{2n}\sin ^{2}\left({\tfrac {1}{2}}{\sqrt {n}}\Omega t\right)\\[2ex]&=\sum _{n=1}^{\infty }{\frac {e^{-\langle n\rangle }\langle n\rangle ^{n}}{n!}}\sin ^{2}\left({\tfrac {1}{2}}{\sqrt {n}}\Omega t\right)\\[2ex]&=\sum _{n=1}^{\infty }{\frac {e^{-\langle n\rangle }\langle n\rangle ^{n}}{n!}}\sin ^{2}(\Omega _{n}t)\\{}\end{aligned}}} Where we have identified ⟨ n ⟩ = | α | 2 {\displaystyle \langle n\rangle =|\alpha |^{2}} to be the mean photon number in a coherent state. If the mean photon number is large, then since the statistics of the coherent state are Poissonian we have that the variance-to-mean ratio is ⟨ ( Δ n ) 2 ⟩ / ⟨ n ⟩ 2 ≃ 1 / ⟨ n ⟩ {\displaystyle \langle (\Delta n)^{2}\rangle /\langle n\rangle ^{2}\simeq 1/\langle n\rangle } . 
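Before turning to the analytic approximation developed in the next paragraph (which uses the variance-to-mean result just noted), the exact sum for P_e(t) can simply be evaluated numerically. The sketch below (with illustrative values of Ω and ⟨n⟩, and the Poisson-weighted sum truncated at a finite photon number) shows the envelope of the Rabi oscillations collapsing and then partially reviving at later times.

```python
import numpy as np
from scipy.stats import poisson

Omega, nbar = 1.0, 25.0                      # illustrative coupling and mean photon number
n = np.arange(0, 200)                        # truncation of the Poisson-weighted sum
w = poisson.pmf(n, nbar)

def P_e(t):
    """Exact P_e(t) from the sum above, truncated at n = 199."""
    return float(np.sum(w * np.sin(np.sqrt(n) * Omega * t / 2.0) ** 2))

tau_r = 4.0 * np.pi * np.sqrt(nbar) / Omega  # revival time discussed in the next paragraph
windows = {"early": (0.0, 10.0),
           "collapsed": (20.0, 45.0),
           "revival": (tau_r - 10.0, tau_r + 10.0)}
for name, (t0, t1) in windows.items():
    ts = np.linspace(t0, t1, 2000)
    envelope = max(abs(P_e(t) - 0.5) for t in ts)
    print(f"{name:>9}: max |P_e - 1/2| = {envelope:.3f}")
```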
Using this result and expanding Ω n {\displaystyle \Omega _{n}} around ⟨ n ⟩ {\displaystyle \langle n\rangle } to lowest non-vanishing order in n {\displaystyle n} gives: Ω n ≃ Ω 2 ⟨ n ⟩ ( 1 + 1 2 n − ⟨ n ⟩ ⟨ n ⟩ ) {\displaystyle \Omega _{n}\simeq {\frac {\Omega }{2}}{\sqrt {\langle n\rangle }}\left(1+{\frac {1}{2}}{\frac {n-\langle n\rangle }{\langle n\rangle }}\right)} Inserting this into the sum yields a complicated product of exponentials: P e ( t ) ≃ 1 2 − e − ⟨ n ⟩ 4 ⋅ ( e − i ⟨ n ⟩ Ω t / 2 exp ⁡ [ ⟨ n ⟩ exp ⁡ ( − i Ω t 2 ⟨ n ⟩ ) ] + e i ⟨ n ⟩ Ω t / 2 exp ⁡ [ ⟨ n ⟩ exp ⁡ ( i Ω t 2 ⟨ n ⟩ ) ] ) {\displaystyle P_{e}(t)\simeq {\frac {1}{2}}-{\frac {e^{-\langle n\rangle }}{4}}\cdot \left(e^{-i{\sqrt {\langle n\rangle }}\Omega t/2}\exp \left[\langle n\rangle \exp \left(-{\frac {i\Omega t}{2{\sqrt {\langle n\rangle }}}}\right)\right]+e^{i{\sqrt {\langle n\rangle }}\Omega t/2}\exp \left[\langle n\rangle \exp \left({\frac {i\Omega t}{2{\sqrt {\langle n\rangle }}}}\right)\right]\right)} For "small" times such that Ω t 2 ≪ ⟨ n ⟩ {\displaystyle {\frac {\Omega t}{2}}\ll {\sqrt {\langle n\rangle }}} , the inner exponential inside the double exponential in the last term can be expanded up second order to obtain: P e ( t ) ≃ 1 2 − 1 2 ⋅ cos ⁡ [ ⟨ n ⟩ Ω t ] e − Ω 2 t 2 / 8 {\displaystyle P_{e}(t)\simeq {\frac {1}{2}}-{\frac {1}{2}}\cdot \cos \left[{\sqrt {\langle n\rangle }}\Omega t\right]e^{-\Omega ^{2}t^{2}/8}} This result shows that the probability of occupation of the excited state oscillates with effective frequency Ω eff = ⟨ n ⟩ Ω {\textstyle \Omega _{\text{eff}}={\sqrt {\langle n\rangle }}\Omega } . It also shows that it should decay over characteristic time: τ c = 2 Ω {\displaystyle \tau _{c}={\frac {\sqrt {2}}{\Omega }}} The collapse can be easily understood as a consequence of destructive interference between the different frequency components as they de-phase and begin to destructively interfere over time. However, the fact that the frequencies have a discrete spectrum leads to another interesting result in the longer time regime; in that case, the periodic nature of the slowly varying double exponential predicts that there should also be a revival of probability at time: τ r = 4 π Ω ⟨ n ⟩ . {\displaystyle \tau _{r}={\frac {4\pi }{\Omega }}{\sqrt {\langle n\rangle }}.} The revival of probability is due to the re-phasing of the various discrete frequencies. If the field were classical, the frequencies would have a continuous spectrum, and such re-phasing could never occur within a finite time. A plot of the probability to find an atom initially in the ground state to have transitioned to the excited state after interacting with a cavity prepared a in a coherent state vs. the unit-less parameter g t = Ω t / 2 {\displaystyle gt=\Omega t/2} is shown to the right. Note the initial collapse followed by the clear revival at longer times. == Collapses and revivals of quantum oscillations == This plot of quantum oscillations of atomic inversion—for quadratic scaled detuning parameter a = ( δ / 2 g ) 2 = 40 {\displaystyle a=(\delta /2g)^{2}=40} , where δ {\displaystyle \delta } is the detuning parameter—was built on the basis of formulas obtained by A.A. Karatsuba and E.A. Karatsuba. == See also == Caldeira–Leggett model Jaynes–Cummings–Hubbard model Rabi problem Spontaneous emission Vacuum Rabi oscillation == References == == Further reading == Berman, P.R.; Maliovsky, V.S. (2011). Principles of Laser Spectroscopy and Quantum Optics. Princeton University Press. ISBN 978-0-691-14056-8. Gerry, C. 
C.; Knight, P. L. (2005). Introductory Quantum Optics. Cambridge: Cambridge University Press. ISBN 0-521-52735-X. Scully, M. O.; Zubairy, M. S. (1997). Quantum Optics. Cambridge: Cambridge University Press. ISBN 0-521-43595-1. Vogel, W.; Welsch, D-G (2006). Quantum Optics (3 ed.). Wiley-VCH. ISBN 978-3-527-40507-7. Walls, D. F.; Milburn, G. J. (1995). Quantum Optics. Springer-Verlag. ISBN 3-540-58831-0.
Wikipedia/Jaynes–Cummings_model
In quantum field theory, the vacuum expectation value (VEV) of an operator is its average or expectation value in the vacuum. The vacuum expectation value of an operator O is usually denoted by ⟨ O ⟩ . {\displaystyle \langle O\rangle .} One of the most widely used examples of an observable physical effect that results from the vacuum expectation value of an operator is the Casimir effect. This concept is important for working with correlation functions in quantum field theory. In the context of spontaneous symmetry breaking, an operator that has a vanishing expectation value due to symmetry can acquire a nonzero vacuum expectation value during a phase transition. Examples are: The Higgs field has a vacuum expectation value of 246 GeV. This nonzero value underlies the Higgs mechanism of the Standard Model. This value is given by v = 1 / 2 G F 0 = 2 M W / g ≈ 246.22 G e V {\displaystyle v=1/{\sqrt {{\sqrt {2}}G_{F}^{0}}}=2M_{W}/g\approx 246.22\,{\rm {GeV}}} , where MW is the mass of the W Boson, G F 0 {\displaystyle G_{F}^{0}} the reduced Fermi constant, and g the weak isospin coupling, in natural units. It is also near the limit of the most massive nuclei, at v = 264.3 Da. The chiral condensate in quantum chromodynamics, about a factor of a thousand smaller than the above, gives a large effective mass to quarks, and distinguishes between phases of quark matter. This underlies the bulk of the mass of most hadrons. The gluon condensate in quantum chromodynamics may also be partly responsible for masses of hadrons. The observed Lorentz invariance of space-time allows only the formation of condensates which are Lorentz scalars and have vanishing charge. Thus, fermion condensates must be of the form ⟨ ψ ¯ ψ ⟩ {\displaystyle \langle {\overline {\psi }}\psi \rangle } , where ψ is the fermion field. Similarly a tensor field, Gμν, can only have a scalar expectation value such as ⟨ G μ ν G μ ν ⟩ {\displaystyle \langle G_{\mu \nu }G^{\mu \nu }\rangle } . In some vacua of string theory, however, non-scalar condensates are found. If these describe our universe, then Lorentz symmetry violation may be observable. == See also == Correlation function (quantum field theory) Dark energy Spontaneous symmetry breaking Vacuum energy Wightman axioms == References == == External links == Quotations related to Vacuum expectation value at Wikiquote
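As a quick numerical check of the relation quoted above for the Higgs vacuum expectation value, the following short Python sketch evaluates v = 1/sqrt(sqrt(2) G_F); the value of the reduced Fermi constant used here is the standard one and is an assumption, not a figure taken from the text:

import math

G_F = 1.1663787e-5                         # reduced Fermi constant in GeV^-2 (assumed standard value)
v = 1.0 / math.sqrt(math.sqrt(2.0) * G_F)  # electroweak vacuum expectation value in GeV
print(f"v = {v:.2f} GeV")                  # approximately 246.22 GeV
# Converting with 1 Da ~ 0.93149 GeV/c^2 reproduces the ~264 Da figure mentioned above.
print(f"v = {v / 0.9314941:.1f} Da")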
Wikipedia/Condensate_(quantum_field_theory)
The Advanced Propulsion Physics Laboratory or "Eagleworks Laboratories" at NASA's Johnson Space Center is a small research group investigating a variety of theories regarding new forms of spacecraft propulsion. The principal investigator is Dr. Harold G. White. The group is developing the White–Juday warp-field interferometer in the hope of observing small disturbances of spacetime and also testing small prototypes of thrusters that do not use reaction mass, with currently inconclusive results. The proposed principle of operation of these quantum vacuum plasma thrusters, such as the RF resonant cavity thruster ('EM Drive'), has been shown to be inconsistent with known laws of physics, including conservation of momentum and conservation of energy. No plausible theory of operation for such drives has been proposed. == Purpose == The Advanced Propulsion Physics Laboratory is enabled by section 2.3.7 of the NASA Technology Roadmap TA 2: In Space Propulsion Technologies: Breakthrough Propulsion: Breakthrough propulsion is an area of technology development that seeks to explore and develop a deeper understanding of the nature of space-time, gravitation, inertial frames, quantum vacuum, and other fundamental physical phenomena, with the overall objective of developing advanced propulsion applications and systems that will revolutionize how NASA explores space. The lab's purpose is to explore, investigate, and pursue advanced and theoretical propulsion technologies that are intended to allow human exploration of the Solar System in the next 50 years with the ultimate goal of interstellar travel by the turn of the century. The 30x40 ft floor of the lab facility floats on large pneumatic piers in order to isolate it from any seismic activity. The pneumatic piers were originally built for the Apollo program and used to perform work involving inertial measurement units (IMU) before being brought out of retirement. == See also == Boeing Phantom Works, advanced projects division Breakthrough Propulsion Physics Program JPL Lockheed Skunk Works, advanced projects division NASA Swamp Works == References ==
Wikipedia/Advanced_Propulsion_Physics_Laboratory
A dissipative system is a thermodynamically open system which is operating out of, and often far from, thermodynamic equilibrium in an environment with which it exchanges energy and matter. A tornado may be thought of as a dissipative system. Dissipative systems stand in contrast to conservative systems. A dissipative structure is a dissipative system that has a dynamical regime that is in some sense in a reproducible steady state. This reproducible steady state may be reached by natural evolution of the system, by artifice, or by a combination of these two. == Overview == A dissipative structure is characterized by the spontaneous appearance of symmetry breaking (anisotropy) and the formation of complex, sometimes chaotic, structures where interacting particles exhibit long range correlations. Examples in everyday life include convection, turbulent flow, cyclones, hurricanes and living organisms. Less common examples include lasers, Bénard cells, droplet cluster, and the Belousov–Zhabotinsky reaction. One way of mathematically modeling a dissipative system is given in the article on wandering sets: it involves the action of a group on a measurable set. Dissipative systems can also be used as a tool to study economic systems and complex systems. For example, a dissipative system involving self-assembly of nanowires has been used as a model to understand the relationship between entropy generation and the robustness of biological systems. The Hopf decomposition states that dynamical systems can be decomposed into a conservative and a dissipative part; more precisely, it states that every measure space with a non-singular transformation can be decomposed into an invariant conservative set and an invariant dissipative set. == Dissipative structures in thermodynamics == Russian-Belgian physical chemist Ilya Prigogine, who coined the term dissipative structure, received the Nobel Prize in Chemistry in 1977 for his pioneering work on these structures, which have dynamical regimes that can be regarded as thermodynamic steady states, and sometimes at least can be described by suitable extremal principles in non-equilibrium thermodynamics. In his Nobel lecture, Prigogine explains how thermodynamic systems far from equilibrium can have drastically different behavior from systems close to equilibrium. Near equilibrium, the local equilibrium hypothesis applies and typical thermodynamic quantities such as free energy and entropy can be defined locally. One can assume linear relations between the (generalized) flux and forces of the system. Two celebrated results from linear thermodynamics are the Onsager reciprocal relations and the principle of minimum entropy production. After efforts to extend such results to systems far from equilibrium, it was found that they do not hold in this regime and opposite results were obtained. One way to rigorously analyze such systems is by studying the stability of the system far from equilibrium. Close to equilibrium, one can show the existence of a Lyapunov function which ensures that the entropy tends to a stable maximum. Fluctuations are damped in the neighborhood of the fixed point and a macroscopic description suffices. However, far from equilibrium stability is no longer a universal property and can be broken. In chemical systems, this occurs with the presence of autocatalytic reactions, such as in the example of the Brusselator. If the system is driven beyond a certain threshold, oscillations are no longer damped out, but may be amplified. 
Mathematically, this corresponds to a Hopf bifurcation where increasing one of the parameters beyond a certain value leads to limit cycle behavior. If spatial effects are taken into account through a reaction–diffusion equation, long-range correlations and spatially ordered patterns arise, such as in the case of the Belousov–Zhabotinsky reaction. Systems with such dynamic states of matter that arise as the result of irreversible processes are dissipative structures. Recent research has seen reconsideration of Prigogine's ideas of dissipative structures in relation to biological systems. == Dissipative systems in control theory == Willems first introduced the concept of dissipativity in systems theory to describe dynamical systems by input-output properties. Considering a dynamical system described by its state x ( t ) {\displaystyle x(t)} , its input u ( t ) {\displaystyle u(t)} and its output y ( t ) {\displaystyle y(t)} , the input-output correlation is given by a supply rate w ( u ( t ) , y ( t ) ) {\displaystyle w(u(t),y(t))} . A system is said to be dissipative with respect to a supply rate if there exists a continuously differentiable storage function V ( x ( t ) ) {\displaystyle V(x(t))} such that V ( 0 ) = 0 {\displaystyle V(0)=0} , V ( x ( t ) ) ≥ 0 {\displaystyle V(x(t))\geq 0} and V ˙ ( x ( t ) ) ≤ w ( u ( t ) , y ( t ) ) {\displaystyle {\dot {V}}(x(t))\leq w(u(t),y(t))} . As a special case of dissipativity, a system is said to be passive if the above dissipativity inequality holds with respect to the passivity supply rate w ( u ( t ) , y ( t ) ) = u ( t ) T y ( t ) {\displaystyle w(u(t),y(t))=u(t)^{T}y(t)} . The physical interpretation is that V ( x ) {\displaystyle V(x)} is the energy stored in the system, whereas w ( u ( t ) , y ( t ) ) {\displaystyle w(u(t),y(t))} is the energy that is supplied to the system. This notion has a strong connection with Lyapunov stability, where the storage functions may play, under certain conditions of controllability and observability of the dynamical system, the role of Lyapunov functions. Roughly speaking, dissipativity theory is useful for the design of feedback control laws for linear and nonlinear systems. Dissipative systems theory has been discussed by V.M. Popov, J.C. Willems, D.J. Hill, and P. Moylan. In the case of linear time-invariant systems, this is known as positive real transfer functions, and a fundamental tool is the so-called Kalman–Yakubovich–Popov lemma which relates the state space and the frequency domain properties of positive real systems. Dissipative systems are still an active field of research in systems and control, due to their important applications. == Quantum dissipative systems == As quantum mechanics, and any classical dynamical system, relies heavily on Hamiltonian mechanics for which time is reversible, these approximations are not intrinsically able to describe dissipative systems. It has been proposed that in principle, one can weakly couple the system – say, an oscillator – to a bath, i.e., an assembly of many oscillators in thermal equilibrium with a broad band spectrum, and trace (average) over the bath. This yields a master equation which is a special case of a more general setting called the Lindblad equation that is the quantum equivalent of the classical Liouville equation. The well-known form of this equation and its quantum counterpart takes time as a reversible variable over which to integrate, but the very foundations of dissipative structures impose an irreversible and constructive role for time.
Recent research has seen the quantum extension of Jeremy England's theory of dissipative adaptation (which generalizes Prigogine's ideas of dissipative structures to far-from-equilibrium statistical mechanics, as stated above). == Applications of the dissipative-structure concept to dissipative systems == The framework of dissipative structures as a mechanism to understand the behavior of systems in constant exchange of energy has been successfully applied in different fields of science and applications, such as optics, population dynamics and growth, and chemomechanical structures. == See also == == Notes == == References == B. Brogliato, R. Lozano, B. Maschke, O. Egeland, Dissipative Systems Analysis and Control. Theory and Applications. Springer Verlag, London, 2nd Ed., 2007. Davies, Paul The Cosmic Blueprint Simon & Schuster, New York 1989 (abridged— 1500 words) (abstract— 170 words) — self-organized structures. Philipson, Schuster, Modeling by Nonlinear Differential Equations: Dissipative and Conservative Processes, World Scientific Publishing Company 2009. Prigogine, Ilya, Time, structure and fluctuations. Nobel Lecture, 8 December 1977. J.C. Willems. Dissipative dynamical systems, part I: General theory; part II: Linear systems with quadratic supply rates. Archive for Rational Mechanics and Analysis, vol. 45, pp. 321–393, 1972. == External links == The dissipative systems model The Australian National University
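To make the control-theoretic definition of dissipativity given above concrete, here is a minimal numerical sketch in Python (the system and its parameters are illustrative assumptions, not taken from the text). It checks the passivity inequality dV/dt <= u*y for a damped mass-spring system, with the applied force as input, the velocity as output, and the stored mechanical energy as the storage function:

import numpy as np

# Damped mass-spring system: state (q, v), input u = force, output y = v.
# Storage function V = 0.5*k*q**2 + 0.5*m*v**2 (stored mechanical energy).
m, k, c = 1.0, 2.0, 0.5          # mass, stiffness, damping (illustrative values)
dt, steps = 1e-3, 20000

q, v = 0.0, 0.0
violations = 0
for i in range(steps):
    u = np.sin(2.0 * i * dt)                 # bounded input force
    a = (u - c * v - k * q) / m              # Newton's second law
    dV = k * q * v + m * v * a               # dV/dt along the trajectory = u*v - c*v**2
    if dV > u * v + 1e-9:                    # supply rate w(u, y) = u*y with y = v
        violations += 1
    q, v = q + v * dt, v + a * dt            # explicit Euler step

print("passivity inequality violated at", violations, "of", steps, "steps")  # expect 0

Because dV/dt = u*v - c*v**2 with c > 0, the inequality holds identically, which is what the check confirms.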
Wikipedia/Dissipative_systems
Searches for Lorentz violation involving photons provide one possible test of relativity. Examples range from modern versions of the classic Michelson–Morley experiment that utilize highly stable electromagnetic resonant cavities to searches for tiny deviations from c in the speed of light emitted by distant astrophysical sources. Due to the extreme distances involved, astrophysical studies have achieved sensitivities on the order of parts in 10^38. == Minimal Lorentz-violating electrodynamics == The most general framework for studies of relativity violations is an effective field theory called the Standard-Model Extension (SME). Lorentz-violating operators in the SME are classified by their mass dimension d {\displaystyle d} . To date, the most widely studied limit of the SME is the minimal SME, which limits attention to operators of renormalizable mass-dimension, d = 3 , 4 {\displaystyle d=3,4} , in flat spacetime. Within the minimal SME, photons are governed by the Lagrangian density L = − 1 4 F μ ν F μ ν + 1 2 ( k A F ) κ ϵ κ λ μ ν A λ F μ ν − 1 4 ( k F ) κ λ μ ν F κ λ F μ ν . {\displaystyle {\mathcal {L}}=-\textstyle {{1} \over {4}}\,F_{\mu \nu }F^{\mu \nu }+\textstyle {{1} \over {2}}\,(k_{\mathrm {AF} })^{\kappa }\,\epsilon _{\kappa \lambda \mu \nu }A^{\lambda }F^{\mu \nu }-\textstyle {{1} \over {4}}\,(k_{\mathrm {F} })_{\kappa \lambda \mu \nu }F^{\kappa \lambda }F^{\mu \nu }.} The first term on the right-hand side is the conventional Maxwell Lagrangian and gives rise to the usual source-free Maxwell equations. The next term violates both Lorentz and CPT invariance and is constructed from a dimension d = 3 {\displaystyle d=3} operator and a constant coefficient for Lorentz violation ( k A F ) κ {\displaystyle (k_{\mathrm {AF} })^{\kappa }} . The last term introduces Lorentz violation, but preserves CPT invariance. It consists of a dimension d = 4 {\displaystyle d=4} operator contracted with constant coefficients for Lorentz violation ( k F ) κ λ μ ν {\displaystyle (k_{\mathrm {F} })_{\kappa \lambda \mu \nu }} . There are a total of four independent ( k A F ) κ {\displaystyle (k_{\mathrm {AF} })^{\kappa }} coefficients and nineteen ( k F ) κ λ μ ν {\displaystyle (k_{\mathrm {F} })_{\kappa \lambda \mu \nu }} coefficients. Both Lorentz-violating terms are invariant under observer Lorentz transformations, implying that the physics is independent of observer or coordinate choice. However, the coefficient tensors ( k A F ) κ {\displaystyle (k_{\mathrm {AF} })^{\kappa }} and ( k F ) κ λ μ ν {\displaystyle (k_{\mathrm {F} })_{\kappa \lambda \mu \nu }} are outside the control of experimenters and can be viewed as constant background fields that fill the entire Universe, introducing directionality to the otherwise isotropic spacetime. Photons interact with these background fields and experience frame-dependent effects, violating Lorentz invariance. The mathematics describing Lorentz violation in photons is similar to that of conventional electromagnetism in dielectrics. As a result, many of the effects of Lorentz violation are also seen in light passing through transparent materials. These include changes in the speed that can depend on frequency, polarization, and direction of propagation. Consequently, Lorentz violation can introduce dispersion in light propagating in empty space. It can also introduce birefringence, an effect seen in crystals such as calcite. The best constraints on Lorentz violation come from constraints on birefringence in light from astrophysical sources.
== Nonminimal Lorentz-violating electrodynamics == The full SME incorporates general relativity and curved spacetimes. It also includes operators of arbitrary (nonrenormalizable) dimension d ≥ 5 {\displaystyle d\geq 5} . The general gauge-invariant photon sector was constructed in 2009 by Kostelecky and Mewes. It was shown that the more general theory could be written in a form similar to the minimal case, L = − 1 4 F μ ν F μ ν + 1 2 ϵ κ λ μ ν A λ ( k ^ A F ) κ F μ ν − 1 4 F κ λ ( k ^ F ) κ λ μ ν F μ ν , {\displaystyle {\mathcal {L}}=-\textstyle {1 \over 4}F_{\mu \nu }F^{\mu \nu }+\textstyle {1 \over 2}\epsilon ^{\kappa \lambda \mu \nu }A_{\lambda }{({\hat {k}}_{\mathrm {AF} })}_{\kappa }F_{\mu \nu }-\textstyle {1 \over 4}F_{\kappa \lambda }{({\hat {k}}_{\mathrm {F} })}^{\kappa \lambda \mu \nu }F_{\mu \nu }\,,} where the constant coefficients are promoted to operators ( k ^ A F ) κ {\displaystyle {({\hat {k}}_{\mathrm {AF} })}_{\kappa }} and ( k ^ F ) κ λ μ ν {\displaystyle {({\hat {k}}_{\mathrm {F} })}^{\kappa \lambda \mu \nu }} , which take the form of power series in spacetime derivatives. The ( k ^ A F ) κ {\displaystyle {({\hat {k}}_{\mathrm {AF} })}_{\kappa }} operator contains all the CPT-odd d = 3 , 5 , 7 , … {\displaystyle d=3,5,7,\ldots } terms, while the CPT-even terms with d = 4 , 6 , 8 , … {\displaystyle d=4,6,8,\ldots } are in ( k ^ F ) κ λ μ ν {\displaystyle {({\hat {k}}_{\mathrm {F} })}^{\kappa \lambda \mu \nu }} . While the nonrenormalizable terms give many of the same types of signatures as the d = 3 , 4 {\displaystyle d=3,4} case, the effects generally grow faster with frequency, due to the additional derivatives. More complex directional dependence typically also arises. Vacuum dispersion of light without birefringence is another feature that is found, which does not arise in the minimal SME. == Experiments == === Vacuum birefringence === Birefringence of light occurs when the solutions to the modified Lorentz-violating Maxwell equations give rise to polarization-dependent speeds. Light propagates as the combination of two orthogonal polarizations that propagate at slightly different phase velocities. A gradual change in the relative phase results as one of the polarizations outpaces the other. The total polarization (the sum of the two) evolves as the light propagates, in contrast to the Lorentz-invariant case where the polarization of light remains fixed when propagating in a vacuum. In the CPT-odd case (d ∈ {odd} ), birefringence causes a simple rotation of the polarization. The CPT-even case (d ∈ {even} ) gives more complicated behavior as linearly polarized light evolves into elliptical polarizations. The quantity determining the size of the effect is the change in relative phase, Δ ϕ = 2 π Δ v t / λ {\displaystyle \Delta \phi =2\pi \Delta v\,t/\lambda } , where Δ v {\displaystyle \Delta v} is the difference in phase speeds, t {\displaystyle t} is the propagation time, and λ {\displaystyle \lambda } is the wavelength. For d > 3 {\displaystyle d>3} , the highest sensitivities are achieved by considering high-energy photons from distant sources, giving large values to the ratio t / λ {\displaystyle t/\lambda } that enhance the sensitivity to Δ v {\displaystyle \Delta v} . The best constraints on vacuum birefringence from d > 3 {\displaystyle d>3} Lorentz violation come from polarimetry studies of gamma-ray bursts (GRB). For example, sensitivities of 10^−38 to the d = 4 {\displaystyle d=4} coefficients for Lorentz violation have been achieved.
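The enhancement provided by the ratio t/λ can be made concrete with a rough order-of-magnitude estimate; the source distance and photon energy in the Python sketch below are illustrative assumptions, not values from the text. Requiring the accumulated relative phase Δφ to stay below order one then bounds the fractional difference in phase speeds:

import math

c = 2.998e8                               # speed of light, m/s
L = 3.086e25                              # assumed source distance of roughly 1 Gpc, in metres
photon_energy_keV = 100.0                 # assumed gamma-ray photon energy
wavelength = 1.24e-9 / photon_energy_keV  # wavelength in metres (~1.24 nm at 1 keV)

t = L / c                                 # propagation time
# Delta_phi = 2*pi*(Delta_v/c)*(c*t)/lambda, so Delta_phi < ~1 requires:
dv_over_c_bound = wavelength / (2.0 * math.pi * L)
print(f"c*t/lambda ~ {c * t / wavelength:.1e}")                   # about 2e36
print(f"|Delta v|/c constrained below ~ {dv_over_c_bound:.1e}")   # a few times 1e-38

The result is of the same order as the 10^−38 sensitivity quoted above.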
For d = 3 {\displaystyle d=3} , the velocity difference Δ v {\displaystyle \Delta v} is proportional to the wavelength, canceling the λ {\displaystyle \lambda } dependence in the phase shift, implying there is no benefit to considering higher energies. As a result, maximum sensitivity is achieved by studying the most distant source available, the cosmic microwave background (CMB). Constraints on d = 3 {\displaystyle d=3} coefficients for Lorentz violation from the CMB currently stand at around 10^−43 GeV. === Vacuum dispersion === Lorentz violation with d ≠ 4 {\displaystyle d\neq 4} can lead to frequency-dependent light speeds. To search for this effect, researchers compare the arrival times of photons from distant sources of pulsed radiation, such as GRB or pulsars. Assuming photons of all energies are produced within a narrow window of time, dispersion would cause higher-energy photons to run ahead or behind lower-energy photons, leading to otherwise unexplained energy dependence in the arrival time. For two photons of two different energies, the difference in arrival times is approximately given by the ratio Δ t = Δ v L / c 2 {\displaystyle \Delta t=\Delta vL/c^{2}} , where Δ v {\displaystyle \Delta v} is the difference in the group velocity and L {\displaystyle L} is the distance traveled. Sensitivity to Lorentz violation is then increased by considering very distant sources with rapidly changing time profiles. The speed difference Δ v {\displaystyle \Delta v} grows as E d − 4 {\displaystyle E^{d-4}} , so higher-energy sources provide better sensitivity to effects from d > 4 {\displaystyle d>4} Lorentz violation, making GRB an ideal source. Dispersion may or may not be accompanied by birefringence. Polarization studies typically achieved sensitivities well beyond those achievable through dispersion. As a result, most searches for dispersion focus on Lorentz violation that leads to dispersion but not birefringence. The SME shows that dispersion without birefringence can only arise from operators of even dimension d {\displaystyle d} . Consequently, the energy dependence in the light speed from nonbirefringent Lorentz violation can be quadratic E 2 {\displaystyle E^{2}} or quartic E 4 {\displaystyle E^{4}} or any other even power of energy. Odd powers of energy, such as linear E {\displaystyle E} and cubic E 3 {\displaystyle E^{3}} , do not arise in effective field theory. === Resonant cavities === While extreme sensitivity to Lorentz violation is achieved in astrophysical studies, most forms of Lorentz violation have little to no effect on light propagating in a vacuum. These types of violations cannot be tested using astrophysical tests, but can be sought in laboratory-based experiments involving electromagnetic fields. The primary examples are the modern Michelson-Morley experiments based on electromagnetic resonant cavities, which have achieved sensitivities on the order of parts in 10^18 to Lorentz violation. Resonant cavities support electromagnetic standing waves that oscillate at well-defined frequencies determined by the Maxwell equations and the geometry of the cavity. The Lorentz-violating modifications to the Maxwell equations lead to tiny shifts in the resonant frequencies. Experimenters search for these tiny shifts by comparing two or more cavities at different orientations. Since rotation-symmetry violation is a form of Lorentz violation, the resonant frequencies may depend on the orientation of the cavity.
So, two cavities with different orientations may give different frequencies even if they are otherwise identical. A typical experiment compares the frequencies of two identical cavities oriented at right angles in the laboratory. To distinguish between frequency differences of more conventional origins, such as small defects in the cavities, and Lorentz violation, the cavities are typically placed on a turntable and rotated in the laboratory. The orientation dependence from Lorentz violation would cause the frequency difference to change as the cavities rotate. Several classes of cavity experiment exist with different sensitivities to different types of Lorentz violation. Microwave and optical cavities have been used to constrain d = 4 {\displaystyle d=4} violations. Microwave experiments have also placed some bounds on nonminimal d = 6 {\displaystyle d=6} and d = 8 {\displaystyle d=8} violations. However, for d > 4 {\displaystyle d>4} , the effects of Lorentz violation grow with frequency, so optical cavities provide better sensitivity to nonrenormalizable violations, all else being equal. The geometrical symmetries of the cavity also affect the sensitivity since parity symmetric cavities are only directly sensitive to parity-even coefficients for Lorentz violation. Ring resonators provide a complementary class of cavity experiment that can test parity-odd violations. In a ring resonator, two modes propagating in opposite directions in the same ring are compared, rather than modes in two different cavities. === Other experiments === A number of other searches for Lorentz violation in photons have been performed that do not fall under the above categories. These include accelerator-based experiments, atomic clocks, and threshold analyses. The results of experimental searches of Lorentz invariance violation in the photon sector of the SME are summarized in the Data Tables for Lorentz and CPT violation. == See also == Standard-Model Extension Lorentz-violating neutrino oscillations Antimatter tests of Lorentz violation Bumblebee models Tests of special relativity Test theories of special relativity == External links == Background information on Lorentz and CPT violation Data Tables for Lorentz and CPT Violation == References ==
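Similarly, the arrival-time relation Δt = Δv L/c² from the vacuum-dispersion section above can be turned into a rough numeric estimate; the distance and timing resolution in this Python sketch are illustrative assumptions, not values from the text:

c = 2.998e8                  # speed of light, m/s
L = 3.086e25                 # assumed gamma-ray-burst distance of roughly 1 Gpc, in metres
delta_t = 1e-3               # assumed arrival-time resolution between energy bands, in seconds

propagation_time = L / c                    # about 1e17 seconds
dv_over_c = delta_t / propagation_time      # Delta_t = (Delta_v/c) * (L/c)
print(f"fractional speed difference probed: ~ {dv_over_c:.1e}")   # about 1e-20

Polarimetric birefringence bounds are far stronger, which is why, as noted above, dispersion searches concentrate on nonbirefringent forms of Lorentz violation.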
Wikipedia/Lorentz-violating_electrodynamics
Black-Body Theory and the Quantum Discontinuity, 1894–1912 (1978; second edition 1987) is a book by the philosopher Thomas Kuhn, in which the author surveys the development of quantum mechanics. The second edition has a new afterword. == Summary == Kuhn surveys the development of quantum mechanics by Max Planck at the end of the 19th century. He argues that Planck misread his own earlier work. == Reception == Alexander Bird describes Kuhn's book as "masterly", writing that it "differs from traditional history of science less in the kind of explanation offered and more in the vast erudition and scholarly attention to detail displayed." According to philosopher Tim Maudlin, Planck and the Black Body Discontinuity (sic) "is a mixed bag: some good historiography and some poor analysis." == References ==
Wikipedia/Black-Body_Theory_and_the_Quantum_Discontinuity,_1894-1912
BAE Systems plc is a British multinational aerospace, military and information security company, based in London. It is the largest manufacturer in Britain as of 2017. It is the largest defence contractor in Europe and the seventh largest in the world based on applicable 2021 revenues. Its largest operations are in the United Kingdom and in the United States, where its BAE Systems Inc. subsidiary is one of the six largest suppliers to the US Department of Defense. Its next biggest markets are Saudi Arabia, then Australia; other major markets include Canada, Japan, India, Turkey, Qatar, Oman and Sweden. The company was formed on 30 November 1999 by the £7.7 billion purchase of and merger of Marconi Electronic Systems (MES), the defence electronics and naval shipbuilding subsidiary of the General Electric Company plc (GEC), with British Aerospace, an aircraft, munitions and naval systems manufacturer. BAE Systems is the successor to various aircraft, shipbuilding, armoured vehicle, armaments and defence electronics companies, including the Marconi Company, the first commercial company devoted to the development and use of radio; A.V. Roe and Company, one of the world's first aircraft companies; de Havilland, manufacturer of the Comet, the world's first commercial jet airliner; Hawker Siddeley, manufacturer of the Harrier, the world's first VTOL attack aircraft; British Aircraft Corporation, co-manufacturer of the Concorde supersonic transport; Supermarine, manufacturer of the Spitfire; Yarrow Shipbuilders, builder of the Royal Navy's first destroyers; Fairfield Shipbuilding and Engineering Company, builder of the world's first battlecruiser; and Vickers Shipbuilding and Engineering, builder of the Royal Navy's first submarines. Since its 1999 formation, BAE Systems has made a number of acquisitions, most notably of Ball Aerospace, United Defense and Armor Holdings of the United States, and has sold its shares in Airbus, Astrium, AMS and Atlas Elektronik. It is involved in several major defence projects, including the Lockheed Martin F-35 Lightning II, the Eurofighter Typhoon, and the Astute, Dreadnought and SSN-AUKUS submarines. BAE is listed on the London Stock Exchange's FTSE 100 Index. == History == === Predecessors === British Aerospace bought Marconi Electronic Systems for £7.7 billion on 30 November 1999 and merged with it to form BAE Systems. The company is the successor to many of the most famous British aircraft, defence electronics and warship manufacturers. Predecessor companies built the Comet, the world's first commercial jet airliner; the Harrier "jump jet", the world's first operational vertical/short take-off and landing (VTOL) aircraft; the "groundbreaking" Blue Vixen radar carried by Sea Harrier FA2s and which formed the basis of the Eurofighter's CAPTOR radar; and co-produced the Concorde supersonic airliner with Aérospatiale. British Aerospace was a civil and military aircraft manufacturer, as well as a provider of military land systems. The company had emerged from the massive consolidation of UK aircraft manufacturers since World War II. British Aerospace was formed on 29 April 1977, by the nationalisation and merger of the British Aircraft Corporation (BAC), the Hawker Siddeley Group and Scottish Aviation. Both BAC and Hawker Siddeley were themselves the result of various mergers and acquisitions. 
Marconi Electronic Systems was the defence subsidiary of British engineering firm the General Electric Company (GEC), dealing largely in military systems integration, as well as naval and land systems. Marconi's heritage dates back to Guglielmo Marconi's Wireless Telegraph & Signal Company, founded in 1897. GEC purchased English Electric (which included Marconi) in 1968 and thereafter used the Marconi brand for its defence businesses (as GEC-Marconi and later Marconi Electronic Systems). GEC's own defence heritage dates back to World War I, when its contribution to the war effort included radios and bulbs. World War II consolidated this position, as the company was involved in important technological advances, notably the cavity magnetron for radar. Between 1945 and 1999, GEC-Marconi/Marconi Electronic Systems became one of the world's most important defence contractors. GEC's major defence related acquisitions included Associated Electrical Industries in 1967, Yarrow Shipbuilders in 1985, Plessey companies in 1989, parts of Ferranti's defence business in 1990, the rump of Ferranti when it went into receivership in 1993/1994, Vickers Shipbuilding and Engineering in 1995 and Kværner Govan in 1999. In June 1998, MES acquired Tracor, a major American defence contractor, for £830 million (about US$1.4 billion). === Formation === The 1997 merger of American corporations Boeing and McDonnell Douglas, which followed the formation in 1995 of Lockheed Martin, the world's largest defence contractor, increased the pressure on European defence companies to consolidate. In June 1997, British Aerospace Defence managing director John Weston commented "Europe ... is supporting three times the number of contractors on less than half the budget of the US." European governments wished to see the merger of their defence manufacturers into a single entity, a "European Aerospace and Defence Company". As early as 1995, British Aerospace and the German aerospace and defence company DaimlerChrysler Aerospace (DASA) were said to be keen to create a transnational aerospace and defence company. The two companies envisaged including Aérospatiale, the other major European aerospace company, but only after its privatisation. The first stage of this integration was seen as the transformation of Airbus from a consortium of British Aerospace, DASA, Aérospatiale and Construcciones Aeronáuticas SA into an integrated company; with this aim British Aerospace and DASA were united against the various objections of Aérospatiale. As well as Airbus, British Aerospace and DASA were partners in the Panavia Tornado and Eurofighter Typhoon aircraft projects. Merger discussions began between British Aerospace and DASA in July 1998, just as French participation became more likely with the announcement that Aérospatiale was to merge with Matra and emerge with a diluted French government shareholding. A merger was agreed between British Aerospace chairman Richard Evans and DASA CEO Jürgen Schrempp. Meanwhile, GEC was also under pressure to participate in defence industry consolidation. Reporting the appointment of George Simpson as GEC managing director in 1996, The Independent said "some analysts believe that Mr Simpson's inside knowledge of BAe, a long-rumoured GEC bid target, was a key to his appointment. GEC favours forging a national 'champion' defence group with BAe to compete with the giant US organisations." 
When GEC put MES up for sale on 22 December 1998, British Aerospace abandoned the DASA merger in favour of purchasing its British rival. The merger of British Aerospace and MES was announced on 19 January 1999. Evans stated in 2004 that his fear was that an American defence contractor would acquire MES and challenge both British Aerospace and DASA. The merger created a vertically integrated company which The Scotsman described as "[a combination of British Aerospace's] contracting and platform-building skills with Marconi's coveted electronics systems capability", for example combining the manufacturer of the Eurofighter with the company that provided many of the aircraft's electronic systems; British Aerospace was MES's biggest customer. In contrast, DASA's response to the breakdown of the merger discussion was to merge with Aérospatiale to create the European Aeronautic Defence and Space Company (EADS), a horizontal integration. Seventeen undertakings were given by BAE Systems to the Department of Trade and Industry which prevented a reference of the merger to the Monopolies and Mergers Commission. These were largely to ensure that the integrated company would tender sub-contracts to external companies on an equal basis with its subsidiaries. Another condition was the "firewalling" of former British Aerospace and MES teams on defence projects such as the Joint Strike Fighter (JSF). In 2007 the government announced that it had agreed to release BAE Systems from ten of the undertakings due to "a change in circumstances". BAE Systems inherited the UK government-owned "golden" share that was established when British Aerospace was privatised. This unique share prevents amendments of certain parts of the company's Articles of Association without the permission of the Secretary of State. These Articles require that no foreign person or persons acting together may hold more than 15% of the company's shares. === 2000s === BAE Systems' first annual report identified Airbus, support services to militaries and integrated systems for air, land and naval applications as key areas of growth. It also stated the company's desire to both expand in the US and participate in further consolidation in Europe. BAE Systems described 2001 as an "important year" for its European joint ventures, which were reorganised considerably. The company has described the rationale for expansion in the US; "[it] is by far the largest defence market with spend running close to twice that of the Western European nations combined. Importantly, US investment in research and development is significantly higher than in Western Europe." When Dick Olver was appointed chairman in July 2004 he ordered a review of the company's businesses which ruled out further European acquisitions or joint ventures and confirmed a "strategic bias" for expansion and investment in the US. The review also confirmed the attractiveness of the land systems sector and, with two acquisitions in 2004 and 2005, BAE moved from a limited land systems supplier to the second largest such company in the world. This shift in strategy was described as "remarkable" by the Financial Times. Between 2008 and early 2011 BAE acquired five cybersecurity companies in a shift in strategy to take account of reduced spending by governments on "traditional defence items such as warships and tanks". In 2000 Matra Marconi Space, a joint BAE Systems/Matra company, was merged with the space division of DASA to form Astrium. 
On 16 June 2003 BAE sold its 25% share of Astrium for £84 million; however, due to the lossmaking status of the company, BAE Systems invested an equal amount for "restructuring". BAE Systems sold its 54% majority share of BAE Systems Canada, an electronics company, in April for CA$310 million (approximately £197 million as of December 2010). In November 2001, the company announced the closure of the Avro Regional Jet (Avro RJ) production line at Woodford and the cancellation of the Avro RJX, an advanced series of the aircraft family, as the business was "no longer viable". The final Avro RJ to be completed became the last British civil airliner. In November 2001 BAE sold its 49.9% share of Thomson Marconi Sonar to Thales for £85 million. A further step of European defence consolidation was the merger of BAE's share of Matra BAe Dynamics and the missile division of Alenia Marconi Systems (AMS) into MBDA in December. MBDA thus became the world's second largest missile manufacturer. Although EADS (now Airbus SE) was later reported to be interested in acquiring full control of MBDA, BAE said that, unlike Airbus, MBDA is a "core business". In June 2002, BAE Systems confirmed it was in takeover discussions with TRW, an American aerospace, automotive and defence business. This was prompted by Northrop Grumman's £4.1 billion (approximately US$6 billion in 2002) hostile bid for TRW in February 2002. A bidding war between BAE Systems, Northrop and General Dynamics ended on 1 June when Northrop's increased bid of £5.1 billion was accepted. On 11 December 2002, BAE Systems issued a shock profit warning due to cost overruns of the Nimrod MRA4 maritime reconnaissance/attack aircraft and the Astute-class submarine projects. On 19 February 2003 BAE took a charge of £750 million against these projects and the Ministry of Defence (MoD) agreed to pay a further £700 million of the cost. In 2000 the company had taken a £300 million "loss charge" on the Nimrod contract which was expected to cover "all the costs of completion of the current contract". The troubled Nimrod project would ultimately be cancelled as part of the 2010 Strategic Defence and Security Review (SDSR). The UK government, following a cabinet row described as "one of the most bitter Cabinet disputes over defence contracts since the Westland helicopter affair in 1985", ordered 20 BAE Hawk trainer aircraft with 24 options in July 2003 in a deal worth £800 million. The deal was significant because it was a factor in India's decision to finalise a £1 billion order for 66 Hawks in March 2004. Also in July 2003 BAE Systems and Finmeccanica announced their intention to set up three joint venture companies, to be collectively known as Eurosystems. These companies would have pooled the avionics, C4ISTAR and communications businesses of the two companies. However, the difficulties of integrating the companies in this way led to a re-evaluation of the proposal; BAE Systems' 2004 Annual Report states that "recognising the complexity of the earlier proposed Eurosystems transaction with Finmeccanica we have moved to a simpler model". The main part of this deal was the dissolution of AMS and the establishment of SELEX Sensors and Airborne Systems; BAE Systems sold its 25% share of the latter to Finmeccanica for €400 million (approximately £270 million c. 2007) in March 2007. In May 2004, it was reported that the company was considering selling its shipbuilding divisions, BAE Systems Naval Ships and BAE Systems Submarines.
It was understood that General Dynamics wished to acquire the submarine building facilities at Barrow-in-Furness, while VT Group was said to be interested in the remaining yards on the Clyde. Instead, in 2008 BAE Systems merged its Surface Fleet arm with the shipbuilding operations of VT Group to form BVT Surface Fleet, an aim central to the British Government's Defence Industrial Strategy. On 4 June 2004, BAE Systems outbid General Dynamics for Alvis Vickers, the UK's main manufacturer of armoured vehicles. Alvis Vickers was merged with the company's RO Defence unit to form BAE Systems Land Systems. Recognising the lack of scale of this business compared to General Dynamics, BAE Systems executives soon identified the US defence company United Defense Industries (UDI), a major competitor to General Dynamics, as a main acquisition target. On 7 March 2005 BAE announced the £2.25 billion (approximately US$4.2 billion c. 2005) acquisition of UDI. UDI, now BAE Systems Land and Armaments, manufactures combat vehicles, artillery systems, naval guns, missile launchers and precision guided munitions. In December 2005, BAE Systems announced the sale of its German naval systems subsidiary, Atlas Elektronik, to ThyssenKrupp and EADS. The Financial Times described the sale as "cut price" because French company Thales bid €300 million, but was blocked from purchasing Atlas by the German government on national security grounds. On 31 January 2006 the company announced the sale of BAE Systems Aerostructures to Spirit AeroSystems, Inc., having said as early as 2002 that it wished to dispose of what it did not regard as a "core business". On 18 August 2006 Saudi Arabia signed a contract worth £6 billion to £10 billion for 72 Eurofighter Typhoons, to be delivered by BAE Systems. On 10 September 2006 the company was awarded a £2.5 billion contract for the upgrade of 80 Royal Saudi Air Force Tornado IDSs. One of BAE Systems' major aims, as highlighted in the 2005 Annual Report, was the granting of increased technology transfer between the UK and the US. The F-35 (JSF) programme became the focus of this effort, with British government ministers such as Lord Drayson, Minister for Defence Procurement, suggesting the UK would withdraw from the project without the transfer of technology that would allow the UK to operate and maintain F-35s independently. On 12 December 2006, Lord Drayson signed an agreement which allows "an unbroken British chain of command" for operation of the aircraft. On 22 December 2006 BAE received a £947 million contract to provide guaranteed availability of Royal Air Force (RAF) Tornados. In May 2007 the company announced its subsidiary BAE Systems Inc. was to purchase Armor Holdings for £2.3 billion (approximately US$4.5 billion c. 2007) and completed the deal on 31 July 2007. The company was a manufacturer of tactical wheeled vehicles and a provider of vehicle and individual armour systems and survivability technologies. BAE Systems (and British Aerospace previously) was a technology partner to the McLaren Formula One team from 1996 to December 2007. The partnership originally focused on McLaren's F1 car's aerodynamics, eventually moving on to carbon fibre techniques, wireless systems and fuel management. BAE Systems' main interest in the partnership was to learn about the high speed build and operations processes of McLaren. The company announced the acquisition of Tenix Defence, a major Australian defence contractor in January 2008. 
The purchase was completed on 27 June for A$775 million (£373 million) making BAE Systems Australia that country's largest defence contractor. The MoD awarded BAE Systems a 15-year munitions contract in August 2008 worth up to £3 billion, known as Munition Acquisition Supply Solution (MASS). The contract guaranteed supply of 80% of the UK Armed Forces' ammunition and required BAE to modernise its munitions manufacturing facilities. BAE Systems expanded its intelligence and security business with the £531 million purchase of Detica Group in July 2008. It continued this strategy with purchases of Danish cyber and intelligence company ETI for approximately $210 million in December 2010, and Norkom Group PLC the following month for €217 million. The latter provides counter fraud and anti-money laundering solutions to the global financial services industry where its software assists institutions to comply with regulations on financial intelligence and monitoring. ==== Airbus shareholding ==== BAE Systems inherited British Aerospace's share of Airbus Industrie, which consisted of two factories at Broughton and Filton. These facilities manufactured wings for the Airbus family of aircraft. In 2001 Airbus was incorporated as Airbus SAS, a joint stock company. In return for a 20% share in the new company BAE Systems transferred ownership of its Airbus plants (known as Airbus UK) to the new company. Despite repeated suggestions as early as 2000 that BAE Systems wished to sell its 20% share of Airbus, the possibility was denied by the company. However, on 6 April 2006 it was reported that it was indeed to sell its stake, then "conservatively valued" at £2.4 billion. Due to the slow pace of informal negotiations, BAE Systems exercised its put option which saw investment bank Rothschild appointed to give an independent valuation. Six days after this process began, Airbus announced delays to the A380 with significant effects on the value of Airbus shares. On 2 June 2006 Rothschild valued the company's share at £1.87 billion, well below its own analysts' and even EADS's expectations. The BAE Systems board recommended that the company proceed with the sale. Shareholders voted in favour and the sale was completed on 13 October. This saw the end of UK-owned involvement in civil airliner production. Airbus Operations Ltd (the former Airbus UK) continued to be the Airbus "Centre of Excellence" for wing production, employing over 9,500 in 2007. === 2010s === In February 2010 BAE Systems announced a £592 million writedown of the former Armor Holdings business following the loss of the Family of Medium Tactical Vehicles contract in 2009. It was outbid by Oshkosh Corporation for the £2.3 billion ($3.7 billion) contract. Land and Armaments had been the "star performer" of BAE Systems' subsidiaries, growing from sales of £482 million in 2004 to £6.7 billion in 2009. BAE Systems inherited British Aerospace's 35% share of Saab AB, with which it produced and marketed the Gripen fighter aircraft. In 2005 it reduced this share to 20.5% and in March 2010 announced its intention to sell the remainder. The Times stated that the decision brought "to an end its controversial relationship with the Gripen fighter aircraft". Several of the export campaigns for the aircraft were subject to allegations of bribery and corruption. The company continued its move into support services in May 2010 with the purchase of the marine support company Atlantic Marine for $352 million. 
In September 2010 BAE Systems announced plans to sell the Platform Solutions division of BAE Systems Inc., which the Financial Times estimated could yield as much as £1.3 billion. Despite "considerable expressions of interest", the sale was abandoned in January 2011. The purchases of the Queen Elizabeth-class aircraft carriers, the Astute-class submarines, and the Type 26 frigates were all confirmed in the 2010 SDSR. A new generation of nuclear missile submarines, the Dreadnought class, was ordered in 2016. BAE Systems sold the regional aircraft leasing and asset management arm of its Regional Aircraft business in May 2011. This unit leases the BAe 146/Avro RJ family, BAe ATP, Jetstream and BAe 748. The company retained the support and engineering activities of the business. In September 2011, BAE Systems began consultation with unions and workers over plans to cut nearly 3,000 jobs, mostly in the company's military aircraft division. In its 2012 half-year report, the company revealed a 10% decline in revenue in the six months up to 30 June due to falling demand for armaments. In May 2012 the governments of the UK and Saudi Arabia reached an agreement on an arms package which saw a £1.6 billion contract awarded to BAE for the delivery of 55 Pilatus PC-21 and 22 BAE Systems Hawk aircraft. The Sultanate of Oman ordered Typhoon and Hawk aircraft worth £2.5 billion in December 2012. In September 2012, it was reported that BAE Systems and EADS had entered merger talks which would have seen BAE shareholders own 40% of the resulting organisation. On 10 October 2012, the companies said the merger talks had been called off. The Guardian reported that this was due to the German Government's concern about the "potential size of the French shareholding in the combined company, as well as disagreements over the location of the group's headquarters". In November 2013, BAE Systems announced that shipbuilding would cease in Portsmouth in 2014 with the loss of 940 jobs, and a further 835 jobs would be lost at Filton, near Bristol, and at the shipyards in Govan, Rosyth, and Scotstoun in Scotland. On 9 October 2014, the company announced the loss of 440 management jobs across the country, with 286 of the job cuts in Lancashire. In July 2014 it announced the acquisition of US intelligence company Signal Innovations Group Inc. to augment imagery and data analysis technologies in its Intelligence & Security business. In August 2014, BAE was awarded a £248 million contract from the British government to build three new offshore patrol vessels. In October 2014, BAE Systems won a £600 million contract from the MoD to maintain Portsmouth naval base for five years. During 2014 BAE Systems acquired US-based cybersecurity firm Silversky for $232.5 million. During Prime Minister Theresa May's visit to Turkey in January 2017, BAE and TAI officials signed an agreement, worth about £100 million, for BAE to provide assistance in developing the TAI TF Kaan aircraft. On 10 October 2017, BAE announced that it would lay off nearly 2,000 out of its approximately 35,000 employees in Britain, mainly due to an order shortage for the Typhoon fighter. In 2018, the company agreed a £5 billion deal with the government of Qatar for 24 Typhoon Eurofighters. In 2019 BAE Systems sold a 55% share of its UK land business to Rheinmetall. The resultant joint venture (JV), Rheinmetall BAE Systems Land (RBSL), was established in July 2019 following regulatory approval and is headquartered at the existing facility in Telford, Shropshire.
=== 2020s === In August 2020 BAE Systems completed the purchase of United Technologies' military GPS businesses for $1.9 billion and Raytheon's military airborne radios business for $275 million. The sale of these two businesses was a condition of the merger approval that saw their two parent companies merge to form Raytheon Technologies. In November 2020, the MoD announced the award of a 20-year, £2.4 billion munitions contract to BAE. This will see BAE manufacture 39 different munitions for the UK armed forces and supersedes the 2008 MASS contract. In July 2023, BAE received a related £280 million order to address a munitions shortage caused by the supply of ammunition to Ukraine. In 2022, during the Russian invasion of Ukraine, major arms manufacturers, including BAE Systems, reported a sharp increase in interim sales and profits. In August 2023 BAE agreed to acquire the aerospace division of US-based Ball Corporation for $5.6 billion in cash (approximately £4.5 billion); this was BAE's largest acquisition up until that point and was completed on 16 February 2024. In October 2023, BAE was awarded a £3.95 billion contract for development work on Aukus-class submarines up to 2028. == Products == BAE Systems plays a significant role in the production of military equipment. In 2017, 98% of BAE Systems' total sales were military related. It plays important roles in military aircraft production. The company's Typhoon fighter is one of the main front line aircraft of the RAF. The company is a major partner in the F-35 Lightning II programme. Its Hawk advanced jet trainer aircraft has been widely exported. In July 2006, the British government declassified the HERTI (High Endurance Rapid Technology Insertion), an Unmanned Aerial Vehicle (UAV) which can navigate autonomously. It is currently developing a sixth-generation jet fighter aircraft for the RAF marketed as the "Tempest" along with the MoD, Rolls-Royce, Leonardo and MBDA. It is intended to enter service from 2035 replacing the Typhoon aircraft in service with the RAF. BAE Systems Land and Armaments manufactures the M2/M3 Bradley fighting vehicle family, the US Navy Advanced Gun System (AGS), M113 armoured personnel carrier (APC), M109 Paladin, Archer, M777 howitzer, the British Army's Challenger 2, Warrior Tracked Armoured Vehicle, Panther Command and Liaison Vehicle, and the SA80 assault rifle. Major naval projects include the Astute-class submarines, Type 26 frigates and Dreadnought-class submarines. BAE Systems is indirectly engaged in production of nuclear weapons—through its share of MBDA it is involved with the production and support of the ASMP missile, an air-launched nuclear missile which forms part of the French nuclear deterrent. The company is also the UK's only nuclear submarine manufacturer and thus produces a key element of the United Kingdom's nuclear weapons capability. BAE has operated the Holston Army Ammunition Plant in Tennessee since 1999, and the Radford Army Ammunition Plant in Radford, Virginia since 2012. == Areas of business == BAE Systems' biggest markets are the US 44%, UK 20%, Saudi Arabia 11% and Australia 4%, as of 2022. === United Kingdom === BAE Systems is the main supplier to the UK MoD; in 2009/2010 BAE Systems companies in the list of Top 100 suppliers to the MoD received contracts totalling £3.98 billion, with total revenue being higher when other subsidiary income is included.
In comparison, the second largest supplier is Babcock International Group and its subsidiaries, with a revenue of £1.1 billion from the MoD. Oxford Economic Forecasting states that in 2002 the company's UK businesses employed 111,578 people, achieved export sales of £3 billion and paid £2.6 billion in taxes. These figures exclude the contribution of Airbus UK. After its creation, BAE Systems had a difficult relationship with the MoD. This was attributed to deficient project management by the company, but also in part to the deficiencies in the terms of "fixed price contracts". CEO Mike Turner said in 2006 "We had entered into contracts under the old competition rules that frankly we shouldn't have taken". These competition rules were introduced by Lord Levene during the 1980s to shift the burden of risk to the contractor and were in contrast to "cost plus contracts" where a contractor was paid for the value of its product plus an agreed profit. BAE Systems was operating in "the only truly open defence market", which meant it was competing with US and European companies for British defence projects, while they were protected in their home markets. The US defence market is competitive, but largely only among American firms, as foreign companies are excluded. In December 2005 the MoD published the Defence Industrial Strategy (DIS) which has been widely acknowledged to recognise BAE Systems as the UK's "national champion". The government claimed the DIS would "promote a sustainable industrial base, that retains in the UK those industrial capabilities needed to ensure national security." After the publication of the DIS BAE Systems CEO Mike Turner said "If we didn't have the DIS and our profitability and the terms of trade had stayed as they were... then there had to be a question mark about our future in the UK". Lord Levene said that, in the balance between value for money and maintaining a viable industrial base, the DIS "tries as well as it can to steer a middle course and to achieve as much as it can in both directions. ...We will never have a perfect solution." === United States === The attraction of MES to British Aerospace was largely its ownership of Tracor, a major American defence contractor. BAE Systems Inc. now sells more to the US Department of Defense (DOD) than to the UK MoD. The company has been allowed to buy important defence contractors in the US; however, its status as a UK company requires that its US subsidiaries are governed by American executives under Special Security Arrangements. The company faces fewer impediments in this sense than its European counterparts, as there is a high degree of integration between the US and UK defence establishments. BAE Systems' purchase of Lockheed Martin Aerospace Electronic Systems in November 2000 was described by John Hamre, CEO of the Center for Strategic and International Studies and former Deputy Secretary of Defense, as "precedent setting" given the advanced and classified nature of many of that company's products. The possibility of a merger between BAE Systems Inc. and major North American defence contractors has long been reported, including Boeing, General Dynamics, Lockheed Martin, and Raytheon. === Rest of the world === BAE Systems Australia is one of the largest defence contractors in Australia, having more than doubled in size with the acquisition of Tenix Defence in 2008.
The Al Yamamah agreements between the UK and Saudi Arabia require "the provision of a complete defence package for the Kingdom of Saudi Arabia"; BAE Systems employs 5,300 people in the country. As of March 2022, BAE Systems employs over 7,000 people in Saudi Arabia and 75 per cent of the employees are Saudi nationals. BAE Systems' interests in Sweden are a result of the purchases of Alvis Vickers and UDI, which owned Hägglunds and Bofors, respectively; the companies are now part of BAE Systems AB. On 6 April 2022, BAE Systems announced the establishment of BAE Systems Japan, a subsidiary located in Akasaka, Tokyo, Japan. The new company will provide comprehensive cooperation with Japanese industry and aims to strengthen relations with the Japanese Ministry of Defense and the Japan Self-Defense Forces. In late August 2023, BAE Systems announced that it had opened an office in Ukraine, and had signed an agreement for cooperation on the repair, spare parts, and production of L119 howitzers within Ukraine. On 4 January 2024, BAE Systems announced an initial agreement, potentially worth up to $50 million, to resume production of titanium structures for the M777 howitzer for the US Army, with the first deliveries expected in 2025. == Shareholders == BAE Systems' 2022 Annual Report listed the following as "significant" shareholders: Barclays 3.98%, BlackRock 9.90%, Capital Group Companies 14.18%, Invesco 4.97% and Silchester International Investors 3.01%. == Organisation == BAE Systems has its head office and its registered office in City of Westminster, London. In addition to its central London offices, it has an office in Farnborough, Hampshire, that houses functional specialists and support functions. == Corporate governance == BAE Systems' Chair is Cressida Hogg, CBE. The executive directors are Charles Woodburn (CEO), Brad Greve and Tom Arsenault. The non-executive directors are Crystal E. Ashby, Elizabeth Corley, Chris Grigg, Ewan Kirk, Ian Tyler, Nicole Piasecki, Stephen Pearce, Jane Griffiths and Nick Anderson. The company's first CEO, John Weston, was forced to resign in 2002 in a boardroom "coup" and was replaced by Mike Turner. The Business reported that Weston was ousted when non-executive directors informed the chairman that they had lost confidence in him. Further, it was suggested that at least one non-executive director was encouraged to make such a move by the MoD due to the increasingly fractious relationship between BAE Systems and the government. As well as the terms of the Nimrod contract, Weston had fought against the MOD's insistence that one of the first three Type 45 destroyers should be built by VT Group. The Business said he considered this "competition-policy gone mad". It is understood that Turner had a poor working relationship with senior MoD officials, for example with former Defence Secretary Geoff Hoon. The first meeting between Hoon and BAE's new chairman Dick Olver in 2004 was said to have gone well; a MoD official commented "He is a man we can do business with". It has been suggested that relations between Turner and Olver were tense. On 16 October 2007 the company announced that Mike Turner would retire in August 2008. The Times called his departure plans "abrupt" and a "shock", given previous statements that he wished to retire in 2013 at the age of 65. 
Despite suggestions that BAE Systems would prefer an American CEO due to the increasing importance of the United States defence market to the company and the opportunity to make a clean break from corruption allegations and investigations related to the Al-Yamamah contracts, the company announced on 27 June 2008 that it had selected the company's chief operating officer, Ian King, to succeed Turner with effect from 1 September 2008. The Financial Times noted that King's career at Marconi distanced him from the British Aerospace-led Al Yamamah project. Charles Woodburn succeeded Ian King as CEO on 1 July 2017. Woodburn joined BAE Systems in May 2016 as Chief Operating Officer and Executive Board Director following over 20 years' international experience in senior management positions in the oil and gas industry. === Senior leadership === Chair: Cressida Hogg (since 2023) Chief executive: Charles Woodburn (since 2017) List of former chairmen Sir Richard Evans (1999–2004) Sir Dick Olver (2004–2014) Sir Roger Carr (2014–2023) List of former chief executives John Weston (1999–2002) Michael Turner (2002–2008) Ian King (2008–2017) == Financial information == Financial information for the company is as follows: == Corruption investigations == === Serious Fraud Office === BAE Systems has been investigated by the Serious Fraud Office (SFO) for the use of corruption to help sell arms to Chile, Czech Republic, Romania, Saudi Arabia, South Africa, Tanzania and Qatar. In response, BAE Systems' 2006 Corporate Responsibility Report states "We continue to reject these allegations... We take our obligations under the law extremely seriously and will continue to comply with all legal requirements around the world." In June 2007 Lord Woolf was selected to lead what the BBC described as an "independent review.... [an] ethics committee to look into how the defence giant conducts its arms deals". The report, Ethical business conduct in BAE Systems plc – the way forward, made 23 recommendations, measures which the company committed to implement. The finding stated that "in the past BAE did not pay sufficient attention to ethical standards in the way it conducted business", and was described by the BBC as "an embarrassing admission". In September 2009, the SFO announced that it intended to prosecute BAE Systems for offences relating to overseas corruption. The Guardian claimed that a penalty of more than £500 million might be an acceptable settlement package. On 5 February 2010, BAE Systems agreed to pay criminal fines of £257 million (US$400 million) to the US and £30 million to the UK. The $400 million fine was a result of a plea bargain with the US Department of Justice (DOJ) whereby BAE Systems was convicted of felony conspiracy to defraud the United States government. This was one of the largest fines in the history of the DOJ. Judge Bates said the company's conduct involved "deception, duplicity and knowing violations of law, I think it's fair to say, on an enormous scale". BAE Systems did not directly admit to bribery, and is thus not internationally blacklisted from future contracts. Some of the £30 million penalty the company will pay in fines to the UK will be paid ex gratia for the benefit of the people of Tanzania. On 2 March 2010, Campaign Against Arms Trade (CAAT) and Corner House Research were successful in gaining a High Court injunction on the SFO's settlement with BAE Systems; however, in April 2010 the two organisations withdrew their application for a judicial review.
=== Saudi Arabia === Both BAE Systems and its predecessor (BAe) have long been the subject of allegations of bribery in relation to its business in Saudi Arabia. The UK National Audit Office (NAO) investigated the Al Yamamah contracts and has so far not published its conclusions, the only NAO report ever to be withheld. The MoD has stated "The report remains sensitive. Disclosure would harm both international relations and the UK's commercial interests." The company has been accused of maintaining a £60 million Saudi slush fund. In November 2006, Saudi Arabia put pressure on the British government to end the SFO investigation by suspending negotiations over a new deal for seventy-two Typhoon fighter jets. On 14 December 2006 it was announced that the SFO was "discontinuing" its investigation into the company. It stated that representations to its director and the Attorney General Lord Goldsmith had led to the conclusion that the wider public interest "to safeguard national and international security" outweighed any potential benefits of further investigation. The termination of the investigation has been controversial. In June 2007, the BBC's Panorama alleged BAE Systems "paid hundreds of millions of pounds to the ex-Saudi ambassador to the US, Prince Bandar bin Sultan" in return for his role in the Al Yamamah deals. In late June 2007 the DOJ began a formal investigation into BAE's compliance with anti-corruption laws. On 19 May 2008 BAE Systems confirmed that its CEO Mike Turner and non-executive director Nigel Rudd had been detained "for about 20 minutes" at two US airports the previous week and that the DOJ had issued "a number of additional subpoenas in the US to employees of BAE Systems plc and BAE Systems Inc as part of its ongoing investigation". The Times suggested that such "humiliating behaviour by the DOJ" is unusual toward a company that is co-operating fully. A judicial review of the decision by the SFO to drop the investigation was granted on 9 November 2007. On 10 April 2008 the High Court ruled that the SFO "acted unlawfully" by dropping its investigation. The Times described the ruling as "one of the most strongly worded judicial attacks on government action" which condemned how "ministers 'buckled' to 'blatant threats' that Saudi cooperation in the fight against terrorism would end unless the ...investigation was dropped." On 24 April the SFO was granted leave to appeal to the House of Lords against the ruling. There was a two-day hearing before the Lords on 7 and 8 July 2008. On 30 July the House of Lords unanimously overturned the High Court ruling, stating that the decision to discontinue the investigation was lawful. === Others === In September 2005 The Guardian reported that banking records showed that BAE Systems paid £1 million to Augusto Pinochet, the former Chilean dictator. The Guardian has also reported that "clandestine arms deals" have been under investigation in Chile and the UK since 2003 and that British Aerospace and BAE Systems made a number of payments to Pinochet advisers. BAE Systems is alleged to have paid "secret offshore commissions" of over £7 million to secure the sale of HMS London and HMS Coventry to the Romanian Navy. The company received a £116 million contract for the refurbishment of the ships prior to delivery; however, the British taxpayer only received the scrap value of £100,000 each from the sale. BAE Systems ran into controversy in 2002 over the abnormally high cost of a radar system sold to Tanzania. 
The sale was criticised by several opposition MPs and the World Bank; Secretary of State for International Development Clare Short declared that BAE Systems had "ripped off" developing nations. In January 2007, details of an investigation by the SFO into BAE Systems' sales tactics in regard to South Africa were reported, highlighting the £2.3 billion deal to supply Hawk trainers and Gripen fighters as suspect. In May 2011, as allegations of bribery behind South Africa's Gripen procurement continued, the company's partner Saab AB issued strong denials of any illicit payments being made. However, in June 2011 Saab announced that BAE Systems had made unaccounted payments of roughly $3.5 million to a consultant, a revelation that prompted South African opposition parties to call for a renewed inquiry. The Gripen's procurement by the Czech Republic was also under investigation by the SFO in 2006 over allegations of bribery. == Criticism == === Espionage === In September 2003 The Sunday Times reported that BAE Systems had hired a private security contractor to collect information about individuals working at CAAT and their activities. In February 2007, it was reported that the corporation had again obtained private confidential information from CAAT. The company was reported in 2012 to have been the target of Chinese cyber espionage that may have stolen secrets related to the F-35 Lightning II. In 2020 former employee Simon Finch, who became disillusioned when his reports of homophobic attacks in 2013 were not investigated properly, was convicted of a breach of the Official Secrets Act, after "recording from memory highly sensitive details of a UK missile system." In November 2020 he was sentenced to 4 years and 6 months in prison. === Cluster bombs === In 2003, BAE Systems was criticised for its role in the production of cluster bombs, due to the long-term risk of injury or death to civilians. Following the 2008 Oslo Convention on Cluster Munitions, BAE Systems was among the first defence contractors to stop their manufacture, and by 2012 the majority of the munitions had been destroyed. === Saudi war in Yemen === Saudi Arabia is BAE's third-biggest market. The Independent reported that BAE-supplied aircraft were used to bomb Red Cross and MSF hospitals in Yemen. Sir Roger Carr rejected criticism over BAE's continued work in Saudi Arabia, saying "We will stop doing it when they tell us to stop doing it... We maintain peace by having the ability to make war and that has stood the test of time." BAE Systems sold weaponry worth £17.6 billion to Saudi Arabia during the Yemen war. === Israel's war in Gaza === Direct action was taken against arms companies in the United Kingdom, including BAE Systems, that supplied weapons to Israel during the Gaza war. For instance, on 10 November 2023, trade unionists in Rochester, Kent, blocked the entrances to a BAE Systems factory, stating the facility manufactured military aircraft components used to bomb the Gaza Strip. === Political influence and donations === Former Foreign Secretary Robin Cook said of his time in office that he "came to learn that the chairman of BAE appeared to have the key to the garden door to number 10. Certainly I never knew No 10 to come up with any decision which would be incommoding to BAE." In the United States BAE Systems is a significant political donor to both Democratic and Republican candidates and organisations.
In 2016 its political action committee (PAC) contributions were the second largest of any foreign corporation after UBS. In January 2021 following the 2021 United States Capitol attack BAE Systems announced that it was suspending political donations in the US. On 30 March it once again began making large political contributions, including one to the National Republican Senatorial Committee. == See also == Aerospace industry in the United Kingdom == References == == Further reading == Hartley, Keith. The Political Economy of Aerospace Industries: A Key Driver of Growth and International Competitiveness? (Edward Elgar, 2014); 288 pages; the industry in Britain, continental Europe, and the US with a case study of BAE Systems. == External links == Official website
Wikipedia/BAE_Systems
The Einstein–Brillouin–Keller (EBK) method is a semiclassical technique (named after Albert Einstein, Léon Brillouin, and Joseph B. Keller) used to compute eigenvalues in quantum-mechanical systems. EBK quantization is an improvement on Bohr–Sommerfeld quantization, which did not consider the caustic phase jumps at classical turning points. This procedure is able to reproduce exactly the spectrum of the 3D harmonic oscillator, particle in a box, and even the relativistic fine structure of the hydrogen atom. In 1976–1977, Michael Berry and M. Tabor derived an extension of the Gutzwiller trace formula for the density of states of an integrable system starting from EBK quantization. There have been a number of recent results on computational issues related to this topic, for example, the work of Eric J. Heller and Emmanuel David Tannenbaum using a partial differential equation gradient descent approach. == Procedure == Given a separable classical system defined by coordinates ( q i , p i ) ; i ∈ { 1 , 2 , ⋯ , d } {\displaystyle (q_{i},p_{i});i\in \{1,2,\cdots ,d\}} , in which every pair ( q i , p i ) {\displaystyle (q_{i},p_{i})} describes a closed function or a periodic function in q i {\displaystyle q_{i}} , the EBK procedure involves quantizing the line integrals of p i {\displaystyle p_{i}} over the closed orbit of q i {\displaystyle q_{i}} : I i = 1 2 π ∮ p i d q i = ℏ ( n i + μ i 4 + b i 2 ) {\displaystyle I_{i}={\frac {1}{2\pi }}\oint p_{i}dq_{i}=\hbar \left(n_{i}+{\frac {\mu _{i}}{4}}+{\frac {b_{i}}{2}}\right)} where I i {\displaystyle I_{i}} is the action-angle coordinate, n i {\displaystyle n_{i}} is a non-negative integer, and μ i {\displaystyle \mu _{i}} and b i {\displaystyle b_{i}} are Maslov indices. μ i {\displaystyle \mu _{i}} corresponds to the number of classical turning points in the trajectory of q i {\displaystyle q_{i}} (Dirichlet boundary condition), and b i {\displaystyle b_{i}} corresponds to the number of reflections with a hard wall (Neumann boundary condition). == Examples == === 1D Harmonic oscillator === The Hamiltonian of a simple harmonic oscillator is given by H = p 2 2 m + m ω 2 x 2 2 {\displaystyle H={\frac {p^{2}}{2m}}+{\frac {m\omega ^{2}x^{2}}{2}}} where p {\displaystyle p} is the linear momentum and x {\displaystyle x} the position coordinate. The action variable is given by I = 2 π ∫ 0 x 0 2 m E − m 2 ω 2 x 2 d x {\displaystyle I={\frac {2}{\pi }}\int _{0}^{x_{0}}{\sqrt {2mE-m^{2}\omega ^{2}x^{2}}}\mathrm {d} x} where we have used that H = E {\displaystyle H=E} is the energy and that the closed trajectory is 4 times the trajectory from 0 to the turning point x 0 = 2 E / m ω 2 {\displaystyle x_{0}={\sqrt {2E/m\omega ^{2}}}} . The integral evaluates to E = I ω {\displaystyle E=I\omega } . Under EBK quantization there are two soft turning points in each orbit, μ x = 2 {\displaystyle \mu _{x}=2} and b x = 0 {\displaystyle b_{x}=0} , so that finally E = ℏ ω ( n + 1 / 2 ) {\displaystyle E=\hbar \omega (n+1/2)} , which is the exact result for quantization of the quantum harmonic oscillator.
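To make the harmonic-oscillator example concrete, here is a minimal Python sketch (the script, its function names, and the choice of units ħ = m = ω = 1 are illustrative and not part of the original text; NumPy and SciPy are assumed to be available) that evaluates the action integral numerically at the EBK energies E = ħω(n + 1/2) and checks that it reproduces ħ(n + μ/4 + b/2) with μ = 2 and b = 0.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative choice of units: hbar = m = omega = 1
hbar, m, omega = 1.0, 1.0, 1.0

def action(E):
    """I = (1/2pi) * closed-loop integral of p dx
       = (2/pi) * integral_0^x0 sqrt(2mE - m^2 w^2 x^2) dx for the oscillator."""
    x0 = np.sqrt(2 * E / (m * omega**2))                      # classical turning point
    integrand = lambda x: np.sqrt(max(2 * m * E - (m * omega * x) ** 2, 0.0))
    value, _ = quad(integrand, 0.0, x0)
    return (2.0 / np.pi) * value

mu, b = 2, 0   # two soft turning points, no hard-wall reflections
for n in range(4):
    E = hbar * omega * (n + 0.5)                              # EBK energy for quantum number n
    print(n, action(E), hbar * (n + mu / 4 + b / 2))          # the two values should agree
```

The printed action values should match ħ(n + 1/2) to the accuracy of the quadrature, mirroring the closed-form result E = Iω quoted above.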
=== 2D hydrogen atom === The Hamiltonian for a non-relativistic electron (electric charge e {\displaystyle e} ) in a hydrogen atom is: H = p r 2 2 m + p φ 2 2 m r 2 − e 2 4 π ϵ 0 r {\displaystyle H={\frac {p_{r}^{2}}{2m}}+{\frac {p_{\varphi }^{2}}{2mr^{2}}}-{\frac {e^{2}}{4\pi \epsilon _{0}r}}} where p r {\displaystyle p_{r}} is the canonical momentum to the radial distance r {\displaystyle r} , and p φ {\displaystyle p_{\varphi }} is the canonical momentum of the azimuthal angle φ {\displaystyle \varphi } . Take the action-angle coordinates: I φ = constant = | L | {\displaystyle I_{\varphi }={\text{constant}}=|L|} For the radial coordinate r {\displaystyle r} : p r = 2 m E − L 2 r 2 + e 2 4 π ϵ 0 r {\displaystyle p_{r}={\sqrt {2mE-{\frac {L^{2}}{r^{2}}}+{\frac {e^{2}}{4\pi \epsilon _{0}r}}}}} I r = 1 π ∫ r 1 r 2 p r d r = m e 2 4 π ϵ 0 − 2 m E − | L | {\displaystyle I_{r}={\frac {1}{\pi }}\int _{r_{1}}^{r_{2}}p_{r}dr={\frac {me^{2}}{4\pi \epsilon _{0}{\sqrt {-2mE}}}}-|L|} where we are integrating between the two classical turning points r 1 , r 2 {\displaystyle r_{1},r_{2}} ( μ r = 2 {\displaystyle \mu _{r}=2} ) E = − m e 4 32 π 2 ϵ 0 2 ( I r + I φ ) 2 {\displaystyle E=-{\frac {me^{4}}{32\pi ^{2}\epsilon _{0}^{2}(I_{r}+I_{\varphi })^{2}}}} Using EBK quantization b r = μ φ = b φ = 0 , n φ = m {\displaystyle b_{r}=\mu _{\varphi }=b_{\varphi }=0,n_{\varphi }=m} : I φ = ℏ m ; m = 0 , 1 , 2 , ⋯ {\displaystyle I_{\varphi }=\hbar m\quad ;\quad m=0,1,2,\cdots } I r = ℏ ( n r + 1 / 2 ) ; n r = 0 , 1 , 2 , ⋯ {\displaystyle I_{r}=\hbar (n_{r}+1/2)\quad ;\quad n_{r}=0,1,2,\cdots } E = − m e 4 32 π 2 ϵ 0 2 ℏ 2 ( n r + m + 1 / 2 ) 2 {\displaystyle E=-{\frac {me^{4}}{32\pi ^{2}\epsilon _{0}^{2}\hbar ^{2}(n_{r}+m+1/2)^{2}}}} and by making n = n r + m + 1 {\displaystyle n=n_{r}+m+1} the spectrum of the 2D hydrogen atom is recovered : E n = − m e 4 32 π 2 ϵ 0 2 ℏ 2 ( n − 1 / 2 ) 2 ; n = 1 , 2 , 3 , ⋯ {\displaystyle E_{n}=-{\frac {me^{4}}{32\pi ^{2}\epsilon _{0}^{2}\hbar ^{2}(n-1/2)^{2}}}\quad ;\quad n=1,2,3,\cdots } Note that for this case I φ = | L | {\displaystyle I_{\varphi }=|L|} almost coincides with the usual quantization of the angular momentum operator on the plane L z {\displaystyle L_{z}} . For the 3D case, the EBK method for the total angular momentum is equivalent to the Langer correction. == See also == Hamilton–Jacobi equation WKB approximation Quantum chaos == References == Duncan, Anthony; Janssen, Michel (2019). "5. Guiding Principles". Constructing quantum mechanics (First ed.). Oxford, United Kingdom ; New York, NY: Oxford University Press. ISBN 978-0-19-884547-8.
Wikipedia/Einstein–Brillouin–Keller_method
In the history of quantum mechanics, the Bohr–Kramers–Slater (BKS) theory was perhaps the final attempt at understanding the interaction of matter and electromagnetic radiation on the basis of the so-called old quantum theory, in which quantum phenomena are treated by imposing quantum restrictions on classically describable behaviour. It was advanced in 1924, and sticks to a classical wave description of the electromagnetic field. It was perhaps more a research program than a full physical theory, the ideas developed not being worked out in a quantitative way.: 236  The purpose of BKS theory was to disprove Einstein's hypothesis of the light quantum. One aspect, the idea of modelling atomic behaviour under incident electromagnetic radiation using "virtual oscillators" at the absorption and emission frequencies, rather than the (different) apparent frequencies of the Bohr orbits, significantly led Max Born, Werner Heisenberg and Hendrik Kramers to explore mathematics that strongly inspired the subsequent development of matrix mechanics, the first form of modern quantum mechanics. The provocativeness of the theory also generated great discussion and renewed attention to the difficulties in the foundations of the old quantum theory. However, physically the most provocative element of the theory, that momentum and energy would not necessarily be conserved in each interaction but only overall, statistically, was soon shown to be in conflict with experiment. Walther Bothe won the Nobel Prize in Physics in 1954 for the Bothe–Geiger coincidence experiment that experimentally disproved BKS theory. == Origins == When Albert Einstein introduced the light quantum (photon) in 1905, there was much resistance from the scientific community. However, even when the Compton effect showed in 1923 that its results could be explained by assuming the light beam behaves as light quanta and that energy and momentum are conserved, Niels Bohr remained resistant to quantized light, having repudiated it in his 1922 Nobel Prize lecture. So Bohr found a way of using Einstein's approach without also using the light-quantum hypothesis by reinterpreting the principles of energy and momentum conservation as statistical principles. Thus, it was in 1924 that Bohr, Hendrik Kramers and John C. Slater published a provocative description of the interaction of matter and electromagnetic radiation, historically known as the BKS paper, which combined quantum transitions and electromagnetic waves with energy and momentum being conserved only on average. The initial idea of the BKS theory originated with Slater, who proposed to Bohr and Kramers the following elements of a theory of emission and absorption of radiation by atoms, to be developed during his stay in Copenhagen: Emission and absorption of electromagnetic radiation by matter is realized in agreement with Einstein's photon concept; A photon emitted by an atom is guided by a classical electromagnetic field (cf.
Louis de Broglie's ideas published in September 1923) consisting of spherical waves, thus enabling an explanation of interference; Even when there are no transitions there exists a classical field to which all atoms contribute; this field contains all frequencies at which an atom can emit or absorb a photon, the probability of such an emission being determined by the amplitude of the corresponding Fourier component of the field; the probabilistic aspect is provisional, to be eliminated when the dynamics of the inside of atoms are better known; The classical field is not produced by the actual motions of the electrons but by "motions with the frequencies of possible emission and absorption lines" (to be called 'virtual oscillators', creating a field to be referred to as 'virtual' as well). This fourth point reverts to Max Planck's original view of his quantum introduction in 1900. Planck also did not believe that light was quantized. He believed that a black body had virtual oscillators and that only during interactions between light and the virtual oscillators of the body was the quantum to be considered. Max Planck said in 1911, “Mr. Einstein, it would be necessary to conceive … [of] light waves themselves as atomistically constituted, and hence to give up Maxwell's equations. This seems to me a step which in my opinion is not yet necessary…. I think that first of all one should attempt to transfer the whole problem of the quantum theory to the area of the interaction between matter and radiation.” Independently, Franz S. Exner had also suggested the statistical validity of energy conservation in the same spirit as the second law of thermodynamics. Erwin Schrödinger, who did his habilitation under the supervision of Exner, was very supportive of the BKS theory. Schrödinger published a paper to provide his own interpretation of the BKS statistical interpretation. == Development with Bohr and Kramers == Slater's main intention seems to have been to reconcile the two conflicting models of radiation, viz. the wave and particle models. He may have had good hopes that his idea with respect to oscillators vibrating at the differences of the frequencies of electron rotations (rather than at the rotation frequencies themselves) might be attractive to Bohr because it solved a problem of the latter's atomic model, even though the physical meaning of these oscillators was far from clear. Nevertheless, Bohr and Kramers had two objections to Slater's proposal: The assumption that photons exist. Even though Einstein's photon hypothesis could explain in a simple way the photoelectric effect, as well as conservation of energy in processes of de-excitation of an atom followed by excitation of a neighboring one, Bohr had always been reluctant to accept the reality of photons, his main argument being the problem of reconciling the existence of photons with the phenomenon of interference; The impossibility of accounting for conservation of energy in a process of de-excitation of an atom followed by excitation of a neighboring one. This impossibility followed from Slater's probabilistic assumption, which did not imply any correlation between processes going on in different atoms. As Max Jammer puts it, this refocussed the theory "to harmonize the physical picture of the continuous electromagnetic field with the physical picture, not as Slater had proposed of light quanta, but of the discontinuous quantum transitions in the atom."
Bohr and Kramers hoped to be able to evade the photon hypothesis on the basis of ongoing work by Kramers to describe "dispersion" (in present-day terms inelastic scattering) of light by means of a classical theory of interaction of radiation and matter. But in abandoning the concept of the photon, they instead chose to squarely accept the possibility of non-conservation of energy and momentum. == Experimental counter-evidence == In the BKS paper the Compton effect was discussed as an application of the idea of "statistical conservation of energy and momentum" in a continuous process of scattering of radiation by a sample of free electrons, where "each of the electrons contributes through the emission of coherent secondary wavelets". Although Arthur Compton had already given an attractive account of his experiment on the basis of the photon picture (including conservation of energy and momentum in individual scattering processes), it is stated in the BKS paper that "it seems at the present state of science hardly justifiable to reject a formal interpretation as that under consideration [i.e. the weaker assumption of statistical conservation] as inadequate". This statement may have prompted experimental physicists to improve "the present state of science" by testing the hypothesis of "statistical energy and momentum conservation". In any case, already after one year the BKS theory was disproved by coincidence methods studying correlations between the directions into which the emitted radiation and the recoil electron are emitted in individual scattering processes. Such experiments were carried out independently, with the Bothe–Geiger coincidence experiment performed by Walther Bothe and Hans Geiger, as well as the experiment by Compton and Alfred W. Simon. They provided experimental evidence pointing in the direction of energy and momentum conservation in individual scattering processes (at least, it was shown that the BKS theory was not able to explain the experimental results). More accurate experiments, performed much later, have also confirmed these results. Commenting on the experiments, Max von Laue considered that “physics was saved from being led astray.” From the very beginning, Wolfgang Pauli was extremely critical of the BKS theory, referring to it as the Copenhagen putsch (German: Kopenhagener Putsch). In a letter to Kramers, Pauli said that Bohr would have abandoned the theory even if no experiment was ever carried out, arguing that it is the notion of motion and forces that needs to be modified, not the conservation of energy. Pauli could not help mocking the theory, proposing that the Institute of Physics in Copenhagen “fly its flag at half mast on the anniversary of the publication of the work of Bohr, Kramers and Slater.” As suggested by a letter to Max Born, for Einstein, the corroboration of energy and momentum conservation was probably even more important than his photon hypothesis: Bohr's opinion of radiation interests me very much. But I don't want to let myself be driven to a renunciation of strict causality before there has been a much stronger resistance against it than up to now. I cannot bear the thought that an electron exposed to a ray should by its own free decision choose the moment and the direction in which it wants to jump away. If so, I'd rather be a cobbler or even an employee in a gambling house than a physicist. It is true that my attempts to give the quanta palpable shape have failed again and again, but I'm not going to give up hope for a long time yet.
In light of the experimental results, Bohr informed Charles Galton Darwin that "there is nothing else to do than to give our revolutionary efforts as honourable a funeral as possible". Bohr's reaction, too, was not primarily related to the photon hypothesis. According to Werner Heisenberg, Bohr remarked: Even if Einstein sends me a cable that an irrevocable proof of the physical existence of light-quanta has now been found, the message cannot reach me, because it has to be transmitted by electromagnetic waves. For Bohr the lesson to be learned from the disproof of the BKS theory was not that photons do exist, but rather that the applicability of classical space-time pictures in understanding phenomena within the quantum domain is limited. This theme would become particularly important a few years later in developing the notion of complementarity. According to Heisenberg, Born's statistical interpretation also had its ultimate roots in the BKS theory. Hence, despite its failure the BKS theory still provided an important contribution to the revolutionary transition from classical mechanics to quantum mechanics. Schrödinger would not abandon the statistical interpretation and would continue to push this theory until the end of his life. == References ==
Wikipedia/BKS_theory
In atomic physics, the Bohr model or Rutherford–Bohr model was a model of the atom that incorporated some early quantum concepts. Developed from 1911 to 1918 by Niels Bohr and building on Ernest Rutherford's nuclear model, it supplanted the plum pudding model of J. J. Thomson only to be replaced by the quantum atomic model in the 1920s. It consists of a small, dense nucleus surrounded by orbiting electrons. It is analogous to the structure of the Solar System, but with attraction provided by electrostatic force rather than gravity, and with the electron energies quantized (assuming only discrete values). In the history of atomic physics, it followed, and ultimately replaced, several earlier models, including Joseph Larmor's Solar System model (1897), Jean Perrin's model (1901), the cubical model (1902), Hantaro Nagaoka's Saturnian model (1904), the plum pudding model (1904), Arthur Haas's quantum model (1910), the Rutherford model (1911), and John William Nicholson's nuclear quantum model (1912). The improvement over the 1911 Rutherford model mainly concerned the new quantum mechanical interpretation introduced by Haas and Nicholson, while forsaking any attempt to explain radiation according to classical physics. The model's key success lies in explaining the Rydberg formula for hydrogen's spectral emission lines. While the Rydberg formula had been known experimentally, it did not gain a theoretical basis until the Bohr model was introduced. Not only did the Bohr model explain the reasons for the structure of the Rydberg formula, it also provided a justification for the fundamental physical constants that make up the formula's empirical results. The Bohr model is a relatively primitive model of the hydrogen atom, compared to the valence shell model. As a theory, it can be derived as a first-order approximation of the hydrogen atom using the broader and much more accurate quantum mechanics and thus may be considered to be an obsolete scientific theory. However, because of its simplicity, and its correct results for selected systems (see below for application), the Bohr model is still commonly taught to introduce students to quantum mechanics or energy level diagrams before moving on to the more accurate, but more complex, valence shell atom. A related quantum model was proposed by Arthur Erich Haas in 1910 but was rejected until the 1911 Solvay Congress, where it was thoroughly discussed. The quantum theory of the period between Planck's discovery of the quantum (1900) and the advent of a mature quantum mechanics (1925) is often referred to as the old quantum theory. == Background == Until the second decade of the 20th century, atomic models were generally speculative. Even the concept of atoms, let alone atoms with internal structure, faced opposition from some scientists.: 2  === Planetary models === In the late 1800s speculations on the possible structure of the atom included planetary models with orbiting charged electrons.: 35  These models faced a significant constraint. In 1897, Joseph Larmor showed that an accelerating charge would radiate power according to classical electrodynamics, a result known as the Larmor formula. Since electrons forced to remain in orbit are continuously accelerating, they would be mechanically unstable. Larmor noted that the electromagnetic effects of multiple electrons, suitably arranged, would cancel each other.
Thus subsequent atomic models based on classical electrodynamics needed to adopt such special multi-electron arrangements.: 113  === Thomson's atom model === When Bohr began his work on a new atomic theory in the summer of 1912,: 237  the atomic model proposed by J. J. Thomson, now known as the plum pudding model, was the best available.: 37  Thomson proposed a model with electrons rotating in coplanar rings within an atomic-sized, positively-charged, spherical volume. Thomson showed by lengthy calculations that this model was mechanically stable and was electrodynamically stable under his original assumption of thousands of electrons per atom. Moreover, he suggested that the particularly stable configurations of electrons in rings were connected to chemical properties of the atoms. He developed a formula for the scattering of beta particles that seemed to match experimental results.: 38  However, Thomson himself later showed that the atom had a factor of a thousand fewer electrons, challenging the stability argument and forcing the poorly understood positive sphere to have most of the atom's mass. Thomson was also unable to explain the many lines in atomic spectra.: 18  === Rutherford nuclear model === In 1908, Hans Geiger and Ernest Marsden demonstrated that alpha particles occasionally scatter at large angles, a result inconsistent with Thomson's model. In 1911 Ernest Rutherford developed a new scattering model, showing that the observed large angle scattering could be explained by a compact, highly charged mass at the center of the atom. Rutherford scattering did not involve the electrons and thus his model of the atom was incomplete. Bohr begins his first paper on his atomic model by describing Rutherford's atom as consisting of a small, dense, positively charged nucleus attracting negatively charged electrons. === Atomic spectra === By the early twentieth century, it was expected that the atom would account for the many atomic spectral lines. These lines were summarized in empirical formulas by Johann Balmer and Johannes Rydberg. In 1897, Lord Rayleigh showed that vibrations of electrical systems predicted spectral lines that depend on the square of the vibrational frequency, contradicting the empirical formulas, which depended directly on the frequency.: 18  In 1907 Arthur W. Conway showed that, rather than the entire atom vibrating, vibrations of only one of the electrons in the system described by Thomson might be sufficient to account for spectral series.: II:106  Although Bohr's model would also rely on just the electron to explain the spectrum, he did not assume an electrodynamical model for the atom. The other important advance in the understanding of atomic spectra was the Rydberg–Ritz combination principle which related atomic spectral line frequencies to differences between 'terms', special frequencies characteristic of each element.: 173  Bohr would recognize the terms as energy levels of the atom divided by the Planck constant, leading to the modern view that the spectral lines result from energy differences.: 847  === Haas atomic model === In 1910, Arthur Erich Haas proposed a model of the hydrogen atom with an electron circulating on the surface of a sphere of positive charge.
The model resembled Thomson's plum pudding model, but Haas added a radical new twist: he constrained the electron's potential energy, E pot {\displaystyle E_{\text{pot}}} , on a sphere of radius a to equal the frequency, f, of the electron's orbit on the sphere times the Planck constant:: 197  E pot = − e 2 a = h f {\displaystyle E_{\text{pot}}={\frac {-e^{2}}{a}}=hf} where e represents the charge on the electron and the sphere. Haas combined this constraint with the balance-of-forces equation. The attractive force between the electron and the sphere balances the centrifugal force: e 2 a 2 = m a ( 2 π f ) 2 {\displaystyle {\frac {e^{2}}{a^{2}}}=ma(2\pi f)^{2}} where m is the mass of the electron. This combination relates the radius of the sphere to the Planck constant: a = h 2 4 π 2 e 2 m {\displaystyle a={\frac {h^{2}}{4\pi ^{2}e^{2}m}}} Haas solved for the Planck constant using the then-current value for the radius of the hydrogen atom. Three years later, Bohr would use similar equations with a different interpretation. Bohr took the Planck constant as a given value and used the equations to predict a, the radius of the electron orbiting in the ground state of the hydrogen atom. This value is now called the Bohr radius.: 197  === Influence of the Solvay Conference === The first Solvay Conference, in 1911, was one of the first international physics conferences. Nine Nobel or future Nobel laureates attended, including Ernest Rutherford, Bohr's mentor.: 271  Bohr did not attend but he read the Solvay reports and discussed them with Rutherford.: 233  The subject of the conference was the theory of radiation and the energy quanta of Max Planck's oscillators. Planck's lecture at the conference ended with comments about atoms and the discussion that followed it concerned atomic models. Hendrik Lorentz raised the question of the composition of the atom based on Haas's model, a form of Thomson's plum pudding model with a quantum modification.: 273  Lorentz explained that the size of atoms could be taken to determine the Planck constant as Haas had done, or the Planck constant could be taken as determining the size of atoms.: 273  Bohr would adopt the second path. The discussions outlined the need for the quantum theory to be included in the atom. Planck explicitly mentioned the failings of classical mechanics.: 273  While Bohr had already expressed a similar opinion in his PhD thesis, at Solvay the leading scientists of the day discussed a break with classical theories.: 244  Bohr's first paper on his atomic model cites the Solvay proceedings saying: "Whatever the alteration in the laws of motion of the electrons may be, it seems necessary to introduce in the laws in question a quantity foreign to the classical electrodynamics, i.e. Planck's constant, or as it often is called the elementary quantum of action." Encouraged by the Solvay discussions, Bohr would assume the atom was stable and abandon the efforts to stabilize classical models of the atom.: 199  === Nicholson atom theory === In 1911 John William Nicholson published a model of the atom which would influence Bohr's model. Nicholson developed his model based on the analysis of astrophysical spectroscopy. He connected the observed spectral line frequencies with the orbits of electrons in his atoms. The connection he adopted associated the atomic electron orbital angular momentum with the Planck constant. Whereas Planck focused on a quantum of energy, Nicholson's angular momentum quantum relates to orbital frequency.
This new concept gave the Planck constant an atomic meaning for the first time.: 169  In his 1913 paper Bohr cites Nicholson as finding quantized angular momentum important for the atom. The other critical influence of Nicholson's work was his detailed analysis of spectra. Before Nicholson's work Bohr thought the spectral data were not useful for understanding atoms. In comparing his work to Nicholson's, Bohr came to understand the spectral data and their value. When he then learned from a friend about Balmer's compact formula for the spectral line data, Bohr quickly realized his model would match it in detail.: 178  Nicholson's model was based on classical electrodynamics along the lines of J.J. Thomson's plum pudding model, but with his negative electrons orbiting a positive nucleus rather than circulating in a sphere. To avoid immediate collapse of this system he required that electrons come in pairs so that the rotational acceleration of each electron was matched across the orbit.: 163  By 1913 Bohr had already shown, from the analysis of alpha particle energy loss, that hydrogen had only a single electron, not a matched pair.: 195  Bohr's atomic model would abandon classical electrodynamics. Nicholson's model of radiation was quantum but was attached to the orbits of the electrons. Bohr quantization would associate it with differences in energy levels of his model of hydrogen rather than the orbital frequency. === Bohr's previous work === Bohr completed his PhD in 1911 with a thesis 'Studies on the Electron Theory of Metals', an application of the classical electron theory of Hendrik Lorentz. Bohr noted two deficits of the classical model. The first concerned the specific heat of metals, which James Clerk Maxwell noted in 1875: every additional degree of freedom in a theory of metals, like subatomic electrons, causes more disagreement with experiment. The second was that the classical theory could not explain magnetism.: 194  After his PhD, Bohr worked briefly in the lab of JJ Thomson before moving to Rutherford's lab in Manchester to study radioactivity. He arrived just after Rutherford completed his proposal of a compact nuclear core for atoms. Charles Galton Darwin, also at Manchester, had just completed an analysis of alpha particle energy loss in metals, concluding that electron collisions were the dominant cause of loss. Bohr showed in a subsequent paper that Darwin's results would improve by accounting for electron binding energy. Importantly, this allowed Bohr to conclude that hydrogen atoms have a single electron.: 195  == Development == Next, Bohr was told by his friend, Hans Hansen, that the Balmer series is calculated using the Balmer formula, an empirical equation discovered by Johann Balmer in 1885 that described wavelengths of some spectral lines of hydrogen. This was further generalized by Johannes Rydberg in 1888, resulting in what is now known as the Rydberg formula. After this, Bohr declared, "everything became clear". In 1913 Niels Bohr put forth three postulates to provide an electron model consistent with Rutherford's nuclear model: The electron is able to revolve in certain stable orbits around the nucleus without radiating any energy, contrary to what classical electromagnetism suggests. These stable orbits are called stationary orbits and are attained at certain discrete distances from the nucleus. The electron cannot have any other orbit in between the discrete ones.
The stationary orbits are attained at distances for which the angular momentum of the revolving electron is an integer multiple of the reduced Planck constant: m e v r = n ℏ {\displaystyle m_{\mathrm {e} }vr=n\hbar } , where n = 1 , 2 , 3 , . . . {\displaystyle n=1,2,3,...} is called the principal quantum number, and ℏ = h / 2 π {\displaystyle \hbar =h/2\pi } . The lowest value of n {\displaystyle n} is 1; this gives the smallest possible orbital radius, known as the Bohr radius, of 0.0529 nm for hydrogen. Once an electron is in this lowest orbit, it can get no closer to the nucleus. Starting from the angular momentum quantum rule, which, as Bohr admits, was previously given by Nicholson in his 1912 paper, Bohr was able to calculate the energies of the allowed orbits of the hydrogen atom and other hydrogen-like atoms and ions. These orbits are associated with definite energies and are also called energy shells or energy levels. In these orbits, the electron's acceleration does not result in radiation and energy loss. The Bohr model of an atom was based upon Planck's quantum theory of radiation. Electrons can only gain and lose energy by jumping from one allowed orbit to another, absorbing or emitting electromagnetic radiation with a frequency ν {\displaystyle \nu } determined by the energy difference of the levels according to the Planck relation: Δ E = E 2 − E 1 = h ν {\displaystyle \Delta E=E_{2}-E_{1}=h\nu } , where h {\displaystyle h} is the Planck constant. Other points are: Like Einstein's theory of the photoelectric effect, Bohr's formula assumes that during a quantum jump a discrete amount of energy is radiated. However, unlike Einstein, Bohr stuck to the classical Maxwell theory of the electromagnetic field. Quantization of the electromagnetic field was explained by the discreteness of the atomic energy levels; Bohr did not believe in the existence of photons. According to the Maxwell theory the frequency ν {\displaystyle \nu } of classical radiation is equal to the rotation frequency ν rot {\displaystyle \nu _{\mathrm {rot} }} of the electron in its orbit, with harmonics at integer multiples of this frequency. This result is obtained from the Bohr model for jumps between energy levels E n {\displaystyle E_{n}} and E n − k {\displaystyle E_{n-k}} when k {\displaystyle k} is much smaller than n {\displaystyle n} . These jumps reproduce the frequency of the k {\displaystyle k} -th harmonic of orbit n {\displaystyle n} . For sufficiently large values of n {\displaystyle n} (so-called Rydberg states), the two orbits involved in the emission process have nearly the same rotation frequency, so that the classical orbital frequency is not ambiguous. But for small n {\displaystyle n} (or large k {\displaystyle k} ), the radiation frequency has no unambiguous classical interpretation. This marks the birth of the correspondence principle, requiring quantum theory to agree with the classical theory only in the limit of large quantum numbers. The Bohr–Kramers–Slater theory (BKS theory) is a failed attempt to extend the Bohr model, which violates the conservation of energy and momentum in quantum jumps, with the conservation laws only holding on average. Bohr's condition, that the angular momentum be an integer multiple of ℏ {\displaystyle \hbar } , was later reinterpreted in 1924 by de Broglie as a standing wave condition: the electron is described by a wave and a whole number of wavelengths must fit along the circumference of the electron's orbit: n λ = 2 π r .
{\displaystyle n\lambda =2\pi r.} According to de Broglie's hypothesis, matter particles such as the electron behave as waves. The de Broglie wavelength of an electron is λ = h m v , {\displaystyle \lambda ={\frac {h}{mv}},} which implies that n h m v = 2 π r , {\displaystyle {\frac {nh}{mv}}=2\pi r,} or n h 2 π = m v r , {\displaystyle {\frac {nh}{2\pi }}=mvr,} where m v r {\displaystyle mvr} is the angular momentum of the orbiting electron. Writing ℓ {\displaystyle \ell } for this angular momentum, the previous equation becomes ℓ = n h 2 π , {\displaystyle \ell ={\frac {nh}{2\pi }},} which is Bohr's second postulate. Bohr described angular momentum of the electron orbit as 2 / h {\displaystyle 2/h} while de Broglie's wavelength of λ = h / p {\displaystyle \lambda =h/p} described h {\displaystyle h} divided by the electron momentum. In 1913, however, Bohr justified his rule by appealing to the correspondence principle, without providing any sort of wave interpretation. In 1913, the wave behavior of matter particles such as the electron was not suspected. In 1925, a new kind of mechanics was proposed, quantum mechanics, in which Bohr's model of electrons traveling in quantized orbits was extended into a more accurate model of electron motion. The new theory was proposed by Werner Heisenberg. Another form of the same theory, wave mechanics, was discovered by the Austrian physicist Erwin Schrödinger independently, and by different reasoning. Schrödinger employed de Broglie's matter waves, but sought wave solutions of a three-dimensional wave equation describing electrons that were constrained to move about the nucleus of a hydrogen-like atom, by being trapped by the potential of the positive nuclear charge. == Electron energy levels == The Bohr model gives almost exact results only for a system where two charged points orbit each other at speeds much less than that of light. This not only involves one-electron systems such as the hydrogen atom, singly ionized helium, and doubly ionized lithium, but it includes positronium and Rydberg states of any atom where one electron is far away from everything else. It can be used for K-line X-ray transition calculations if other assumptions are added (see Moseley's law below). In high energy physics, it can be used to calculate the masses of heavy quark mesons. Calculation of the orbits requires two assumptions. Classical mechanics The electron is held in a circular orbit by electrostatic attraction. The centripetal force is equal to the Coulomb force. m e v 2 r = Z k e e 2 r 2 , {\displaystyle {\frac {m_{\mathrm {e} }v^{2}}{r}}={\frac {Zk_{\mathrm {e} }e^{2}}{r^{2}}},} where me is the electron's mass, e is the elementary charge, ke is the Coulomb constant and Z is the atom's atomic number. It is assumed here that the mass of the nucleus is much larger than the electron mass (which is a good assumption). This equation determines the electron's speed at any radius: v = Z k e e 2 m e r . {\displaystyle v={\sqrt {\frac {Zk_{\mathrm {e} }e^{2}}{m_{\mathrm {e} }r}}}.} It also determines the electron's total energy at any radius: E = − 1 2 m e v 2 . {\displaystyle E=-{\frac {1}{2}}m_{\mathrm {e} }v^{2}.} The total energy is negative and inversely proportional to r. This means that it takes energy to pull the orbiting electron away from the proton. For infinite values of r, the energy is zero, corresponding to a motionless electron infinitely far from the proton. 
The total energy is half the potential energy, the difference being the kinetic energy of the electron. This is also true for noncircular orbits by the virial theorem. A quantum rule The angular momentum L = mevr is an integer multiple of ħ: m e v r = n ℏ . {\displaystyle m_{\mathrm {e} }vr=n\hbar .} === Derivation === In classical mechanics, if an electron is orbiting around an atom with period T, and if its coupling to the electromagnetic field is weak, so that the orbit doesn't decay very much in one cycle, it will emit electromagnetic radiation in a pattern repeating at every period, so that the Fourier transform of the pattern will only have frequencies which are multiples of 1/T. However, in quantum mechanics, the quantization of angular momentum leads to discrete energy levels of the orbits, and the emitted frequencies are quantized according to the energy differences between these levels. This discrete nature of energy levels introduces a fundamental departure from the classical radiation law, giving rise to distinct spectral lines in the emitted radiation. Bohr assumes that the electron is circling the nucleus in an elliptical orbit obeying the rules of classical mechanics, but with no loss of radiation due to the Larmor formula. Denoting the total energy as E, the electron charge as −e, the nucleus charge as K = Ze, the electron mass as me, half the major axis of the ellipse as a, he starts with these equations:: 3  E is assumed to be negative, because a positive energy is required to unbind the electron from the nucleus and put it at rest at an infinite distance. Eq. (1a) is obtained from equating the centripetal force to the Coulombian force acting between the nucleus and the electron, considering that E = T + U {\displaystyle E=T+U} (where T is the average kinetic energy and U the average electrostatic potential), and that, by Kepler's second law, the average separation between the electron and the nucleus is a. Eq. (1b) is obtained from the same premises as eq. (1a) plus the virial theorem, stating that, for an elliptical orbit, Then Bohr assumes that | E | {\displaystyle \vert E\vert } is an integer multiple of the energy of a quantum of light whose frequency is half the electron's revolution frequency,: 4  i.e.: From eqs. (1a, 1b, 2), it follows: He further assumes that the orbit is circular, i.e. a = r {\displaystyle a=r} , and, denoting the angular momentum of the electron as L, introduces the equation: Eq. (4) stems from the virial theorem, and from the classical mechanics relationships between the angular momentum, the kinetic energy and the frequency of revolution. From eqs. (1c, 2, 4), it follows: where: that is: This result states that the angular momentum of the electron is an integer multiple of the reduced Planck constant.: 15  Substituting the expression for the velocity gives an equation for r in terms of n: m e k e Z e 2 m e r r = n ℏ , {\displaystyle m_{\text{e}}{\sqrt {\dfrac {k_{\text{e}}Ze^{2}}{m_{\text{e}}r}}}r=n\hbar ,} so that the allowed orbit radius at any n is r n = n 2 ℏ 2 Z k e e 2 m e . {\displaystyle r_{n}={\frac {n^{2}\hbar ^{2}}{Zk_{\mathrm {e} }e^{2}m_{\mathrm {e} }}}.} The smallest possible value of r in the hydrogen atom (Z = 1) is called the Bohr radius and is equal to: r 1 = ℏ 2 k e e 2 m e ≈ 5.29 × 10 − 11 m = 52.9 p m .
{\displaystyle r_{1}={\frac {\hbar ^{2}}{k_{\mathrm {e} }e^{2}m_{\mathrm {e} }}}\approx 5.29\times 10^{-11}~\mathrm {m} =52.9~\mathrm {pm} .} The energy of the n-th level for any atom is determined by the radius and quantum number: E = − Z k e e 2 2 r n = − Z 2 ( k e e 2 ) 2 m e 2 ℏ 2 n 2 ≈ − 13.6 Z 2 n 2 e V . {\displaystyle E=-{\frac {Zk_{\mathrm {e} }e^{2}}{2r_{n}}}=-{\frac {Z^{2}(k_{\mathrm {e} }e^{2})^{2}m_{\mathrm {e} }}{2\hbar ^{2}n^{2}}}\approx {\frac {-13.6\ Z^{2}}{n^{2}}}~\mathrm {eV} .} An electron in the lowest energy level of hydrogen (n = 1) therefore has about 13.6 eV less energy than a motionless electron infinitely far from the nucleus. The next energy level (n = 2) is −3.4 eV. The third (n = 3) is −1.51 eV, and so on. For larger values of n, these are also the binding energies of a highly excited atom with one electron in a large circular orbit around the rest of the atom. The hydrogen formula also coincides with the Wallis product. The combination of natural constants in the energy formula is called the Rydberg energy (RE): R E = ( k e e 2 ) 2 m e 2 ℏ 2 . {\displaystyle R_{\mathrm {E} }={\frac {(k_{\mathrm {e} }e^{2})^{2}m_{\mathrm {e} }}{2\hbar ^{2}}}.} This expression is clarified by interpreting it in combinations that form more natural units: m e c 2 {\displaystyle m_{\mathrm {e} }c^{2}} is the rest mass energy of the electron (511 keV), k e e 2 ℏ c = α ≈ 1 137 {\displaystyle {\frac {k_{\mathrm {e} }e^{2}}{\hbar c}}=\alpha \approx {\frac {1}{137}}} is the fine-structure constant, R E = 1 2 ( m e c 2 ) α 2 {\displaystyle R_{\mathrm {E} }={\frac {1}{2}}(m_{\mathrm {e} }c^{2})\alpha ^{2}} . Since this derivation is with the assumption that the nucleus is orbited by one electron, we can generalize this result by letting the nucleus have a charge q = Ze, where Z is the atomic number. This will now give us energy levels for hydrogenic (hydrogen-like) atoms, which can serve as a rough order-of-magnitude approximation of the actual energy levels. So for nuclei with Z protons, the energy levels are (to a rough approximation): E n = − Z 2 R E n 2 . {\displaystyle E_{n}=-{\frac {Z^{2}R_{\mathrm {E} }}{n^{2}}}.} The actual energy levels cannot be solved analytically for more than one electron (see n-body problem) because the electrons are not only affected by the nucleus but also interact with each other via the Coulomb force. When Z = 1/α (Z ≈ 137), the motion becomes highly relativistic, and Z2 cancels the α2 in R; the orbit energy begins to be comparable to rest energy. Sufficiently large nuclei, if they were stable, would reduce their charge by creating a bound electron from the vacuum, ejecting the positron to infinity. This is the theoretical phenomenon of electromagnetic charge screening which predicts a maximum nuclear charge. Emission of such positrons has been observed in the collisions of heavy ions to create temporary super-heavy nuclei. The Bohr formula properly uses the reduced mass of electron and proton in all situations, instead of the mass of the electron, m red = m e m p m e + m p = m e 1 1 + m e / m p . {\displaystyle m_{\text{red}}={\frac {m_{\mathrm {e} }m_{\mathrm {p} }}{m_{\mathrm {e} }+m_{\mathrm {p} }}}=m_{\mathrm {e} }{\frac {1}{1+m_{\mathrm {e} }/m_{\mathrm {p} }}}.} However, these numbers are very nearly the same, due to the much larger mass of the proton, about 1836.1 times the mass of the electron, so that the reduced mass in the system is the mass of the electron multiplied by the constant 1836.1/(1 + 1836.1) = 0.99946. 
This use of the reduced mass was historically important in convincing Rutherford of the importance of Bohr's model, for it explained the fact that the frequencies of lines in the spectra for singly ionized helium do not differ from those of hydrogen by a factor of exactly 4, but rather by 4 times the ratio of the reduced mass for the hydrogen vs. the helium systems, which was much closer to the experimental ratio than exactly 4. For positronium, the formula uses the reduced mass also, but in this case, it is exactly the electron mass divided by 2. For any value of the radius, the electron and the positron are each moving at half the speed around their common center of mass, and each has only one fourth the kinetic energy. The total kinetic energy is half what it would be for a single electron moving around a heavy nucleus. E n = R E 2 n 2 {\displaystyle E_{n}={\frac {R_{\mathrm {E} }}{2n^{2}}}} (positronium). == Rydberg formula == Beginning in the late 1860s, Johann Balmer and later Johannes Rydberg and Walther Ritz developed increasingly accurate empirical formulas matching measured atomic spectral lines. Critical for Bohr's later work, Rydberg expressed his formula in terms of wave-number, equivalent to frequency. These formulas contained a constant, R {\displaystyle R} , now known as the Rydberg constant, and a pair of integers indexing the lines:: 247  ν = R ( 1 m 2 − 1 n 2 ) . {\displaystyle \nu =R\left({\frac {1}{m^{2}}}-{\frac {1}{n^{2}}}\right).} Despite many attempts, no theory of the atom could reproduce these relatively simple formulas.: 169  Bohr's theory, by describing the energies of transitions or quantum jumps between orbital energy levels, is able to explain these formulas. For the hydrogen atom Bohr starts with his derived formula for the energy released as a free electron moves into a stable circular orbit indexed by τ {\displaystyle \tau } : W τ = 2 π 2 m e 4 h 2 τ 2 {\displaystyle W_{\tau }={\frac {2\pi ^{2}me^{4}}{h^{2}\tau ^{2}}}} The energy difference between two such levels is then: h ν = W τ 2 − W τ 1 = 2 π 2 m e 4 h 2 ( 1 τ 2 2 − 1 τ 1 2 ) {\displaystyle h\nu =W_{\tau _{2}}-W_{\tau _{1}}={\frac {2\pi ^{2}me^{4}}{h^{2}}}\left({\frac {1}{\tau _{2}^{2}}}-{\frac {1}{\tau _{1}^{2}}}\right)} Therefore, Bohr's theory gives the Rydberg formula and moreover the numerical value of the Rydberg constant for hydrogen in terms of more fundamental constants of nature, including the electron's charge, the electron's mass, and the Planck constant:: 31  c R H = 2 π 2 m e 4 h 3 . {\displaystyle cR_{\text{H}}={\frac {2\pi ^{2}me^{4}}{h^{3}}}.} Since the energy of a photon is E = h c λ , {\displaystyle E={\frac {hc}{\lambda }},} these results can be expressed in terms of the wavelength of the photon given off: 1 λ = R ( 1 n f 2 − 1 n i 2 ) . {\displaystyle {\frac {1}{\lambda }}=R\left({\frac {1}{n_{\text{f}}^{2}}}-{\frac {1}{n_{\text{i}}^{2}}}\right).} Bohr's derivation of the Rydberg constant, as well as the concomitant agreement of Bohr's formula with experimentally observed spectral lines of the Lyman (nf = 1), Balmer (nf = 2), and Paschen (nf = 3) series, and successful theoretical prediction of other lines not yet observed, was one reason that his model was immediately accepted.: 34  To apply to atoms with more than one electron, the Rydberg formula can be modified by replacing Z with Z − b or n with n − b where b is a constant representing a screening effect due to the inner-shell and other electrons (see Electron shell and the later discussion of the "Shell Model of the Atom" below).
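As a quick numerical illustration of the wavelength form of the Rydberg formula above, here is a minimal Python sketch (the author's own example, not part of the article); the value used for the hydrogen Rydberg constant is an assumed rounded figure.

```python
# Hydrogen wavelengths from 1/lambda = R (1/n_f^2 - 1/n_i^2), as quoted above.
R_H = 1.09678e7  # hydrogen Rydberg constant, m^-1 (assumed rounded value)

def wavelength_nm(n_f, n_i):
    """Wavelength (nm) of the photon emitted in the transition n_i -> n_f."""
    inv_lambda = R_H * (1 / n_f**2 - 1 / n_i**2)
    return 1e9 / inv_lambda

# First few lines of the Lyman (n_f = 1), Balmer (n_f = 2) and Paschen (n_f = 3) series:
for n_f, name in [(1, "Lyman"), (2, "Balmer"), (3, "Paschen")]:
    lines = [f"{wavelength_nm(n_f, n_i):.1f} nm" for n_i in range(n_f + 1, n_f + 4)]
    print(f"{name:8s}: {', '.join(lines)}")
```

The first Balmer line comes out near 656 nm (the observed Hα line), while the Lyman and Paschen series fall in the ultraviolet and infrared, consistent with the series named above.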
This screening correction was established empirically before Bohr presented his model. == Shell model (heavier atoms) == Bohr's original three papers in 1913 described mainly the electron configuration in lighter elements. Bohr called his electron shells "rings" in 1913. Atomic orbitals within shells did not exist at the time of his planetary model. Bohr explains in Part 3 of his famous 1913 paper that the maximum number of electrons in a shell is eight, writing: "We see, further, that a ring of n electrons cannot rotate in a single ring round a nucleus of charge ne unless n < 8." For smaller atoms, the electron shells would be filled as follows: "rings of electrons will only join together if they contain equal numbers of electrons; and that accordingly the numbers of electrons on inner rings will only be 2, 4, 8". However, in larger atoms the innermost shell would contain eight electrons, "on the other hand, the periodic system of the elements strongly suggests that already in neon N = 10 an inner ring of eight electrons will occur". Bohr wrote "From the above we are led to the following possible scheme for the arrangement of the electrons in light atoms:" In Bohr's third 1913 paper, Part III, called "Systems Containing Several Nuclei", he says that two atoms form molecules on a symmetrical plane and he reverts to describing hydrogen. The 1913 Bohr model did not discuss higher elements in detail, and John William Nicholson was one of the first to prove in 1914 that it couldn't work for lithium, but that it was an attractive theory for hydrogen and ionized helium. In 1921, following the work of chemists and others involved in work on the periodic table, Bohr extended the model of hydrogen to give an approximate model for heavier atoms. This gave a physical picture that reproduced many known atomic properties for the first time, although these properties were proposed contemporaneously with the identical work of chemist Charles Rugeley Bury. Bohr's partner in research during 1914 to 1916 was Walther Kossel, who corrected Bohr's work to show that electrons interacted through the outer rings, and Kossel called the rings "shells". Irving Langmuir is credited with the first viable arrangement of electrons in shells with only two in the first shell and going up to eight in the next according to the octet rule of 1904, although Kossel had already predicted a maximum of eight per shell in 1916. Heavier atoms have more protons in the nucleus, and more electrons to cancel the charge. Bohr took from these chemists the idea that each discrete orbit could only hold a certain number of electrons. Per Kossel, once an orbit is full, the next level would have to be used. This gives the atom a shell structure designed by Kossel, Langmuir, and Bury, in which each shell corresponds to a Bohr orbit. This model is even more approximate than the model of hydrogen, because it treats the electrons in each shell as non-interacting. But the repulsions of electrons are taken into account somewhat by the phenomenon of screening. The electrons in outer orbits do not only orbit the nucleus, but they also move around the inner electrons, so the effective charge Z that they feel is reduced by the number of the electrons in the inner orbit. For example, the lithium atom has two electrons in the lowest 1s orbit, and these orbit at Z = 2. Each one sees the nuclear charge of Z = 3 minus the screening effect of the other, which crudely reduces the nuclear charge by 1 unit. This means that the innermost electrons orbit at approximately 1/2 the Bohr radius.
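A minimal numerical sketch of this screening estimate for lithium's inner electrons (the author's illustration; the assumed Bohr-radius value and the helper function are not from the article):

```python
# Screening estimate for the two 1s electrons of lithium (Z = 3): each is
# screened by the other, so it sees an effective charge of roughly Z_eff ~ 2.
a0 = 5.29e-11  # Bohr radius, m (assumed rounded value)

def orbit_radius(n, Z_eff):
    """Bohr-model radius r = n^2 a0 / Z_eff."""
    return n**2 * a0 / Z_eff

r_inner = orbit_radius(n=1, Z_eff=2)
print(f"inner-electron radius ~ {r_inner / a0:.2f} a0")  # ~0.5 a0, i.e. about half the Bohr radius
```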
The outermost electron in lithium orbits at roughly the Bohr radius, since the two inner electrons reduce the nuclear charge by 2. This outer electron should be at nearly one Bohr radius from the nucleus. Because the electrons strongly repel each other, the effective charge description is very approximate; the effective charge Z doesn't usually come out to be an integer. The shell model was able to qualitatively explain many of the mysterious properties of atoms which became codified in the late 19th century in the periodic table of the elements. One property was the size of atoms, which could be determined approximately by measuring the viscosity of gases and density of pure crystalline solids. Atoms tend to get smaller toward the right in the periodic table, and become much larger at the next line of the table. Atoms to the right of the table tend to gain electrons, while atoms to the left tend to lose them. Every element on the last column of the table is chemically inert (noble gas). In the shell model, this phenomenon is explained by shell-filling. Successive atoms become smaller because they are filling orbits of the same size, until the orbit is full, at which point the next atom in the table has a loosely bound outer electron, causing it to expand. The first Bohr orbit is filled when it has two electrons, which explains why helium is inert. The second orbit allows eight electrons, and when it is full the atom is neon, again inert. The third orbit contains eight again, except that in the more correct Sommerfeld treatment (reproduced in modern quantum mechanics) there are extra "d" electrons. The third orbit may hold an extra 10 d electrons, but these positions are not filled until a few more orbitals from the next level are filled (filling the n = 3 d-orbitals produces the 10 transition elements). The irregular filling pattern is an effect of interactions between electrons, which are not taken into account in either the Bohr or Sommerfeld models and which are difficult to calculate even in the modern treatment. == Moseley's law and calculation (K-alpha X-ray emission lines) == Niels Bohr said in 1962: "You see actually the Rutherford work was not taken seriously. We cannot understand today, but it was not taken seriously at all. There was no mention of it any place. The great change came from Moseley." In 1913, Henry Moseley found an empirical relationship between the strongest X-ray line emitted by atoms under electron bombardment (then known as the K-alpha line), and their atomic number Z. Moseley's empirical formula was found to be derivable from Rydberg's formula and later Bohr's formula (Moseley actually mentions only Ernest Rutherford and Antonius Van den Broek in terms of models, as these had been published before Moseley's work, and Moseley's 1913 paper was published the same month as the first Bohr model paper). Two additional assumptions were needed: [1] that this X-ray line came from a transition between energy levels with quantum numbers 1 and 2, and [2] that the atomic number Z, when used in the formula for atoms heavier than hydrogen, should be diminished by 1, to (Z − 1)2. Moseley wrote to Bohr, puzzled about his results, but Bohr was not able to help. At that time, he thought that the postulated innermost "K" shell of electrons should have at least four electrons, not the two which would have neatly explained the result. So Moseley published his results without a theoretical explanation.
It was Walther Kossel in 1914 and in 1916 who explained that in the periodic table new elements would be created as electrons were added to the outer shell. In Kossel's paper, he writes: "This leads to the conclusion that the electrons, which are added further, should be put into concentric rings or shells, on each of which ... only a certain number of electrons—namely, eight in our case—should be arranged. As soon as one ring or shell is completed, a new one has to be started for the next element; the number of electrons, which are most easily accessible, and lie at the outermost periphery, increases again from element to element and, therefore, in the formation of each new shell the chemical periodicity is repeated." Later, chemist Langmuir realized that the effect was caused by charge screening, with an inner shell containing only 2 electrons. In his 1919 paper, Irving Langmuir postulated the existence of "cells" which could each contain only two electrons, and these were arranged in "equidistant layers". In the Moseley experiment, one of the innermost electrons in the atom is knocked out, leaving a vacancy in the lowest Bohr orbit, which contains a single remaining electron. This vacancy is then filled by an electron from the next orbit, which has n=2. But the n=2 electrons see an effective charge of Z − 1, which is the value appropriate for the charge of the nucleus when a single electron remains in the lowest Bohr orbit to screen the nuclear charge +Z and lower it by 1 (due to the electron's negative charge screening the nuclear positive charge). The energy gained by an electron dropping from the second shell to the first gives Moseley's law for K-alpha lines, E = h ν = E i − E f = R E ( Z − 1 ) 2 ( 1 1 2 − 1 2 2 ) , {\displaystyle E=h\nu =E_{i}-E_{f}=R_{\mathrm {E} }(Z-1)^{2}\left({\frac {1}{1^{2}}}-{\frac {1}{2^{2}}}\right),} or f = ν = R v ( 3 4 ) ( Z − 1 ) 2 = ( 2.46 × 10 15 Hz ) ( Z − 1 ) 2 . {\displaystyle f=\nu =R_{\mathrm {v} }\left({\frac {3}{4}}\right)(Z-1)^{2}=(2.46\times 10^{15}~{\text{Hz}})(Z-1)^{2}.} Here, Rv = RE/h is the Rydberg constant expressed as a frequency, equal to 3.28×1015 Hz. For values of Z between 11 and 31 this latter relationship had been empirically derived by Moseley, in a simple (linear) plot of the square root of X-ray frequency against atomic number (however, for silver, Z = 47, the experimentally obtained screening term should be replaced by 0.4). Notwithstanding its restricted validity, Moseley's law not only established the objective meaning of atomic number, but as Bohr noted, it also did more than the Rydberg derivation to establish the validity of the Rutherford/Van den Broek/Bohr nuclear model of the atom, with atomic number (place on the periodic table) standing for whole units of nuclear charge. Van den Broek had published his model in January 1913 showing the periodic table was arranged according to charge, while Bohr's atomic model was not published until July 1913. The K-alpha line of Moseley's time is now known to be a pair of close lines, written as (Kα1 and Kα2) in Siegbahn notation. == Shortcomings == The Bohr model gives an incorrect value L = ħ for the ground state orbital angular momentum: the angular momentum in the true ground state is known to be zero from experiment.
Although mental pictures fail somewhat at these levels of scale, an electron in the lowest modern "orbital" with no orbital momentum, may be thought of as not to revolve "around" the nucleus at all, but merely to go tightly around it in an ellipse with zero area (this may be pictured as "back and forth", without striking or interacting with the nucleus). This is only reproduced in a more sophisticated semiclassical treatment like Sommerfeld's. Still, even the most sophisticated semiclassical model fails to explain the fact that the lowest energy state is spherically symmetric – it doesn't point in any particular direction. In modern quantum mechanics, the electron in hydrogen is a spherical cloud of probability that grows denser near the nucleus. The rate-constant of probability-decay in hydrogen is equal to the inverse of the Bohr radius, but since Bohr worked with circular orbits, not zero area ellipses, the fact that these two numbers exactly agree is considered a "coincidence". (However, many such coincidental agreements are found between the semiclassical vs. full quantum mechanical treatment of the atom; these include identical energy levels in the hydrogen atom and the derivation of a fine-structure constant, which arises from the relativistic Bohr–Sommerfeld model (see below) and which happens to be equal to an entirely different concept, in full modern quantum mechanics). The Bohr model also failed to explain: Much of the spectra of larger atoms. At best, it can make predictions about the K-alpha and some L-alpha X-ray emission spectra for larger atoms, if two additional ad hoc assumptions are made. Emission spectra for atoms with a single outer-shell electron (atoms in the lithium group) can also be approximately predicted. Also, if the empiric electron–nuclear screening factors for many atoms are known, many other spectral lines can be deduced from the information, in similar atoms of differing elements, via the Ritz–Rydberg combination principles (see Rydberg formula). All these techniques essentially make use of Bohr's Newtonian energy-potential picture of the atom. The relative intensities of spectral lines; although in some simple cases, Bohr's formula or modifications of it, was able to provide reasonable estimates (for example, calculations by Kramers for the Stark effect). The existence of fine structure and hyperfine structure in spectral lines, which are known to be due to a variety of relativistic and subtle effects, as well as complications from electron spin. The Zeeman effect – changes in spectral lines due to external magnetic fields; these are also due to more complicated quantum principles interacting with electron spin and orbital magnetic fields. Doublets and triplets appear in the spectra of some atoms as very close pairs of lines. Bohr's model cannot say why some energy levels should be very close together. Multi-electron atoms do not have energy levels predicted by the model. It does not work for (neutral) helium. == Refinements == Several enhancements to the Bohr model were proposed, most notably the Sommerfeld or Bohr–Sommerfeld models, which suggested that electrons travel in elliptical orbits around a nucleus instead of the Bohr model's circular orbits. 
This model supplemented the quantized angular momentum condition of the Bohr model with an additional radial quantization condition, the Wilson–Sommerfeld quantization condition ∫ 0 T p r d q r = n h , {\displaystyle \int _{0}^{T}p_{\text{r}}\,dq_{\text{r}}=nh,} where pr is the radial momentum canonically conjugate to the coordinate qr, which is the radial position, and T is one full orbital period. The integral is the action of action-angle coordinates. This condition, suggested by the correspondence principle, is the only one possible, since the quantum numbers are adiabatic invariants. The Bohr–Sommerfeld model was fundamentally inconsistent and led to many paradoxes. The magnetic quantum number measured the tilt of the orbital plane relative to the xy plane, and it could only take a few discrete values. This contradicted the obvious fact that an atom could have any orientation relative to the coordinates, without restriction. The Sommerfeld quantization can be performed in different canonical coordinates and sometimes gives different answers. The incorporation of radiation corrections was difficult, because it required finding action-angle coordinates for a combined radiation/atom system, which is difficult when the radiation is allowed to escape. The whole theory did not extend to non-integrable motions, which meant that many systems could not be treated even in principle. In the end, the model was replaced by the modern quantum-mechanical treatment of the hydrogen atom, which was first given by Wolfgang Pauli in 1925, using Heisenberg's matrix mechanics. The current picture of the hydrogen atom is based on the atomic orbitals of wave mechanics, which Erwin Schrödinger developed in 1926. However, this is not to say that the Bohr–Sommerfeld model was without its successes. Calculations based on the Bohr–Sommerfeld model were able to accurately explain a number of more complex atomic spectral effects. For example, up to first-order perturbations, the Bohr model and quantum mechanics make the same predictions for the spectral line splitting in the Stark effect. At higher-order perturbations, however, the Bohr model and quantum mechanics differ, and measurements of the Stark effect under high field strengths helped confirm the correctness of quantum mechanics over the Bohr model. The prevailing theory behind this difference lies in the shapes of the orbitals of the electrons, which vary according to the energy state of the electron. The Bohr–Sommerfeld quantization conditions lead to questions in modern mathematics. Consistent semiclassical quantization condition requires a certain type of structure on the phase space, which places topological limitations on the types of symplectic manifolds which can be quantized. In particular, the symplectic form should be the curvature form of a connection of a Hermitian line bundle, which is called a prequantization. Bohr also updated his model in 1922, assuming that certain numbers of electrons (for example, 2, 8, and 18) correspond to stable "closed shells". == Model of the chemical bond == Niels Bohr proposed a model of the atom and a model of the chemical bond. According to his model for a diatomic molecule, the electrons of the atoms of the molecule form a rotating ring whose plane is perpendicular to the axis of the molecule and equidistant from the atomic nuclei. 
The dynamic equilibrium of the molecular system is achieved through the balance of forces between the forces of attraction of nuclei to the plane of the ring of electrons and the forces of mutual repulsion of the nuclei. The Bohr model of the chemical bond took into account the Coulomb repulsion – the electrons in the ring are at the maximum distance from each other. == Symbolism of planetary atomic models == Although Bohr's atomic model was superseded by quantum models in the 1920s, the visual image of electrons orbiting a nucleus has remained the popular concept of atoms. The concept of an atom as a tiny planetary system has been widely used as a symbol for atoms and even for "atomic" energy (even though this is more properly considered nuclear energy).: 58  Examples of its use over the past century include but are not limited to: The logo of the United States Atomic Energy Commission, which was in part responsible for its later usage in relation to nuclear fission technology in particular. The flag of the International Atomic Energy Agency is a "crest-and-spinning-atom emblem", enclosed in olive branches. The US minor league baseball Albuquerque Isotopes' logo shows baseballs as electrons orbiting a large letter "A". A similar symbol, the atomic whirl, was chosen as the symbol for the American Atheists, and has come to be used as a symbol of atheism in general. The Unicode Miscellaneous Symbols code point U+269B (⚛) for an atom looks like a planetary atom model. The television show The Big Bang Theory uses a planetary-like image in its print logo. The JavaScript library React uses planetary-like image as its logo. On maps, it is generally used to indicate a nuclear power installation. == See also == == References == === Footnotes === === Primary sources === Bohr, N. (July 1913). "I. On the constitution of atoms and molecules". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 26 (151): 1–25. Bibcode:1913PMag...26....1B. doi:10.1080/14786441308634955. Bohr, N. (September 1913). "XXXVII. On the constitution of atoms and molecules". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 26 (153): 476–502. Bibcode:1913PMag...26..476B. doi:10.1080/14786441308634993. Bohr, N. (1 November 1913). "LXXIII. On the constitution of atoms and molecules". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 26 (155): 857–875. Bibcode:1913PMag...26..857B. doi:10.1080/14786441308635031. Bohr, N. (October 1913). "The Spectra of Helium and Hydrogen". Nature. 92 (2295): 231–232. Bibcode:1913Natur..92..231B. doi:10.1038/092231d0. S2CID 11988018. Bohr, N. (March 1921). "Atomic Structure". Nature. 107 (2682): 104–107. Bibcode:1921Natur.107..104B. doi:10.1038/107104a0. S2CID 4035652. A. Einstein (1917). "Zum Quantensatz von Sommerfeld und Epstein". Verhandlungen der Deutschen Physikalischen Gesellschaft. 19: 82–92. Reprinted in The Collected Papers of Albert Einstein, A. Engel translator, (1997) Princeton University Press, Princeton. 6 p. 434. (provides an elegant reformulation of the Bohr–Sommerfeld quantization conditions, as well as an important insight into the quantization of non-integrable (chaotic) dynamical systems.) de Broglie, Maurice; Langevin, Paul; Solvay, Ernest; Einstein, Albert (1912). La théorie du rayonnement et les quanta : rapports et discussions de la réunion tenue à Bruxelles, du 30 octobre au 3 novembre 1911, sous les auspices de M.E. Solvay (in French). Gauthier-Villars. OCLC 1048217622. 
== Further reading == Linus Carl Pauling (1970). "Chapter 5-1". General Chemistry (3rd ed.). San Francisco: W.H. Freeman & Co. Reprint: Linus Pauling (1988). General Chemistry. New York: Dover Publications. ISBN 0-486-65622-5. George Gamow (1985). "Chapter 2". Thirty Years That Shook Physics. Dover Publications. Walter J. Lehmann (1972). "Chapter 18". Atomic and Molecular Structure: the development of our concepts. John Wiley and Sons. ISBN 0-471-52440-9. Paul Tipler and Ralph Llewellyn (2002). Modern Physics (4th ed.). W. H. Freeman. ISBN 0-7167-4345-0. Klaus Hentschel: Elektronenbahnen, Quantensprünge und Spektren, in: Charlotte Bigg & Jochen Hennig (eds.) Atombilder. Ikonografien des Atoms in Wissenschaft und Öffentlichkeit des 20. Jahrhunderts, Göttingen: Wallstein-Verlag 2009, pp. 51–61 Steven and Susan Zumdahl (2010). "Chapter 7.4". Chemistry (8th ed.). Brooks/Cole. ISBN 978-0-495-82992-8. Kragh, Helge (November 2011). "Conceptual objections to the Bohr atomic theory — do electrons have a 'free will'?". The European Physical Journal H. 36 (3): 327–352. Bibcode:2011EPJH...36..327K. doi:10.1140/epjh/e2011-20031-x. S2CID 120859582. == External links == Standing waves in Bohr's atomic model—An interactive simulation to intuitively explain the quantization condition of standing waves in Bohr's atomic mode
Wikipedia/Bohr_Model
The Bohr–Sommerfeld model (also known as the Sommerfeld model or Bohr–Sommerfeld theory) was an extension of the Bohr model to allow elliptical orbits of electrons around an atomic nucleus. Bohr–Sommerfeld theory is named after Danish physicist Niels Bohr and German physicist Arnold Sommerfeld. Sommerfeld showed that, if electronic orbits are elliptical instead of circular (as in Bohr's model of the atom), the fine-structure of the hydrogen atom can be described. The Bohr–Sommerfeld model supplemented the quantized angular momentum condition of the Bohr model with a radial quantization condition (due to William Wilson, the Wilson–Sommerfeld quantization condition): ∫ 0 T p r d q r = n h , {\displaystyle \int _{0}^{T}p_{r}\,dq_{r}=nh,} where pr is the radial momentum canonically conjugate to the coordinate qr, which is the radial position, and T is one full orbital period. The integral is the action of action-angle coordinates. This condition, suggested by the correspondence principle, is the only one possible, since the quantum numbers are adiabatic invariants. == History == In 1913, Niels Bohr displayed rudiments of the later defined correspondence principle and used it to formulate a model of the hydrogen atom which explained its line spectrum. In the next few years Arnold Sommerfeld extended the quantum rule to arbitrary integrable systems, making use of the principle of adiabatic invariance of the quantum numbers introduced by Hendrik Lorentz and Albert Einstein. Sommerfeld made a crucial contribution by quantizing the z-component of the angular momentum, which in the old quantum era was called "space quantization" (German: Richtungsquantelung). This allowed the orbits of the electron to be ellipses instead of circles, and introduced the concept of quantum degeneracy. The theory would have correctly explained the Zeeman effect, except for the issue of electron spin. Sommerfeld's model was much closer to the modern quantum mechanical picture than Bohr's. In the 1950s Joseph Keller updated Bohr–Sommerfeld quantization using Einstein's interpretation of 1917, now known as the Einstein–Brillouin–Keller method. In 1971, Martin Gutzwiller took into account that this method only works for integrable systems and derived a semiclassical way of quantizing chaotic systems from path integrals. == Predictions == The Sommerfeld model predicted that the magnetic moment of an atom measured along an axis will only take on discrete values, a result which seems to contradict rotational invariance but which was confirmed by the Stern–Gerlach experiment. This was a significant step in the development of quantum mechanics. It also described the possibility of atomic energy levels being split by a magnetic field (called the Zeeman effect). Walther Kossel worked with Bohr and Sommerfeld on the Bohr–Sommerfeld model of the atom, introducing two electrons in the first shell and eight in the second. == Issues == The Bohr–Sommerfeld model was fundamentally inconsistent and led to many paradoxes. The magnetic quantum number measured the tilt of the orbital plane relative to the xy plane, and it could only take a few discrete values. This contradicted the obvious fact that an atom could be turned this way and that relative to the coordinates without restriction. The Sommerfeld quantization can be performed in different canonical coordinates and sometimes gives different answers.
The incorporation of radiation corrections was difficult, because it required finding action-angle coordinates for a combined radiation/atom system, which is difficult when the radiation is allowed to escape. The whole theory did not extend to non-integrable motions, which meant that many systems could not be treated even in principle. In the end, the model was replaced by the modern quantum-mechanical treatment of the hydrogen atom, which was first given by Wolfgang Pauli in 1925, using Heisenberg's matrix mechanics. The current picture of the hydrogen atom is based on the atomic orbitals of wave mechanics, which Erwin Schrödinger developed in 1926. However, this is not to say that the Bohr–Sommerfeld model was without its successes. Calculations based on the Bohr–Sommerfeld model were able to accurately explain a number of more complex atomic spectral effects. For example, up to first-order perturbations, the Bohr model and quantum mechanics make the same predictions for the spectral line splitting in the Stark effect. At higher-order perturbations, however, the Bohr model and quantum mechanics differ, and measurements of the Stark effect under high field strengths helped confirm the correctness of quantum mechanics over the Bohr model. The prevailing theory behind this difference lies in the shapes of the orbitals of the electrons, which vary according to the energy state of the electron. The Bohr–Sommerfeld quantization conditions lead to questions in modern mathematics. Consistent semiclassical quantization condition requires a certain type of structure on the phase space, which places topological limitations on the types of symplectic manifolds which can be quantized. In particular, the symplectic form should be the curvature form of a connection of a Hermitian line bundle, which is called a prequantization. == Relativistic orbit == Arnold Sommerfeld derived the relativistic solution of atomic energy levels. 
We will start this derivation with the relativistic equation for energy in the electric potential W = m 0 c 2 ( 1 1 − v 2 c 2 − 1 ) − k Z e 2 r {\displaystyle W={m_{\mathrm {0} }c^{2}}\left({\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}-1\right)-k{\frac {Ze^{2}}{r}}} After substitution u = 1 r {\displaystyle u={\frac {1}{r}}} we get 1 1 − v 2 c 2 = 1 + W m 0 c 2 + k Z e 2 m 0 c 2 u {\displaystyle {\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}=1+{\frac {W}{m_{\mathrm {0} }c^{2}}}+k{\frac {Ze^{2}}{m_{\mathrm {0} }c^{2}}}u} For momentum p r = m r ˙ {\displaystyle p_{\mathrm {r} }=m{\dot {r}}} , p φ = m r 2 φ ˙ {\displaystyle p_{\mathrm {\varphi } }=mr^{2}{\dot {\varphi }}} and their ratio p r p φ = − d u d φ {\displaystyle {\frac {p_{\mathrm {r} }}{p_{\mathrm {\varphi } }}}=-{\frac {du}{d\varphi }}} the equation of motion is (see Binet equation) d 2 u d φ 2 = − ( 1 − k 2 Z 2 e 4 c 2 p φ 2 ) u + m 0 k Z e 2 p φ 2 ( 1 + W m 0 c 2 ) = − ω 0 2 u + K {\displaystyle {\frac {d^{2}u}{d\varphi ^{2}}}=-\left(1-k^{2}{\frac {Z^{2}e^{4}}{c^{2}p_{\mathrm {\varphi } }^{2}}}\right)u+{\frac {m_{\mathrm {0} }kZe^{2}}{p_{\mathrm {\varphi } }^{2}}}\left(1+{\frac {W}{m_{\mathrm {0} }c^{2}}}\right)=-\omega _{\mathrm {0} }^{2}u+K} with solution u = 1 r = K + A cos ⁡ ω 0 φ {\displaystyle u={\frac {1}{r}}=K+A\cos \omega _{\mathrm {0} }\varphi } The angular shift of periapsis per revolution is given by φ s = 2 π ( 1 ω 0 − 1 ) ≈ 4 π 3 k 2 Z 2 e 4 c 2 n φ 2 h 2 {\displaystyle \varphi _{\mathrm {s} }=2\pi \left({\frac {1}{\omega _{\mathrm {0} }}}-1\right)\approx 4\pi ^{3}k^{2}{\frac {Z^{2}e^{4}}{c^{2}n_{\mathrm {\varphi } }^{2}h^{2}}}} With the quantum conditions ∮ p φ d φ = 2 π p φ = n φ h {\displaystyle \oint p_{\mathrm {\varphi } }\,d\varphi =2\pi p_{\mathrm {\varphi } }=n_{\mathrm {\varphi } }h} and ∮ p r d r = p φ ∮ ( 1 r d r d φ ) 2 d φ = n r h {\displaystyle \oint p_{\mathrm {r} }\,dr=p_{\mathrm {\varphi } }\oint \left({\frac {1}{r}}{\frac {dr}{d\varphi }}\right)^{2}\,d\varphi =n_{\mathrm {r} }h} we will obtain energies W m 0 c 2 = ( 1 + α 2 Z 2 ( n r + n φ 2 − α 2 Z 2 ) 2 ) − 1 / 2 − 1 {\displaystyle {\frac {W}{m_{\mathrm {0} }c^{2}}}=\left(1+{\frac {\alpha ^{2}Z^{2}}{\left(n_{\mathrm {r} }+{\sqrt {n_{\mathrm {\varphi } }^{2}-\alpha ^{2}Z^{2}}}\right)^{2}}}\right)^{-1/2}-1} where α {\displaystyle \alpha } is the fine-structure constant. This solution (using substitutions for quantum numbers) is equivalent to the solution of the Dirac equation. Nevertheless, both solutions fail to predict the Lamb shifts. == See also == Bohr model Old quantum theory == References ==
Wikipedia/Bohr–Sommerfeld_model
Zero-point energy (ZPE) is the lowest possible energy that a quantum mechanical system may have. Unlike in classical mechanics, quantum systems constantly fluctuate in their lowest energy state as described by the Heisenberg uncertainty principle. Therefore, even at absolute zero, atoms and molecules retain some vibrational motion. Apart from atoms and molecules, the empty space of the vacuum also has these properties. According to quantum field theory, the universe can be thought of not as isolated particles but continuous fluctuating fields: matter fields, whose quanta are fermions (i.e., leptons and quarks), and force fields, whose quanta are bosons (e.g., photons and gluons). All these fields have zero-point energy. These fluctuating zero-point fields lead to a kind of reintroduction of an aether in physics since some systems can detect the existence of this energy. However, this aether cannot be thought of as a physical medium if it is to be Lorentz invariant such that there is no contradiction with Albert Einstein’s theory of special relativity. The notion of a zero-point energy is also important for cosmology, and physics currently lacks a full theoretical model for understanding zero-point energy in this context; in particular, the discrepancy between theorized and observed vacuum energy in the universe is a source of major contention. Yet according to Einstein's theory of general relativity, any such energy would gravitate, and the experimental evidence from the expansion of the universe, dark energy and the Casimir effect shows any such energy to be exceptionally weak. One proposal that attempts to address this issue is to say that the fermion field has a negative zero-point energy, while the boson field has positive zero-point energy and thus these energies somehow cancel out each other. This idea would be true if supersymmetry were an exact symmetry of nature; however, the Large Hadron Collider at CERN has so far found no evidence to support it. Moreover, it is known that if supersymmetry is valid at all, it is at most a broken symmetry, only true at very high energies, and no one has been able to show a theory where zero-point cancellations occur in the low-energy universe we observe today. This discrepancy is known as the cosmological constant problem and it is one of the greatest unsolved mysteries in physics. Many physicists believe that "the vacuum holds the key to a full understanding of nature". == Etymology and terminology == The term zero-point energy (ZPE) is a translation from the German Nullpunktsenergie. Sometimes used interchangeably with it are the terms zero-point radiation and ground state energy. The term zero-point field (ZPF) can be used when referring to a specific vacuum field, for instance the QED vacuum which specifically deals with quantum electrodynamics (e.g., electromagnetic interactions between photons, electrons and the vacuum) or the QCD vacuum which deals with quantum chromodynamics (e.g., color charge interactions between quarks, gluons and the vacuum). A vacuum can be viewed not as empty space but as the combination of all zero-point fields. In quantum field theory this combination of fields is called the vacuum state, and its associated zero-point energy is called the vacuum energy. == Overview == In classical mechanics all particles can be thought of as having some energy made up of their potential energy and kinetic energy. 
Temperature, for example, arises from the intensity of random particle motion caused by kinetic energy (known as Brownian motion). As temperature is reduced to absolute zero, it might be thought that all motion ceases and particles come completely to rest. In fact, however, kinetic energy is retained by particles even at the lowest possible temperature. The random motion corresponding to this zero-point energy never vanishes; it is a consequence of the uncertainty principle of quantum mechanics. The uncertainty principle states that no object can ever have precise values of position and velocity simultaneously. The total energy of a quantum mechanical object (potential and kinetic) is described by its Hamiltonian which also describes the system as a harmonic oscillator, or wave function, that fluctuates between various energy states (see wave-particle duality). All quantum mechanical systems undergo fluctuations even in their ground state, a consequence of their wave-like nature. The uncertainty principle requires every quantum mechanical system to have a fluctuating zero-point energy greater than the minimum of its classical potential well. This results in motion even at absolute zero. For example, liquid helium does not freeze under atmospheric pressure regardless of temperature due to its zero-point energy. Given the equivalence of mass and energy expressed by Albert Einstein's E = mc2, any point in space that contains energy can be thought of as having mass to create particles. Modern physics has developed quantum field theory (QFT) to understand the fundamental interactions between matter and forces; it treats every single point of space as a quantum harmonic oscillator. According to QFT the universe is made up of matter fields, whose quanta are fermions (i.e. leptons and quarks), and force fields, whose quanta are bosons (e.g. photons and gluons). All these fields have zero-point energy. Recent experiments support the idea that particles themselves can be thought of as excited states of the underlying quantum vacuum, and that all properties of matter are merely vacuum fluctuations arising from interactions of the zero-point field. The idea that "empty" space can have an intrinsic energy associated with it, and that there is no such thing as a "true vacuum" is seemingly unintuitive. It is often argued that the entire universe is completely bathed in the zero-point radiation, and as such it can add only some constant amount to calculations. Physical measurements will therefore reveal only deviations from this value. For many practical calculations zero-point energy is dismissed by fiat in the mathematical model as a term that has no physical effect. Such treatment causes problems however, as in Einstein's theory of general relativity the absolute energy value of space is not an arbitrary constant and gives rise to the cosmological constant. For decades most physicists assumed that there was some undiscovered fundamental principle that will remove the infinite zero-point energy (discussed further below) and make it completely vanish. If the vacuum has no intrinsic, absolute value of energy it will not gravitate. It was believed that as the universe expands from the aftermath of the Big Bang, the energy contained in any unit of empty space will decrease as the total energy spreads out to fill the volume of the universe; galaxies and all matter in the universe should begin to decelerate. 
This possibility was ruled out in 1998 by the discovery that the expansion of the universe is not slowing down but is in fact accelerating, meaning empty space does indeed have some intrinsic energy. The discovery of dark energy is best explained by zero-point energy, though it still remains a mystery as to why the value appears to be so small compared to the huge value obtained through theory – the cosmological constant problem. Many physical effects attributed to zero-point energy have been experimentally verified, such as spontaneous emission, Casimir force, Lamb shift, magnetic moment of the electron and Delbrück scattering. These effects are usually called "radiative corrections". In more complex nonlinear theories (e.g. QCD) zero-point energy can give rise to a variety of complex phenomena such as multiple stable states, symmetry breaking, chaos and emergence. Active areas of research include the effects of virtual particles, quantum entanglement, the difference (if any) between inertial and gravitational mass, variation in the speed of light, a reason for the observed value of the cosmological constant and the nature of dark energy. == History == === Early aether theories === Zero-point energy evolved from historical ideas about the vacuum. To Aristotle the vacuum was τὸ κενόν, "the empty"; i.e., space independent of body. He believed this concept violated basic physical principles and asserted that the elements of fire, air, earth, and water were not made of atoms, but were continuous. To the atomists the concept of emptiness had absolute character: it was the distinction between existence and nonexistence. Debate about the characteristics of the vacuum was largely confined to the realm of philosophy; it was not until much later, with the beginning of the Renaissance, that Otto von Guericke invented the first vacuum pump and the first testable scientific ideas began to emerge. It was thought that a totally empty volume of space could be created by simply removing all gases. This was the first generally accepted concept of the vacuum. Late in the 19th century, however, it became apparent that the evacuated region still contained thermal radiation. The existence of the aether as a substitute for a true void was the most prevalent theory of the time. According to the successful electromagnetic aether theory based upon Maxwell's electrodynamics, this all-encompassing aether was endowed with energy and hence very different from nothingness. The fact that electromagnetic and gravitational phenomena were transmitted in empty space was considered evidence that their associated aethers were part of the fabric of space itself. However, Maxwell noted that for the most part these aethers were ad hoc: To those who maintained the existence of a plenum as a philosophical principle, nature's abhorrence of a vacuum was a sufficient reason for imagining an all-surrounding aether ... Aethers were invented for the planets to swim in, to constitute electric atmospheres and magnetic effluvia, to convey sensations from one part of our bodies to another, and so on, till a space had been filled three or four times with aethers. Moreover, the results of the Michelson–Morley experiment in 1887 were the first strong evidence that the then-prevalent aether theories were seriously flawed, and initiated a line of research that eventually led to special relativity, which ruled out the idea of a stationary aether altogether.
To scientists of the period, it seemed that a true vacuum in space might be created by cooling and thus eliminating all radiation or energy. From this idea evolved the second concept of achieving a real vacuum: cool a region of space down to absolute zero temperature after evacuation. Absolute zero was technically impossible to achieve in the 19th century, so the debate remained unsolved. === Second quantum theory === In 1900, Max Planck derived the average energy ε of a single energy radiator, e.g., a vibrating atomic unit, as a function of absolute temperature: ε = h ν e h ν / ( k T ) − 1 , {\displaystyle \varepsilon ={\frac {h\nu }{e^{h\nu /(kT)}-1}}\,,} where h is the Planck constant, ν is the frequency, k is the Boltzmann constant, and T is the absolute temperature. The zero-point energy makes no contribution to Planck's original law, as its existence was unknown to Planck in 1900. The concept of zero-point energy was developed by Max Planck in Germany in 1911 as a corrective term added to a zero-grounded formula developed in his original quantum theory in 1900. In 1912, Max Planck published the first journal article to describe the discontinuous emission of radiation, based on the discrete quanta of energy. In Planck's "second quantum theory" resonators absorbed energy continuously, but emitted energy in discrete energy quanta only when they reached the boundaries of finite cells in phase space, where their energies became integer multiples of hν. This theory led Planck to his new radiation law, but in this version energy resonators possessed a zero-point energy, the smallest average energy a resonator could take on. Planck's radiation equation contained a residual energy factor, one ⁠hν/2⁠, as an additional term dependent on the frequency ν, which was greater than zero (where h is the Planck constant). It is therefore widely agreed that "Planck's equation marked the birth of the concept of zero-point energy." In a series of papers from 1911 to 1913, Planck found the average energy of an oscillator to be: ε = h ν 2 + h ν e h ν / ( k T ) − 1 . {\displaystyle \varepsilon ={\frac {h\nu }{2}}+{\frac {h\nu }{e^{h\nu /(kT)}-1}}~.} Soon, the idea of zero-point energy attracted the attention of Albert Einstein and his assistant Otto Stern. In 1913 they published a paper that attempted to prove the existence of zero-point energy by calculating the specific heat of hydrogen gas and compared it with the experimental data. However, after assuming they had succeeded, they retracted support for the idea shortly after publication because they found Planck's second theory may not apply to their example. In a letter to Paul Ehrenfest of the same year Einstein declared zero-point energy "dead as a doornail". Zero-point energy was also invoked by Peter Debye, who noted that zero-point energy of the atoms of a crystal lattice would cause a reduction in the intensity of the diffracted radiation in X-ray diffraction even as the temperature approached absolute zero. In 1916 Walther Nernst proposed that empty space was filled with zero-point electromagnetic radiation. With the development of general relativity Einstein found the energy density of the vacuum to contribute towards a cosmological constant in order to obtain static solutions to his field equations; the idea that empty space, or the vacuum, could have some intrinsic energy associated with it had returned, with Einstein stating in 1920: There is a weighty argument to be adduced in favour of the aether hypothesis. 
To deny the aether is ultimately to assume that empty space has no physical qualities whatever. The fundamental facts of mechanics do not harmonize with this view ... according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an aether. According to the general theory of relativity space without aether is unthinkable; for in such space there not only would be no propagation of light, but also no possibility of existence for standards of space and time (measuring-rods and clocks), nor therefore any space-time intervals in the physical sense. But this aether may not be thought of as endowed with the quality characteristic of ponderable media, as consisting of parts which may be tracked through time. The idea of motion may not be applied to it. Kurt Bennewitz and Francis Simon (1923), who worked at Walther Nernst's laboratory in Berlin, studied the melting process of chemicals at low temperatures. Their calculations of the melting points of hydrogen, argon and mercury led them to conclude that the results provided evidence for a zero-point energy. Moreover, they suggested correctly, as was later verified by Simon (1934), that this quantity was responsible for the difficulty in solidifying helium even at absolute zero. In 1924 Robert Mulliken provided direct evidence for the zero-point energy of molecular vibrations by comparing the band spectrum of 10BO and 11BO: the isotopic difference in the transition frequencies between the ground vibrational states of two different electronic levels would vanish if there were no zero-point energy, in contrast to the observed spectra. Then just a year later in 1925, with the development of matrix mechanics in Werner Heisenberg's article "Quantum theoretical re-interpretation of kinematic and mechanical relations" the zero-point energy was derived from quantum mechanics. In 1913 Niels Bohr had proposed what is now called the Bohr model of the atom, but despite this it remained a mystery as to why electrons do not fall into their nuclei. According to classical ideas, the fact that an accelerating charge loses energy by radiating implied that an electron should spiral into the nucleus and that atoms should not be stable. This problem of classical mechanics was nicely summarized by James Hopwood Jeans in 1915: "There would be a very real difficulty in supposing that the (force) law ⁠1/r2⁠ held down to the zero values of r. For the force between two charges at zero distance would be infinite; we should have charges of opposite sign continually rushing together and, when once together, no force would be adequate to separate them. [...] Thus the matter in the universe would tend to shrink into nothing or to diminish indefinitely in size." The resolution to this puzzle came in 1926 when Erwin Schrödinger introduced the Schrödinger equation. This equation explained the new, non-classical fact that an electron confined to be close to a nucleus would necessarily have a large kinetic energy so that the minimum total energy (kinetic plus potential) actually occurs at some positive separation rather than at zero separation; in other words, zero-point energy is essential for atomic stability. === Quantum field theory and beyond === In 1926, Pascual Jordan published the first attempt to quantize the electromagnetic field. In a joint paper with Max Born and Werner Heisenberg he considered the field inside a cavity as a superposition of quantum harmonic oscillators. 
In his calculation he found that in addition to the "thermal energy" of the oscillators there also had to exist an infinite zero-point energy term. He was able to obtain the same fluctuation formula that Einstein had obtained in 1909. However, Jordan did not think that his infinite zero-point energy term was "real", writing to Einstein that "it is just a quantity of the calculation having no direct physical meaning". Jordan found a way to get rid of the infinite term, publishing a joint work with Pauli in 1928, performing what has been called "the first infinite subtraction, or renormalisation, in quantum field theory". Building on the work of Heisenberg and others, Paul Dirac's theory of emission and absorption (1927) was the first application of the quantum theory of radiation. Dirac's work was seen as crucially important to the emerging field of quantum mechanics; it dealt directly with the process in which "particles" are actually created: spontaneous emission. Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. The theory showed that spontaneous emission depends upon the zero-point energy fluctuations of the electromagnetic field in order to get started. In a process in which a photon is annihilated (absorbed), the photon can be thought of as making a transition into the vacuum state. Similarly, when a photon is created (emitted), it is occasionally useful to imagine that the photon has made a transition out of the vacuum state. In the words of Dirac: The light-quantum has the peculiarity that it apparently ceases to exist when it is in one of its stationary states, namely, the zero state, in which its momentum and therefore also its energy, are zero. When a light-quantum is absorbed it can be considered to jump into this zero state, and when one is emitted it can be considered to jump from the zero state to one in which it is physically in evidence, so that it appears to have been created. Since there is no limit to the number of light-quanta that may be created in this way, we must suppose that there are an infinite number of light quanta in the zero state ... Contemporary physicists, when asked to give a physical explanation for spontaneous emission, generally invoke the zero-point energy of the electromagnetic field. This view was popularized by Victor Weisskopf who in 1935 wrote: From quantum theory there follows the existence of so called zero-point oscillations; for example each oscillator in its lowest state is not completely at rest but always is moving about its equilibrium position. Therefore electromagnetic oscillations also can never cease completely. Thus the quantum nature of the electromagnetic field has as its consequence zero point oscillations of the field strength in the lowest energy state, in which there are no light quanta in space ... The zero point oscillations act on an electron in the same way as ordinary electrical oscillations do. They can change the eigenstate of the electron, but only in a transition to a state with the lowest energy, since empty space can only take away energy, and not give it up. In this way spontaneous radiation arises as a consequence of the existence of these unique field strengths corresponding to zero point oscillations. 
Thus spontaneous radiation is induced radiation of light quanta produced by zero point oscillations of empty space This view was also later supported by Theodore Welton (1948), who argued that spontaneous emission "can be thought of as forced emission taking place under the action of the fluctuating field". This new theory, which Dirac coined quantum electrodynamics (QED), predicted a fluctuating zero-point or "vacuum" field existing even in the absence of sources. Throughout the 1940s improvements in microwave technology made it possible to take more precise measurements of the shift of the levels of a hydrogen atom, now known as the Lamb shift, and measurement of the magnetic moment of the electron. Discrepancies between these experiments and Dirac's theory led to the idea of incorporating renormalisation into QED to deal with zero-point infinities. Renormalization was originally developed by Hans Kramers and also Victor Weisskopf (1936), and first successfully applied to calculate a finite value for the Lamb shift by Hans Bethe (1947). As per spontaneous emission, these effects can in part be understood with interactions with the zero-point field. But in light of renormalisation being able to remove some zero-point infinities from calculations, not all physicists were comfortable attributing zero-point energy any physical meaning, viewing it instead as a mathematical artifact that might one day be eliminated. In Wolfgang Pauli's 1945 Nobel lecture he made clear his opposition to the idea of zero-point energy stating "It is clear that this zero-point energy has no physical reality". In 1948 Hendrik Casimir showed that one consequence of the zero-point field is an attractive force between two uncharged, perfectly conducting parallel plates, the so-called Casimir effect. At the time, Casimir was studying the properties of colloidal solutions. These are viscous materials, such as paint and mayonnaise, that contain micron-sized particles in a liquid matrix. The properties of such solutions are determined by Van der Waals forces – short-range, attractive forces that exist between neutral atoms and molecules. One of Casimir's colleagues, Theo Overbeek, realized that the theory that was used at the time to explain Van der Waals forces, which had been developed by Fritz London in 1930, did not properly explain the experimental measurements on colloids. Overbeek therefore asked Casimir to investigate the problem. Working with Dirk Polder, Casimir discovered that the interaction between two neutral molecules could be correctly described only if the fact that light travels at a finite speed was taken into account. Soon afterwards after a conversation with Bohr about zero-point energy, Casimir noticed that this result could be interpreted in terms of vacuum fluctuations. He then asked himself what would happen if there were two mirrors – rather than two molecules – facing each other in a vacuum. It was this work that led to his prediction of an attractive force between reflecting plates. The work by Casimir and Polder opened up the way to a unified theory of van der Waals and Casimir forces and a smooth continuum between the two phenomena. This was done by Lifshitz (1956) in the case of plane parallel dielectric plates. The generic name for both van der Waals and Casimir forces is dispersion forces, because both of them are caused by dispersions of the operator of the dipole moment. The role of relativistic forces becomes dominant at orders of a hundred nanometers. 
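To give a sense of the distance scale just mentioned, the following Python sketch is the author's illustration: it evaluates the standard textbook result for the Casimir pressure between two perfectly conducting parallel plates, P = π²ħc/(240 d⁴). This formula is not derived in the article, and the constant values are assumed rounded figures.

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J s (assumed value)
c    = 2.99792458e8     # speed of light, m/s

def casimir_pressure(d):
    """Attractive pressure (Pa) between ideal parallel plates separated by d metres."""
    return math.pi**2 * hbar * c / (240 * d**4)

for d in (1e-6, 100e-9, 10e-9):
    print(f"d = {d*1e9:7.1f} nm  ->  P ~ {casimir_pressure(d):.3g} Pa")
# ~1.3e-3 Pa at 1 micron and ~13 Pa at 100 nm: negligible at everyday scales
# but significant for sub-micron structures, consistent with the hundred-
# nanometre scale mentioned above.
```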
In 1951 Herbert Callen and Theodore Welton proved the quantum fluctuation-dissipation theorem (FDT) which was originally formulated in classical form by Nyquist (1928) as an explanation for observed Johnson noise in electric circuits. The fluctuation-dissipation theorem showed that when something dissipates energy, in an effectively irreversible way, a connected heat bath must also fluctuate. The fluctuations and the dissipation go hand in hand; it is impossible to have one without the other. The implication of FDT being that the vacuum could be treated as a heat bath coupled to a dissipative force and as such energy could, in part, be extracted from the vacuum for potentially useful work. FDT has been shown to be true experimentally under certain quantum, non-classical, conditions. In 1963 the Jaynes–Cummings model was developed describing the system of a two-level atom interacting with a quantized field mode (i.e. the vacuum) within an optical cavity. It gave nonintuitive predictions such as that an atom's spontaneous emission could be driven by field of effectively constant frequency (Rabi frequency). In the 1970s experiments were being performed to test aspects of quantum optics and showed that the rate of spontaneous emission of an atom could be controlled using reflecting surfaces. These results were at first regarded with suspicion in some quarters: it was argued that no modification of a spontaneous emission rate would be possible, after all, how can the emission of a photon be affected by an atom's environment when the atom can only "see" its environment by emitting a photon in the first place? These experiments gave rise to cavity quantum electrodynamics (CQED), the study of effects of mirrors and cavities on radiative corrections. Spontaneous emission can be suppressed (or "inhibited") or amplified. Amplification was first predicted by Purcell in 1946 (the Purcell effect) and has been experimentally verified. This phenomenon can be understood, partly, in terms of the action of the vacuum field on the atom. == Uncertainty principle == Zero-point energy is fundamentally related to the Heisenberg uncertainty principle. Roughly speaking, the uncertainty principle states that complementary variables (such as a particle's position and momentum, or a field's value and derivative at a point in space) cannot simultaneously be specified precisely by any given quantum state. In particular, there cannot exist a state in which the system simply sits motionless at the bottom of its potential well, for then its position and momentum would both be completely determined to arbitrarily great precision. Therefore, the lowest-energy state (the ground state) of the system must have a distribution in position and momentum that satisfies the uncertainty principle, which implies its energy must be greater than the minimum of the potential well. Near the bottom of a potential well, the Hamiltonian of a general system (the quantum-mechanical operator giving its energy) can be approximated as a quantum harmonic oscillator, H ^ = V 0 + 1 2 k ( x ^ − x 0 ) 2 + 1 2 m p ^ 2 , {\displaystyle {\hat {H}}=V_{0}+{\tfrac {1}{2}}k\left({\hat {x}}-x_{0}\right)^{2}+{\frac {1}{2m}}{\hat {p}}^{2}\,,} where V0 is the minimum of the classical potential well. 
The uncertainty principle tells us that ⟨ ( x ^ − x 0 ) 2 ⟩ ⟨ p ^ 2 ⟩ ≥ ℏ 2 , {\displaystyle {\sqrt {\left\langle \left({\hat {x}}-x_{0}\right)^{2}\right\rangle }}{\sqrt {\left\langle {\hat {p}}^{2}\right\rangle }}\geq {\frac {\hbar }{2}}\,,} making the expectation values of the kinetic and potential terms above satisfy ⟨ 1 2 k ( x ^ − x 0 ) 2 ⟩ ⟨ 1 2 m p ^ 2 ⟩ ≥ ( ℏ 4 ) 2 k m . {\displaystyle \left\langle {\tfrac {1}{2}}k\left({\hat {x}}-x_{0}\right)^{2}\right\rangle \left\langle {\frac {1}{2m}}{\hat {p}}^{2}\right\rangle \geq \left({\frac {\hbar }{4}}\right)^{2}{\frac {k}{m}}\,.} The expectation value of the energy must therefore be at least ⟨ H ^ ⟩ ≥ V 0 + ℏ 2 k m = V 0 + ℏ ω 2 {\displaystyle \left\langle {\hat {H}}\right\rangle \geq V_{0}+{\frac {\hbar }{2}}{\sqrt {\frac {k}{m}}}=V_{0}+{\frac {\hbar \omega }{2}}} where ω = √k/m is the angular frequency at which the system oscillates. A more thorough treatment, showing that the energy of the ground state actually saturates this bound and is exactly E0 = V0 + ⁠ħω/2⁠, requires solving for the ground state of the system. == Atomic physics == The idea of a quantum harmonic oscillator and its associated energy can apply to either an atom or a subatomic particle. In ordinary atomic physics, the zero-point energy is the energy associated with the ground state of the system. The professional physics literature tends to measure frequency, as denoted by ν above, using angular frequency, denoted with ω and defined by ω = 2πν. This leads to a convention of writing the Planck constant h with a bar through its top (ħ) to denote the quantity ⁠h/2π⁠. In these terms, an example of zero-point energy is the above E = ⁠ħω/2⁠ associated with the ground state of the quantum harmonic oscillator. In quantum mechanical terms, the zero-point energy is the expectation value of the Hamiltonian of the system in the ground state. If more than one ground state exists, they are said to be degenerate. Many systems have degenerate ground states. Degeneracy occurs whenever there exists a unitary operator which acts non-trivially on a ground state and commutes with the Hamiltonian of the system. According to the third law of thermodynamics, a system at absolute zero temperature exists in its ground state; thus, its entropy is determined by the degeneracy of the ground state. Many systems, such as a perfect crystal lattice, have a unique ground state and therefore have zero entropy at absolute zero. It is also possible for the highest excited state to have absolute zero temperature for systems that exhibit negative temperature. The wave function of the ground state of a particle in a one-dimensional well is a half-period sine wave which goes to zero at the two edges of the well. The energy of the particle is given by: h 2 n 2 8 m L 2 {\displaystyle {\frac {h^{2}n^{2}}{8mL^{2}}}} where h is the Planck constant, m is the mass of the particle, n is the energy state (n = 1 corresponds to the ground-state energy), and L is the width of the well. == Quantum field theory == In quantum field theory (QFT), the fabric of "empty" space is visualized as consisting of fields, with the field at every point in space and time being a quantum harmonic oscillator, with neighboring oscillators interacting with each other. According to QFT the universe is made up of matter fields whose quanta are fermions (e.g. electrons and quarks), force fields whose quanta are bosons (i.e. photons and gluons) and a Higgs field whose quantum is the Higgs boson. 
The matter and force fields have zero-point energy. A related term is zero-point field (ZPF), which is the lowest energy state of a particular field. The vacuum can be viewed not as empty space, but as the combination of all zero-point fields. In QFT the zero-point energy of the vacuum state is called the vacuum energy and the average expectation value of the Hamiltonian is called the vacuum expectation value (also called condensate or simply VEV). The QED vacuum is a part of the vacuum state which specifically deals with quantum electrodynamics (e.g. electromagnetic interactions between photons, electrons and the vacuum) and the QCD vacuum deals with quantum chromodynamics (e.g. color charge interactions between quarks, gluons and the vacuum). Recent experiments advocate the idea that particles themselves can be thought of as excited states of the underlying quantum vacuum, and that all properties of matter are merely vacuum fluctuations arising from interactions with the zero-point field. Each point in space makes a contribution of E = ħω/2, resulting in a calculation of infinite zero-point energy in any finite volume; this is one reason renormalization is needed to make sense of quantum field theories. In cosmology, the vacuum energy is one possible explanation for the cosmological constant and the source of dark energy. Scientists are not in agreement about how much energy is contained in the vacuum. Quantum mechanics requires the energy to be large, as Paul Dirac claimed it is: like a sea of energy. Other scientists specializing in general relativity require the energy to be small enough for the curvature of space to agree with observed astronomy. The Heisenberg uncertainty principle allows the energy to be as large as needed to promote quantum actions for a brief moment of time, even if the average energy is small enough to satisfy relativity and flat space. To cope with disagreements, the vacuum energy is described as a virtual energy potential of positive and negative energy. In quantum perturbation theory, it is sometimes said that the contribution of one-loop and multi-loop Feynman diagrams to elementary particle propagators is the contribution of vacuum fluctuations, or of the zero-point energy, to the particle masses. === Quantum electrodynamic vacuum === The oldest and best known quantized force field is the electromagnetic field. Maxwell's equations have been superseded by quantum electrodynamics (QED). By considering the zero-point energy that arises from QED it is possible to gain a characteristic understanding of zero-point energy that arises not just through electromagnetic interactions but in all quantum field theories. ==== Redefining the zero of energy ==== In the quantum theory of the electromagnetic field, classical wave amplitudes α and α* are replaced by operators a and a† that satisfy: [ a , a † ] = 1 {\displaystyle \left[a,a^{\dagger }\right]=1} The classical quantity |α|2 appearing in the classical expression for the energy of a field mode is replaced in quantum theory by the photon number operator a†a. The fact that: [ a , a † a ] ≠ 0 {\displaystyle \left[a,a^{\dagger }a\right]\neq 0} implies that quantum theory does not allow states of the radiation field for which the photon number and a field amplitude can both be precisely defined, i.e., we cannot have simultaneous eigenstates for a†a and a. The reconciliation of wave and particle attributes of the field is accomplished via the association of a probability amplitude with a classical mode pattern.
The calculation of field modes is an entirely classical problem, while the quantum properties of the field are carried by the mode "amplitudes" a† and a associated with these classical modes. The zero-point energy of the field arises formally from the non-commutativity of a and a†. This is true for any harmonic oscillator: the zero-point energy ħω/2 appears when we write the Hamiltonian: H c l = p 2 2 m + 1 2 m ω 2 q 2 = 1 2 ℏ ω ( a a † + a † a ) = ℏ ω ( a † a + 1 2 ) {\displaystyle {\begin{aligned}H_{cl}&={\frac {p^{2}}{2m}}+{\tfrac {1}{2}}m\omega ^{2}{q}^{2}\\&={\tfrac {1}{2}}\hbar \omega \left(aa^{\dagger }+a^{\dagger }a\right)\\&=\hbar \omega \left(a^{\dagger }a+{\tfrac {1}{2}}\right)\end{aligned}}} It is often argued that the entire universe is completely bathed in the zero-point electromagnetic field, and as such it can add only some constant amount to expectation values. Physical measurements will therefore reveal only deviations from the vacuum state. Thus the zero-point energy can be dropped from the Hamiltonian by redefining the zero of energy, or by arguing that it is a constant and therefore has no effect on the Heisenberg equations of motion. Thus we can choose to declare by fiat that the ground state has zero energy, and a field Hamiltonian, for example, can be replaced by: H F − ⟨ 0 | H F | 0 ⟩ = 1 2 ℏ ω ( a a † + a † a ) − 1 2 ℏ ω = ℏ ω ( a † a + 1 2 ) − 1 2 ℏ ω = ℏ ω a † a {\displaystyle {\begin{aligned}H_{F}-\left\langle 0|H_{F}|0\right\rangle &={\tfrac {1}{2}}\hbar \omega \left(aa^{\dagger }+a^{\dagger }a\right)-{\tfrac {1}{2}}\hbar \omega \\&=\hbar \omega \left(a^{\dagger }a+{\tfrac {1}{2}}\right)-{\tfrac {1}{2}}\hbar \omega \\&=\hbar \omega a^{\dagger }a\end{aligned}}} without affecting any physical predictions of the theory. The new Hamiltonian is said to be normally ordered (or Wick ordered) and is denoted by a double-dot symbol. The normally ordered Hamiltonian is denoted :HF, i.e.: : H F : ≡ 1 2 ℏ ω : ( a a † + a † a ) : ≡ ℏ ω a † a {\displaystyle :H_{F}:\equiv {\tfrac {1}{2}}\hbar \omega :\left(aa^{\dagger }+a^{\dagger }a\right):\equiv \hbar \omega a^{\dagger }a} In other words, within the normal ordering symbol we can commute a and a†. Since zero-point energy is intimately connected to the non-commutativity of a and a†, the normal ordering procedure eliminates any contribution from the zero-point field. This is especially reasonable in the case of the field Hamiltonian, since the zero-point term merely adds a constant energy which can be eliminated by a simple redefinition of the zero of energy. Moreover, this constant energy in the Hamiltonian obviously commutes with a and a† and so cannot have any effect on the quantum dynamics described by the Heisenberg equations of motion. However, things are not quite that simple. The zero-point energy cannot be eliminated by dropping its energy from the Hamiltonian: when we do this and solve the Heisenberg equation for a field operator, we must include the vacuum field, which is the homogeneous part of the solution for the field operator. In fact we can show that the vacuum field is essential for the preservation of the commutators and the formal consistency of QED. When we calculate the field energy we obtain not only a contribution from particles and forces that may be present but also a contribution from the vacuum field itself, i.e. the zero-point field energy. In other words, the zero-point energy reappears even though we may have deleted it from the Hamiltonian.
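The appearance of the ħω/2 term from the non-commutativity of a and a†, and its removal by normal ordering, can be checked directly in a truncated Fock basis. The following Python sketch is illustrative only and is not part of the original discussion; the basis dimension N is an arbitrary numerical cutoff, and units are chosen so that ħ = ω = 1.

```python
import numpy as np

# Truncated Fock-space sketch (units hbar = omega = 1). N is only a numerical
# cutoff chosen for this illustration, not a physical parameter.
N = 20
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator: a|n> = sqrt(n)|n-1>
ad = a.conj().T                              # creation operator

# [a, a†] = 1 holds exactly except in the last row/column, a truncation artifact.
comm = a @ ad - ad @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))   # True

vac = np.zeros(N); vac[0] = 1.0              # vacuum state |0>

H = 0.5 * (a @ ad + ad @ a)                  # symmetrized Hamiltonian (1/2)ħω(aa† + a†a)
H_normal = ad @ a                            # normally ordered form :H: = ħω a†a

print(vac @ H @ vac)          # 0.5 -> zero-point energy ħω/2
print(vac @ H_normal @ vac)   # 0.0 -> the zero-point term removed by normal ordering
```

The truncation only affects the highest Fock state retained, which is why the commutator comparison excludes the last row and column.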
==== Electromagnetic field in free space ==== From Maxwell's equations, the electromagnetic energy of a "free" field i.e. one with no sources, is described by: H F = 1 8 π ∫ d 3 r ( E 2 + B 2 ) = k 2 2 π | α ( t ) | 2 {\displaystyle {\begin{aligned}H_{F}&={\frac {1}{8\pi }}\int d^{3}r\left(\mathbf {E} ^{2}+\mathbf {B} ^{2}\right)\\&={\frac {k^{2}}{2\pi }}|\alpha (t)|^{2}\end{aligned}}} We introduce the "mode function" A0(r) that satisfies the Helmholtz equation: ( ∇ 2 + k 2 ) A 0 ( r ) = 0 {\displaystyle \left(\nabla ^{2}+k^{2}\right)\mathbf {A} _{0}(\mathbf {r} )=0} where k = ⁠ω/c⁠ and assume it is normalized such that: ∫ d 3 r | A 0 ( r ) | 2 = 1 {\displaystyle \int d^{3}r\left|\mathbf {A} _{0}(\mathbf {r} )\right|^{2}=1} We wish to "quantize" the electromagnetic energy of free space for a multimode field. The field intensity of free space should be independent of position such that |A0(r)|2 should be independent of r for each mode of the field. The mode function satisfying these conditions is: A 0 ( r ) = e k e i k ⋅ r {\displaystyle \mathbf {A} _{0}(\mathbf {r} )=e_{\mathbf {k} }e^{i\mathbf {k} \cdot \mathbf {r} }} where k · ek = 0 in order to have the transversality condition ∇ · A(r,t) satisfied for the Coulomb gauge in which we are working. To achieve the desired normalization we pretend space is divided into cubes of volume V = L3 and impose on the field the periodic boundary condition: A ( x + L , y + L , z + L , t ) = A ( x , y , z , t ) {\displaystyle \mathbf {A} (x+L,y+L,z+L,t)=\mathbf {A} (x,y,z,t)} or equivalently ( k x , k y , k z ) = 2 π L ( n x , n y , n z ) {\displaystyle \left(k_{x},k_{y},k_{z}\right)={\frac {2\pi }{L}}\left(n_{x},n_{y},n_{z}\right)} where n can assume any integer value. This allows us to consider the field in any one of the imaginary cubes and to define the mode function: A k ( r ) = 1 V e k e i k ⋅ r {\displaystyle \mathbf {A} _{\mathbf {k} }(\mathbf {r} )={\frac {1}{\sqrt {V}}}e_{\mathbf {k} }e^{i\mathbf {k} \cdot \mathbf {r} }} which satisfies the Helmholtz equation, transversality, and the "box normalization": ∫ V d 3 r | A k ( r ) | 2 = 1 {\displaystyle \int _{V}d^{3}r\left|\mathbf {A} _{\mathbf {k} }(\mathbf {r} )\right|^{2}=1} where ek is chosen to be a unit vector which specifies the polarization of the field mode. The condition k · ek = 0 means that there are two independent choices of ek, which we call ek1 and ek2 where ek1 · ek2 = 0 and e2k1 = e2k2 = 1. 
Thus we define the mode functions: A k λ ( r ) = 1 V e k λ e i k ⋅ r , λ = { 1 2 {\displaystyle \mathbf {A} _{\mathbf {k} \lambda }(\mathbf {r} )={\frac {1}{\sqrt {V}}}e_{\mathbf {k} \lambda }e^{i\mathbf {k} \cdot \mathbf {r} }\,,\quad \lambda ={\begin{cases}1\\2\end{cases}}} in terms of which the vector potential becomes: A k λ ( r , t ) = 2 π ℏ c 2 ω k V [ a k λ ( 0 ) e i k ⋅ r + a k λ † ( 0 ) e − i k ⋅ r ] e k λ {\displaystyle \mathbf {A} _{\mathbf {k} \lambda }(\mathbf {r} ,t)={\sqrt {\frac {2\pi \hbar c^{2}}{\omega _{k}V}}}\left[a_{\mathbf {k} \lambda }(0)e^{i\mathbf {k} \cdot \mathbf {r} }+a_{\mathbf {k} \lambda }^{\dagger }(0)e^{-i\mathbf {k} \cdot \mathbf {r} }\right]e_{\mathbf {k} \lambda }} or: A k λ ( r , t ) = 2 π ℏ c 2 ω k V [ a k λ ( 0 ) e − i ( ω k t − k ⋅ r ) + a k λ † ( 0 ) e i ( ω k t − k ⋅ r ) ] {\displaystyle \mathbf {A} _{\mathbf {k} \lambda }(\mathbf {r} ,t)={\sqrt {\frac {2\pi \hbar c^{2}}{\omega _{k}V}}}\left[a_{\mathbf {k} \lambda }(0)e^{-i(\omega _{k}t-\mathbf {k} \cdot \mathbf {r} )}+a_{\mathbf {k} \lambda }^{\dagger }(0)e^{i(\omega _{k}t-\mathbf {k} \cdot \mathbf {r} )}\right]} where ωk = kc and akλ, a†kλ are photon annihilation and creation operators for the mode with wave vector k and polarization λ. This gives the vector potential for a plane wave mode of the field. The condition for (kx, ky, kz) shows that there are infinitely many such modes. The linearity of Maxwell's equations allows us to write: A ( r t ) = ∑ k λ 2 π ℏ c 2 ω k V [ a k λ ( 0 ) e i k ⋅ r + a k λ † ( 0 ) e − i k ⋅ r ] e k λ {\displaystyle \mathbf {A} (\mathbf {r} t)=\sum _{\mathbf {k} \lambda }{\sqrt {\frac {2\pi \hbar c^{2}}{\omega _{k}V}}}\left[a_{\mathbf {k} \lambda }(0)e^{i\mathbf {k} \cdot \mathbf {r} }+a_{\mathbf {k} \lambda }^{\dagger }(0)e^{-i\mathbf {k} \cdot \mathbf {r} }\right]e_{\mathbf {k} \lambda }} for the total vector potential in free space. Using the fact that: ∫ V d 3 r A k λ ( r ) ⋅ A k ′ λ ′ ∗ ( r ) = δ k , k ′ 3 δ λ , λ ′ {\displaystyle \int _{V}d^{3}r\mathbf {A} _{\mathbf {k} \lambda }(\mathbf {r} )\cdot \mathbf {A} _{\mathbf {k} '\lambda '}^{\ast }(\mathbf {r} )=\delta _{\mathbf {k} ,\mathbf {k} '}^{3}\delta _{\lambda ,\lambda '}} we find the field Hamiltonian is: H F = ∑ k λ ℏ ω k ( a k λ † a k λ + 1 2 ) {\displaystyle H_{F}=\sum _{\mathbf {k} \lambda }\hbar \omega _{k}\left(a_{\mathbf {k} \lambda }^{\dagger }a_{\mathbf {k} \lambda }+{\tfrac {1}{2}}\right)} This is the Hamiltonian for an infinite number of uncoupled harmonic oscillators. Thus different modes of the field are independent and satisfy the commutation relations: [ a k λ ( t ) , a k ′ λ ′ † ( t ) ] = δ k , k ′ 3 δ λ , λ ′ [ a k λ ( t ) , a k ′ λ ′ ( t ) ] = [ a k λ † ( t ) , a k ′ λ ′ † ( t ) ] = 0 {\displaystyle {\begin{aligned}\left[a_{\mathbf {k} \lambda }(t),a_{\mathbf {k} '\lambda '}^{\dagger }(t)\right]&=\delta _{\mathbf {k} ,\mathbf {k} '}^{3}\delta _{\lambda ,\lambda '}\\[10px]\left[a_{\mathbf {k} \lambda }(t),a_{\mathbf {k} '\lambda '}(t)\right]&=\left[a_{\mathbf {k} \lambda }^{\dagger }(t),a_{\mathbf {k} '\lambda '}^{\dagger }(t)\right]=0\end{aligned}}} Clearly the least eigenvalue for HF is: ∑ k λ 1 2 ℏ ω k {\displaystyle \sum _{\mathbf {k} \lambda }{\tfrac {1}{2}}\hbar \omega _{k}} This state describes the zero-point energy of the vacuum. It appears that this sum is divergent – in fact highly divergent, as putting in the density factor 8 π v 2 d v c 3 V {\displaystyle {\frac {8\pi v^{2}dv}{c^{3}}}V} shows. 
The summation becomes approximately the integral: 4 π h V c 3 ∫ v 3 d v {\displaystyle {\frac {4\pi hV}{c^{3}}}\int v^{3}\,dv} for high values of v. It diverges proportional to v4 for large v. There are two separate questions to consider. First, is the divergence a real one such that the zero-point energy really is infinite? If we consider the volume V is contained by perfectly conducting walls, very high frequencies can only be contained by taking more and more perfect conduction. No actual method of containing the high frequencies is possible. Such modes will not be stationary in our box and thus not countable in the stationary energy content. So from this physical point of view the above sum should only extend to those frequencies which are countable; a cut-off energy is thus eminently reasonable. However, on the scale of a "universe" questions of general relativity must be included. Suppose even the boxes could be reproduced, fit together and closed nicely by curving spacetime. Then exact conditions for running waves may be possible. However the very high frequency quanta will still not be contained. As per John Wheeler's "geons" these will leak out of the system. So again a cut-off is permissible, almost necessary. The question here becomes one of consistency since the very high energy quanta will act as a mass source and start curving the geometry. This leads to the second question. Divergent or not, finite or infinite, is the zero-point energy of any physical significance? The ignoring of the whole zero-point energy is often encouraged for all practical calculations. The reason for this is that energies are not typically defined by an arbitrary data point, but rather changes in data points, so adding or subtracting a constant (even if infinite) should be allowed. However this is not the whole story, in reality energy is not so arbitrarily defined: in general relativity the seat of the curvature of spacetime is the energy content and there the absolute amount of energy has real physical meaning. There is no such thing as an arbitrary additive constant with density of field energy. Energy density curves space, and an increase in energy density produces an increase of curvature. Furthermore, the zero-point energy density has other physical consequences e.g. the Casimir effect, contribution to the Lamb shift, or anomalous magnetic moment of the electron, it is clear it is not just a mathematical constant or artifact that can be cancelled out. ==== Necessity of the vacuum field in QED ==== The vacuum state of the "free" electromagnetic field (that with no sources) is defined as the ground state in which nkλ = 0 for all modes (k, λ). The vacuum state, like all stationary states of the field, is an eigenstate of the Hamiltonian but not the electric and magnetic field operators. In the vacuum state, therefore, the electric and magnetic fields do not have definite values. We can imagine them to be fluctuating about their mean value of zero. In a process in which a photon is annihilated (absorbed), we can think of the photon as making a transition into the vacuum state. Similarly, when a photon is created (emitted), it is occasionally useful to imagine that the photon has made a transition out of the vacuum state. An atom, for instance, can be considered to be "dressed" by emission and reabsorption of "virtual photons" from the vacuum. The vacuum state energy described by Σkλ ⁠ħωk/2⁠ is infinite. 
We can make the replacement: ∑ k λ ⟶ ∑ λ ( 1 2 π ) 3 ∫ d 3 k = V 8 π 3 ∑ λ ∫ d 3 k {\displaystyle \sum _{\mathbf {k} \lambda }\longrightarrow \sum _{\lambda }\left({\frac {1}{2\pi }}\right)^{3}\int d^{3}k={\frac {V}{8\pi ^{3}}}\sum _{\lambda }\int d^{3}k} the zero-point energy density is: 1 V ∑ k λ 1 2 ℏ ω k = 2 8 π 3 ∫ d 3 k 1 2 ℏ ω k = 4 π 4 π 3 ∫ d k k 2 ( 1 2 ℏ ω k ) = ℏ 2 π 2 c 3 ∫ d ω ω 3 {\displaystyle {\begin{aligned}{\frac {1}{V}}\sum _{\mathbf {k} \lambda }{\tfrac {1}{2}}\hbar \omega _{k}&={\frac {2}{8\pi ^{3}}}\int d^{3}k{\tfrac {1}{2}}\hbar \omega _{k}\\&={\frac {4\pi }{4\pi ^{3}}}\int dk\,k^{2}\left({\tfrac {1}{2}}\hbar \omega _{k}\right)\\&={\frac {\hbar }{2\pi ^{2}c^{3}}}\int d\omega \,\omega ^{3}\end{aligned}}} or in other words the spectral energy density of the vacuum field: ρ 0 ( ω ) = ℏ ω 3 2 π 2 c 3 {\displaystyle \rho _{0}(\omega )={\frac {\hbar \omega ^{3}}{2\pi ^{2}c^{3}}}} The zero-point energy density in the frequency range from ω1 to ω2 is therefore: ∫ ω 1 ω 2 d ω ρ 0 ( ω ) = ℏ 8 π 2 c 3 ( ω 2 4 − ω 1 4 ) {\displaystyle \int _{\omega _{1}}^{\omega _{2}}d\omega \rho _{0}(\omega )={\frac {\hbar }{8\pi ^{2}c^{3}}}\left(\omega _{2}^{4}-\omega _{1}^{4}\right)} This can be large even in relatively narrow "low frequency" regions of the spectrum. In the optical region from 400 to 700 nm, for instance, the above equation yields around 220 erg/cm3. We showed in the above section that the zero-point energy can be eliminated from the Hamiltonian by the normal ordering prescription. However, this elimination does not mean that the vacuum field has been rendered unimportant or without physical consequences. To illustrate this point we consider a linear dipole oscillator in the vacuum. The Hamiltonian for the oscillator plus the field with which it interacts is: H = 1 2 m ( p − e c A ) 2 + 1 2 m ω 0 2 x 2 + H F {\displaystyle H={\frac {1}{2m}}\left(\mathbf {p} -{\frac {e}{c}}\mathbf {A} \right)^{2}+{\tfrac {1}{2}}m\omega _{0}^{2}\mathbf {x} ^{2}+H_{F}} This has the same form as the corresponding classical Hamiltonian and the Heisenberg equations of motion for the oscillator and the field are formally the same as their classical counterparts. For instance the Heisenberg equations for the coordinate x and the canonical momentum p = mẋ +⁠eA/c⁠ of the oscillator are: x ˙ = ( i ℏ ) − 1 [ x . H ] = 1 m ( p − e c A ) p ˙ = ( i ℏ ) − 1 [ p . 
H ] = 1 2 ∇ ( p − e c A ) 2 − m ω 0 2 x ˙ = − 1 m [ ( p − e c A ) ⋅ ∇ ] [ − e c A ] − 1 m ( p − e c A ) × ∇ × [ − e c A ] − m ω 0 2 x ˙ = e c ( x ˙ ⋅ ∇ ) A + e c x ˙ × B − m ω 0 2 x ˙ {\displaystyle {\begin{aligned}\mathbf {\dot {x}} &=(i\hbar )^{-1}[\mathbf {x} .H]={\frac {1}{m}}\left(\mathbf {p} -{\frac {e}{c}}\mathbf {A} \right)\\\mathbf {\dot {p}} &=(i\hbar )^{-1}[\mathbf {p} .H]{\begin{aligned}&={\tfrac {1}{2}}\nabla \left(\mathbf {p} -{\frac {e}{c}}\mathbf {A} \right)^{2}-m\omega _{0}^{2}\mathbf {\dot {x}} \\&=-{\frac {1}{m}}\left[\left(\mathbf {p} -{\frac {e}{c}}\mathbf {A} \right)\cdot \nabla \right]\left[-{\frac {e}{c}}\mathbf {A} \right]-{\frac {1}{m}}\left(\mathbf {p} -{\frac {e}{c}}\mathbf {A} \right)\times \nabla \times \left[-{\frac {e}{c}}\mathbf {A} \right]-m\omega _{0}^{2}\mathbf {\dot {x}} \\&={\frac {e}{c}}(\mathbf {\dot {x}} \cdot \nabla )\mathbf {A} +{\frac {e}{c}}\mathbf {\dot {x}} \times \mathbf {B} -m\omega _{0}^{2}\mathbf {\dot {x}} \end{aligned}}\end{aligned}}} or: m x ¨ = p ˙ − e c A ˙ = − e c [ A ˙ − ( x ˙ ⋅ ∇ ) A ] + e c x ˙ × B − m ω 0 2 x = e E + e c x ˙ × B − m ω 0 2 x {\displaystyle {\begin{aligned}m\mathbf {\ddot {x}} &=\mathbf {\dot {p}} -{\frac {e}{c}}\mathbf {\dot {A}} \\&=-{\frac {e}{c}}\left[\mathbf {\dot {A}} -\left(\mathbf {\dot {x}} \cdot \nabla \right)\mathbf {A} \right]+{\frac {e}{c}}\mathbf {\dot {x}} \times \mathbf {B} -m\omega _{0}^{2}\mathbf {x} \\&=e\mathbf {E} +{\frac {e}{c}}\mathbf {\dot {x}} \times \mathbf {B} -m\omega _{0}^{2}\mathbf {x} \end{aligned}}} since the rate of change of the vector potential in the frame of the moving charge is given by the convective derivative A ˙ = ∂ A ∂ t + ( x ˙ ⋅ ∇ ) A 3 . {\displaystyle \mathbf {\dot {A}} ={\frac {\partial \mathbf {A} }{\partial t}}+(\mathbf {\dot {x}} \cdot \nabla )\mathbf {A} ^{3}\,.} For nonrelativistic motion we may neglect the magnetic force and replace the expression for mẍ by: x ¨ + ω 0 2 x ≈ e m E ≈ ∑ k λ 2 π ℏ ω k V [ a k λ ( t ) + a k λ † ( t ) ] e k λ {\displaystyle {\begin{aligned}\mathbf {\ddot {x}} +\omega _{0}^{2}\mathbf {x} &\approx {\frac {e}{m}}\mathbf {E} \\&\approx \sum _{\mathbf {k} \lambda }{\sqrt {\frac {2\pi \hbar \omega _{k}}{V}}}\left[a_{\mathbf {k} \lambda }(t)+a_{\mathbf {k} \lambda }^{\dagger }(t)\right]e_{\mathbf {k} \lambda }\end{aligned}}} Above we have made the electric dipole approximation in which the spatial dependence of the field is neglected. The Heisenberg equation for akλ is found similarly from the Hamiltonian to be: a ˙ k λ = i ω k a k λ + i e 2 π ℏ ω k V x ˙ ⋅ e k λ {\displaystyle {\dot {a}}_{\mathbf {k} \lambda }=i\omega _{k}a_{\mathbf {k} \lambda }+ie{\sqrt {\frac {2\pi }{\hbar \omega _{k}V}}}\mathbf {\dot {x}} \cdot e_{\mathbf {k} \lambda }} in the electric dipole approximation. In deriving these equations for x, p, and akλ we have used the fact that equal-time particle and field operators commute. This follows from the assumption that particle and field operators commute at some time (say, t = 0) when the matter-field interpretation is presumed to begin, together with the fact that a Heisenberg-picture operator A(t) evolves in time as A(t) = U†(t)A(0)U(t), where U(t) is the time evolution operator satisfying i ℏ U ˙ = H U , U † ( t ) = U − 1 ( t ) , U ( 0 ) = 1 . 
{\displaystyle i\hbar {\dot {U}}=HU\,,\quad U^{\dagger }(t)=U^{-1}(t)\,,\quad U(0)=1\,.} Alternatively, we can argue that these operators must commute if we are to obtain the correct equations of motion from the Hamiltonian, just as the corresponding Poisson brackets in classical theory must vanish in order to generate the correct Hamilton equations. The formal solution of the field equation is: a k λ ( t ) = a k λ ( 0 ) e − i ω k t + i e 2 π ℏ ω k V ∫ 0 t d t ′ e k λ ⋅ x ˙ ( t ′ ) e i ω k ( t ′ − t ) {\displaystyle a_{\mathbf {k} \lambda }(t)=a_{\mathbf {k} \lambda }(0)e^{-i\omega _{k}t}+ie{\sqrt {\frac {2\pi }{\hbar \omega _{k}V}}}\int _{0}^{t}dt'\,e_{\mathbf {k} \lambda }\cdot \mathbf {\dot {x}} (t')e^{i\omega _{k}\left(t'-t\right)}} and therefore the equation for ȧkλ may be written: x ¨ + ω 0 2 x = e m E 0 ( t ) + e m E R R ( t ) {\displaystyle \mathbf {\ddot {x}} +\omega _{0}^{2}\mathbf {x} ={\frac {e}{m}}\mathbf {E} _{0}(t)+{\frac {e}{m}}\mathbf {E} _{RR}(t)} where E 0 ( t ) = i ∑ k λ 2 π ℏ ω k V [ a k λ ( 0 ) e − i ω k t − a k λ † ( 0 ) e i ω k t ] e k λ {\displaystyle \mathbf {E} _{0}(t)=i\sum _{\mathbf {k} \lambda }{\sqrt {\frac {2\pi \hbar \omega _{k}}{V}}}\left[a_{\mathbf {k} \lambda }(0)e^{-i\omega _{k}t}-a_{\mathbf {k} \lambda }^{\dagger }(0)e^{i\omega _{k}t}\right]e_{\mathbf {k} \lambda }} and E R R ( t ) = − 4 π e V ∑ k λ ∫ 0 t d t ′ [ e k λ ⋅ x ˙ ( t ′ ) ] cos ⁡ ω k ( t ′ − t ) {\displaystyle \mathbf {E} _{RR}(t)=-{\frac {4\pi e}{V}}\sum _{\mathbf {k} \lambda }\int _{0}^{t}dt'\left[e_{\mathbf {k} \lambda }\cdot \mathbf {\dot {x}} \left(t'\right)\right]\cos \omega _{k}\left(t'-t\right)} It can be shown that in the radiation reaction field, if the mass m is regarded as the "observed" mass then we can take E R R ( t ) = 2 e 3 c 3 x ¨ {\displaystyle \mathbf {E} _{RR}(t)={\frac {2e}{3c^{3}}}\mathbf {\ddot {x}} } The total field acting on the dipole has two parts, E0(t) and ERR(t). E0(t) is the free or zero-point field acting on the dipole. It is the homogeneous solution of the Maxwell equation for the field acting on the dipole, i.e., the solution, at the position of the dipole, of the wave equation [ ∇ 2 − 1 c 2 ∂ 2 ∂ t 2 ] E = 0 {\displaystyle \left[\nabla ^{2}-{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}\right]\mathbf {E} =0} satisfied by the field in the (source free) vacuum. For this reason E0(t) is often referred to as the "vacuum field", although it is of course a Heisenberg-picture operator acting on whatever state of the field happens to be appropriate at t = 0. ERR(t) is the source field, the field generated by the dipole and acting on the dipole. Using the above equation for ERR(t) we obtain an equation for the Heisenberg-picture operator x ( t ) {\displaystyle \mathbf {x} (t)} that is formally the same as the classical equation for a linear dipole oscillator: x ¨ + ω 0 2 x − τ x . . . = e m E 0 ( t ) {\displaystyle \mathbf {\ddot {x}} +\omega _{0}^{2}\mathbf {x} -\tau \mathbf {\overset {...}{x}} ={\frac {e}{m}}\mathbf {E} _{0}(t)} where τ = ⁠2e2/3mc3⁠. in this instance we have considered a dipole in the vacuum, without any "external" field acting on it. the role of the external field in the above equation is played by the vacuum electric field acting on the dipole. Classically, a dipole in the vacuum is not acted upon by any "external" field: if there are no sources other than the dipole itself, then the only field acting on the dipole is its own radiation reaction field. 
In quantum theory however there is always an "external" field, namely the source-free or vacuum field E0(t). According to our earlier equation for akλ(t) the free field is the only field in existence at t = 0 as the time at which the interaction between the dipole and the field is "switched on". The state vector of the dipole-field system at t = 0 is therefore of the form | Ψ ⟩ = | vac ⟩ | ψ D ⟩ , {\displaystyle |\Psi \rangle =|{\text{vac}}\rangle |\psi _{D}\rangle \,,} where |vac⟩ is the vacuum state of the field and |ψD⟩ is the initial state of the dipole oscillator. The expectation value of the free field is therefore at all times equal to zero: ⟨ E 0 ( t ) ⟩ = ⟨ Ψ | E 0 ( t ) | Ψ ⟩ = 0 {\displaystyle \langle \mathbf {E} _{0}(t)\rangle =\langle \Psi |\mathbf {E} _{0}(t)|\Psi \rangle =0} since akλ(0)|vac⟩ = 0. however, the energy density associated with the free field is infinite: 1 4 π ⟨ E 0 2 ( t ) ⟩ = 1 4 π ∑ k λ ∑ k ′ λ ′ 2 π ℏ ω k V 2 π ℏ ω k ′ V × ⟨ a k λ ( 0 ) a k ′ λ ′ † ( 0 ) ⟩ = 1 4 π ∑ k λ ( 2 π ℏ ω k V ) = ∫ 0 ∞ d w ρ 0 ( ω ) {\displaystyle {\begin{aligned}{\frac {1}{4\pi }}\left\langle \mathbf {E} _{0}^{2}(t)\right\rangle &={\frac {1}{4\pi }}\sum _{\mathbf {k} \lambda }\sum _{\mathbf {k'} \lambda '}{\sqrt {\frac {2\pi \hbar \omega _{k}}{V}}}{\sqrt {\frac {2\pi \hbar \omega _{k'}}{V}}}\times \left\langle a_{\mathbf {k} \lambda }(0)a_{\mathbf {k'} \lambda '}^{\dagger }(0)\right\rangle \\&={\frac {1}{4\pi }}\sum _{\mathbf {k} \lambda }\left({\frac {2\pi \hbar \omega _{k}}{V}}\right)\\&=\int _{0}^{\infty }dw\,\rho _{0}(\omega )\end{aligned}}} The important point of this is that the zero-point field energy HF does not affect the Heisenberg equation for akλ since it is a c-number or constant (i.e. an ordinary number rather than an operator) and commutes with akλ. We can therefore drop the zero-point field energy from the Hamiltonian, as is usually done. But the zero-point field re-emerges as the homogeneous solution for the field equation. A charged particle in the vacuum will therefore always see a zero-point field of infinite density. This is the origin of one of the infinities of quantum electrodynamics, and it cannot be eliminated by the trivial expedient dropping of the term Σkλ ⁠ħωk/2⁠ in the field Hamiltonian. The free field is in fact necessary for the formal consistency of the theory. In particular, it is necessary for the preservation of the commutation relations, which is required by the unitary of time evolution in quantum theory: [ z ( t ) , p z ( t ) ] = [ U † ( t ) z ( 0 ) U ( t ) , U † ( t ) p z ( 0 ) U ( t ) ] = U † ( t ) [ z ( 0 ) , p z ( 0 ) ] U ( t ) = i ℏ U † ( t ) U ( t ) = i ℏ {\displaystyle {\begin{aligned}\left[z(t),p_{z}(t)\right]&=\left[U^{\dagger }(t)z(0)U(t),U^{\dagger }(t)p_{z}(0)U(t)\right]\\&=U^{\dagger }(t)\left[z(0),p_{z}(0)\right]U(t)\\&=i\hbar U^{\dagger }(t)U(t)\\&=i\hbar \end{aligned}}} We can calculate [z(t),pz(t)] from the formal solution of the operator equation of motion x ¨ + ω 0 2 x − τ x . . . 
= e m E 0 ( t ) {\displaystyle \mathbf {\ddot {x}} +\omega _{0}^{2}\mathbf {x} -\tau \mathbf {\overset {...}{x}} ={\frac {e}{m}}\mathbf {E} _{0}(t)} Using the fact that [ a k λ ( 0 ) , a k ′ λ ′ † ( 0 ) ] = δ k k ′ 3 δ λ λ ′ {\displaystyle \left[a_{\mathbf {k} \lambda }(0),a_{\mathbf {k'} \lambda '}^{\dagger }(0)\right]=\delta _{\mathbf {kk'} }^{3}\delta _{\lambda \lambda '}} and that equal-time particle and field operators commute, we obtain: [ z ( t ) , p z ( t ) ] = [ z ( t ) , m z ˙ ( t ) ] + [ z ( t ) , e c A z ( t ) ] = [ z ( t ) , m z ˙ ( t ) ] = ( i ℏ e 2 2 π 2 m c 3 ) ( 8 π 3 ) ∫ 0 ∞ d ω ω 4 ( ω 2 − ω 0 2 ) 2 + τ 2 ω 6 {\displaystyle {\begin{aligned}[z(t),p_{z}(t)]&=\left[z(t),m{\dot {z}}(t)\right]+\left[z(t),{\frac {e}{c}}A_{z}(t)\right]\\&=\left[z(t),m{\dot {z}}(t)\right]\\&=\left({\frac {i\hbar e^{2}}{2\pi ^{2}mc^{3}}}\right)\left({\frac {8\pi }{3}}\right)\int _{0}^{\infty }{\frac {d\omega \,\omega ^{4}}{\left(\omega ^{2}-\omega _{0}^{2}\right)^{2}+\tau ^{2}\omega ^{6}}}\end{aligned}}} For the dipole oscillator under consideration it can be assumed that the radiative damping rate is small compared with the natural oscillation frequency, i.e., τω0 ≪ 1. Then the integrand above is sharply peaked at ω = ω0 and: [ z ( t ) , p z ( t ) ] ≈ 2 i ℏ e 2 3 π m c 3 ω 0 3 ∫ − ∞ ∞ d x x 2 + τ 2 ω 0 6 = ( 2 i ℏ e 2 ω 0 3 3 π m c 3 ) ( π τ ω 0 3 ) = i ℏ {\displaystyle {\begin{aligned}\left[z(t),p_{z}(t)\right]&\approx {\frac {2i\hbar e^{2}}{3\pi mc^{3}}}\omega _{0}^{3}\int _{-\infty }^{\infty }{\frac {dx}{x^{2}+\tau ^{2}\omega _{0}^{6}}}\\&=\left({\frac {2i\hbar e^{2}\omega _{0}^{3}}{3\pi mc^{3}}}\right)\left({\frac {\pi }{\tau \omega _{0}^{3}}}\right)\\&=i\hbar \end{aligned}}} The necessity of the vacuum field can also be appreciated by making the small damping approximation in x ¨ + ω 0 2 x − τ x . . . = e m E 0 ( t ) x ¨ ≈ − ω 0 2 x ( t ) x . . . ≈ − ω 0 2 x ˙ {\displaystyle {\begin{aligned}&\mathbf {\ddot {x}} +\omega _{0}^{2}\mathbf {x} -\tau \mathbf {\overset {...}{x}} ={\frac {e}{m}}\mathbf {E} _{0}(t)\\&\mathbf {\ddot {x}} \approx -\omega _{0}^{2}\mathbf {x} (t)&&\mathbf {\overset {...}{x}} \approx -\omega _{0}^{2}\mathbf {\dot {x}} \end{aligned}}} and x ¨ + τ ω 0 2 x ˙ + ω 0 2 x ≈ e m E 0 ( t ) {\displaystyle \mathbf {\ddot {x}} +\tau \omega _{0}^{2}\mathbf {\dot {x}} +\omega _{0}^{2}\mathbf {x} \approx {\frac {e}{m}}\mathbf {E} _{0}(t)} Without the free field E0(t) in this equation the operator x(t) would be exponentially damped, and commutators like [z(t),pz(t)] would approach zero for t ≫ 1/τω02. With the vacuum field included, however, the commutator is iħ at all times, as required by unitarity, and as we have just shown. A similar result is easily worked out for the case of a free particle instead of a dipole oscillator. What we have here is an example of a "fluctuation-dissipation relation". Generally speaking, if a system is coupled to a bath that can take energy from the system in an effectively irreversible way, then the bath must also cause fluctuations. The fluctuations and the dissipation go hand in hand; we cannot have one without the other. In the current example the coupling of a dipole oscillator to the electromagnetic field has a dissipative component, in the form of the radiation reaction field, and a fluctuation component, in the form of the zero-point (vacuum) field; given the existence of radiation reaction, the vacuum field must also exist in order to preserve the canonical commutation rule and all it entails.
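The statement that the oscillator's motion decays without the vacuum field but is sustained when the fluctuating field is included has a familiar classical analogue. The following Python sketch is only an illustrative stand-in, not the operator calculation above: a classical Langevin oscillator in which white noise plays the role of E0(t), with all parameter values chosen arbitrarily for the demonstration. It shows the same fluctuation-dissipation balance: with damping alone the mean energy decays to zero, while damping plus a fluctuating drive settles to a steady, non-zero value.

```python
import numpy as np

# Classical Langevin analogue (illustrative only): a damped oscillator with and
# without a white-noise drive standing in for the fluctuating vacuum field E0(t).
rng = np.random.default_rng(0)
omega0, gamma, D = 1.0, 0.05, 0.1      # frequency, damping rate, noise strength (arbitrary)
dt, steps = 2e-3, 200_000              # total time 400, i.e. many damping times 1/gamma

def mean_late_time_energy(with_noise: bool) -> float:
    x, v = 1.0, 0.0                    # start from an excited state
    energies = []
    for i in range(steps):
        kick = np.sqrt(2 * D * dt) * rng.standard_normal() if with_noise else 0.0
        v += (-gamma * v - omega0**2 * x) * dt + kick   # semi-implicit Euler step
        x += v * dt
        if i > steps // 2:             # discard the initial transient
            energies.append(0.5 * v**2 + 0.5 * omega0**2 * x**2)
    return float(np.mean(energies))

print("undriven:    ", mean_late_time_energy(False))   # ~0: the motion has damped away
print("noise-driven:", mean_late_time_energy(True))    # ~D/gamma: sustained by the drive
```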
The spectral density of the vacuum field is fixed by the form of the radiation reaction field, or vice versa: because the radiation reaction field varies with the third derivative of x, the spectral energy density of the vacuum field must be proportional to the third power of ω in order for [z(t),pz(t)] to hold. In the case of a dissipative force proportional to ẋ, by contrast, the fluctuation force must be proportional to ω {\displaystyle \omega } in order to maintain the canonical commutation relation. This relation between the form of the dissipation and the spectral density of the fluctuation is the essence of the fluctuation-dissipation theorem. The fact that the canonical commutation relation for a harmonic oscillator coupled to the vacuum field is preserved implies that the zero-point energy of the oscillator is preserved. it is easy to show that after a few damping times the zero-point motion of the oscillator is in fact sustained by the driving zero-point field. === Quantum chromodynamic vacuum === The QCD vacuum is the vacuum state of quantum chromodynamics (QCD). It is an example of a non-perturbative vacuum state, characterized by a non-vanishing condensates such as the gluon condensate and the quark condensate in the complete theory which includes quarks. The presence of these condensates characterizes the confined phase of quark matter. In technical terms, gluons are vector gauge bosons that mediate strong interactions of quarks in quantum chromodynamics (QCD). Gluons themselves carry the color charge of the strong interaction. This is unlike the photon, which mediates the electromagnetic interaction but lacks an electric charge. Gluons therefore participate in the strong interaction in addition to mediating it, making QCD significantly harder to analyze than QED (quantum electrodynamics) as it deals with nonlinear equations to characterize such interactions. === Higgs field === The Standard Model hypothesises a field called the Higgs field (symbol: ϕ), which has the unusual property of a non-zero amplitude in its ground state (zero-point) energy after renormalization; i.e., a non-zero vacuum expectation value. It can have this effect because of its unusual "Mexican hat" shaped potential whose lowest "point" is not at its "centre". Below a certain extremely high energy level the existence of this non-zero vacuum expectation spontaneously breaks electroweak gauge symmetry which in turn gives rise to the Higgs mechanism and triggers the acquisition of mass by those particles interacting with the field. The Higgs mechanism occurs whenever a charged field has a vacuum expectation value. This effect occurs because scalar field components of the Higgs field are "absorbed" by the massive bosons as degrees of freedom, and couple to the fermions via Yukawa coupling, thereby producing the expected mass terms. The expectation value of ϕ0 in the ground state (the vacuum expectation value or VEV) is then ⟨ϕ0⟩ = ⁠v/√2⁠, where v = ⁠|μ|/√λ⁠. The measured value of this parameter is approximately 246 GeV/c2. It has units of mass, and is the only free parameter of the Standard Model that is not a dimensionless number. The Higgs mechanism is a type of superconductivity which occurs in the vacuum. It occurs when all of space is filled with a sea of particles which are charged and thus the field has a nonzero vacuum expectation value. 
Interaction with the vacuum energy filling the space prevents certain forces from propagating over long distances (as it does in a superconducting medium; e.g., in the Ginzburg–Landau theory). == Experimental observations == Zero-point energy has many observed physical consequences. It is important to note that zero-point energy is not merely an artifact of mathematical formalism that can, for instance, be dropped from a Hamiltonian by redefining the zero of energy, or by arguing that it is a constant and therefore has no effect on the Heisenberg equations of motion, without further consequence. Indeed, such a treatment could create a problem in a deeper, as yet undiscovered, theory. For instance, in general relativity the zero of energy (i.e. the energy density of the vacuum) contributes to a cosmological constant of the type introduced by Einstein in order to obtain static solutions to his field equations. The zero-point energy density of the vacuum, due to all quantum fields, is extremely large, even when we cut off the largest allowable frequencies based on plausible physical arguments. It implies a cosmological constant larger than the limits imposed by observation by about 120 orders of magnitude. This "cosmological constant problem" remains one of the greatest unsolved mysteries of physics. === Casimir effect === A phenomenon that is commonly presented as evidence for the existence of zero-point energy in vacuum is the Casimir effect, proposed in 1948 by Dutch physicist Hendrik Casimir, who considered the quantized electromagnetic field between a pair of grounded, neutral metal plates. The vacuum energy contains contributions from all wavelengths, except those excluded by the spacing between plates. As the plates draw together, more wavelengths are excluded and the vacuum energy decreases. The decrease in energy means there must be a force doing work on the plates as they move. Early experimental tests from the 1950s onwards gave positive results showing the force was real, but other external factors could not be ruled out as the primary cause, with the range of experimental error sometimes being nearly 100%. That changed in 1997, when Lamoreaux conclusively showed that the Casimir force was real. Results have been repeatedly replicated since then. In 2009, Munday et al. published experimental proof that (as predicted in 1961) the Casimir force could be repulsive as well as attractive. Repulsive Casimir forces could allow quantum levitation of objects in a fluid and lead to a new class of switchable nanoscale devices with ultra-low static friction. An interesting hypothetical side effect of the Casimir effect is the Scharnhorst effect, in which light signals travel slightly faster than c between two closely spaced conducting plates. === Lamb shift === The quantum fluctuations of the electromagnetic field have important physical consequences. In addition to the Casimir effect, they also lead to a splitting between the two energy levels 2S1/2 and 2P1/2 (in term symbol notation) of the hydrogen atom, which was not predicted by the Dirac equation, according to which these states should have the same energy. Charged particles can interact with the fluctuations of the quantized vacuum field, leading to slight shifts in energy; this effect is called the Lamb shift. The shift of about 4.38×10−6 eV is roughly 10−7 of the difference between the energies of the 1s and 2s levels, and amounts to 1,058 MHz in frequency units.
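The unit conversions quoted in that figure are easy to verify. A minimal Python check is given below, using the CODATA constants from scipy; the 10.2 eV spacing between the 1s and 2s levels is a standard textbook value assumed here for the comparison, not a figure taken from the text above.

```python
from scipy.constants import e, h

delta_E = 4.38e-6 * e      # the quoted Lamb shift, converted from eV to joules
print(delta_E / h / 1e6)   # ~1059 MHz, matching the quoted ~1,058 MHz
print(4.38e-6 / 10.2)      # ~4.3e-7, i.e. roughly 1e-7 of the 1s-2s energy difference
```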
A small part of this shift (27 MHz ≈ 3%) arises not from fluctuations of the electromagnetic field, but from fluctuations of the electron–positron field. The creation of (virtual) electron–positron pairs has the effect of screening the Coulomb field and acts as a vacuum dielectric constant. This effect is much more important in muonic atoms. === Fine-structure constant === Taking ħ (the Planck constant divided by 2π), c (the speed of light), and e2 = q2e/4πε0 (the electromagnetic coupling constant, i.e. a measure of the strength of the electromagnetic force, where qe is the absolute value of the electronic charge and ε 0 {\displaystyle \varepsilon _{0}} is the vacuum permittivity), we can form a dimensionless quantity called the fine-structure constant: α = e 2 ℏ c = q e 2 4 π ε 0 ℏ c ≈ 1 137 {\displaystyle \alpha ={\frac {e^{2}}{\hbar c}}={\frac {q_{e}^{2}}{4\pi \varepsilon _{0}\hbar c}}\approx {\frac {1}{137}}} The fine-structure constant is the coupling constant of quantum electrodynamics (QED), determining the strength of the interaction between electrons and photons. It turns out that the fine-structure constant is not really a constant at all, owing to the zero-point energy fluctuations of the electron–positron field. The quantum fluctuations caused by zero-point energy have the effect of screening electric charges: owing to (virtual) electron–positron pair production, the charge of the particle measured far from the particle is far smaller than the charge measured when close to it. The Heisenberg inequality, where ħ = h/2π and Δx, Δp are the standard deviations of position and momentum, states that: Δ x Δ p ≥ 1 2 ℏ {\displaystyle \Delta _{x}\Delta _{p}\geq {\frac {1}{2}}\hbar } It means that a short distance implies a large momentum and therefore high energy, i.e. particles of high energy must be used to explore short distances. QED concludes that the fine-structure constant is an increasing function of energy. It has been shown that at energies of the order of the Z0 boson rest energy, mzc2 ≈ 90 GeV: α ≈ 1 129 {\displaystyle \alpha \approx {\frac {1}{129}}} rather than the low-energy α ≈ 1/137. The renormalization procedure of eliminating zero-point energy infinities allows the choice of an arbitrary energy (or distance) scale for defining α. All in all, α depends on the energy scale characteristic of the process under study, and also on details of the renormalization procedure. The energy dependence of α has been observed for several years now in precision experiments in high-energy physics.
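The low-energy value quoted above can be reproduced directly from the definition using CODATA constants; the short Python check below is illustrative only.

```python
from math import pi
from scipy.constants import hbar, c, e, epsilon_0

alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)   # the definition given above
print(alpha, 1 / alpha)                          # ~0.0072974, ~137.04
```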
=== Vacuum birefringence === In the presence of strong electrostatic fields it is predicted that virtual particles become separated from the vacuum state and form real matter. The fact that electromagnetic radiation can be transformed into matter and vice versa leads to fundamentally new features in quantum electrodynamics. One of the most important consequences is that, even in the vacuum, the Maxwell equations have to be replaced by more complicated formulas. In general, it will not be possible to separate processes in the vacuum from the processes involving matter, since electromagnetic fields can create matter if the field fluctuations are strong enough. This leads to highly complex nonlinear interactions – gravity will have an effect on the light at the same time as the light has an effect on gravity. These effects were first predicted by Werner Heisenberg and Hans Heinrich Euler in 1936 and independently the same year by Victor Weisskopf, who stated: "The physical properties of the vacuum originate in the "zero-point energy" of matter, which also depends on absent particles through the external field strengths and therefore contributes an additional term to the purely Maxwellian field energy". Thus strong magnetic fields vary the energy contained in the vacuum. The scale above which the electromagnetic field is expected to become nonlinear is known as the Schwinger limit. At this point the vacuum has all the properties of a birefringent medium, thus in principle a rotation of the polarization frame (the Faraday effect) can be observed in empty space. Both Einstein's theories of special and general relativity state that light should pass freely through a vacuum without being altered, a principle known as Lorentz invariance. Yet, in theory, large nonlinear self-interaction of light due to quantum fluctuations should lead to this principle being measurably violated if the interactions are strong enough. Nearly all theories of quantum gravity predict that Lorentz invariance is not an exact symmetry of nature. It is predicted that the speed at which light travels through the vacuum depends on its direction, polarization and the local strength of the magnetic field. There have been a number of inconclusive results which claim to show evidence of a Lorentz violation by finding a rotation of the polarization plane of light coming from distant galaxies. The first concrete evidence for vacuum birefringence was published in 2017, when a team of astronomers looked at the light coming from the star RX J1856.5-3754, the closest discovered neutron star to Earth. Roberto Mignani at the National Institute for Astrophysics in Milan, who led the team of astronomers, has commented that "When Einstein came up with the theory of general relativity 100 years ago, he had no idea that it would be used for navigational systems. The consequences of this discovery probably will also have to be realised on a longer timescale." The team found that visible light from the star had undergone linear polarisation of around 16%. If the birefringence had been caused by light passing through interstellar gas or plasma, the effect should have been no more than 1%. Definitive proof would require repeating the observation at other wavelengths and on other neutron stars. At X-ray wavelengths the polarization from the quantum fluctuations should be near 100%. Although no telescope currently exists that can make such measurements, there are several proposed X-ray telescopes that may soon be able to verify the result conclusively, such as China's Hard X-ray Modulation Telescope (HXMT) and NASA's Imaging X-ray Polarimetry Explorer (IXPE). == Speculated involvement in other phenomena == === Dark energy === In the late 1990s it was discovered that very distant supernovae were dimmer than expected, suggesting that the universe's expansion was accelerating rather than slowing down. This revived discussion that Einstein's cosmological constant, long disregarded by physicists as being equal to zero, was in fact some small positive value. This would indicate that empty space exerted some form of negative pressure or energy. There is no natural candidate for what might cause what has been called dark energy. The current best guess is that it is the zero-point energy of the vacuum, although this guess is known to be off by about 120 orders of magnitude.
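The size of that mismatch can be sketched numerically using the vacuum spectral density ρ0(ω) = ħω3/2π2c3 given earlier. The short Python estimate below is hedged: the Planck-frequency cutoff and the observed dark-energy density of roughly 6×10−10 J/m3 are assumptions made for this illustration, not figures taken from the text.

```python
from math import pi, log10
from scipy.constants import hbar, c, G

omega_cut = (c**5 / (hbar * G)) ** 0.5              # Planck angular frequency, ~1.9e43 rad/s (assumed cutoff)
rho_zpf = hbar * omega_cut**4 / (8 * pi**2 * c**3)  # integral of rho_0(omega) up to the cutoff
rho_obs = 6e-10                                     # rough observed dark-energy density, J/m^3 (assumed)

print(f"zero-point estimate: {rho_zpf:.1e} J/m^3")
print(f"discrepancy: ~10^{log10(rho_zpf / rho_obs):.0f}")   # ~10^121, consistent with "about 120 orders of magnitude"
```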
The European Space Agency's Euclid telescope, launched on 1 July 2023, will map galaxies up to 10 billion light years away. By seeing how dark energy influences their arrangement and shape, the mission will allow scientists to see if the strength of dark energy has changed. If dark energy is found to vary throughout time it would indicate it is due to quintessence, where observed acceleration is due to the energy of a scalar field, rather than the cosmological constant. No evidence of quintessence is yet available, but it has not been ruled out either. It generally predicts a slightly slower acceleration of the expansion of the universe than the cosmological constant. Some scientists think that the best evidence for quintessence would come from violations of Einstein's equivalence principle and variation of the fundamental constants in space or time. Scalar fields are predicted by the Standard Model of particle physics and string theory, but an analogous problem to the cosmological constant problem (or the problem of constructing models of cosmological inflation) occurs: renormalization theory predicts that scalar fields should acquire large masses again due to zero-point energy. === Cosmic inflation === Cosmic inflation is phase of accelerated cosmic expansion just after the Big Bang. It explains the origin of the large-scale structure of the cosmos. It is believed quantum vacuum fluctuations caused by zero-point energy arising in the microscopic inflationary period, later became magnified to a cosmic size, becoming the gravitational seeds for galaxies and structure in the Universe (see galaxy formation and evolution and structure formation). Many physicists also believe that inflation explains why the Universe appears to be the same in all directions (isotropic), why the cosmic microwave background radiation is distributed evenly, why the Universe is flat, and why no magnetic monopoles have been observed. The mechanism for inflation is unclear, it is similar in effect to dark energy but is a far more energetic and short lived process. As with dark energy the best explanation is some form of vacuum energy arising from quantum fluctuations. It may be that inflation caused baryogenesis, the hypothetical physical processes that produced an asymmetry (imbalance) between baryons and antibaryons produced in the very early universe, but this is far from certain. === Cosmology === Paul S. Wesson examined the cosmological implications of assuming that zero-point energy is real. Among numerous difficulties, general relativity requires that such energy not gravitate, so it cannot be similar to electromagnetic radiation. == Alternative theories == There has been a long debate over the question of whether zero-point fluctuations of quantized vacuum fields are "real" i.e. do they have physical effects that cannot be interpreted by an equally valid alternative theory? Schwinger, in particular, attempted to formulate QED without reference to zero-point fluctuations via his "source theory". From such an approach it is possible to derive the Casimir Effect without reference to a fluctuating field. Such a derivation was first given by Schwinger (1975) for a scalar field, and then generalized to the electromagnetic case by Schwinger, DeRaad, and Milton (1978). in which they state "the vacuum is regarded as truly a state with all physical properties equal to zero". 
Jaffe (2005) has highlighted a similar approach in deriving the Casimir effect stating "the concept of zero-point fluctuations is a heuristic and calculational aid in the description of the Casimir effect, but not a necessity in QED." Milonni has shown the necessity of the vacuum field for the formal consistency of QED. Modern physics does not know any better way to construct gauge-invariant, renormalizable theories than with zero-point energy and they would seem to be a necessity for any attempt at a unified theory. Nevertheless, as pointed out by Jaffe, "no known phenomenon, including the Casimir effect, demonstrates that zero point energies are “real”" == Chaotic and emergent phenomena == The mathematical models used in classical electromagnetism, quantum electrodynamics (QED) and the Standard Model all view the electromagnetic vacuum as a linear system with no overall observable consequence. For example, in the case of the Casimir effect, Lamb shift, and so on these phenomena can be explained by alternative mechanisms other than action of the vacuum by arbitrary changes to the normal ordering of field operators. See the alternative theories section. This is a consequence of viewing electromagnetism as a U(1) gauge theory, which topologically does not allow the complex interaction of a field with and on itself. In higher symmetry groups and in reality, the vacuum is not a calm, randomly fluctuating, largely immaterial and passive substance, but at times can be viewed as a turbulent virtual plasma that can have complex vortices (i.e. solitons vis-à-vis particles), entangled states and a rich nonlinear structure. There are many observed nonlinear physical electromagnetic phenomena such as Aharonov–Bohm (AB) and Altshuler–Aronov–Spivak (AAS) effects, Berry, Aharonov–Anandan, Pancharatnam and Chiao–Wu phase rotation effects, Josephson effect, Quantum Hall effect, the De Haas–Van Alphen effect, the Sagnac effect and many other physically observable phenomena which would indicate that the electromagnetic potential field has real physical meaning rather than being a mathematical artifact and therefore an all encompassing theory would not confine electromagnetism as a local force as is currently done, but as a SU(2) gauge theory or higher geometry. Higher symmetries allow for nonlinear, aperiodic behaviour which manifest as a variety of complex non-equilibrium phenomena that do not arise in the linearised U(1) theory, such as multiple stable states, symmetry breaking, chaos and emergence. What are called Maxwell's equations today, are in fact a simplified version of the original equations reformulated by Heaviside, FitzGerald, Lodge and Hertz. The original equations used Hamilton's more expressive quaternion notation, a kind of Clifford algebra, which fully subsumes the standard Maxwell vectorial equations largely used today. In the late 1880s there was a debate over the relative merits of vector analysis and quaternions. According to Heaviside the electromagnetic potential field was purely metaphysical, an arbitrary mathematical fiction, that needed to be "murdered". It was concluded that there was no need for the greater physical insights provided by the quaternions if the theory was purely local in nature. Local vector analysis has become the dominant way of using Maxwell's equations ever since. 
However, this strictly vectorial approach has led to a restrictive topological understanding in some areas of electromagnetism; for example, a full understanding of the energy transfer dynamics in Tesla's oscillator-shuttle-circuit can only be achieved in quaternionic algebra or higher SU(2) symmetries. It has often been argued that quaternions are not compatible with special relativity, but multiple papers have shown ways of incorporating relativity. A good example of nonlinear electromagnetics is in high-energy dense plasmas, where vortical phenomena occur which seemingly violate the second law of thermodynamics by increasing the energy gradient within the electromagnetic field and violate Maxwell's laws by creating ion currents which capture and concentrate their own and surrounding magnetic fields. In particular, the Lorentz force law, which elaborates Maxwell's equations, is violated by these force-free vortices. These apparent violations are due to the fact that the traditional conservation laws in classical and quantum electrodynamics (QED) only display linear U(1) symmetry (in particular, by the extended Noether theorem, conservation laws such as the laws of thermodynamics need not always apply to dissipative systems, which are expressed in gauges of higher symmetry). The second law of thermodynamics states that in a closed linear system entropy flow can only be positive (or exactly zero at the end of a cycle). However, negative entropy (i.e. increased order, structure or self-organisation) can spontaneously appear in an open nonlinear thermodynamic system that is far from equilibrium, so long as this emergent order accelerates the overall flow of entropy in the total system. The 1977 Nobel Prize in Chemistry was awarded to thermodynamicist Ilya Prigogine for his theory of dissipative systems that described this notion. Prigogine described the principle as "order through fluctuations" or "order out of chaos". It has been argued by some that all emergent order in the universe, from galaxies, solar systems, planets, weather, complex chemistry and evolutionary biology to even consciousness, technology and civilizations, consists of examples of thermodynamic dissipative systems; nature has naturally selected these structures to accelerate entropy flow within the universe to an ever-increasing degree. For example, it has been estimated that the human body is 10,000 times more effective at dissipating energy per unit of mass than the sun. One may query what this has to do with zero-point energy. Given the complex and adaptive behaviour that arises from nonlinear systems, considerable attention in recent years has gone into studying a new class of phase transitions which occur at absolute zero temperature. These are quantum phase transitions which are driven by EM field fluctuations as a consequence of zero-point energy. A good example of a spontaneous phase transition that is attributed to zero-point fluctuations can be found in superconductors. Superconductivity is one of the best known empirically quantified macroscopic electromagnetic phenomena whose basis is recognised to be quantum mechanical in origin. The behaviour of the electric and magnetic fields under superconductivity is governed by the London equations. However, it has been questioned in a series of journal articles whether the quantum mechanically canonised London equations can be given a purely classical derivation. 
Bostick, for instance, has claimed to show that the London equations do indeed have a classical origin that applies to superconductors and to some collisionless plasmas as well. In particular, it has been asserted that the Beltrami vortices in the plasma focus display the same paired flux-tube morphology as Type II superconductors. Others have also pointed out this connection; Fröhlich has shown that the hydrodynamic equations of compressible fluids, together with the London equations, lead to a macroscopic parameter ( μ {\displaystyle \mu } = electric charge density / mass density), without involving either quantum phase factors or the Planck constant. In essence, it has been asserted that Beltrami plasma vortex structures are able to at least simulate the morphology of Type I and Type II superconductors. This occurs because the "organised" dissipative energy of the vortex configuration comprising the ions and electrons far exceeds the "disorganised" dissipative random thermal energy. The transition from disorganised fluctuations to organised helical structures is a phase transition involving a change in the condensate's energy (i.e. the ground state or zero-point energy) but without any associated rise in temperature. This is an example of zero-point energy having multiple stable states (see Quantum phase transition, Quantum critical point, Topological degeneracy, Topological order) where the overall system structure is independent of a reductionist or deterministic view, so that "classical" macroscopic order can also causally affect quantum phenomena. Furthermore, the pair production of Beltrami vortices has been compared to the morphology of pair production of virtual particles in the vacuum. == Purported applications == Physicists overwhelmingly reject any possibility that the zero-point energy field can be exploited to obtain useful energy (work) or uncompensated momentum; such efforts are seen as tantamount to perpetual motion machines. Nevertheless, the allure of free energy has motivated such research, usually falling in the category of fringe science. As long ago as 1889 (before quantum theory or the discovery of zero-point energy) Nikola Tesla proposed that useful energy could be obtained from free space, or what was assumed at that time to be an all-pervasive aether. Others have since claimed to exploit zero-point or vacuum energy, with a large amount of pseudoscientific literature causing ridicule around the subject. Despite rejection by the scientific community, harnessing zero-point energy remains a subject of research, particularly in the US, where it has attracted the attention of major aerospace/defence contractors and the U.S. Department of Defense, as well as in China, Germany, Russia and Brazil. === Casimir batteries and engines === A common assumption is that the Casimir force is of little practical use; the argument is made that the only way to actually gain energy from the two plates is to allow them to come together (getting them apart again would then require more energy), and therefore it is a one-use-only tiny force in nature. In 1984 Robert Forward published work showing how a "vacuum-fluctuation battery" could be constructed; the battery can be recharged by making the electrical forces slightly stronger than the Casimir force to reexpand the plates. In 1999, Pinto, a former scientist at NASA's Jet Propulsion Laboratory at Caltech in Pasadena, published in Physical Review his thought experiment (Gedankenexperiment) for a "Casimir engine". 
The paper showed that continuous positive net exchange of energy from the Casimir effect was possible, even stating in the abstract "In the event of no other alternative explanations, one should conclude that major technological advances in the area of endless, by-product free-energy production could be achieved." Garret Moddel at the University of Colorado has highlighted that he believes such devices hinge on the assumption that the Casimir force is a nonconservative force; he argues that there is sufficient evidence (e.g. the analysis by Scandurra (2001)) to say that the Casimir effect is a conservative force, and therefore, even though such an engine can exploit the Casimir force for useful work, it cannot produce more output energy than has been input into the system. In 2008, DARPA solicited research proposals in the area of Casimir Effect Enhancement (CEE). The goal of the program is to develop new methods to control and manipulate attractive and repulsive forces at surfaces based on engineering of the Casimir force. A 2008 patent by Haisch and Moddel details a device that is able to extract power from zero-point fluctuations using a gas that circulates through a Casimir cavity. A published test of this concept by Moddel was performed in 2012 and seemed to give excess energy that could not be attributed to another source. However, it has not been conclusively shown to be from zero-point energy, and the theory requires further investigation. === Single heat baths === In 1951 Callen and Welton proved the quantum fluctuation-dissipation theorem (FDT), which was originally formulated in classical form by Nyquist (1928) as an explanation for observed Johnson noise in electric circuits. The fluctuation-dissipation theorem showed that when something dissipates energy in an effectively irreversible way, a connected heat bath must also fluctuate. The fluctuations and the dissipation go hand in hand; it is impossible to have one without the other. The implication of FDT is that the vacuum could be treated as a heat bath coupled to a dissipative force, and as such energy could, in part, be extracted from the vacuum for potentially useful work. Such a theory has met with resistance: Macdonald (1962) and Harris (1971) claimed that extracting power from the zero-point energy is impossible, so FDT could not be true. Grau and Kleen (1982) and Kleen (1986) argued that the Johnson noise of a resistor connected to an antenna must satisfy Planck's thermal radiation formula, thus the noise must be zero at zero temperature and FDT must be invalid. Kiss (1988) pointed out that the existence of the zero-point term may indicate that there is a renormalization problem (i.e., a mathematical artifact) producing an unphysical term that is not actually present in measurements (in analogy with renormalization problems of ground states in quantum electrodynamics). Later, Abbott et al. (1996) arrived at a different but unclear conclusion that "zero-point energy is infinite thus it should be renormalized but not the 'zero-point fluctuations'". Despite such criticism, FDT has been shown to be true experimentally under certain quantum, non-classical conditions. Zero-point fluctuations can, and do, contribute towards systems which dissipate energy. A paper by Armen Allahverdyan and Theo Nieuwenhuizen in 2000 showed the feasibility of extracting zero-point energy for useful work from a single bath, without contradicting the laws of thermodynamics, by exploiting certain quantum mechanical properties. 
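The competing noise formulas at the heart of this dispute are easy to compare numerically. Below is a minimal sketch (the resistor value and the 1 THz probe frequency are illustrative, not from the source) contrasting Nyquist's classical Johnson-noise result with the Callen–Welton quantum form, whose zero-point term hf/2 keeps the spectral density finite as the temperature goes to zero:

```python
import numpy as np

h = 6.626e-34   # Planck constant, J s
k = 1.381e-23   # Boltzmann constant, J/K

def johnson_noise_quantum(R, f, T):
    """Single-sided spectral density S_V(f) = 4*R*h*f*(1/(exp(hf/kT)-1) + 1/2),
    the textbook Callen-Welton (quantum FDT) form, in V^2/Hz."""
    if T == 0.0:
        return 4.0 * R * h * f * 0.5            # only the zero-point term survives
    x = h * f / (k * T)
    return 4.0 * R * h * f * (1.0 / np.expm1(x) + 0.5)

def johnson_noise_classical(R, T):
    """Nyquist's classical result 4*k*T*R, independent of frequency."""
    return 4.0 * k * T * R

R, f = 50.0, 1.0e12    # a 50 ohm resistor probed at 1 THz (illustrative values)
for T in (300.0, 4.0, 0.0):
    print(T, johnson_noise_quantum(R, f, T), johnson_noise_classical(R, T))
```

At 300 K the two expressions nearly agree, while at T = 0 the quantum form retains a finite value; that residual term is precisely what the authors cited above dispute.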
There have been a growing number of papers showing that in some instances the classical laws of thermodynamics, such as limits on the Carnot efficiency, can be violated by exploiting negative entropy of quantum fluctuations. Despite efforts to reconcile quantum mechanics and thermodynamics over the years, their compatibility is still an open fundamental problem. The full extent to which quantum properties can alter classical thermodynamic bounds is unknown. === Space travel and gravitational shielding === The use of zero-point energy for space travel is speculative and does not form part of the mainstream scientific consensus. A complete quantum theory of gravitation (that would deal with the role of quantum phenomena like zero-point energy) does not yet exist. Speculative papers explaining a relationship between zero-point energy and gravitational shielding effects have been proposed, but the interaction (if any) is not yet fully understood. According to the general theory of relativity, rotating matter can generate a new force of nature, known as the gravitomagnetic interaction, whose intensity is proportional to the rate of spin. In certain conditions the gravitomagnetic field can be repulsive. In neutron stars, for example, it can produce a gravitational analogue of the Meissner effect, but the force produced in such an example is theorized to be exceedingly weak. In 1963 Robert Forward, a physicist and aerospace engineer at Hughes Research Laboratories, published a paper showing how, within the framework of general relativity, "anti-gravitational" effects might be achieved. Since all atoms have spin, gravitational permeability may be able to differ from material to material. A strong toroidal gravitational field that acts against the force of gravity could be generated by materials that have nonlinear properties that enhance time-varying gravitational fields. Such an effect would be analogous to the nonlinear electromagnetic permeability of iron, making it an effective core (i.e. the doughnut of iron) in a transformer, whose properties are dependent on magnetic permeability. In 1966 DeWitt was the first to identify the significance of gravitational effects in superconductors. DeWitt demonstrated that a magnetic-type gravitational field must result in the presence of fluxoid quantization. In 1983, DeWitt's work was substantially expanded by Ross. From 1971 to 1974 Henry William Wallace, a scientist at GE Aerospace, was issued three patents. Wallace used DeWitt's theory to develop an experimental apparatus for generating and detecting a secondary gravitational field, which he named the kinemassic field (now better known as the gravitomagnetic field). In his three patents, Wallace describes three different methods used for detection of the gravitomagnetic field – change in the motion of a body on a pivot, detection of a transverse voltage in a semiconductor crystal, and a change in the specific heat of a crystal material having spin-aligned nuclei. There are no publicly available independent tests verifying Wallace's devices. Such an effect, if any, would be small. Referring to Wallace's patents, a New Scientist article in 1980 stated "Although the Wallace patents were initially ignored as cranky, observers believe that his invention is now under serious but secret investigation by the military authorities in the USA. The military may now regret that the patents have already been granted and so are available for anyone to read." 
A further reference to Wallace's patents occurs in an electric propulsion study prepared for the Astronautics Laboratory at Edwards Air Force Base, which states: "The patents are written in a very believable style which include part numbers, sources for some components, and diagrams of data. Attempts were made to contact Wallace using patent addresses and other sources but he was not located nor is there a trace of what became of his work. The concept can be somewhat justified on general relativistic grounds since rotating frames of time varying fields are expected to emit gravitational waves." In 1986 the U.S. Air Force's then Rocket Propulsion Laboratory (RPL) at Edwards Air Force Base solicited "Non Conventional Propulsion Concepts" under a small business research and innovation program. One of the six areas of interest was "Esoteric energy sources for propulsion, including the quantum dynamic energy of vacuum space..." In the same year BAE Systems launched "Project Greenglow" to provide a "focus for research into novel propulsion systems and the means to power them". In 1988 Kip Thorne et al. published work showing how traversable wormholes can exist in spacetime only if they are threaded by quantum fields generated by some form of exotic matter that has negative energy. In 1993 Scharnhorst and Barton showed that the speed of a photon will be increased if it travels between two Casimir plates, an example of negative energy. In the most general sense, the exotic matter needed to create wormholes would share the repulsive properties of the inflationary energy, dark energy or zero-point radiation of the vacuum. In 1992 Evgeny Podkletnov published a heavily debated journal article claiming a specific type of rotating superconductor could shield gravitational force. Independently of this, from 1991 to 1993 Ning Li and Douglas Torr published a number of articles about gravitational effects in superconductors. One finding they derived is that the source of gravitomagnetic flux in a type II superconductor material is due to the spin alignment of the lattice ions. Quoting from their third paper: "It is shown that the coherent alignment of lattice ion spins will generate a detectable gravitomagnetic field, and in the presence of a time-dependent applied magnetic vector potential field, a detectable gravitoelectric field." The claimed size of the generated force has been disputed by some but defended by others. In 1997 Li published a paper attempting to replicate Podkletnov's results and showed the effect was very small, if it existed at all. Li is reported to have left the University of Alabama in 1999 to found the company AC Gravity LLC. AC Gravity was awarded a U.S. Department of Defense grant for $448,970 in 2001 to continue anti-gravity research. The grant period ended in 2002 but no results from this research were made public. In 2002 Phantom Works, Boeing's advanced research and development facility in Seattle, approached Evgeny Podkletnov directly. Phantom Works was blocked by Russian technology transfer controls. At this time Lieutenant General George Muellner, the outgoing head of the Boeing Phantom Works, confirmed that attempts by Boeing to work with Podkletnov had been blocked by the Russian government, also commenting that "The physical principles – and Podkletnov's device is not the only one – appear to be valid... There is basic science there. They're not breaking the laws of physics. 
The issue is whether the science can be engineered into something workable". Froning and Roach (2002) put forward a paper that builds on the work of Puthoff, Haisch and Alcubierre. They used fluid dynamic simulations to model the interaction of a vehicle (like that proposed by Alcubierre) with the zero-point field. Vacuum field perturbations are simulated by fluid field perturbations and the aerodynamic resistance of viscous drag exerted on the interior of the vehicle is compared to the Lorentz force exerted by the zero-point field (a Casimir-like force is exerted on the exterior by unbalanced zero-point radiation pressures). They find that the negative energy required for an Alcubierre drive is optimized when the vehicle is a saucer shape with toroidal electromagnetic fields. The EM fields distort the vacuum field perturbations surrounding the craft sufficiently to affect the permeability and permittivity of space. In 2009, Giorgio Fontana and Bernd Binder presented a new method to potentially extract the zero-point energy of the electromagnetic field and nuclear forces in the form of gravitational waves. In the spheron model of the nucleus, proposed by the two-time Nobel laureate Linus Pauling, dineutrons are among the components of this structure. Similarly to a dumbbell put in a suitable rotational state, but with nuclear mass density, dineutrons are nearly ideal sources of gravitational waves at X-ray and gamma-ray frequencies. The dynamical interplay, mediated by nuclear forces, between the electrically neutral dineutrons and the electrically charged core nucleus is the fundamental mechanism by which nuclear vibrations can be converted to a rotational state of dineutrons with emission of gravitational waves. Gravity and gravitational waves are well described by general relativity, which is not a quantum theory; this implies that there is no zero-point energy for gravity in this theory, and therefore dineutrons will emit gravitational waves like any other known source of gravitational waves. In Fontana and Binder's paper, nuclear species with dynamical instabilities, related to the zero-point energy of the electromagnetic field and nuclear forces, and possessing dineutrons, will emit gravitational waves. In experimental physics this approach is still unexplored. In 2014 NASA's Eagleworks Laboratories announced that they had successfully validated the use of a Quantum Vacuum Plasma Thruster which makes use of the Casimir effect for propulsion. In 2016 a scientific paper by the team of NASA scientists passed peer review for the first time. The paper suggests that the zero-point field acts as a pilot wave and that the thrust may be due to particles pushing off the quantum vacuum. While peer review doesn't guarantee that a finding or observation is valid, it does indicate that independent scientists looked over the experimental setup, results, and interpretation, that they could not find any obvious errors in the methodology, and that they found the results reasonable. In the paper, the authors identify and discuss nine potential sources of experimental errors, including rogue air currents, leaky electromagnetic radiation, and magnetic interactions. Not all of them could be completely ruled out, and further peer-reviewed experimentation is needed in order to rule these potential errors out. 
Wikipedia/Zero_point_energy
The Deutsch–Jozsa algorithm is a deterministic quantum algorithm proposed by David Deutsch and Richard Jozsa in 1992 with improvements by Richard Cleve, Artur Ekert, Chiara Macchiavello, and Michele Mosca in 1998. Although of little practical use, it is one of the first examples of a quantum algorithm that is exponentially faster than any possible deterministic classical algorithm. The Deutsch–Jozsa problem is specifically designed to be easy for a quantum algorithm and hard for any deterministic classical algorithm. It is a black box problem that can be solved efficiently by a quantum computer with no error, whereas a deterministic classical computer would need an exponential number of queries to the black box to solve the problem. More formally, it yields an oracle relative to which EQP, the class of problems that can be solved exactly in polynomial time on a quantum computer, and P are different. Since the problem is easy to solve on a probabilistic classical computer, it does not yield an oracle separation with BPP, the class of problems that can be solved with bounded error in polynomial time on a probabilistic classical computer. Simon's problem is an example of a problem that yields an oracle separation between BQP and BPP. == Problem statement == In the Deutsch–Jozsa problem, we are given a black box quantum computer known as an oracle that implements some function: f : { 0 , 1 } n → { 0 , 1 } {\displaystyle f\colon \{0,1\}^{n}\to \{0,1\}} The function takes n-bit binary values as input and produces either a 0 or a 1 as output for each such value. We are promised that the function is either constant (0 on all inputs or 1 on all inputs) or balanced (1 for exactly half of the input domain and 0 for the other half). The task then is to determine if f {\displaystyle f} is constant or balanced by using the oracle. == Classical solution == For a conventional deterministic algorithm where n {\displaystyle n} is the number of bits, 2 n − 1 + 1 {\displaystyle 2^{n-1}+1} evaluations of f {\displaystyle f} will be required in the worst case. To prove that f {\displaystyle f} is constant, just over half the set of inputs must be evaluated and their outputs found to be identical (because the function is guaranteed to be either balanced or constant, not somewhere in between). The best case occurs where the function is balanced and the first two output values are different. For a conventional randomized algorithm, a constant k {\displaystyle k} evaluations of the function suffices to produce the correct answer with a high probability (failing with probability ϵ ≤ 1 / 2 k {\displaystyle \epsilon \leq 1/2^{k}} with k ≥ 1 {\displaystyle k\geq 1} ). However, k = 2 n − 1 + 1 {\displaystyle k=2^{n-1}+1} evaluations are still required if we want an answer that has no possibility of error. The Deutsch-Jozsa quantum algorithm produces an answer that is always correct with a single evaluation of f {\displaystyle f} . == History == The Deutsch–Jozsa algorithm generalizes earlier (1985) work by David Deutsch, which provided a solution for the simple case where n = 1 {\displaystyle n=1} . Specifically, finding out if a given Boolean function whose input is one bit, f : { 0 , 1 } → { 0 , 1 } {\displaystyle f:\{0,1\}\to \{0,1\}} , is constant. The algorithm, as Deutsch had originally proposed it, was not deterministic. The algorithm was successful with a probability of one half. 
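To make the classical query bound in the "Classical solution" section above concrete, here is a minimal Python sketch (the oracle functions and names are illustrative, not from the source): it queries inputs until an output disagrees with the first one, and the promise guarantees an answer after at most 2^(n−1) + 1 queries.

```python
def classify_with_queries(oracle, n):
    """Classically decide whether an n-bit oracle, promised to be constant or
    balanced, is "constant" or "balanced", using at most 2**(n - 1) + 1 queries."""
    first = oracle(0)
    for x in range(1, 2 ** (n - 1) + 1):
        if oracle(x) != first:
            return "balanced"          # any disagreement settles it immediately
    # More than half of all 2**n inputs agree, so a balanced function is ruled out.
    return "constant"

# Illustrative oracles (not from the source): a constant and a balanced function.
n = 3
constant_f = lambda x: 1
balanced_f = lambda x: x & 1           # output is the least-significant bit of x
print(classify_with_queries(constant_f, n))   # constant, after 2**(n-1)+1 queries
print(classify_with_queries(balanced_f, n))   # balanced, after only 2 queries here
```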
In 1992, Deutsch and Jozsa produced a deterministic algorithm which was generalized to a function which takes n {\displaystyle n} bits for its input. Unlike Deutsch's algorithm, this algorithm required two function evaluations instead of only one. Further improvements to the Deutsch–Jozsa algorithm were made by Cleve et al., resulting in an algorithm that is both deterministic and requires only a single query of f {\displaystyle f} . This algorithm is still referred to as Deutsch–Jozsa algorithm in honour of the groundbreaking techniques they employed. == Algorithm == For the Deutsch–Jozsa algorithm to work, the oracle computing f ( x ) {\displaystyle f(x)} from x {\displaystyle x} must be a quantum oracle which does not decohere x {\displaystyle x} . In its computation, it cannot make a copy of x {\displaystyle x} , because that would violate the no cloning theorem. The point of view of the Deutsch-Jozsa algorithm of f {\displaystyle f} as an oracle means that it does not matter what the oracle does, since it just has to perform its promised transformation. The algorithm begins with the n + 1 {\displaystyle n+1} bit state | 0 ⟩ ⊗ n | 1 ⟩ {\displaystyle |0\rangle ^{\otimes n}|1\rangle } . That is, the first n bits are each in the state | 0 ⟩ {\displaystyle |0\rangle } and the final bit is | 1 ⟩ {\displaystyle |1\rangle } . A Hadamard gate is applied to each bit to obtain the state 1 2 n + 1 ∑ x = 0 2 n − 1 | x ⟩ ( | 0 ⟩ − | 1 ⟩ ) , {\displaystyle {\frac {1}{\sqrt {2^{n+1}}}}\sum _{x=0}^{2^{n}-1}|x\rangle (|0\rangle -|1\rangle ),} where x {\displaystyle x} runs over all n {\displaystyle n} -bit strings, which each may be represented by a number from 0 {\displaystyle 0} to 2 n − 1 {\displaystyle 2^{n}-1} . We have the function f {\displaystyle f} implemented as a quantum oracle. The oracle maps its input state | x ⟩ | y ⟩ {\displaystyle |x\rangle |y\rangle } to | x ⟩ | y ⊕ f ( x ) ⟩ {\displaystyle |x\rangle |y\oplus f(x)\rangle } , where ⊕ {\displaystyle \oplus } denotes addition modulo 2. Applying the quantum oracle gives; 1 2 n + 1 ∑ x = 0 2 n − 1 | x ⟩ ( | 0 ⊕ f ( x ) ⟩ − | 1 ⊕ f ( x ) ⟩ ) . {\displaystyle {\frac {1}{\sqrt {2^{n+1}}}}\sum _{x=0}^{2^{n}-1}|x\rangle (|0\oplus f(x)\rangle -|1\oplus f(x)\rangle ).} For each x , f ( x ) {\displaystyle x,f(x)} is either 0 or 1. Testing these two possibilities, we see the above state is equal to 1 2 n + 1 ∑ x = 0 2 n − 1 ( − 1 ) f ( x ) | x ⟩ ( | 0 ⟩ − | 1 ⟩ ) . {\displaystyle {\frac {1}{\sqrt {2^{n+1}}}}\sum _{x=0}^{2^{n}-1}(-1)^{f(x)}|x\rangle (|0\rangle -|1\rangle ).} At this point the last qubit | 0 ⟩ − | 1 ⟩ 2 {\displaystyle {\frac {|0\rangle -|1\rangle }{\sqrt {2}}}} may be ignored and the following remains: 1 2 n ∑ x = 0 2 n − 1 ( − 1 ) f ( x ) | x ⟩ . {\displaystyle {\frac {1}{\sqrt {2^{n}}}}\sum _{x=0}^{2^{n}-1}(-1)^{f(x)}|x\rangle .} Next, we will have each qubit go through a Hadamard gate. The total transformation over all n {\displaystyle n} qubits can be expressed with the following identity: H ⊗ n | k ⟩ = 1 2 n ∑ j = 0 2 n − 1 ( − 1 ) k ⋅ j | j ⟩ {\displaystyle H^{\otimes n}|k\rangle ={\frac {1}{\sqrt {2^{n}}}}\sum _{j=0}^{2^{n}-1}(-1)^{k\cdot j}|j\rangle } ( j ⋅ k = j 0 k 0 ⊕ j 1 k 1 ⊕ ⋯ ⊕ j n − 1 k n − 1 {\displaystyle j\cdot k=j_{0}k_{0}\oplus j_{1}k_{1}\oplus \cdots \oplus j_{n-1}k_{n-1}} is the sum of the bitwise product). This results in 1 2 n ∑ x = 0 2 n − 1 ( − 1 ) f ( x ) [ 1 2 n ∑ y = 0 2 n − 1 ( − 1 ) x ⋅ y | y ⟩ ] = ∑ y = 0 2 n − 1 [ 1 2 n ∑ x = 0 2 n − 1 ( − 1 ) f ( x ) ( − 1 ) x ⋅ y ] | y ⟩ . 
{\displaystyle {\frac {1}{\sqrt {2^{n}}}}\sum _{x=0}^{2^{n}-1}(-1)^{f(x)}\left[{\frac {1}{\sqrt {2^{n}}}}\sum _{y=0}^{2^{n}-1}{\left(-1\right)}^{x\cdot y}|y\rangle \right]=\sum _{y=0}^{2^{n}-1}\left[{\frac {1}{2^{n}}}\sum _{x=0}^{2^{n}-1}(-1)^{f(x)}(-1)^{x\cdot y}\right]|y\rangle .} From this, we can see that the probability for a state k {\displaystyle k} to be measured is | 1 2 n ∑ x = 0 2 n − 1 ( − 1 ) f ( x ) ( − 1 ) x ⋅ k | 2 {\displaystyle \left|{\frac {1}{2^{n}}}\sum _{x=0}^{2^{n}-1}{\left(-1\right)}^{f(x)}{\left(-1\right)}^{x\cdot k}\right|^{2}} The probability of measuring k = 0 {\displaystyle k=0} , corresponding to | 0 ⟩ ⊗ n {\displaystyle |0\rangle ^{\otimes n}} , is | 1 2 n ∑ x = 0 2 n − 1 ( − 1 ) f ( x ) | 2 {\displaystyle {\bigg |}{\frac {1}{2^{n}}}\sum _{x=0}^{2^{n}-1}(-1)^{f(x)}{\bigg |}^{2}} which evaluates to 1 if f ( x ) {\displaystyle f(x)} is constant (constructive interference) and 0 if f ( x ) {\displaystyle f(x)} is balanced (destructive interference). In other words, the final measurement will be | 0 ⟩ ⊗ n {\displaystyle |0\rangle ^{\otimes n}} (all zeros) if and only if f ( x ) {\displaystyle f(x)} is constant and will yield some other state if f ( x ) {\displaystyle f(x)} is balanced. == Deutsch's algorithm == Deutsch's algorithm is a special case of the general Deutsch–Jozsa algorithm where n = 1 in f : { 0 , 1 } n → { 0 , 1 } {\displaystyle f\colon \{0,1\}^{n}\rightarrow \{0,1\}} . We need to check the condition f ( 0 ) = f ( 1 ) {\displaystyle f(0)=f(1)} . It is equivalent to check f ( 0 ) ⊕ f ( 1 ) {\displaystyle f(0)\oplus f(1)} (where ⊕ {\displaystyle \oplus } is addition modulo 2, which can also be viewed as a quantum XOR gate implemented as a Controlled NOT gate), if zero, then f {\displaystyle f} is constant, otherwise f {\displaystyle f} is not constant. We begin with the two-qubit state | 0 ⟩ | 1 ⟩ {\displaystyle |0\rangle |1\rangle } and apply a Hadamard gate to each qubit. This yields 1 2 ( | 0 ⟩ + | 1 ⟩ ) ( | 0 ⟩ − | 1 ⟩ ) . {\displaystyle {\frac {1}{2}}(|0\rangle +|1\rangle )(|0\rangle -|1\rangle ).} We are given a quantum implementation of the function f {\displaystyle f} that maps | x ⟩ | y ⟩ {\displaystyle |x\rangle |y\rangle } to | x ⟩ | f ( x ) ⊕ y ⟩ {\displaystyle |x\rangle |f(x)\oplus y\rangle } . Applying this function to our current state we obtain 1 2 ( | 0 ⟩ ( | f ( 0 ) ⊕ 0 ⟩ − | f ( 0 ) ⊕ 1 ⟩ ) + | 1 ⟩ ( | f ( 1 ) ⊕ 0 ⟩ − | f ( 1 ) ⊕ 1 ⟩ ) ) = 1 2 ( ( − 1 ) f ( 0 ) | 0 ⟩ ( | 0 ⟩ − | 1 ⟩ ) + ( − 1 ) f ( 1 ) | 1 ⟩ ( | 0 ⟩ − | 1 ⟩ ) ) = ( − 1 ) f ( 0 ) 1 2 ( | 0 ⟩ + ( − 1 ) f ( 0 ) ⊕ f ( 1 ) | 1 ⟩ ) ( | 0 ⟩ − | 1 ⟩ ) . {\displaystyle {\begin{aligned}&{\frac {1}{2}}(|0\rangle (|f(0)\oplus 0\rangle -|f(0)\oplus 1\rangle )+|1\rangle (|f(1)\oplus 0\rangle -|f(1)\oplus 1\rangle ))\\&={\frac {1}{2}}((-1)^{f(0)}|0\rangle (|0\rangle -|1\rangle )+(-1)^{f(1)}|1\rangle (|0\rangle -|1\rangle ))\\&=(-1)^{f(0)}{\frac {1}{2}}\left(|0\rangle +(-1)^{f(0)\oplus f(1)}|1\rangle \right)(|0\rangle -|1\rangle ).\end{aligned}}} We ignore the last bit and the global phase and therefore have the state 1 2 ( | 0 ⟩ + ( − 1 ) f ( 0 ) ⊕ f ( 1 ) | 1 ⟩ ) . {\displaystyle {\frac {1}{\sqrt {2}}}(|0\rangle +(-1)^{f(0)\oplus f(1)}|1\rangle ).} Applying a Hadamard gate to this state we have 1 2 ( | 0 ⟩ + | 1 ⟩ + ( − 1 ) f ( 0 ) ⊕ f ( 1 ) | 0 ⟩ − ( − 1 ) f ( 0 ) ⊕ f ( 1 ) | 1 ⟩ ) = 1 2 ( ( 1 + ( − 1 ) f ( 0 ) ⊕ f ( 1 ) ) | 0 ⟩ + ( 1 − ( − 1 ) f ( 0 ) ⊕ f ( 1 ) ) | 1 ⟩ ) . 
{\displaystyle {\begin{aligned}&{\frac {1}{2}}(|0\rangle +|1\rangle +(-1)^{f(0)\oplus f(1)}|0\rangle -(-1)^{f(0)\oplus f(1)}|1\rangle )\\&={\frac {1}{2}}((1+(-1)^{f(0)\oplus f(1)})|0\rangle +(1-(-1)^{f(0)\oplus f(1)})|1\rangle ).\end{aligned}}} f ( 0 ) ⊕ f ( 1 ) = 0 {\displaystyle f(0)\oplus f(1)=0} if and only if we measure | 0 ⟩ {\displaystyle |0\rangle } and f ( 0 ) ⊕ f ( 1 ) = 1 {\displaystyle f(0)\oplus f(1)=1} if and only if we measure | 1 ⟩ {\displaystyle |1\rangle } . So with certainty we know whether f ( x ) {\displaystyle f(x)} is constant or balanced. == Deutsch–Jozsa algorithm Qiskit implementation == The quantum circuit shown here is from a simple example of how the Deutsch–Jozsa algorithm can be implemented in Python using Qiskit, an open-source quantum computing software development framework by IBM. == See also == Bernstein–Vazirani algorithm == References == == External links == Deutsch's lecture about the Deutsch-Jozsa algorithm
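The Qiskit circuit referred to in the implementation section above is not reproduced in this extract; the following is a minimal sketch of how such a circuit could be assembled with Qiskit's standard QuantumCircuit API. The balanced oracle f(x) = x0 ⊕ x1 ⊕ x2, built from CNOT gates, is an illustrative choice and not necessarily the example used in the original article.

```python
from qiskit import QuantumCircuit

def deutsch_jozsa_circuit(oracle: QuantumCircuit, n: int) -> QuantumCircuit:
    """Assemble the Deutsch-Jozsa circuit around a given (n+1)-qubit oracle."""
    qc = QuantumCircuit(n + 1, n)
    qc.x(n)                            # put the ancilla qubit in |1>
    qc.h(range(n + 1))                 # Hadamard on every qubit
    qc.compose(oracle, inplace=True)   # oracle maps |x>|y> to |x>|y XOR f(x)>
    qc.h(range(n))                     # Hadamard on the input register only
    qc.measure(range(n), range(n))
    return qc

# A balanced oracle for n = 3: f(x) = x0 XOR x1 XOR x2, built from CNOTs.
n = 3
oracle = QuantumCircuit(n + 1)
for qubit in range(n):
    oracle.cx(qubit, n)

qc = deutsch_jozsa_circuit(oracle, n)
# Measuring all zeros on the input register means f is constant;
# any other outcome means f is balanced, exactly as derived above.
```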
Wikipedia/Deutsch-Jozsa_algorithm
In physics, a parity transformation (also called parity inversion) is the flip in the sign of one spatial coordinate. In three dimensions, it can also refer to the simultaneous flip in the sign of all three spatial coordinates (a point reflection): P : ( x y z ) ↦ ( − x − y − z ) . {\displaystyle \mathbf {P} :{\begin{pmatrix}x\\y\\z\end{pmatrix}}\mapsto {\begin{pmatrix}-x\\-y\\-z\end{pmatrix}}.} It can also be thought of as a test for chirality of a physical phenomenon, in that a parity inversion transforms a phenomenon into its mirror image. All fundamental interactions of elementary particles, with the exception of the weak interaction, are symmetric under parity transformation. As established by the Wu experiment conducted at the US National Bureau of Standards by Chinese-American scientist Chien-Shiung Wu, the weak interaction is chiral and thus provides a means for probing chirality in physics. In her experiment, Wu took advantage of the controlling role of weak interactions in radioactive decay of atomic isotopes to establish the chirality of the weak force. By contrast, in interactions that are symmetric under parity, such as electromagnetism in atomic and molecular physics, parity serves as a powerful controlling principle underlying quantum transitions. A matrix representation of P (in any number of dimensions) has determinant equal to −1, and hence is distinct from a rotation, which has a determinant equal to 1. In a two-dimensional plane, a simultaneous flip of all coordinates in sign is not a parity transformation; it is the same as a 180° rotation. In quantum mechanics, wave functions that are unchanged by a parity transformation are described as even functions, while those that change sign under a parity transformation are odd functions. == Simple symmetry relations == Under rotations, classical geometrical objects can be classified into scalars, vectors, and tensors of higher rank. In classical physics, physical configurations need to transform under representations of every symmetry group. Quantum theory predicts that states in a Hilbert space do not need to transform under representations of the group of rotations, but only under projective representations. The word projective refers to the fact that if one projects out the phase of each state, where we recall that the overall phase of a quantum state is not observable, then a projective representation reduces to an ordinary representation. All representations are also projective representations, but the converse is not true; therefore the projective representation condition on quantum states is weaker than the representation condition on classical states. The projective representations of any group are isomorphic to the ordinary representations of a central extension of the group. For example, projective representations of the 3-dimensional rotation group, which is the special orthogonal group SO(3), are ordinary representations of the special unitary group SU(2). Projective representations of the rotation group that are not representations are called spinors, and so quantum states may transform not only as tensors but also as spinors. If one adds to this a classification by parity, these can be extended, for example, into notions of scalars (P = +1) and pseudoscalars (P = −1), which are rotationally invariant, and vectors (P = −1) and axial vectors (also called pseudovectors) (P = +1), which both transform as vectors under rotation. 
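As a quick numerical check of the determinant criterion above (a sketch; NumPy is used only for convenience), flipping all coordinates has determinant −1 in three dimensions, while in two dimensions it has determinant +1 and coincides with a rotation by 180°:

```python
import numpy as np

P3 = -np.eye(3)                        # point reflection in three dimensions
P2 = -np.eye(2)                        # flipping both coordinates in the plane
R180 = np.array([[-1.0, 0.0],
                 [0.0, -1.0]])         # rotation of the plane by 180 degrees

print(np.linalg.det(P3))               # -1.0: a genuine parity transformation
print(np.linalg.det(P2))               #  1.0: not a parity transformation
print(np.allclose(P2, R180))           # True: it is just a rotation
```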
One can define reflections such as V x : ( x y z ) ↦ ( − x y z ) , {\displaystyle V_{x}:{\begin{pmatrix}x\\y\\z\end{pmatrix}}\mapsto {\begin{pmatrix}-x\\y\\z\end{pmatrix}},} which also have negative determinant and form a valid parity transformation. Then, combining them with rotations (or successively performing x-, y-, and z-reflections) one can recover the particular parity transformation defined earlier. The first parity transformation given does not work in an even number of dimensions, though, because it results in a positive determinant. In even dimensions only the latter example of a parity transformation (or any reflection of an odd number of coordinates) can be used. Parity forms the abelian group Z 2 {\displaystyle \mathbb {Z} _{2}} due to the relation P ^ 2 = 1 ^ {\displaystyle {\hat {\mathcal {P}}}^{2}={\hat {1}}} . All Abelian groups have only one-dimensional irreducible representations. For Z 2 {\displaystyle \mathbb {Z} _{2}} , there are two irreducible representations: one is even under parity, P ^ ϕ = + ϕ {\displaystyle {\hat {\mathcal {P}}}\phi =+\phi } , the other is odd, P ^ ϕ = − ϕ {\displaystyle {\hat {\mathcal {P}}}\phi =-\phi } . These are useful in quantum mechanics. However, as is elaborated below, in quantum mechanics states need not transform under actual representations of parity but only under projective representations and so in principle a parity transformation may rotate a state by any phase. === Representations of O(3) === An alternative way to write the above classification of scalars, pseudoscalars, vectors and pseudovectors is in terms of the representation space that each object transforms in. This can be given in terms of the group homomorphism ρ {\displaystyle \rho } which defines the representation. For a matrix R ∈ O ( 3 ) , {\displaystyle R\in {\text{O}}(3),} scalars: ρ ( R ) = 1 {\displaystyle \rho (R)=1} , the trivial representation pseudoscalars: ρ ( R ) = det ( R ) {\displaystyle \rho (R)=\det(R)} vectors: ρ ( R ) = R {\displaystyle \rho (R)=R} , the fundamental representation pseudovectors: ρ ( R ) = det ( R ) R . {\displaystyle \rho (R)=\det(R)R.} When the representation is restricted to SO ( 3 ) {\displaystyle {\text{SO}}(3)} , scalars and pseudoscalars transform identically, as do vectors and pseudovectors. == Classical mechanics == Newton's equation of motion F = m a {\displaystyle \mathbf {F} =m\mathbf {a} } (if the mass is constant) equates two vectors, and hence is invariant under parity. The law of gravity also involves only vectors and is also, therefore, invariant under parity. However, angular momentum L {\displaystyle \mathbf {L} } is an axial vector, L = r × p P ^ ( L ) = ( − r ) × ( − p ) = L . {\displaystyle {\begin{aligned}\mathbf {L} &=\mathbf {r} \times \mathbf {p} \\{\hat {P}}\left(\mathbf {L} \right)&=(-\mathbf {r} )\times (-\mathbf {p} )=\mathbf {L} .\end{aligned}}} In classical electrodynamics, the charge density ρ {\displaystyle \rho } is a scalar, the electric field, E {\displaystyle \mathbf {E} } , and current j {\displaystyle \mathbf {j} } are vectors, but the magnetic field, B {\displaystyle \mathbf {B} } is an axial vector. However, Maxwell's equations are invariant under parity because the curl of an axial vector is a vector. == Effect of spatial inversion on some variables of classical physics == The two major divisions of classical physical variables have either even or odd parity. 
The way in which particular variables and vectors sort into either category depends on whether the number of dimensions of space is odd or even. The categories of odd or even given below for the parity transformation are a different, but intimately related, issue. The answers given below are correct for 3 spatial dimensions. In a two-dimensional space, for example, when constrained to remain on the surface of a planet, some of the variables switch sides. === Odd === Classical variables whose signs flip under space inversion are predominantly vectors. They include: === Even === Classical variables, predominantly scalar quantities, which do not change upon spatial inversion include: == Quantum mechanics == === Possible eigenvalues === In quantum mechanics, spacetime transformations act on quantum states. The parity transformation, P ^ {\displaystyle {\hat {\mathcal {P}}}} , is a unitary operator, in general acting on a state ψ {\displaystyle \psi } as follows: P ^ ψ ( r ) = e i ϕ / 2 ψ ( − r ) {\displaystyle {\hat {\mathcal {P}}}\,\psi {\left(r\right)}=e^{{i\phi }/{2}}\psi {\left(-r\right)}} . One must then have P ^ 2 ψ ( r ) = e i ϕ ψ ( r ) {\displaystyle {\hat {\mathcal {P}}}^{2}\,\psi {\left(r\right)}=e^{i\phi }\psi {\left(r\right)}} , since an overall phase is unobservable. The operator P ^ 2 {\displaystyle {\hat {\mathcal {P}}}^{2}} , which reverses the parity of a state twice, leaves the spacetime invariant, and so is an internal symmetry which rotates its eigenstates by phases e i ϕ {\displaystyle e^{i\phi }} . If P ^ 2 {\displaystyle {\hat {\mathcal {P}}}^{2}} is an element e i Q {\displaystyle e^{iQ}} of a continuous U(1) symmetry group of phase rotations, then e − i Q {\displaystyle e^{-iQ}} is part of this U(1) and so is also a symmetry. In particular, we can define P ^ ′ ≡ P ^ e − i Q / 2 {\displaystyle {\hat {\mathcal {P}}}'\equiv {\hat {\mathcal {P}}}\,e^{-{iQ}/{2}}} , which is also a symmetry, and so we can choose to call P ^ ′ {\displaystyle {\hat {\mathcal {P}}}'} our parity operator, instead of P ^ {\displaystyle {\hat {\mathcal {P}}}} . Note that P ^ ′ 2 = 1 {\displaystyle {{\hat {\mathcal {P}}}'}^{2}=1} and so P ^ ′ {\displaystyle {\hat {\mathcal {P}}}'} has eigenvalues ± 1 {\displaystyle \pm 1} . Wave functions with eigenvalue + 1 {\displaystyle +1} under a parity transformation are even functions, while eigenvalue − 1 {\displaystyle -1} corresponds to odd functions. However, when no such symmetry group exists, it may be that all parity transformations have some eigenvalues which are phases other than ± 1 {\displaystyle \pm 1} . For electronic wavefunctions, even states are usually indicated by a subscript g for gerade (German: even) and odd states by a subscript u for ungerade (German: odd). For example, the lowest energy level of the hydrogen molecule ion (H2+) is labelled 1 σ g {\displaystyle 1\sigma _{g}} and the next-closest (higher) energy level is labelled 1 σ u {\displaystyle 1\sigma _{u}} . The wave functions of a particle moving in an external potential which is centrosymmetric (potential energy invariant with respect to a space inversion, symmetric to the origin) either remain invariable or change signs: these two possible states are called the even state or odd state of the wave functions. The law of conservation of parity of particles states that, if an isolated ensemble of particles has a definite parity, then the parity remains invariable in the process of ensemble evolution. 
However this is not true for the beta decay of nuclei, because the weak nuclear interaction violates parity. The parity of the states of a particle moving in a spherically symmetric external field is determined by the angular momentum, and the particle state is defined by three quantum numbers: total energy, angular momentum and the projection of angular momentum. === Consequences of parity symmetry === When parity generates the Abelian group Z 2 {\displaystyle \mathbb {Z} _{2}} , one can always take linear combinations of quantum states such that they are either even or odd under parity (see the figure). Thus the parity of such states is ±1. The parity of a multiparticle state is the product of the parities of each state; in other words parity is a multiplicative quantum number. In quantum mechanics, Hamiltonians are invariant (symmetric) under a parity transformation if P ^ {\displaystyle {\hat {\mathcal {P}}}} commutes with the Hamiltonian. In non-relativistic quantum mechanics, this happens for any scalar potential, i.e., V = V ( r ) {\displaystyle V=V{\left(r\right)}} , hence the potential is spherically symmetric. The following facts can be easily proven: If | φ ⟩ {\displaystyle |\varphi \rangle } and | ψ ⟩ {\displaystyle |\psi \rangle } have the same parity, then ⟨ φ | X ^ | ψ ⟩ = 0 {\displaystyle \langle \varphi |{\hat {X}}|\psi \rangle =0} where X ^ {\displaystyle {\hat {X}}} is the position operator. For a state | L → , L z ⟩ {\displaystyle {\bigl |}{\vec {L}},L_{z}{\bigr \rangle }} of orbital angular momentum L → {\displaystyle {\vec {L}}} with z-axis projection L z {\displaystyle L_{z}} , then P ^ | L → , L z ⟩ = ( − 1 ) L | L → , L z ⟩ {\displaystyle {\hat {\mathcal {P}}}{\bigl |}{\vec {L}},L_{z}{\bigr \rangle }=\left(-1\right)^{L}{\bigl |}{\vec {L}},L_{z}{\bigr \rangle }} . If [ H ^ , P ^ ] = 0 {\displaystyle {\bigl [}{\hat {H}},{\hat {\mathcal {P}}}{\bigr ]}=0} , then atomic dipole transitions only occur between states of opposite parity. If [ H ^ , P ^ ] = 0 {\displaystyle {\bigl [}{\hat {H}},{\hat {\mathcal {P}}}{\bigr ]}=0} , then a non-degenerate eigenstate of H ^ {\displaystyle {\hat {H}}} is also an eigenstate of the parity operator; i.e., a non-degenerate eigenfunction of H ^ {\displaystyle {\hat {H}}} is either invariant to P ^ {\displaystyle {\hat {\mathcal {P}}}} or is changed in sign by P ^ {\displaystyle {\hat {\mathcal {P}}}} . Some of the non-degenerate eigenfunctions of H ^ {\displaystyle {\hat {H}}} are unaffected (invariant) by parity P ^ {\displaystyle {\hat {\mathcal {P}}}} and the others are merely reversed in sign when the Hamiltonian operator and the parity operator commute: P ^ | ψ ⟩ = c | ψ ⟩ , {\displaystyle {\hat {\mathcal {P}}}|\psi \rangle =c\left|\psi \right\rangle ,} where c {\displaystyle c} is a constant, the eigenvalue of P ^ {\displaystyle {\hat {\mathcal {P}}}} , P ^ 2 | ψ ⟩ = c P ^ | ψ ⟩ . {\displaystyle {\hat {\mathcal {P}}}^{2}\left|\psi \right\rangle =c\,{\hat {\mathcal {P}}}\left|\psi \right\rangle .} == Many-particle systems: atoms, molecules, nuclei == The overall parity of a many-particle system is the product of the parities of the one-particle states. It is −1 if an odd number of particles are in odd-parity states, and +1 otherwise. Different notations are in use to denote the parity of nuclei, atoms, and molecules. === Atoms === Atomic orbitals have parity (−1)ℓ, where the exponent ℓ is the azimuthal quantum number. The parity is odd for orbitals p, f, ... 
with ℓ = 1, 3, ..., and an atomic state has odd parity if an odd number of electrons occupy these orbitals. For example, the ground state of the nitrogen atom has the electron configuration 1s22s22p3, and is identified by the term symbol 4So, where the superscript o denotes odd parity. However, the third excited term at about 83,300 cm−1 above the ground state has the electron configuration 1s22s22p23s and has even parity, since there are only two 2p electrons, and its term symbol is 4P (without an o superscript). === Molecules === The complete (rotational-vibrational-electronic-nuclear spin) electromagnetic Hamiltonian of any molecule commutes with (or is invariant to) the parity operation P (or E*, in the notation introduced by Longuet-Higgins) and its eigenvalues can be given the parity symmetry label + or − as they are even or odd, respectively. The parity operation involves the inversion of electronic and nuclear spatial coordinates at the molecular center of mass. Centrosymmetric molecules at equilibrium have a centre of symmetry at their midpoint (the nuclear center of mass). This includes all homonuclear diatomic molecules as well as certain symmetric molecules such as ethylene, benzene, xenon tetrafluoride and sulphur hexafluoride. For centrosymmetric molecules, the point group contains the operation i which is not to be confused with the parity operation. The operation i involves the inversion of the electronic and vibrational displacement coordinates at the nuclear centre of mass. For centrosymmetric molecules the operation i commutes with the rovibronic (rotation-vibration-electronic) Hamiltonian and can be used to label such states. Electronic and vibrational states of centrosymmetric molecules are either unchanged by the operation i, or they are changed in sign by i. The former are denoted by the subscript g and are called gerade, while the latter are denoted by the subscript u and are called ungerade. The complete electromagnetic Hamiltonian of a centrosymmetric molecule does not commute with the point group inversion operation i because of the effect of the nuclear hyperfine Hamiltonian. The nuclear hyperfine Hamiltonian can mix the rotational levels of g and u vibronic states (called ortho-para mixing) and give rise to ortho-para transitions. === Nuclei === In atomic nuclei, the state of each nucleon (proton or neutron) has even or odd parity, and nucleon configurations can be predicted using the nuclear shell model. As for electrons in atoms, the nucleon state has odd overall parity if and only if the number of nucleons in odd-parity states is odd. The parity is usually written as a + (even) or − (odd) following the nuclear spin value. For example, the isotopes of oxygen include 17O(5/2+), meaning that the spin is 5/2 and the parity is even. The shell model explains this because the first 16 nucleons are paired so that each pair has spin zero and even parity, and the last nucleon is in the 1d5/2 shell, which has even parity since ℓ = 2 for a d orbital. == Quantum field theory == If one can show that the vacuum state is invariant under parity, P ^ | 0 ⟩ = | 0 ⟩ {\displaystyle {\hat {\mathcal {P}}}\left|0\right\rangle =\left|0\right\rangle } , the Hamiltonian is parity invariant, [ H ^ , P ^ ] = 0 {\displaystyle \left[{\hat {H}},{\hat {\mathcal {P}}}\right]=0} , and the quantization conditions remain unchanged under parity, then it follows that every state has good parity, and this parity is conserved in any reaction. 
To show that quantum electrodynamics is invariant under parity, we have to prove that the action is invariant and the quantization is also invariant. For simplicity we will assume that canonical quantization is used; the vacuum state is then invariant under parity by construction. The invariance of the action follows from the classical invariance of Maxwell's equations. The invariance of the canonical quantization procedure can be worked out, and turns out to depend on the transformation of the annihilation operator: P a ( p , ± ) P + = a ( − p , ± ) {\displaystyle \mathbf {Pa} (\mathbf {p} ,\pm )\mathbf {P} ^{+}=\mathbf {a} (-\mathbf {p} ,\pm )} where p {\displaystyle \mathbf {p} } denotes the momentum of a photon and ± {\displaystyle \pm } refers to its polarization state. This is equivalent to the statement that the photon has odd intrinsic parity. Similarly all vector bosons can be shown to have odd intrinsic parity, and all axial-vectors to have even intrinsic parity. A straightforward extension of these arguments to scalar field theories shows that scalars have even parity. That is, P ϕ ( − x , t ) P − 1 = ϕ ( x , t ) {\displaystyle {\mathsf {P}}\phi (-\mathbf {x} ,t){\mathsf {P}}^{-1}=\phi (\mathbf {x} ,t)} , since P a ( p ) P + = a ( − p ) {\displaystyle \mathbf {Pa} (\mathbf {p} )\mathbf {P} ^{+}=\mathbf {a} (-\mathbf {p} )} This is true even for a complex scalar field. (Details of spinors are dealt with in the article on the Dirac equation, where it is shown that fermions and antifermions have opposite intrinsic parity.) With fermions, there is a slight complication because there is more than one spin group. == Parity in the Standard Model == === Fixing the global symmetries === Applying the parity operator twice leaves the coordinates unchanged, meaning that P2 must act as one of the internal symmetries of the theory, at most changing the phase of a state. For example, the Standard Model has three global U(1) symmetries with charges equal to the baryon number B, the lepton number L, and the electric charge Q. Therefore, the parity operator satisfies P2 = eiαB+iβL+iγQ for some choice of α, β, and γ. This operator is also not unique in that a new parity operator P' can always be constructed by multiplying it by an internal symmetry such as P' = P eiαB for some α. To see if the parity operator can always be defined to satisfy P2 = 1, consider the general case when P2 = Q for some internal symmetry Q present in the theory. The desired parity operator would be P' = PQ−1/2. If Q is part of a continuous symmetry group then Q−1/2 exists, but if it is part of a discrete symmetry then this element need not exist and such a redefinition may not be possible. The Standard Model exhibits a (−1)F symmetry, where F is the fermion number operator counting how many fermions are in a state. Since all particles in the Standard Model satisfy F = B + L, the discrete symmetry is also part of the eiα(B + L) continuous symmetry group. If the parity operator satisfied P2 = (−1)F, then it can be redefined to give a new parity operator satisfying P2 = 1. But if the Standard Model is extended by incorporating Majorana neutrinos, which have F = 1 and B + L = 0, then the discrete symmetry (−1)F is no longer part of the continuous symmetry group and the desired redefinition of the parity operator cannot be performed. Instead it satisfies P4 = 1 so the Majorana neutrinos would have intrinsic parities of ±i. 
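Before turning to the intrinsic parities of hadrons, the multiplicative rules quoted in the many-particle section above, one-electron parities (−1)^ℓ and products over occupied states, can be illustrated with a small sketch; the helper function, orbital labels and example configurations below are illustrative and not taken from the article:

```python
# Parity of an atomic electron configuration as the product of one-electron
# parities (-1)**l, following the multiplicative rule quoted above.
L_OF = {"s": 0, "p": 1, "d": 2, "f": 3}

def configuration_parity(occupations):
    """occupations: list of (orbital_letter, number_of_electrons) pairs."""
    parity = 1
    for letter, n_electrons in occupations:
        parity *= (-1) ** (L_OF[letter] * n_electrons)
    return parity

# Nitrogen ground state 1s2 2s2 2p3: three p electrons -> odd parity (term 4So).
print(configuration_parity([("s", 2), ("s", 2), ("p", 3)]))            # -1
# Excited configuration 1s2 2s2 2p2 3s: two p electrons -> even parity (term 4P).
print(configuration_parity([("s", 2), ("s", 2), ("p", 2), ("s", 1)]))  #  1
```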
=== Parity of the pion === In 1954, a paper by William Chinowsky and Jack Steinberger demonstrated that the pion has negative parity. They studied the decay of an "atom" made from a deuteron (21H+) and a negatively charged pion (π− ) in a state with zero orbital angular momentum L = 0 {\displaystyle ~\mathbf {L} ={\boldsymbol {0}}~} into two neutrons ( n {\displaystyle n} ). Neutrons are fermions and so obey Fermi–Dirac statistics, which implies that the final state is antisymmetric. Using the fact that the deuteron has spin one and the pion spin zero together with the antisymmetry of the final state they concluded that the two neutrons must have orbital angular momentum L = 1 . {\displaystyle ~L=1~.} The total parity is the product of the intrinsic parities of the particles and the extrinsic parity of the spherical harmonic function ( − 1 ) L . {\displaystyle ~\left(-1\right)^{L}~.} Since the orbital momentum changes from zero to one in this process, if the process is to conserve the total parity then the products of the intrinsic parities of the initial and final particles must have opposite sign. A deuteron nucleus is made from a proton and a neutron, and so using the aforementioned convention that protons and neutrons have intrinsic parities equal to + 1 {\displaystyle ~+1~} they argued that the parity of the pion is equal to minus the product of the parities of the two neutrons divided by that of the proton and neutron in the deuteron, explicitly ( − 1 ) ( 1 ) 2 ( 1 ) 2 = − 1 , {\textstyle {\frac {(-1)(1)^{2}}{(1)^{2}}}=-1~,} from which they concluded that the pion is a pseudoscalar particle. === Parity violation === Although parity is conserved in electromagnetism and gravity, it is violated in weak interactions, and perhaps, to some degree, in strong interactions. The Standard Model incorporates parity violation by expressing the weak interaction as a chiral gauge interaction. Only the left-handed components of particles and right-handed components of antiparticles participate in charged weak interactions in the Standard Model. This implies that parity is not a symmetry of our universe, unless a hidden mirror sector exists in which parity is violated in the opposite way. An obscure 1928 experiment, undertaken by R. T. Cox, G. C. McIlwraith, and B. Kurrelmeyer, had in effect reported parity violation in weak decays, but, since the appropriate concepts had not yet been developed, those results had no impact. In 1929, Hermann Weyl explored, without any evidence, the existence of a two-component massless particle of spin one-half. This idea was rejected by Pauli, because it implied parity violation. By the mid-20th century, it had been suggested by several scientists that parity might not be conserved (in different contexts), but without solid evidence these suggestions were not considered important. Then, in 1956, a careful review and analysis by theoretical physicists Tsung-Dao Lee and Chen-Ning Yang went further, showing that while parity conservation had been verified in decays by the strong or electromagnetic interactions, it was untested in the weak interaction. They proposed several possible direct experimental tests. They were mostly ignored, but Lee was able to convince his Columbia colleague Chien-Shiung Wu to try it. She needed special cryogenic facilities and expertise, so the experiment was done at the National Bureau of Standards. Wu, Ambler, Hayward, Hoppes, and Hudson (1957) found a clear violation of parity conservation in the beta decay of cobalt-60. 
As the experiment was winding down, with double-checking in progress, Wu informed Lee and Yang of their positive results and, saying that the results needed further examination, asked them not to publicize the results first. However, Lee revealed the results to his Columbia colleagues on 4 January 1957 at a "Friday lunch" gathering of the Physics Department of Columbia. Three of them, R. L. Garwin, L. M. Lederman, and R. M. Weinrich, modified an existing cyclotron experiment, and immediately verified the parity violation. They delayed publication of their results until after Wu's group was ready, and the two papers appeared back-to-back in the same physics journal. The discovery of parity violation explained the outstanding τ–θ puzzle in the physics of kaons. In 2010, it was reported that physicists working with the Relativistic Heavy Ion Collider had created a short-lived parity symmetry-breaking bubble in quark–gluon plasmas. An experiment conducted by several physicists in the STAR collaboration suggested that parity may also be violated in the strong interaction. It is predicted that this local parity violation manifests itself by the chiral magnetic effect. === Intrinsic parity of hadrons === To every particle one can assign an intrinsic parity as long as nature preserves parity. Although weak interactions do not, one can still assign a parity to any hadron by examining the strong interaction reaction that produces it, or through decays not involving the weak interaction, such as rho meson decay to pions. == See also == C-symmetry CP violation Electroweak theory Mirror matter Molecular symmetry T-symmetry
Wikipedia/Parity_transformation
In physics, specifically relativistic quantum mechanics (RQM) and its applications to particle physics, relativistic wave equations predict the behavior of particles at high energies and velocities comparable to the speed of light. In the context of quantum field theory (QFT), the equations determine the dynamics of quantum fields. The solutions to the equations, universally denoted as ψ or Ψ (Greek psi), are referred to as "wave functions" in the context of RQM, and "fields" in the context of QFT. The equations themselves are called "wave equations" or "field equations", because they have the mathematical form of a wave equation or are generated from a Lagrangian density and the field-theoretic Euler–Lagrange equations (see classical field theory for background). In the Schrödinger picture, the wave function or field is the solution to the Schrödinger equation, i ℏ ∂ ∂ t ψ = H ^ ψ , {\displaystyle i\hbar {\frac {\partial }{\partial t}}\psi ={\hat {H}}\psi ,} one of the postulates of quantum mechanics. All relativistic wave equations can be constructed by specifying various forms of the Hamiltonian operator Ĥ describing the quantum system. Alternatively, Feynman's path integral formulation uses a Lagrangian rather than a Hamiltonian operator. More generally, the modern formalism behind relativistic wave equations is Lorentz group theory, wherein the spin of the particle has a correspondence with the representations of the Lorentz group. == History == === Early 1920s: Classical and quantum mechanics === The failure of classical mechanics applied to molecular, atomic, and nuclear systems (and smaller) created the need for a new mechanics: quantum mechanics. The mathematical formulation was led by de Broglie, Bohr, Schrödinger, Pauli, Heisenberg, and others around the mid-1920s, and at that time it was analogous to that of classical mechanics. The Schrödinger equation and the Heisenberg picture resemble the classical equations of motion in the limit of large quantum numbers and as the reduced Planck constant ħ, the quantum of action, tends to zero. This is the correspondence principle. At this point, special relativity was not fully combined with quantum mechanics, so the Schrödinger and Heisenberg formulations, as originally proposed, could not be used in situations where the particles travel near the speed of light, or when the number of each type of particle changes (this happens in real particle interactions: the numerous forms of particle decays, annihilation, matter creation, pair production, and so on). === Late 1920s: Relativistic quantum mechanics of spin-0 and spin-1/2 particles === A description of quantum mechanical systems which could account for relativistic effects was sought by many theoretical physicists from the late 1920s to the mid-1940s. The first basis for relativistic quantum mechanics, i.e. special relativity applied together with quantum mechanics, was found by all those who discovered what is frequently called the Klein–Gordon equation (1), obtained by inserting the energy operator and momentum operator into the relativistic energy–momentum relation (2). The solutions to (1) are scalar fields. The KG equation is undesirable due to its prediction of negative energies and probabilities, as a result of the quadratic nature of (2) – inevitable in a relativistic theory.
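The two energy branches behind this problem can be made explicit with a short symbolic computation. The sketch below (which assumes Python with the sympy library and is an illustrative addition, not part of the original article) inserts a free plane wave into the Klein–Gordon operator in one spatial dimension and solves for the allowed frequencies; a negative branch appears alongside the positive one, which is the origin of the negative-energy solutions mentioned above.

    import sympy as sp

    x, t = sp.symbols('x t', real=True)
    k, m, c, hbar = sp.symbols('k m c hbar', positive=True)
    omega = sp.symbols('omega', real=True)

    # Free plane wave psi = exp(i(k x - omega t))
    psi = sp.exp(sp.I * (k * x - omega * t))

    # Klein-Gordon operator in 1+1 dimensions: (1/c^2) d^2/dt^2 - d^2/dx^2 + (m c / hbar)^2
    kg = sp.diff(psi, t, 2) / c**2 - sp.diff(psi, x, 2) + (m * c / hbar)**2 * psi

    # Impose the Klein-Gordon equation on the plane wave and solve for omega:
    # two branches omega = +/- c*sqrt(k**2 + m**2*c**2/hbar**2) are returned,
    # so E = hbar*omega can be negative as well as positive.
    print(sp.solve(sp.simplify(kg / psi), omega))

With ħω identified with the energy and ħk with the momentum, this dispersion relation is just the energy–momentum relation (2) evaluated on plane waves.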
This equation was initially proposed by Schrödinger, and he discarded it for such reasons, only to realize a few months later that its non-relativistic limit (what is now called the Schrödinger equation) was still of importance. Nevertheless, (1) is applicable to spin-0 bosons. Neither the non-relativistic nor relativistic equations found by Schrödinger could predict the fine structure in the hydrogen spectral series. The mysterious underlying property was spin. The first two-dimensional spin matrices (better known as the Pauli matrices) were introduced by Pauli in the Pauli equation: the Schrödinger equation with a non-relativistic Hamiltonian including an extra term for particles in magnetic fields, but this was phenomenological. Weyl found a relativistic equation in terms of the Pauli matrices, the Weyl equation, for massless spin-1/2 fermions. The problem was resolved by Dirac in the late 1920s, when he furthered the application of equation (2) to the electron: by various manipulations he factorized the equation into a product of two factors, and one of these factors is the Dirac equation (see below), obtained upon inserting the energy and momentum operators. For the first time, this introduced new four-dimensional spin matrices α and β in a relativistic wave equation, and explained the fine structure of hydrogen. The solutions to (3A) are multi-component spinor fields, and each component satisfies (1). A remarkable result of spinor solutions is that half of the components describe a particle while the other half describe an antiparticle; in this case the electron and positron. The Dirac equation is now known to apply for all massive spin-1/2 fermions. In the non-relativistic limit, the Pauli equation is recovered, while the massless case results in the Weyl equation. Although a landmark in quantum theory, the Dirac equation is only true for spin-1/2 fermions, and still predicts negative energy solutions, which caused controversy at the time (in particular, not all physicists were comfortable with the "Dirac sea" of negative energy states). === 1930s–1960s: Relativistic quantum mechanics of higher-spin particles === The natural problem became clear: to generalize the Dirac equation to particles with any spin, both fermions and bosons, and in the same equations their antiparticles (possible because of the spinor formalism introduced by Dirac in his equation, and then-recent developments in spinor calculus by van der Waerden in 1929), and ideally with positive energy solutions. This was introduced and solved by Majorana in 1932, by an approach that deviated from Dirac's. Majorana considered one "root" of (3A), arriving at an equation (3B) in which ψ is a spinor field, now with infinitely many components, irreducible to a finite number of tensors or spinors, to remove the indeterminacy in sign. The matrices α and β are infinite-dimensional matrices, related to infinitesimal Lorentz transformations. He did not demand that each component of (3B) satisfy equation (2); instead he regenerated the equation using a Lorentz-invariant action, via the principle of least action, and application of Lorentz group theory. Majorana produced other important contributions that were unpublished, including wave equations of various dimensions (5, 6, and 16). They were anticipated later (in a more involved way) by de Broglie (1934), and Duffin, Kemmer, and Petiau (around 1938–1939); see Duffin–Kemmer–Petiau algebra.
The Dirac–Fierz–Pauli formalism was more sophisticated than Majorana's, as spinors were new mathematical tools in the early twentieth century, although Majorana's paper of 1932 was difficult to fully understand; it took Pauli and Wigner some time to understand it, around 1940. Dirac in 1936, and Fierz and Pauli in 1939, built equations from irreducible spinors A and B, symmetric in all indices, for a massive particle of spin n + 1/2 for integer n (see Van der Waerden notation for the meaning of the dotted indices), in which p is the momentum as a covariant spinor operator. For n = 0, the equations reduce to the coupled Dirac equations, and A and B together transform as the original Dirac spinor. Eliminating either A or B shows that A and B each fulfill (1). The direct derivation of the Dirac–Pauli–Fierz equations using the Bargmann–Wigner operators is given by Isaev and Podoinitsyn. In 1941, Rarita and Schwinger focussed on spin-3/2 particles and derived the Rarita–Schwinger equation, including a Lagrangian to generate it, and later generalized the equations to spin n + 1/2 for integer n. In 1945, Pauli suggested Majorana's 1932 paper to Bhabha, who returned to the general ideas introduced by Majorana in 1932. Bhabha and Lubanski proposed a completely general set of equations by replacing the mass terms in (3A) and (3B) by an arbitrary constant, subject to a set of conditions which the wave functions must obey. Finally, in 1948 (the same year in which Feynman's path integral formulation was cast), Bargmann and Wigner formulated the general equation for massive particles which could have any spin, by considering the Dirac equation with a totally symmetric finite-component spinor, and using Lorentz group theory (as Majorana did): the Bargmann–Wigner equations. In the early 1960s, a reformulation of the Bargmann–Wigner equations was made by H. Joos and Steven Weinberg, yielding the Joos–Weinberg equation. Various theorists at this time did further research on relativistic Hamiltonians for higher-spin particles. === 1960s–present === The relativistic description of spin particles has been a difficult problem in quantum theory. It is still an area of present-day research because the problem is only partially solved; including interactions in the equations is problematic, and paradoxical predictions (even from the Dirac equation) are still present. == Linear equations == The following equations have solutions which satisfy the superposition principle, that is, the wave functions are additive. Throughout, the standard conventions of tensor index notation and Feynman slash notation are used, including Greek indices which take the values 1, 2, 3 for the spatial components and 0 for the timelike component of the indexed quantities. The wave functions are denoted ψ, and ∂μ are the components of the four-gradient operator. In matrix equations, the Pauli matrices are denoted by σμ in which μ = 0, 1, 2, 3, where σ0 is the 2 × 2 identity matrix: σ 0 = ( 1 0 0 1 ) {\displaystyle \sigma ^{0}={\begin{pmatrix}1&0\\0&1\\\end{pmatrix}}} and the other matrices have their usual representations. The expression σ μ ∂ μ ≡ σ 0 ∂ 0 + σ 1 ∂ 1 + σ 2 ∂ 2 + σ 3 ∂ 3 {\displaystyle \sigma ^{\mu }\partial _{\mu }\equiv \sigma ^{0}\partial _{0}+\sigma ^{1}\partial _{1}+\sigma ^{2}\partial _{2}+\sigma ^{3}\partial _{3}} is a 2 × 2 matrix operator which acts on 2-component spinor fields. The gamma matrices are denoted by γμ, in which again μ = 0, 1, 2, 3, and there are a number of representations to select from.
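As a concrete illustration of one common choice, the sketch below (assuming Python with numpy; an illustrative addition, not part of the original article) constructs the Pauli matrices and the gamma matrices in the Dirac (standard) representation and checks the defining anticommutation relation {γμ, γν} = 2ημν I, with the metric signature (+, −, −, −) taken here as a convention choice.

    import numpy as np

    # Pauli matrices (sigma^0 is the 2 x 2 identity, as in the text)
    s0 = np.eye(2, dtype=complex)
    s1 = np.array([[0, 1], [1, 0]], dtype=complex)
    s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
    s3 = np.array([[1, 0], [0, -1]], dtype=complex)

    # Gamma matrices in the Dirac (standard) representation, built from 2 x 2 blocks
    Z = np.zeros((2, 2), dtype=complex)
    gamma = [np.block([[s0, Z], [Z, -s0]])] + [np.block([[Z, s], [-s, Z]]) for s in (s1, s2, s3)]

    # Metric with signature (+, -, -, -) (a convention choice)
    eta = np.diag([1.0, -1.0, -1.0, -1.0])

    # Verify the Clifford algebra {gamma^mu, gamma^nu} = 2 eta^{mu nu} * I
    for mu in range(4):
        for nu in range(4):
            anticom = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
            assert np.allclose(anticom, 2 * eta[mu, nu] * np.eye(4))
    print("Dirac-representation gamma matrices satisfy the Clifford algebra")

In this representation γ0 = diag(1, 1, −1, −1), which also illustrates the point made next: γ0 is in general not the identity matrix.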
The matrix γ0 is not necessarily the 4 × 4 identity matrix. The expression i ℏ γ μ ∂ μ + m c ≡ i ℏ ( γ 0 ∂ 0 + γ 1 ∂ 1 + γ 2 ∂ 2 + γ 3 ∂ 3 ) + m c ( 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ) {\displaystyle i\hbar \gamma ^{\mu }\partial _{\mu }+mc\equiv i\hbar (\gamma ^{0}\partial _{0}+\gamma ^{1}\partial _{1}+\gamma ^{2}\partial _{2}+\gamma ^{3}\partial _{3})+mc{\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}}} is a 4 × 4 matrix operator which acts on 4-component spinor fields. Note that terms such as "mc" scalar multiply an identity matrix of the relevant dimension, the common sizes are 2 × 2 or 4 × 4, and are conventionally not written for simplicity. === Linear gauge fields === The Duffin–Kemmer–Petiau equation is an alternative equation for spin-0 and spin-1 particles: ( i ℏ β a ∂ a − m c ) ψ = 0 {\displaystyle (i\hbar \beta ^{a}\partial _{a}-mc)\psi =0} == Constructing RWEs == === Using 4-vectors and the energy–momentum relation === Start with the standard special relativity (SR) 4-vectors 4-position X μ = X = ( c t , x → ) {\displaystyle X^{\mu }=\mathbf {X} =(ct,{\vec {\mathbf {x} }})} 4-velocity U μ = U = γ ( c , u → ) {\displaystyle U^{\mu }=\mathbf {U} =\gamma (c,{\vec {\mathbf {u} }})} 4-momentum P μ = P = ( E c , p → ) {\displaystyle P^{\mu }=\mathbf {P} =\left({\frac {E}{c}},{\vec {\mathbf {p} }}\right)} 4-wavevector K μ = K = ( ω c , k → ) {\displaystyle K^{\mu }=\mathbf {K} =\left({\frac {\omega }{c}},{\vec {\mathbf {k} }}\right)} 4-gradient ∂ μ = ∂ = ( ∂ t c , − ∇ → ) {\displaystyle \partial ^{\mu }=\mathbf {\partial } =\left({\frac {\partial _{t}}{c}},-{\vec {\mathbf {\nabla } }}\right)} Note that each 4-vector is related to another by a Lorentz scalar: U = d d τ X {\displaystyle \mathbf {U} ={\frac {d}{d\tau }}\mathbf {X} } , where τ {\displaystyle \tau } is the proper time P = m 0 U {\displaystyle \mathbf {P} =m_{0}\mathbf {U} } , where m 0 {\displaystyle m_{0}} is the rest mass K = ( 1 / ℏ ) P {\displaystyle \mathbf {K} =(1/\hbar )\mathbf {P} } , which is the 4-vector version of the Planck–Einstein relation & the de Broglie matter wave relation ∂ = − i K {\displaystyle \mathbf {\partial } =-i\mathbf {K} } , which is the 4-gradient version of complex-valued plane waves Now, just apply the standard Lorentz scalar product rule to each one: U ⋅ U = ( c ) 2 {\displaystyle \mathbf {U} \cdot \mathbf {U} =(c)^{2}} P ⋅ P = ( m 0 c ) 2 {\displaystyle \mathbf {P} \cdot \mathbf {P} =(m_{0}c)^{2}} K ⋅ K = ( m 0 c ℏ ) 2 {\displaystyle \mathbf {K} \cdot \mathbf {K} =\left({\frac {m_{0}c}{\hbar }}\right)^{2}} ∂ ⋅ ∂ = ( − i m 0 c ℏ ) 2 = − ( m 0 c ℏ ) 2 {\displaystyle \mathbf {\partial } \cdot \mathbf {\partial } =\left({\frac {-im_{0}c}{\hbar }}\right)^{2}=-\left({\frac {m_{0}c}{\hbar }}\right)^{2}} The last equation is a fundamental quantum relation. When applied to a Lorentz scalar field ψ {\displaystyle \psi } , one gets the Klein–Gordon equation, the most basic of the quantum relativistic wave equations. 
[ ∂ ⋅ ∂ + ( m 0 c ℏ ) 2 ] ψ = 0 {\displaystyle \left[\mathbf {\partial } \cdot \mathbf {\partial } +\left({\frac {m_{0}c}{\hbar }}\right)^{2}\right]\psi =0} : in 4-vector format [ ∂ μ ∂ μ + ( m 0 c ℏ ) 2 ] ψ = 0 {\displaystyle \left[\partial _{\mu }\partial ^{\mu }+\left({\frac {m_{0}c}{\hbar }}\right)^{2}\right]\psi =0} : in tensor format [ ( ℏ ∂ μ + i m 0 c ) ( ℏ ∂ μ − i m 0 c ) ] ψ = 0 {\displaystyle \left[(\hbar \partial _{\mu }+im_{0}c)(\hbar \partial ^{\mu }-im_{0}c)\right]\psi =0} : in factored tensor format The Schrödinger equation is the low-velocity limiting case (v ≪ c) of the Klein–Gordon equation. When the relation is applied to a four-vector field A μ {\displaystyle A^{\mu }} instead of a Lorentz scalar field ψ {\displaystyle \psi } , then one gets the Proca equation (in Lorenz gauge): [ ∂ ⋅ ∂ + ( m 0 c ℏ ) 2 ] A μ = 0 {\displaystyle \left[\mathbf {\partial } \cdot \mathbf {\partial } +\left({\frac {m_{0}c}{\hbar }}\right)^{2}\right]A^{\mu }=0} If the rest mass term is set to zero (light-like particles), then this gives the free Maxwell equation (in Lorenz gauge) [ ∂ ⋅ ∂ ] A μ = 0 {\displaystyle [\mathbf {\partial } \cdot \mathbf {\partial } ]A^{\mu }=0} === Representations of the Lorentz group === Under a proper orthochronous Lorentz transformation x → Λx in Minkowski space, all one-particle quantum states ψjσ of spin j with spin z-component σ locally transform under some representation D of the Lorentz group: ψ ( x ) → D ( Λ ) ψ ( Λ − 1 x ) {\displaystyle \psi (x)\rightarrow D(\Lambda )\psi (\Lambda ^{-1}x)} where D(Λ) is some finite-dimensional representation, i.e. a matrix. Here ψ is thought of as a column vector containing components with the allowed values of σ. The quantum numbers j and σ as well as other labels, continuous or discrete, representing other quantum numbers are suppressed. One value of σ may occur more than once depending on the representation. Representations with several possible values for j are considered below. The irreducible representations are labeled by a pair of half-integers or integers (A, B). From these all other representations can be built up using a variety of standard methods, like taking tensor products and direct sums. In particular, space-time itself constitutes a 4-vector representation (⁠1/2⁠, ⁠1/2⁠) so that Λ ∈ D(1/2, 1/2). To put this into context; Dirac spinors transform under the (⁠1/2⁠, 0) ⊕ (0, ⁠1/2⁠) representation. In general, the (A, B) representation space has subspaces that under the subgroup of spatial rotations, SO(3), transform irreducibly like objects of spin j, where each allowed value: j = A + B , A + B − 1 , … , | A − B | , {\displaystyle j=A+B,A+B-1,\dots ,|A-B|,} occurs exactly once. In general, tensor products of irreducible representations are reducible; they decompose as direct sums of irreducible representations. The representations D(j, 0) and D(0, j) can each separately represent particles of spin j. A state or quantum field in such a representation would satisfy no field equation except the Klein–Gordon equation. == Non-linear equations == There are equations which have solutions that do not satisfy the superposition principle. 
=== Nonlinear gauge fields === Yang–Mills equation: describes a non-abelian gauge field Yang–Mills–Higgs equations: describes a non-abelian gauge field coupled with a massive spin-0 particle === Spin 2 === Einstein field equations: describe interaction of matter with the gravitational field (massless spin-2 field): R μ ν − 1 2 g μ ν R + g μ ν Λ = 8 π G c 4 T μ ν {\displaystyle R_{\mu \nu }-{\frac {1}{2}}g_{\mu \nu }\,R+g_{\mu \nu }\Lambda ={\frac {8\pi G}{c^{4}}}T_{\mu \nu }} The solution is a metric tensor field, rather than a wave function. == See also == List of equations in nuclear and particle physics List of equations in quantum mechanics Lorentz transformation Mathematical descriptions of the electromagnetic field Quantization of the electromagnetic field Minimal coupling Scalar field theory Status of special relativity == References == == Further reading ==
Wikipedia/Relativistic_wave_equation
The Spekkens toy model is a conceptually simple toy hidden-variable theory introduced by Robert Spekkens in 2004, to argue in favour of the epistemic view of quantum mechanics. The model is based on a foundational principle: "If one has maximal knowledge, then for every system, at every time, the amount of knowledge one possesses about the ontic state of the system at that time must equal the amount of knowledge one lacks." This is called the "knowledge balance principle". Within the bounds of this model, many phenomena typically associated with strictly quantum-mechanical effects are present. These include (but are not limited to) entanglement, noncommutativity of measurements, teleportation, interference, the no-cloning and no-broadcasting theorems, and unsharp measurements. The toy model cannot, however, reproduce quantum nonlocality and quantum contextuality, as it is a local and non-contextual hidden-variable theory. == Background == For nearly a century, physicists and philosophers have been attempting to explain the physical meaning of quantum states. The argument is typically one between two fundamentally opposed views: the ontic view, which describes quantum states as states of physical reality, and the epistemic view, which describes quantum states as states of our incomplete knowledge about a system. Both views have had strong support over the years; notably, the ontic view was supported by Heisenberg and Schrödinger, and the epistemic view by Einstein. The majority of 20th-century quantum physics was dominated by the ontic view, and it remains the view generally accepted by physicists today. There is, however, a substantial subset of physicists who take the epistemic view. Both views have issues associated with them, as both contradict physical intuition in many cases, and neither has been conclusively proven to be the superior viewpoint. The Spekkens toy model is designed to argue in favour of the epistemic viewpoint. It is, by construction, an epistemic model. The knowledge balance principle of the model ensures that any measurement done on a system within it gives incomplete knowledge of the system, and thus the observable states of the system are epistemic. This model also implicitly assumes that there is an ontic state which the system is in at any given time, but simply that we are unable to observe it. The model can not be used to derive quantum mechanics, as there are fundamental differences between the model and quantum theory. In particular, the model is one of local and noncontextual variables, which Bell's theorem tells us cannot ever reproduce all the predictions of quantum mechanics. The toy model does, however, reproduce a number of strange quantum effects and does so from a strictly epistemic perspective; as such, it can be interpreted as strong evidence in favour of the epistemic view. == The model == The Spekkens toy model is based on the knowledge balance principle "the number of questions about the physical state of a system that are answered must always be equal to the number that are unanswered in a state of maximal knowledge". However, the "knowledge" one can possess about a system must be carefully defined for this principle to have any meaning. To do this, the concept of a canonical set of yes-or-no questions is defined as the minimal number of questions needed. For example, for a system with 4 states, one can ask: "Is the system in state 1?", "Is the system in state 2?" 
and "Is the system in state 3?", which would determine the state of the system (state 4 being the case if all three questions were answered "No."). However, one could also ask: "Is the system in either state 1 or state 2?" and "Is the system in either state 1 or state 3?", which would also uniquely determine the state and has only two questions in the set. This set of questions is not unique, however, it is clear that at least two questions (bits) are required to exactly represent one of four states. We say that for a system with 4 states, the number of questions in a canonical set is two. As such, in this case, the knowledge balance principle insists that the maximal number of questions in a canonical set that one can have answered at any given time is one, such that the amount of knowledge is equal to the amount of ignorance. It is also assumed in the model that it is always possible to saturate the inequality, i.e. to have knowledge of the system exactly equal to that which is lacked, and thus at least two questions must be in the canonical set. Since no question is allowed to exactly specify the state of the system, the number of possible ontic states must be at least 4 (if it were less than 4, the model would be trivial, since any question that could be asked may return an answer specifying the exact state of the system, thus no question can be asked). Since a system with four states (described above) exists, it is referred to as an elementary system. The model then also assumes that every system is built out of these elementary systems, and that each subsystem of any system also obeys the knowledge balance principle. == Elementary systems == For an elementary system, let 1 ∨ 2 represent the state of knowledge "the system is in state 1 or state 2". Under this model, there are 6 states of maximal knowledge that can be obtained: 1 ∨ 2, 1 ∨ 3, 1 ∨ 4, 2 ∨ 3, 2 ∨ 4 and 3 ∨ 4. There is also a single state less than maximal knowledge, corresponding to 1 ∨ 2 ∨ 3 ∨ 4. These can be mapped to 6 qubit states in a natural manner: 1 ∨ 2 ⟺ | 0 ⟩ , {\displaystyle 1\lor 2\iff |0\rangle ,} 3 ∨ 4 ⟺ | 1 ⟩ , {\displaystyle 3\lor 4\iff |1\rangle ,} 1 ∨ 3 ⟺ | + ⟩ , {\displaystyle 1\lor 3\iff |+\rangle ,} 2 ∨ 4 ⟺ | − ⟩ , {\displaystyle 2\lor 4\iff |-\rangle ,} 1 ∨ 4 ⟺ | i ⟩ , {\displaystyle 1\lor 4\iff |i\rangle ,} 2 ∨ 3 ⟺ | − i ⟩ , {\displaystyle 2\lor 3\iff |-i\rangle ,} 1 ∨ 2 ∨ 3 ∨ 4 ⟺ I / 2. {\displaystyle 1\lor 2\lor 3\lor 4\iff I/2.} Under this mapping, it is clear that two states of knowledge in the toy theory correspond to two orthogonal states for the qubit if and only if they share no ontic states in common. This mapping also gives analogues in the toy model to quantum fidelity, compatibility, convex combinations of states and coherent superposition, and can be mapped to the Bloch sphere in the natural fashion. However, the analogy breaks down to a degree when considering coherent superposition, as one of the forms of the coherent superposition in the toy model returns a state that is orthogonal to what is expected with the corresponding superposition in the quantum model, and this can be shown to be an intrinsic difference between the two systems. This reinforces the earlier point that this model is not a restricted version of quantum mechanics, but instead a separate model that mimics quantum properties. === Transformations === The only transformations on the ontic state of the system that respect the knowledge balance principle are permutations of the 4 ontic states. 
These map valid epistemic states to other valid epistemic states, for instance (using cycle notation to represent permutations): ( ( 12 ) ( 34 ) ) ( 1 ∨ 2 ) → 1 ∨ 2 , {\displaystyle ((12)(34))(1\lor 2)\to 1\lor 2,} ( ( 12 ) ( 34 ) ) ( 1 ∨ 3 ) → 2 ∨ 4 , {\displaystyle ((12)(34))(1\lor 3)\to 2\lor 4,} ( ( 12 ) ( 3 ) ( 4 ) ) ( 1 ∨ 3 ) → 2 ∨ 3. {\displaystyle ((12)(3)(4))(1\lor 3)\to 2\lor 3.} Considering again the analogy between the epistemic states of this model and the qubit states on the Bloch sphere, these transformations consist of the typical allowed permutations of the 6 analogous states, as well as a set of permutations that are forbidden in the continuous qubit model. These are transformations such as (12)(3)(4), which correspond to antiunitary maps on Hilbert space. These are not allowed in a continuous model, however in this discrete system they arise as natural transformations. There is, however, an analogy to a characteristically quantum phenomenon, that no allowed transformation functions as a universal state inverter. In this case, this means that there is no single transformation S with the properties S ( 1 ∨ 2 ) → 3 ∨ 4 , S ( 3 ∨ 4 ) → 1 ∨ 2 , {\displaystyle S(1\lor 2)\to 3\lor 4,\qquad S(3\lor 4)\to 1\lor 2,} S ( 1 ∨ 3 ) → 2 ∨ 4 , S ( 2 ∨ 4 ) → 1 ∨ 3 , {\displaystyle S(1\lor 3)\to 2\lor 4,\qquad S(2\lor 4)\to 1\lor 3,} S ( 1 ∨ 4 ) → 2 ∨ 3 , S ( 2 ∨ 3 ) → 1 ∨ 4. {\displaystyle S(1\lor 4)\to 2\lor 3,\qquad S(2\lor 3)\to 1\lor 4.} === Measurements === In the theory, only reproducible measurements (measurements that cause the system after the measurement to be consistent with the results of the measurement) are considered. As such, only measurements that distinguish between valid epistemic states are allowed. For instance, we could measure whether the system is in states 1 or 2, 1 or 3, or 1 or 4, corresponding to 1 ∨ 2, 1 ∨ 3, and 1 ∨ 4. Once the measurement has been done, one's state of knowledge about the system in question is updated; specifically, if one measured the system in the state 2 ∨ 4, then the system would now be known to be in the ontic state 2 or the ontic state 4. Before a measurement is done on a system, it has a definite ontic state, in the case of an elementary system 1, 2, 3 or 4. If the initial ontic state of a system is 1, and one measured the state of the system with respect to the {1 ∨ 3, 2 ∨ 4} basis, then one would measure the state 1 ∨ 3. Another measurement done in this basis would produce the same result. However, the underlying ontic state of the system can be changed by such a measurement, to either the state 1 or the state 3. This reflects the nature of measurement in quantum theory. Measurements done on a system in the toy model are non-commutative, as is the case for quantum measurements. This is due to the above fact, that a measurement can change the underlying ontic state of the system. For example, if one measures a system in the state 1 ∨ 3 in the {1 ∨ 3, 2 ∨ 4} basis, then one obtains the state 1 ∨ 3 with certainty. However, if one first measures the system in the {1 ∨ 2, 3 ∨ 4} basis, then in the {1 ∨ 3, 2 ∨ 4} basis, then the final state of the system is uncertain, prior to the measurement. The nature of measurements and of the coherent superposition in this theory also gives rise to the quantum phenomenon of interference. When two states are mixed by a coherent superposition, the result is a sampling of the ontic states from both, rather than the typical "and" or "or". 
This is one of the most important results of this model, as interference is often seen as evidence against the epistemic view. This model indicates that it can arise from a strictly epistemic system. == Groups of elementary systems == A pair of elementary systems has 16 combined ontic states, corresponding to the combinations of the numbers 1 through 4 with 1 through 4 (i.e. the system can be in the state (1,1), (1,2), etc.). The epistemic state of the system is limited by the knowledge balance principle once again. Now, however, it restricts not only the knowledge of the system as a whole, but also that of each of the constituent subsystems. Two types of states of maximal knowledge arise as a result. The first of these corresponds to having maximal knowledge of both subsystems; for example, that the first subsystem is in the state 1 ∨ 3 and the second is in the state 3 ∨ 4, meaning that the system as a whole is in one of the states (1,3), (1,4), (3,3) or (3,4). In this case, nothing is known about the correspondence between the two systems. The second is more interesting, corresponding to having no knowledge about either system individually, but having maximal knowledge about their interaction. For example, one could know that the ontic state of the system is one of (1,1), (2,2), (3,4) or (4,3). Here nothing is known about the state of either individual system, but knowledge of one system gives knowledge of the other. This corresponds to the entangling of particles in quantum theory. It is possible to consider valid transformations on the states of a group of elementary systems, although the mathematics of such an analysis is more complicated than the case for a single system. Transformations consisting of a valid transformation acting independently on each subsystem are always valid. In the case of a two-system model, there is also a transformation that is analogous to the c-not operator on qubits. Furthermore, within the bounds of the model it is possible to prove no-cloning and no-broadcasting theorems, reproducing a fair deal of the mechanics of quantum information theory. The monogamy of pure entanglement also has a strong analogue within the toy model, as a group of three or more systems in which knowledge of one system would grant knowledge of the others would break the knowledge balance principle. An analogy of quantum teleportation also exists in the model, as well as a number of important quantum phenomena. == Extensions and further work == The toy model and its extensions to continuous phase space and to higher-dimensional discrete phase space have been termed "epistricted theories". Work has been done on several models of physical systems with similar characteristics, which are described in detail in the main publication on this model. There are ongoing attempts to extend this model in various ways, such as van Enk's model and a continuous-variable version based on Liouville mechanics. The toy model has also been analyzed from the viewpoint of categorical quantum mechanics. Currently, there is work being done to reproduce quantum formalism from information-theoretic axioms. Although the model itself differs in many respects from quantum theory, it reproduces a number of effects considered to be overwhelmingly quantum. As such, the underlying principle, that quantum states are states of incomplete knowledge, may offer some hints as to how to proceed in this manner and may lend hope to those pursuing this goal.
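The single-system rules described earlier are simple enough to check exhaustively. The sketch below (assuming Python; an illustrative addition, not part of the original article) lists the six epistemic states of an elementary system, confirms that every permutation of the four ontic states maps valid epistemic states to valid epistemic states, and confirms by brute force that no permutation acts as a universal state inverter.

    from itertools import permutations

    # The six states of maximal knowledge of one elementary system,
    # written as unordered pairs of ontic states: 1v2, 3v4, 1v3, 2v4, 1v4, 2v3.
    states = [frozenset(s) for s in ({1, 2}, {3, 4}, {1, 3}, {2, 4}, {1, 4}, {2, 3})]

    # The disjoint partner of an epistemic state (the analogue of the
    # orthogonal qubit state under the mapping given in the text).
    def partner(s):
        return frozenset({1, 2, 3, 4}) - s

    # A permutation of the ontic states acts elementwise on an epistemic state.
    def act(perm, s):
        return frozenset(perm[i - 1] for i in s)

    # Every permutation sends valid epistemic states to valid epistemic states.
    for perm in permutations((1, 2, 3, 4)):
        assert all(act(perm, s) in states for s in states)

    # No permutation maps every epistemic state to its disjoint partner,
    # i.e. there is no universal state inverter.
    inverters = [perm for perm in permutations((1, 2, 3, 4))
                 if all(act(perm, s) == partner(s) for s in states)]
    print(inverters)   # prints []

Under the qubit mapping given earlier, the three disjoint pairs used here correspond to the antipodal pairs |0⟩ and |1⟩, |+⟩ and |−⟩, and |i⟩ and |−i⟩ on the Bloch sphere.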
== See also == Hidden variable theory Interpretation of quantum mechanics == References == == External links == Ladina Hausmann; Nuriya Nurgalieva; Lídia del Rio (2021-05-07). "A consolidating review of Spekkens' toy theory". arXiv:2105.03277 [quant-ph].
Wikipedia/Spekkens_toy_model
In theoretical physics, the pilot wave theory, also known as Bohmian mechanics, was the first known example of a hidden-variable theory, presented by Louis de Broglie in 1927. Its more modern version, the de Broglie–Bohm theory, interprets quantum mechanics as a deterministic theory, and avoids issues such as wave function collapse, and the paradox of Schrödinger's cat by being inherently nonlocal. The de Broglie–Bohm pilot wave theory is one of several interpretations of (non-relativistic) quantum mechanics. == History == Louis de Broglie's early results on the pilot wave theory were presented in his thesis (1924) in the context of atomic orbitals where the waves are stationary. Early attempts to develop a general formulation for the dynamics of these guiding waves in terms of a relativistic wave equation were unsuccessful until in 1926 Schrödinger developed his non-relativistic wave equation. He further suggested that since the equation described waves in configuration space, the particle model should be abandoned. Shortly thereafter, Max Born suggested that the wave function of Schrödinger's wave equation represents the probability density of finding a particle. Following these results, de Broglie developed the dynamical equations for his pilot wave theory. Initially, de Broglie proposed a double solution approach, in which the quantum object consists of a physical wave (u-wave) in real space which has a spherical singular region that gives rise to particle-like behaviour; in this initial form of his theory he did not have to postulate the existence of a quantum particle. He later formulated it as a theory in which a particle is accompanied by a pilot wave. De Broglie presented the pilot wave theory at the 1927 Solvay Conference. However, Wolfgang Pauli raised an objection to it at the conference, saying that it did not deal properly with the case of inelastic scattering. De Broglie was not able to find a response to this objection, and he abandoned the pilot-wave approach. Unlike David Bohm years later, de Broglie did not complete his theory to encompass the many-particle case. The many-particle case shows mathematically that the energy dissipation in inelastic scattering could be distributed to the surrounding field structure by a yet-unknown mechanism of the theory of hidden variables. In 1932, John von Neumann published a book, part of which claimed to prove that all hidden variable theories were impossible. This result was found to be flawed by Grete Hermann three years later, though for a variety of reasons this went unnoticed by the physics community for over fifty years. In 1952, David Bohm, dissatisfied with the prevailing orthodoxy, rediscovered de Broglie's pilot wave theory. Bohm developed pilot wave theory into what is now called the de Broglie–Bohm theory. The de Broglie–Bohm theory itself might have gone unnoticed by most physicists, if it had not been championed by John Bell, who also countered the objections to it. In 1987, John Bell rediscovered Grete Hermann's work, and thus showed the physics community that Pauli's and von Neumann's objections only showed that the pilot wave theory did not have locality. == The pilot wave theory == === Principles === The pilot wave theory is a hidden-variable theory. Consequently: the theory has realism (meaning that its concepts exist independently of the observer); the theory has determinism. The positions of the particles are considered to be the hidden variables. 
The observer doesn't know the precise values of these variables; they cannot know them precisely because any measurement disturbs them. On the other hand, the observer is defined not by the wave function of their own atoms but by the atoms' positions. So what one sees around oneself are also the positions of nearby things, not their wave functions. A collection of particles has an associated matter wave which evolves according to the Schrödinger equation. Each particle follows a deterministic trajectory, which is guided by the wave function; collectively, the density of the particles conforms to the magnitude of the wave function. The wave function is not influenced by the particle and can exist also as an empty wave function. The theory brings to light nonlocality that is implicit in the non-relativistic formulation of quantum mechanics and uses it to satisfy Bell's theorem. These nonlocal effects can be shown to be compatible with the no-communication theorem, which prevents use of them for faster-than-light communication, and so is empirically compatible with relativity. == Macroscopic analog == Couder, Fort, et al. claimed that macroscopic oil droplets on a vibrating fluid bath can be used as an analogue model of pilot waves; a localized droplet creates a periodical wave field around itself. They proposed that resonant interaction between the droplet and its own wave field exhibits behaviour analogous to quantum particles: interference in double-slit experiment, unpredictable tunneling (depending in a complicated way on a practically hidden state of field), orbit quantization (that a particle has to 'find a resonance' with field perturbations it creates—after one orbit, its internal phase has to return to the initial state) and Zeeman effect. While attempts to reproduce these experiments have shown some aspects to be questionable and the interpretation with respect to quantum mechanics has been challenged, work on the concept has continued with some success. == Mathematical foundations == To derive the de Broglie–Bohm pilot-wave for an electron, the quantum Lagrangian L ( t ) = 1 2 m v 2 − ( V + Q ) , {\displaystyle L(t)={\frac {1}{2}}mv^{2}-(V+Q),} where V {\displaystyle V} is the potential energy, v {\displaystyle v} is the velocity and Q {\displaystyle Q} is the potential associated with the quantum force (the particle being pushed by the wave function), is integrated along precisely one path (the one the electron actually follows). This leads to the following formula for the Bohm propagator: K Q ( X 1 , t 1 ; X 0 , t 0 ) = 1 J ( t ) 1 2 exp ⁡ [ i ℏ ∫ t 0 t 1 L ( t ) d t ] . {\displaystyle K^{Q}(X_{1},t_{1};X_{0},t_{0})={\frac {1}{J(t)^{\frac {1}{2}}}}\exp \left[{\frac {i}{\hbar }}\int _{t_{0}}^{t_{1}}L(t)\,dt\right].} This propagator allows one to precisely track the electron over time under the influence of the quantum potential Q {\displaystyle Q} . === Derivation of the Schrödinger equation === Pilot wave theory is based on Hamilton–Jacobi dynamics, rather than Lagrangian or Hamiltonian dynamics. Using the Hamilton–Jacobi equation H ( x → , ∇ → x S , t ) + ∂ S ∂ t ( x → , t ) = 0 {\displaystyle H\left(\,{\vec {x}}\,,\;{\vec {\nabla }}_{\!x}\,S\,,\;t\,\right)+{\partial S \over \partial t}\left(\,{\vec {x}},\,t\,\right)=0} it is possible to derive the Schrödinger equation: Consider a classical particle – the position of which is not known with certainty. We must deal with it statistically, so only the probability density ρ ( x → , t ) {\displaystyle \rho ({\vec {x}},t)} is known. 
Probability must be conserved, i.e. ∫ ρ d 3 x → = 1 {\displaystyle \int \rho \,\mathrm {d} ^{3}{\vec {x}}=1} for each t {\displaystyle t} . Therefore, it must satisfy the continuity equation ∂ ρ ∂ t = − ∇ → ⋅ ( ρ v → ) ( 1 ) {\displaystyle {\frac {\,\partial \rho \,}{\partial t}}=-{\vec {\nabla }}\cdot (\rho \,{\vec {v}})\qquad \qquad (1)} where v → ( x → , t ) {\displaystyle \,{\vec {v}}({\vec {x}},t)\,} is the velocity of the particle. In the Hamilton–Jacobi formulation of classical mechanics, velocity is given by v → ( x → , t ) = 1 m ∇ → x S ( x → , t ) {\displaystyle \;{\vec {v}}({\vec {x}},t)={\frac {1}{\,m\,}}\,{\vec {\nabla }}_{\!x}S({\vec {x}},\,t)\;} where S ( x → , t ) {\displaystyle \,S({\vec {x}},t)\,} is a solution of the Hamilton-Jacobi equation − ∂ S ∂ t = | ∇ S | 2 2 m + V ~ ( 2 ) {\displaystyle -{\frac {\partial S}{\partial t}}={\frac {\;\left|\,\nabla S\,\right|^{2}\,}{2m}}+{\tilde {V}}\qquad \qquad (2)} ( 1 ) {\displaystyle \,(1)\,} and ( 2 ) {\displaystyle \,(2)\,} can be combined into a single complex equation by introducing the complex function ψ = ρ e i S ℏ , {\displaystyle \;\psi ={\sqrt {\rho \,}}\,e^{\frac {\,i\,S\,}{\hbar }}\;,} then the two equations are equivalent to i ℏ ∂ ψ ∂ t = ( − ℏ 2 2 m ∇ 2 + V ~ − Q ) ψ {\displaystyle i\,\hbar \,{\frac {\,\partial \psi \,}{\partial t}}=\left(-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}+{\tilde {V}}-Q\right)\psi \quad } with Q = − ℏ 2 2 m ∇ 2 ρ ρ . {\displaystyle \;Q=-{\frac {\;\hbar ^{2}\,}{\,2m\,}}{\frac {\nabla ^{2}{\sqrt {\rho \,}}}{\sqrt {\rho \,}}}~.} The time-dependent Schrödinger equation is obtained if we start with V ~ = V + Q , {\displaystyle \;{\tilde {V}}=V+Q\;,} the usual potential with an extra quantum potential Q {\displaystyle Q} . The quantum potential is the potential of the quantum force, which is proportional (in approximation) to the curvature of the amplitude of the wave function. Note this potential is the same one that appears in the Madelung equations, a classical analog of the Schrödinger equation. === Mathematical formulation for a single particle === The matter wave of de Broglie is described by the time-dependent Schrödinger equation: i ℏ ∂ ψ ∂ t = ( − ℏ 2 2 m ∇ 2 + V ) ψ {\displaystyle i\,\hbar \,{\frac {\,\partial \psi \,}{\partial t}}=\left(-{\frac {\hbar ^{2}}{\,2m\,}}\nabla ^{2}+V\right)\psi \quad } The complex wave function can be represented as: ψ = ρ exp ⁡ ( i S ℏ ) {\displaystyle \psi ={\sqrt {\rho \,}}\;\exp \left({\frac {i\,S}{\hbar }}\right)~} By plugging this into the Schrödinger equation, one can derive two new equations for the real variables. The first is the continuity equation for the probability density ρ : {\displaystyle \,\rho \,:} ∂ ρ ∂ t + ∇ → ⋅ ( ρ v → ) = 0 , {\displaystyle {\frac {\,\partial \rho \,}{\,\partial t\,}}+{\vec {\nabla }}\cdot \left(\rho \,{\vec {v}}\right)=0~,} where the velocity field is determined by the “guidance equation” v → ( r → , t ) = 1 m ∇ → S ( r → , t ) . {\displaystyle {\vec {v}}\left(\,{\vec {r}},\,t\,\right)={\frac {1}{\,m\,}}\,{\vec {\nabla }}S\left(\,{\vec {r}},\,t\,\right)~.} According to pilot wave theory, the point particle and the matter wave are both real and distinct physical entities (unlike standard quantum mechanics, which postulates no physical particle or wave entities, only observed wave-particle duality). The pilot wave guides the motion of the point particles as described by the guidance equation. Ordinary quantum mechanics and pilot wave theory are based on the same partial differential equation. 
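Since the quantum potential Q introduced above is built entirely from the amplitude of the wave function, it is straightforward to evaluate in simple cases. The sketch below (assuming Python with the sympy library; an illustrative addition, not part of the original article) computes Q for a one-dimensional Gaussian density; the overall normalization of ρ cancels in the ratio, and the result is an inverted parabola in x.

    import sympy as sp

    x = sp.symbols('x', real=True)
    m, hbar, sigma = sp.symbols('m hbar sigma', positive=True)

    # A Gaussian probability density; any constant normalization factor
    # cancels in the ratio below, so it is omitted.
    rho = sp.exp(-x**2 / (2 * sigma**2))
    R = sp.sqrt(rho)

    # Quantum potential Q = -(hbar^2 / 2m) * (d^2 R / dx^2) / R
    Q = -hbar**2 / (2 * m) * sp.diff(R, x, 2) / R

    # Proportional to (2*sigma**2 - x**2): an inverted parabola, largest at the
    # center of the packet and decreasing outward.
    print(sp.simplify(Q))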
The main difference is that in ordinary quantum mechanics, the Schrödinger equation is connected to reality by the Born postulate, which states that the probability density of the particle's position is given by ρ = | ψ | 2 . {\displaystyle \;\rho =|\psi |^{2}~.} Pilot wave theory considers the guidance equation to be the fundamental law, and sees the Born rule as a derived concept. The second equation is a modified Hamilton–Jacobi equation for the action S: − ∂ S ∂ t = | ∇ → S | 2 2 m + V + Q , {\displaystyle -{\frac {\partial S}{\partial t}}={\frac {\;\left|\,{\vec {\nabla }}S\,\right|^{2}\,}{\,2m\,}}+V+Q~,} where Q is the quantum potential defined by Q = − ℏ 2 2 m ∇ 2 ρ ρ . {\displaystyle Q=-{\frac {\hbar ^{2}}{\,2m\,}}{\frac {\nabla ^{2}{\sqrt {\rho \,}}}{\sqrt {\rho \,}}}~.} If we choose to neglect Q, our equation is reduced to the Hamilton–Jacobi equation of a classical point particle. So, the quantum potential is responsible for all the mysterious effects of quantum mechanics. One can also combine the modified Hamilton–Jacobi equation with the guidance equation to derive a quasi-Newtonian equation of motion m d d t v → = − ∇ → ( V + Q ) , {\displaystyle m\,{\frac {d}{dt}}\,{\vec {v}}=-{\vec {\nabla }}(V+Q)~,} where the hydrodynamic time derivative is defined as d d t = ∂ ∂ t + v → ⋅ ∇ → . {\displaystyle {\frac {d}{dt}}={\frac {\partial }{\,\partial t\,}}+{\vec {v}}\cdot {\vec {\nabla }}~.} === Mathematical formulation for multiple particles === The Schrödinger equation for the many-body wave function ψ ( r → 1 , r → 2 , ⋯ , t ) {\displaystyle \psi ({\vec {r}}_{1},{\vec {r}}_{2},\cdots ,t)} is given by i ℏ ∂ ψ ∂ t = ( − ℏ 2 2 ∑ i = 1 N ∇ i 2 m i + V ( r 1 , r 2 , ⋯ r N ) ) ψ {\displaystyle i\hbar {\frac {\partial \psi }{\partial t}}=\left(-{\frac {\hbar ^{2}}{2}}\sum _{i=1}^{N}{\frac {\nabla _{i}^{2}}{m_{i}}}+V(\mathbf {r} _{1},\mathbf {r} _{2},\cdots \mathbf {r} _{N})\right)\psi } The complex wave function can be represented as: ψ = ρ exp ⁡ ( i S ℏ ) {\displaystyle \psi ={\sqrt {\rho \,}}\;\exp \left({\frac {i\,S}{\hbar }}\right)} The pilot wave guides the motion of the particles. The guidance equation for the jth particle is: v → j = ∇ j S m j . {\displaystyle {\vec {v}}_{j}={\frac {\nabla _{j}S}{m_{j}}}\;.} The velocity of the jth particle explicitly depends on the positions of the other particles. This means that the theory is nonlocal. === Relativity === An extension to the relativistic case with spin has been developed since the 1990s. === Empty wave function === Lucien Hardy and John Stewart Bell have emphasized that in the de Broglie–Bohm picture of quantum mechanics there can exist empty waves, represented by wave functions propagating in space and time but not carrying energy or momentum, and not associated with a particle. The same concept was called ghost waves (or "Gespensterfelder", ghost fields) by Albert Einstein. The empty wave function notion has been discussed controversially. In contrast, the many-worlds interpretation of quantum mechanics does not call for empty wave functions. == See also == Hydrodynamic quantum analogues Quantum potential == Notes == == References == == External links == "Pilot waves, Bohmian metaphysics, and the foundations of quantum mechanics" Archived 10 April 2016 at the Wayback Machine, lecture course on pilot wave theory by Mike Towler, Cambridge University (2009). 
"Bohmian Mechanics" entry by Sheldon Goldstein in the Stanford Encyclopedia of Philosophy, Fall 2021 Klaus von Bloh's Bohmian mechanics demonstrations in: Wolfram Demonstrations Project
Wikipedia/Pilot_wave_theory
Quantum calculus, sometimes called calculus without limits, is equivalent to traditional infinitesimal calculus without the notion of limits. The two types of calculus in quantum calculus are q-calculus and h-calculus. The goal of both types is to find "analogs" of mathematical objects, where, after taking a certain limit, the original object is returned. In q-calculus, the limit as q tends to 1 is taken of the q-analog. Likewise, in h-calculus, the limit as h tends to 0 is taken of the h-analog. The parameters q {\displaystyle q} and h {\displaystyle h} can be related by the formula q = e h {\displaystyle q=e^{h}} . == Differentiation == The q-differential and h-differential are defined as: d q ( f ( x ) ) = f ( q x ) − f ( x ) {\displaystyle d_{q}(f(x))=f(qx)-f(x)} and d h ( f ( x ) ) = f ( x + h ) − f ( x ) {\displaystyle d_{h}(f(x))=f(x+h)-f(x)} , respectively. The q-derivative and h-derivative are then defined as D q ( f ( x ) ) = d q ( f ( x ) ) d q ( x ) = f ( q x ) − f ( x ) q x − x {\displaystyle D_{q}(f(x))={\frac {d_{q}(f(x))}{d_{q}(x)}}={\frac {f(qx)-f(x)}{qx-x}}} and D h ( f ( x ) ) = d h ( f ( x ) ) d h ( x ) = f ( x + h ) − f ( x ) h {\displaystyle D_{h}(f(x))={\frac {d_{h}(f(x))}{d_{h}(x)}}={\frac {f(x+h)-f(x)}{h}}} , respectively. By taking the limit as q → 1 {\displaystyle q\rightarrow 1} of the q-derivative or as h → 0 {\displaystyle h\rightarrow 0} of the h-derivative, one can obtain the derivative: lim q → 1 D q f ( x ) = lim h → 0 D h f ( x ) = d d x ( f ( x ) ) {\displaystyle \lim _{q\rightarrow 1}D_{q}f(x)=\lim _{h\rightarrow 0}D_{h}f(x)={\frac {d}{dx}}{\Bigl (}f(x){\Bigr )}} == Integration == === q-integral === A function F(x) is a q-antiderivative of f(x) if D_q F(x) = f(x). The q-antiderivative (or q-integral) is denoted by ∫ f ( x ) d q x {\textstyle \int f(x)\,d_{q}x} and an expression for F(x) can be found from: ∫ f ( x ) d q x = ( 1 − q ) ∑ j = 0 ∞ x q j f ( x q j ) {\textstyle \int f(x)\,d_{q}x=(1-q)\sum _{j=0}^{\infty }xq^{j}f(xq^{j})} , which is called the Jackson integral of f(x). For 0 < q < 1, the series converges to a function F(x) on an interval (0, A] if |f(x)x^α| is bounded on the interval (0, A] for some 0 ≤ α < 1. The q-integral is a Riemann–Stieltjes integral with respect to a step function having infinitely many points of increase at the points q^j. The jump at the point q^j is q^j. Calling this step function g_q(t) gives dg_q(t) = d_q t. === h-integral === A function F(x) is an h-antiderivative of f(x) if D_h F(x) = f(x). The h-integral is denoted by ∫ f ( x ) d h x {\textstyle \int f(x)\,d_{h}x} . If a and b differ by an integer multiple of h, then the definite integral ∫ a b f ( x ) d h x {\textstyle \int _{a}^{b}f(x)\,d_{h}x} is given by a Riemann sum of f(x) on the interval [a, b], partitioned into sub-intervals of equal width h. The motivation for the h-integral comes from the Riemann sum of f(x). Following the ideas behind classical integrals, some of the properties of classical integrals also hold for the h-integral. This notion has broad applications in numerical analysis, especially in finite difference calculus. == Example == In infinitesimal calculus, the derivative of the function x n {\displaystyle x^{n}} is n x n − 1 {\displaystyle nx^{n-1}} (for some positive integer n {\displaystyle n} ).
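The operators defined above can be experimented with directly. The following sketch (assuming Python with the sympy library; an illustrative addition, not part of the original article) applies the q- and h-derivatives to x^3, checks that the classical derivative is recovered in the limits q → 1 and h → 0, and evaluates a truncated Jackson integral numerically. The general q- and h-analogs of the power rule that these computations illustrate are stated next.

    import sympy as sp

    x, q, h = sp.symbols('x q h', positive=True)

    def Dq(f):
        # q-derivative: (f(qx) - f(x)) / (qx - x)
        return sp.simplify((f.subs(x, q * x) - f) / (q * x - x))

    def Dh(f):
        # h-derivative: (f(x + h) - f(x)) / h
        return sp.simplify((f.subs(x, x + h) - f) / h)

    f = x**3
    print(sp.factor(Dq(f)))       # x**2*(q**2 + q + 1), i.e. [3]_q * x**2
    print(sp.expand(Dh(f)))       # 3*x**2 + 3*h*x + h**2
    print(sp.limit(Dq(f), q, 1))  # 3*x**2, the ordinary derivative
    print(sp.limit(Dh(f), h, 0))  # 3*x**2

    # Truncated Jackson q-integral of f(t) = t^3 from 0 to 1 with q = 1/2,
    # compared with the exact value 1/[4]_q = (1 - q)/(1 - q^4) = 8/15.
    qv = 0.5
    jackson = (1 - qv) * sum(qv**j * (qv**j)**3 for j in range(60))
    print(jackson, 8 / 15)        # both approximately 0.5333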
The corresponding expressions in q-calculus and h-calculus are: D q ( x n ) = 1 − q n 1 − q x n − 1 = [ n ] q x n − 1 {\displaystyle D_{q}(x^{n})={\frac {1-q^{n}}{1-q}}x^{n-1}=[n]_{q}\ x^{n-1}} where [ n ] q {\displaystyle [n]_{q}} is the q-bracket [ n ] q = 1 − q n 1 − q {\displaystyle [n]_{q}={\frac {1-q^{n}}{1-q}}} and D h ( x n ) = ( x + h ) n − x n h = 1 h ( ∑ k = 0 n ( n k ) x n − k h k − x n ) = 1 h ∑ k = 1 n ( n k ) x n − k h k = ∑ k = 1 n ( n k ) x n − k h k − 1 = n x n − 1 + n ( n − 1 ) 2 h x n − 2 + ⋯ + n h n − 2 x + h n − 1 , {\displaystyle {\begin{aligned}D_{h}(x^{n})&={\frac {(x+h)^{n}-x^{n}}{h}}\\&={\frac {1}{h}}\left(\sum _{k=0}^{n}{{\binom {n}{k}}x^{n-k}h^{k}-x^{n}}\right)\\&={\frac {1}{h}}\sum _{k=1}^{n}{{\binom {n}{k}}x^{n-k}h^{k}}\\&=\sum _{k=1}^{n}{{\binom {n}{k}}x^{n-k}h^{k-1}}\\&=nx^{n-1}+{\frac {n(n-1)}{2}}hx^{n-2}+\cdots +nh^{n-2}x+h^{n-1},\end{aligned}}} respectively. The expression [ n ] q x n − 1 {\displaystyle [n]_{q}x^{n-1}} is then the q-analog and ∑ k = 1 n ( n k ) x n − k h k − 1 {\textstyle \sum _{k=1}^{n}{{\binom {n}{k}}x^{n-k}h^{k-1}}} is the h-analog of the power rule for positive integral powers. The q-Taylor expansion allows for the definition of q-analogs of all of the usual functions, such as the sine function, whose q-derivative is the q-analog of cosine. == History == The h-calculus is the calculus of finite differences, which was studied by George Boole and others, and has proven useful in combinatorics and fluid mechanics. In a sense, q-calculus dates back to Leonhard Euler and Carl Gustav Jacobi, but has only recently begun to find usefulness in quantum mechanics, given its intimate connection with commutativity relations and Lie algebras, specifically quantum groups. == See also == Noncommutative geometry Quantum differential calculus Time scale calculus q-analog Basic hypergeometric series Quantum dilogarithm == References == == Further reading == George Gasper, Mizan Rahman, Basic Hypergeometric Series, 2nd ed, Cambridge University Press (2004), ISBN 978-0-511-52625-1, doi:10.1017/CBO9780511526251 Jackson, F. H. (1908). "On q-functions and a certain difference operator". Transactions of the Royal Society of Edinburgh. 46 (2): 253–281. doi:10.1017/S0080456800002751. S2CID 123927312. Exton, H. (1983). q-Hypergeometric Functions and Applications. New York: Halstead Press. ISBN 0-85312-491-4. Kac, Victor; Cheung, Pokman (2002). Quantum calculus. Universitext. Springer-Verlag. ISBN 0-387-95341-8.
Wikipedia/Quantum_calculus
A quantum cryptographic protocol is device-independent if its security does not rely on trusting that the quantum devices used are truthful. Thus the security analysis of such a protocol needs to consider scenarios of imperfect or even malicious devices. Several important problems have been shown to admit unconditionally secure and device-independent protocols. A closely related topic (that is not discussed in this article) is measurement-device-independent quantum key distribution. == Overview and history == Dominic Mayers and Andrew Yao proposed the idea of designing quantum protocols using "self-testing" quantum apparatus, the internal operations of which can be uniquely determined by their input-output statistics. Subsequently, Roger Colbeck in his thesis proposed the use of Bell tests for checking the honesty of the devices. Since then, several problems have been shown to admit unconditionally secure and device-independent protocols, even when the actual devices performing the Bell test are substantially "noisy," i.e., far from being ideal. These problems include quantum key distribution, randomness expansion, and randomness amplification. == Key distribution == The goal of quantum key distribution is for two parties, Alice and Bob, to share a common secret string through communications over public channels. This was a problem of central interest in quantum cryptography. It was also the motivating problem in Mayers and Yao's paper. A long sequence of works aimed to prove unconditional security with robustness. Umesh Vazirani and Thomas Vidick were the first to reach this goal. Subsequently, Carl A. Miller and Yaoyun Shi proved a similar result using a different approach. == Randomness expansion == The goal of randomness expansion is to generate a longer private random string starting from a uniform input string and using untrusted quantum devices. The idea of using Bell tests to achieve this goal was first proposed by Colbeck. Subsequent works have aimed to prove unconditional security with robustness and to increase the rate of expansion. Vazirani and Vidick were the first to prove full quantum security for an exponentially expanding protocol. Miller and Shi achieved several additional features, including cryptographic-level security, robustness, and a single-qubit requirement on the quantum memory. The approach was subsequently extended by the same authors to show that the noise level can approach the obvious upper bound, at which the output may become deterministic. == Randomness amplification == The goal of randomness amplification is to generate near-perfect randomness (approximating a fair coin toss) starting from a single source of weak randomness (a coin each of whose tosses is somewhat unpredictable, though it may be biased and correlated with previous tosses). This is known to be impossible classically. However, by using quantum devices, it becomes possible even if the devices are untrusted. Roger Colbeck and Renato Renner were motivated by physics considerations to ask the question first. Their construction and the subsequent improvement by Rodrigo Gallego et al. are secure against a non-signalling adversary, and have significant physical interpretations. The first construction that does not require any structural assumptions on the weak source is due to Kai-Min Chung, Yaoyun Shi, and Xiaodi Wu. Since then, research has focused on making constructions that are suitable for implementation. == References ==
Wikipedia/Device-independent_quantum_cryptography
The noisy-storage model refers to a cryptographic model employed in quantum cryptography. It assumes that the quantum memory device of an attacker (adversary) trying to break the protocol is imperfect (noisy). The main goal of this model is to enable the secure implementation of two-party cryptographic primitives, such as bit commitment, oblivious transfer and secure identification. == Motivation == Quantum communication has proven to be extremely useful when it comes to distributing encryption keys. It allows two distant parties, Alice and Bob, to expand a small initial secret key into an arbitrarily long secret key by sending qubits (quantum bits) to each other. Most importantly, it can be shown that any eavesdropper trying to listen in on their communication cannot intercept any information about the long key. This is known as quantum key distribution (QKD). Yet, it has been shown that even quantum communication does not allow the secure implementation of many other two-party cryptographic tasks. These all form instances of secure function evaluation. An example is oblivious transfer. What sets these tasks apart from key distribution is that they aim to solve problems between two parties, Alice and Bob, who do not trust each other. That is, there is no outside party like an eavesdropper, only Alice and Bob. Intuitively, it is this lack of trust that makes the problem hard. Unlike in quantum key distribution, Alice and Bob cannot collaborate to try and detect any eavesdropping activity. Instead, each party has to fend for himself. Since tasks like secure identification are of practical interest, one is willing to make assumptions on how powerful the adversary can be. Security then holds as long as these assumptions are satisfied. In classical cryptography, i.e., without the use of quantum tools, most of these are computational assumptions. Such assumptions consist of two parts. First, one assumes that a particular problem is difficult to solve. For example, one might assume that it is hard to factor a large integer into its prime factors (e.g. 15 = 5 × 3). Second, one assumes that the adversary has a limited amount of computing power, namely less than what is (thought to be) required to solve the chosen problem. === Bounded storage === In information-theoretic cryptography, physical assumptions appear which do not rely on any hardness assumptions, but merely assume a limit on some other resource. In classical cryptography, the bounded-storage model introduced by Ueli Maurer assumes that the adversary can only store a certain number of classical bits. Protocols are known that do (in principle) allow the secure implementation of any cryptographic task as long as the adversary's storage is small. Very intuitively, security becomes possible under this assumption since the adversary has to choose which information to keep. That is, the protocol effectively overflows his memory device, leading to an inevitable lack of information for the adversary. It was later discovered that any classical protocol which requires the honest parties to store n {\displaystyle n} bits in order to execute it successfully can be broken by an adversary that can store more than about O ( n 2 ) {\displaystyle O(n^{2})} bits. That is, the gap between what is required to execute the protocol and what is required to break the security is relatively small. === Bounded quantum storage === This gap changes dramatically when using quantum communication.
That is, Alice and Bob can send qubits to each other as part of the protocol. Likewise, one now assumes that the adversary's quantum storage is limited to a certain number of qubits. There is no restriction on how many classical bits the adversary can store. This is known as the bounded-quantum-storage model. It was shown that there exist quantum protocols in which the honest parties need no quantum storage at all to execute them, but are nevertheless secure as long as Alice transmits more than twice the number of qubits than the adversary can store. === Noisy storage === More generally, security is possible as long as the amount of information that the adversary can store in his memory device is limited. This intuition is captured by the noisy-storage model, which includes the bounded-quantum-storage model as a special case. Such a limitation can, for example, come about if the memory device is extremely large, but very imperfect. In information theory such an imperfect memory device is also called a noisy channel. The motivation for this more general model is threefold. First, it allows one to make statements about much more general memory devices that the adversary may have available. Second, security statements could be made when the signals transmitted, or the storage device itself, uses continuous variables whose dimension is infinite and thus cannot be captured by a bounded storage assumption without additional constraints. Third, even if the dimension of the signals itself is small, the noisy-storage analysis allows security beyond the regime where bounded-storage itself can make any security statement. For example, if the storage channel is entanglement breaking, security is possible even if the storage device is arbitrarily large (i.e., not bounded in any way). == Assumption == The assumption of the noisy-storage model is that during waiting times Δ t {\displaystyle \Delta t} introduced into the protocol, the adversary can only store quantum information in his noisy memory device. Such a device is simply a quantum channel F : S ( H i n ) → S ( H o u t ) {\displaystyle {\mathcal {F}}:{\mathcal {S}}({\mathcal {H}}_{\rm {in}})\rightarrow {\mathcal {S}}({\mathcal {H}}_{\rm {out}})} that takes input states ρ i n ∈ S ( H i n ) {\displaystyle \rho _{\rm {in}}\in {\mathcal {S}}({\mathcal {H}}_{\rm {in}})} to some noisy output states ρ o u t ∈ S ( H o u t ) {\displaystyle \rho _{\rm {out}}\in {\mathcal {S}}({\mathcal {H}}_{\rm {out}})} . Otherwise, the adversary is all powerful. For example, he can store an unlimited amount of classical information and perform any computation instantaneously. The latter assumption also implies that he can perform any form of error correcting encoding before and after using the noisy memory device, even if it is computationally very difficult to do (i.e., it requires a long time). In this context, this is generally referred to as an encoding attack E {\displaystyle {\mathcal {E}}} and a decoding attack D {\displaystyle {\mathcal {D}}} . Since the adversary's classical memory can be arbitrarily large, the encoding E {\displaystyle {\mathcal {E}}} may not only generate some quantum state as input to the storage device F {\displaystyle {\mathcal {F}}} but also output classical information. The adversary's decoding attack D {\displaystyle {\mathcal {D}}} can make use of this extra classical information, as well as any additional information that the adversary may gain after the waiting time has passed. 
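As a minimal illustration of this attack model, the sketch below applies a single noisy memory cell, taken here to be a depolarizing qubit channel (the first example given in the next paragraph), to a stored BB84 state and reports how well the state survives the waiting time; the parameter values are arbitrary.

```python
import numpy as np

def depolarize(rho, lam):
    # N(rho) = lam * rho + (1 - lam) * I/2  -- one noisy qubit memory cell
    return lam * rho + (1 - lam) * np.eye(2) / 2

# A BB84 "plus" state |+><+| that the adversary tries to keep during the waiting time
plus = np.array([[0.5, 0.5], [0.5, 0.5]])

for lam in (1.0, 0.75, 0.5, 0.0):
    stored = depolarize(plus, lam)
    fidelity = np.real(np.trace(stored @ plus))  # <+|N(|+><+|)|+> for this pure state
    print(f"lambda = {lam:4.2f}  fidelity with |+> = {fidelity:.3f}")
# lam = 1 is a perfect (bounded-storage) cell; lam = 0 retains no information at all.
```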
In practice, one often considers storage devices that consist of N {\displaystyle N} memory cells, each of which is subject to noise. In information-theoretic terms, this means that the device has the form F = N ⊗ N {\displaystyle {\mathcal {F}}={\mathcal {N}}^{\otimes N}} , where N : S ( C d ) → S ( C d ) {\displaystyle {\mathcal {N}}:S(\mathbb {C} ^{d})\rightarrow S(\mathbb {C} ^{d})} is a noisy quantum channel acting on a memory cell of dimension d {\displaystyle d} . === Examples === The storage device consists of N {\displaystyle N} qubits, each of which is subject to depolarizing noise. That is, F = N ⊗ N {\displaystyle {\mathcal {F}}={\mathcal {N}}^{\otimes N}} , where N ( ρ ) = λ ρ + ( 1 − λ ) i d / 2 {\displaystyle {\mathcal {N}}(\rho )=\lambda \rho +(1-\lambda ){\mathsf {id}}/2} is the 2-dimensional depolarizing channel. The storage device consists of N {\displaystyle N} qubits, which are noise-free. This corresponds to the special case of bounded-quantum-storage. That is, F = i d ⊗ N {\displaystyle {\mathcal {F}}={\mathsf {id}}^{\otimes N}} , where i d {\displaystyle {\mathsf {id}}} is the identity channel. == Protocols == Most protocols proceed in two steps. First, Alice and Bob exchange n {\displaystyle n} qubits encoded in two or three mutually unbiased bases. These are the same encodings that are used in the BB84 or six-state protocols of quantum key distribution. Typically, this takes the form of Alice sending such qubits to Bob, and Bob measuring them immediately on arrival. This has the advantage that Alice and Bob need no quantum storage to execute the protocol. It is furthermore experimentally relatively easy to create such qubits, making it possible to implement such protocols using currently available technology. The second step is to perform classical post-processing of the measurement data obtained in step one. Techniques used depend on the protocol in question and include privacy amplification, error-correcting codes, min-entropy sampling, and interactive hashing. === General === To demonstrate that all two-party cryptographic tasks can be implemented securely, a common approach is to show that a simple cryptographic primitive can be implemented that is known to be universal for secure function evaluation. That is, once one manages to build a protocol for such a cryptographic primitive, all other tasks can be implemented by using this primitive as a basic building block. One such primitive is oblivious transfer. In turn, oblivious transfer can be constructed from an even simpler building block known as weak string erasure in combination with cryptographic techniques such as privacy amplification. All protocols proposed to date allow one of the parties (Alice) to have even an unlimited amount of noise-free quantum memory. I.e., the noisy-storage assumption is applied to only one of the parties (Bob). For storage devices of the form F = N ⊗ N {\displaystyle {\mathcal {F}}={\mathcal {N}}^{\otimes N}} it is known that any two-party cryptographic task can be implemented securely by means of weak string erasure and oblivious transfer whenever any of the following conditions hold. For bounded-quantum-storage (i.e., N = i d {\displaystyle {\mathcal {N}}={\mathsf {id}}} ), security can be achieved using a protocol in which Alice sends n > 2 N {\displaystyle n>2N} BB84 encoded qubits. That is, security can be achieved when Alice sends more than twice as many qubits as Bob can store. 
One can also look at this from Bob's perspective and say that security can be achieved when Bob can store strictly less than half of the qubits that Alice sent, i.e., N < n / 2 {\displaystyle N<n/2} . For bounded-quantum-storage using higher-dimensional memory cells (i.e., each cell is not a qubit, but a qudit), security can be achieved in a protocol in which Alice sends n {\displaystyle n} higher-dimensional qudits encoded in one of the possible mutually unbiased bases. In the limit of large dimensions, security can be achieved whenever n ⪆ N {\displaystyle n\gtrapprox N} . That is, security can always be achieved as long as Bob cannot store any constant fraction of the transmitted signals. This is optimal for the protocols considered since for n = N {\displaystyle n=N} a dishonest Bob can store all qudits sent by Alice. It is not known whether the same is possible using merely BB84 encoded qubits. For noisy-storage and devices of the form F = N ⊗ N {\displaystyle {\mathcal {F}}={\mathcal {N}}^{\otimes N}} security can be achieved using a protocol in which Alice sends n {\displaystyle n} BB84 encoded qubits if n > 2 ⋅ N ⋅ C ( N ) {\displaystyle n>2\cdot N\cdot C({\mathcal {N}})} , where C ( N ) {\displaystyle C({\mathcal {N}})} is the classical capacity of the quantum channel N {\displaystyle {\mathcal {N}}} , and N {\displaystyle {\mathcal {N}}} obeys the so-called strong converse property, or, if n > 2 ⋅ N ⋅ E C ( N ) {\displaystyle n>2\cdot N\cdot E_{C}({\mathcal {N}})} , where E C ( N ) {\displaystyle E_{C}({\mathcal {N}})} is the entanglement cost of the quantum channel N {\displaystyle {\mathcal {N}}} . This is generally much better than the condition on the classical capacity; however, it is harder to evaluate E C ( N ) {\displaystyle E_{C}({\mathcal {N}})} . For noisy-storage and devices of the form F = N ⊗ N {\displaystyle {\mathcal {F}}={\mathcal {N}}^{\otimes N}} security can be achieved using a protocol in which Alice sends n {\displaystyle n} qubits encoded in one of the three mutually unbiased bases per qubit, if n > Q ( N ) N {\displaystyle n>Q({\mathcal {N}})N} , where Q {\displaystyle Q} is the quantum capacity of N {\displaystyle {\mathcal {N}}} , and the strong converse parameter of N {\displaystyle {\mathcal {N}}} is not too small. The three mutually unbiased bases are the same encodings as in the six-state protocol of quantum key distribution. The last condition is the best known condition for most channels, yet the quantum capacity as well as the strong converse parameter are generally not easy to determine. === Specific tasks === Using such basic primitives as building blocks is not always the most efficient way to solve a cryptographic task. Specialized protocols targeted to solve specific problems are generally more efficient. Examples of known protocols are Bit commitment in the noisy-storage model, and in the case of bounded-quantum-storage Oblivious transfer in the noisy-storage model, and in the case of bounded-quantum-storage Secure identification in the bounded-quantum-storage model == Noisy-storage and QKD == The assumption of bounded-quantum-storage has also been applied outside the realm of secure function evaluation. In particular, it has been shown that if the eavesdropper in quantum key distribution is memory-bounded, higher bit error rates can be tolerated in an experimental implementation. == See also == Stephanie Wehner == References ==
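To make these conditions concrete, the following sketch evaluates the BB84-based criterion n > 2·N·C(N) for a memory made of depolarizing qubit cells. It assumes the standard single-letter expression C = 1 − h((1 − λ)/2) for the classical capacity of the qubit depolarizing channel N(ρ) = λρ + (1 − λ)id/2, with h the binary entropy; the cell count and noise parameter are purely illustrative.

```python
import math

def h(p):
    # Binary entropy in bits
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def classical_capacity_depolarizing(lam):
    # C(N) for N(rho) = lam*rho + (1-lam)*id/2 (standard single-letter formula,
    # stated here as an assumption for illustration)
    return 1 - h((1 - lam) / 2)

N_cells = 10_000          # adversary's number of noisy qubit memory cells (hypothetical)
lam = 0.7                 # noise parameter of each cell (hypothetical)
C = classical_capacity_depolarizing(lam)

n_required = 2 * N_cells * C
print(f"C(N) = {C:.3f}; security if Alice sends n > {n_required:.0f} BB84 qubits")
# In the noiseless limit lam -> 1 this reduces to the bounded-storage condition n > 2N.
```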
Wikipedia/Noisy-storage_model
Zero trust architecture (ZTA) or perimeterless security is a design and implementation strategy of IT systems. The principle is that users and devices should not be trusted by default, even if they are connected to a privileged network such as a corporate LAN and even if they were previously verified. ZTA is implemented by establishing identity verification, validating device compliance prior to granting access, and ensuring least privilege access to only explicitly-authorized resources. Most modern corporate networks consist of many interconnected zones, cloud services and infrastructure, connections to remote and mobile environments, and connections to non-conventional IT, such as IoT devices. The traditional approach of trusting users and devices within a notional "corporate perimeter" or via a VPN connection is commonly not sufficient in the complex environment of a corporate network. The zero trust approach advocates mutual authentication, including checking the identity and integrity of users and devices without respect to location, and providing access to applications and services based on the confidence of user and device identity and device status in combination with user authentication. The zero trust architecture has been proposed for use in specific areas such as supply chains. The principles of zero trust can be applied to data access, and to the management of data. This brings about zero trust data security, where every request to access the data needs to be authenticated dynamically while ensuring least-privilege access to resources. In order to determine if access can be granted, policies can be applied based on the attributes of the data, who the user is, and the type of environment, using Attribute-Based Access Control (ABAC). This zero-trust data security approach can protect access to the data. == History == In April 1994, the term "zero trust" was coined by Stephen Paul Marsh in his doctoral thesis on computer security at the University of Stirling. Marsh's work studied trust as something finite that can be described mathematically, asserting that the concept of trust transcends human factors such as morality, ethics, lawfulness, justice, and judgement. The problems of the Smartie or M&M model of the network (the precursor description of de-perimeterisation) were described by a Sun Microsystems engineer in a Network World article in May 1994, who described firewalls' perimeter defence as a hard shell around a soft centre, like a Cadbury Egg. In 2001 the first version of the OSSTMM (Open Source Security Testing Methodology Manual) was released, and this had some focus on trust. Version 3, which came out around 2007, has a whole chapter on Trust which says "Trust is a Vulnerability" and talks about how to apply the OSSTMM 10 controls based on trust levels. In 2003 the challenges of defining the perimeter of an organisation's IT systems were highlighted by the Jericho Forum, which discussed the trend of what was then given the name "de-perimeterisation". In response to Operation Aurora, a Chinese APT attack throughout 2009, Google started to implement a zero-trust architecture referred to as BeyondCorp. In 2010 the term zero trust model was used by analyst John Kindervag of Forrester Research to denote stricter cybersecurity programs and access control within corporations. However, it would take almost a decade for zero trust architectures to become prevalent, driven in part by increased adoption of mobile and cloud services. 
In 2018, work undertaken in the United States by cybersecurity researchers at NIST and NCCoE led to the publication of NIST SP 800-207 – Zero Trust Architecture. The publication defines zero trust (ZT) as a collection of concepts and ideas designed to reduce the uncertainty in enforcing accurate, per-request access decisions in information systems and services in the face of a network viewed as compromised. A zero trust architecture (ZTA) is an enterprise's cyber security plan that utilizes zero trust concepts and encompasses component relationships, workflow planning, and access policies. Therefore, a zero trust enterprise is the network infrastructure (physical and virtual) and operational policies that are in place for an enterprise as a product of a zero trust architecture plan. There are several ways to implement all the tenets of ZT; a full ZTA solution will include elements of all three: Using enhanced identity governance and policy-based access controls. Using micro-segmentation Using overlay networks or software-defined perimeters In 2019 the United Kingdom National Cyber Security Centre (NCSC) recommended that network architects consider a zero trust approach for new IT deployments, particularly where significant use of cloud services is planned. An alternative but consistent approach is taken by NCSC, in identifying the key principles behind zero trust architectures: Single strong source of user identity User authentication Machine authentication Additional context, such as policy compliance and device health Authorization policies to access an application Access control policies within an application == See also == Trust, but verify – Russian proverb Blast radius Password fatigue Secure access service edge Identity threat detection and response == References ==
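The per-request, attribute-based access decisions described in this article can be sketched as a small policy-evaluation routine. The example below is a hypothetical illustration only (the attribute names and policy rules are invented, not taken from NIST SP 800-207 or the NCSC guidance); it shows how user identity, device health, and data attributes might be combined into an allow/deny decision made afresh for every request.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_id: str
    user_authenticated: bool      # single strong source of user identity
    device_healthy: bool          # device compliance / health attestation
    data_classification: str      # attribute of the data being accessed
    user_clearance: str           # attribute of the user
    action: str

ORDER = {"public": 0, "internal": 1, "confidential": 2}

def authorize(req: Request) -> bool:
    """Evaluate a request against a toy zero-trust / ABAC policy.
    Every request is evaluated; nothing is trusted because of network location."""
    if not req.user_authenticated or not req.device_healthy:
        return False
    # Least privilege: the user's clearance must dominate the data classification.
    if ORDER[req.user_clearance] < ORDER[req.data_classification]:
        return False
    return req.action in ("read",)   # only explicitly-authorized actions

print(authorize(Request("alice", True, True, "internal", "confidential", "read")))   # True
print(authorize(Request("bob", True, False, "internal", "confidential", "read")))    # False: unhealthy device
```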
Wikipedia/Zero_trust_security_model
A cryptosystem is considered to have information-theoretic security (also called unconditional security) if the system is secure against adversaries with unlimited computing resources and time. In contrast, a system which depends on the computational cost of cryptanalysis to be secure (and thus can be broken by an attack with unlimited computation) is called computationally secure or conditionally secure. == Overview == An encryption protocol with information-theoretic security is impossible to break even with infinite computational power. Protocols proven to be information-theoretically secure are resistant to future developments in computing. The concept of information-theoretically secure communication was introduced in 1949 by American mathematician Claude Shannon, one of the founders of classical information theory, who used it to prove the one-time pad system was secure. Information-theoretically secure cryptosystems have been used for the most sensitive governmental communications, such as diplomatic cables and high-level military communications. There are a variety of cryptographic tasks for which information-theoretic security is a meaningful and useful requirement. A few of these are: Secret sharing schemes such as Shamir's are information-theoretically secure (and also perfectly secure) in that having less than the requisite number of shares of the secret provides no information about the secret. More generally, secure multiparty computation protocols often have information-theoretic security. Private information retrieval with multiple databases can be achieved with information-theoretic privacy for the user's query. Reductions between cryptographic primitives or tasks can often be achieved information-theoretically. Such reductions are important from a theoretical perspective because they establish that primitive Π {\displaystyle \Pi } can be realized if primitive Π ′ {\displaystyle \Pi '} can be realized. Symmetric encryption can be constructed under an information-theoretic notion of security called entropic security, which assumes that the adversary knows almost nothing about the message being sent. The goal here is to hide all functions of the plaintext rather than all information about it. Information-theoretic cryptography is quantum-safe. == Physical layer encryption == === Technical limitations === Algorithms which are computationally or conditionally secure (i.e., they are not information-theoretically secure) are dependent on resource limits. For example, RSA relies on the assertion that factoring large numbers is hard. A weaker notion of security, defined by Aaron D. Wyner, established a now-flourishing area of research that is known as physical layer encryption. It exploits the physical wireless channel for its security by communications, signal processing, and coding techniques. The security is provable, unbreakable, and quantifiable (in bits/second/hertz). Wyner's initial physical layer encryption work in the 1970s posed the Alice–Bob–Eve problem in which Alice wants to send a message to Bob without Eve decoding it. If the channel from Alice to Bob is statistically better than the channel from Alice to Eve, it had been shown that secure communication is possible. That is intuitive, but Wyner measured the secrecy in information theoretic terms defining secrecy capacity, which essentially is the rate at which Alice can transmit secret information to Bob. 
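For the degraded Gaussian wiretap channel, Wyner's secrecy capacity takes the simple form of the (non-negative) gap between the capacities of Bob's and Eve's channels. The sketch below evaluates this classical formula for a real-valued channel; the signal-to-noise ratios are arbitrary values chosen purely for illustration.

```python
import math

def secrecy_capacity_gaussian(snr_bob, snr_eve):
    # Degraded Gaussian wiretap channel (real-valued, per channel use):
    # Cs = [ 1/2*log2(1+SNR_B) - 1/2*log2(1+SNR_E) ]^+
    cb = 0.5 * math.log2(1 + snr_bob)
    ce = 0.5 * math.log2(1 + snr_eve)
    return max(0.0, cb - ce)

print(secrecy_capacity_gaussian(snr_bob=15.0, snr_eve=3.0))  # positive: secrecy is possible
print(secrecy_capacity_gaussian(snr_bob=3.0, snr_eve=15.0))  # 0.0: Eve's channel is better
```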
Shortly afterward, Imre Csiszár and Körner showed that secret communication was possible even if Eve had a statistically better channel to Alice than Bob did. The basic idea of the information theoretic approach to securely transmit confidential messages (without using an encryption key) to a legitimate receiver is to use the inherent randomness of the physical medium (including noises and channel fluctuations due to fading) and exploit the difference between the channel to a legitimate receiver and the channel to an eavesdropper to benefit the legitimate receiver. More recent theoretical results are concerned with determining the secrecy capacity and optimal power allocation in broadcast fading channels. There are caveats, as many capacities are not computable unless the assumption is made that Alice knows the channel to Eve. If that were known, Alice could simply place a null in Eve's direction. Secrecy capacity for MIMO and multiple colluding eavesdroppers is more recent and ongoing work, and such results still make the non-useful assumption about eavesdropper channel state information knowledge. Still other work is less theoretical by attempting to compare implementable schemes. One physical layer encryption scheme is to broadcast artificial noise in all directions except that of Bob's channel, which basically jams Eve. One paper by Negi and Goel details its implementation, and Khisti and Wornell computed the secrecy capacity when only statistics about Eve's channel are known. Parallel to that work in the information theory community is work in the antenna community, which has been termed near-field direct antenna modulation or directional modulation. It has been shown that by using a parasitic array, the transmitted modulation in different directions could be controlled independently. Secrecy could be realized by making the modulations in undesired directions difficult to decode. Directional modulation data transmission was experimentally demonstrated using a phased array. Others have demonstrated directional modulation with switched arrays and phase-conjugating lenses. That type of directional modulation is really a subset of Negi and Goel's additive artificial noise encryption scheme. Another scheme using pattern-reconfigurable transmit antennas for Alice called reconfigurable multiplicative noise (RMN) complements additive artificial noise. The two work well together in channel simulations in which nothing is assumed known to Alice or Bob about the eavesdroppers. == Secret key agreement == The different works mentioned in the previous part employ, in one way or another, the randomness present in the wireless channel to transmit information-theoretically secure messages. Conversely, we could analyze how much secrecy one can extract from the randomness itself in the form of a secret key. That is the goal of secret key agreement. In this line of work, started by Maurer and Ahlswede and Csiszár, the basic system model removes any restriction on the communication schemes and assumes that the legitimate users can communicate over a two-way, public, noiseless, and authenticated channel at no cost. This model has been subsequently extended to account for multiple users and a noisy channel among others. == See also == Leftover hash lemma (privacy amplification) Semantic security == References ==
Wikipedia/Unconditional_security_(cryptography)
A physical unclonable function (sometimes also called physically-unclonable function, which refers to a weaker security metric than a physical unclonable function), or PUF, is a physical object whose operation cannot be reproduced ("cloned") in a physical way (by making another system using the same technology), and that, for a given input and conditions (challenge), provides a physically defined "digital fingerprint" output (response) that serves as a unique identifier, most often for a semiconductor device such as a microprocessor. PUFs are often based on unique physical variations occurring naturally during semiconductor manufacturing. A PUF is a physical entity embodied in a physical structure. PUFs are implemented in integrated circuits, including FPGAs, and can be used in applications with high-security requirements, more specifically cryptography, Internet of Things (IoT) devices and privacy protection. == History == Early references about systems that exploit the physical properties of disordered systems for authentication purposes date back to Bauder in 1983 and Simmons in 1984. Naccache and Frémanteau provided an authentication scheme in 1992 for memory cards. PUFs were first formally proposed in a general fashion by Pappu in 2001, under the name Physical One-Way Function (POWF), with the term PUF being coined in 2002, whilst describing the first integrated PUF where, unlike PUFs based on optics, the measurement circuitry and the PUF are integrated onto the same electrical circuit (and fabricated on silicon). Starting in 2010, PUFs gained attention in the smartcard market as a promising way to provide "silicon fingerprints", creating cryptographic keys that are unique to individual smartcards. PUFs are now established as a secure alternative to battery-backed storage of secret keys in commercial FPGAs, such as the Xilinx Zynq Ultrascale+, and Altera Stratix 10. == Concept == PUFs depend on the uniqueness of their physical microstructure. This microstructure depends on random physical factors introduced during manufacturing. These factors are unpredictable and uncontrollable, which makes it virtually impossible to duplicate or clone the structure. Rather than embodying a single cryptographic key, PUFs implement challenge–response authentication to evaluate this microstructure. When a physical stimulus is applied to the structure, it reacts in an unpredictable (but repeatable) way due to the complex interaction of the stimulus with the physical microstructure of the device. This exact microstructure depends on physical factors introduced during manufacture, which are unpredictable (like a fair coin). The applied stimulus is called the challenge, and the reaction of the PUF is called the response. A specific challenge and its corresponding response together form a challenge-response pair or CRP. The device's identity is established by the properties of the microstructure itself. As this structure is not directly revealed by the challenge-response mechanism, such a device is resistant to spoofing attacks. Using a fuzzy extractor or the fuzzy commitment scheme, which are provably suboptimal in terms of storage and privacy-leakage amount, or using nested polar codes, which can be made asymptotically optimal, one can extract a unique strong cryptographic key from the physical microstructure. The same unique key is reconstructed every time the PUF is evaluated. The challenge-response mechanism is then implemented using cryptography. 
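A common way to obtain such a stable key is the code-offset construction used by fuzzy extractors: at enrollment, public helper data is formed by XOR-ing the PUF response with a codeword of an error-correcting code; at reconstruction, the helper data plus decoding recovers the same key even though a few response bits have flipped. The sketch below is a toy version with a 5-fold repetition code; real designs use stronger codes and add a hashing/extraction step, and the uniform-response assumption made here is an idealization.

```python
import secrets

R = 5  # repetition factor (toy error-correcting code)

def encode(key_bits):          # each key bit -> R identical code bits
    return [b for b in key_bits for _ in range(R)]

def decode(code_bits):         # majority vote per R-bit block
    return [int(sum(code_bits[i:i + R]) > R // 2) for i in range(0, len(code_bits), R)]

def enroll(puf_response, key_bits):
    # Public helper data: response XOR codeword. Reveals nothing about the key
    # if the response is uniformly random (an idealization in this toy model).
    return [r ^ c for r, c in zip(puf_response, encode(key_bits))]

def reconstruct(noisy_response, helper):
    return decode([r ^ h for r, h in zip(noisy_response, helper)])

key = [1, 0, 1, 1]
response = [secrets.randbelow(2) for _ in range(len(key) * R)]
helper = enroll(response, key)

noisy = response.copy()
noisy[3] ^= 1                  # one bit of the response flips on re-evaluation
print(reconstruct(noisy, helper) == key)   # True: the same key is recovered
```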
PUFs can be implemented with a very small hardware investment compared to other cryptographic primitives that provide unpredictable input/output behavior, such as pseudo-random functions. In some cases, PUFs can even be built from existing hardware with the right properties. Unclonability means that each PUF device has a unique and unpredictable way of mapping challenges to responses, even if it was manufactured with the same process as a similar device, and it is infeasible to construct a PUF with the same challenge-response behavior as another given PUF because exact control over the manufacturing process is infeasible. Mathematical unclonability means that it should be very hard to compute an unknown response given the other CRPs or some of the properties of the random components of a PUF. This is because a response is created by a complex interaction of the challenge with many or all of the random components. In other words, given the design of the PUF system, without knowing all of the physical properties of the random components, the CRPs are highly unpredictable. The combination of physical and mathematical unclonability renders a PUF truly unclonable. Note that a PUF is "unclonable" using the same physical implementation, but once a PUF key is extracted, there's generally no problem with cloning the key – the output of the PUF – using other means. For "strong PUFs" one can train a neural network on observed challenge-response pairs and use it to predict unobserved responses. Because of these properties, PUFs can be used as a unique and untamperable device identifier. PUFs can also be used for secure key generation and storage and as a source of randomness. == Classification == === Strong/Weak === Weak PUFs can be considered a kind of memory that is randomly initialized during PUF manufacture. A challenge can be considered an address within the memory, and a response can be considered the random value stored at that address. In this way, the number of unique challenge-response pairs (CRPs) scales linearly with the number of random elements of the PUF. The advantage of such PUFs is that they are actual random oracles, so they are immune to machine-learning attacks. The weakness is that the number of CRPs is small and can be exhausted either by an adversary who can probe the PUF directly, or during authentication protocols over insecure channels, in which case the verifier has to keep track of challenges already known to the adversary. That is why the main application of weak PUFs is as a source of randomness for deriving cryptographic keys. Strong PUFs are systems that perform a computation based on their internal structure. Their number of unique CRPs scales faster than linearly with the number of random elements because of interactions between the elements. The advantage is that in this way the space of CRPs can be made large enough to make its exhaustion practically impossible and collisions between two randomly chosen elements of the space improbable enough, allowing the verifying party not to keep track of used elements but simply to choose them at random from the space. Another advantage is that the randomness can be stored not only within the elements but also within their interactions, which sometimes cannot be read directly. The weakness is that the same elements and their interactions are reused for different challenges, which opens the possibility of deriving some information about the elements and their connections and using it to predict the system's response to unobserved challenges. 
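This machine-learning weakness of strong PUFs can be reproduced on the widely used additive-delay model of an arbiter PUF, in which the response is the sign of a weighted sum of challenge-derived parity features; because that model is linear, a simple learner trained on observed CRPs predicts unseen responses with high accuracy. The sketch below is a toy simulation of this effect (not an attack on any particular product), with all parameters chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stages, n_train, n_test = 32, 3000, 1000

def features(challenges):
    # Standard parity transform for the additive-delay (arbiter) model:
    # phi_i = prod_{j >= i} (1 - 2*c_j), plus a constant term.
    signs = 1 - 2 * challenges                       # 0/1 -> +1/-1
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])

w_true = rng.normal(size=n_stages + 1)               # the PUF's hidden delay parameters
C = rng.integers(0, 2, size=(n_train + n_test, n_stages))
X = features(C)
y = np.sign(X @ w_true)                              # +/-1 responses

# Perceptron trained only on the first n_train observed CRPs
w = np.zeros(n_stages + 1)
for _ in range(20):
    for xi, yi in zip(X[:n_train], y[:n_train]):
        if yi * (xi @ w) <= 0:
            w += yi * xi

accuracy = np.mean(np.sign(X[n_train:] @ w) == y[n_train:])
print(f"prediction accuracy on unseen challenges: {accuracy:.3f}")  # typically well above 0.95
```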
=== Implicit/explicit === All implementations of a certain PUF within a certain device are created uniformly using scalable processes. For example, when a cryptoprocessor based on a silicon chip is produced, many processors are created on the same silicon wafer. Foundry equipment applies the same operations to all the chips on a wafer and tries to do so as reproducibly as possible in order to obtain predictable, high performance and reliability characteristics across all the chips. Despite this, randomness must be generated to make the PUF in each chip unique. For an explicit PUF, the randomness is created explicitly in a separate technological operation. This is a disadvantage because a separate operation imposes additional costs and because the manufacturer can intentionally replace that separate operation with something else, which can reduce the randomness and compromise the security characteristics. An implicit PUF uses technology imperfections as a source of randomness: the PUF is designed as a device whose operation is strongly affected by technology imperfections, instead of being unaffected as for ordinary circuitry, and is fabricated simultaneously with the rest of the device. Since foundries themselves cannot defeat the imperfections of the technology, despite having a strong economic incentive to be capable of fabricating more performant and more reliable chips, this gives some protection against a foundry backdooring such PUFs in this way. Backdooring PUFs by tampering with lithographic masks can be detected by reverse engineering the resulting devices. Fabricating the PUF as part of the rest of the device also makes it cheaper than explicit PUFs. === Intrinsic/extrinsic === Extrinsic PUFs rely on sensors to measure a system containing the randomness. Such sensors are a weak point since they can be replaced with fakes that send the needed measurements. An intrinsic PUF's operation is affected by randomness contained within the system itself. == Types == Over 40 types of PUF have been suggested. These range from PUFs that evaluate an intrinsic element of a pre-existing integrated electronic system to concepts that involve explicitly introducing random particle distributions to the surface of physical objects for authentication. All PUFs are subject to environmental variations such as temperature, supply voltage and electromagnetic interference, which can affect their performance. Therefore, rather than just being random, the real power of a PUF is its ability to be different between devices but simultaneously to be the same under different environmental conditions on the same device. == Error correction == In many applications, it is important that the output is stable. If the PUF is used for a key in cryptographic algorithms, it is necessary that error correction be done to correct any errors caused by the underlying physical processes and reconstruct exactly the same key each time under all operating conditions. In principle there are two basic concepts: pre-processing and post-processing error correction codes (ECC). On-chip ECC units increase size, power, and data processing time overheads; they also expose vulnerabilities to power analysis attacks that attempt to model the PUF mathematically. Alternatively, some PUF designs like the EC-PUF do not require an on-chip ECC unit. Strategies have been developed which lead SRAM PUFs to become more reliable over time without degrading the other PUF quality measures such as security and efficiency. 
Research at Carnegie Mellon University into various PUF implementations found that some error reduction techniques reduced errors in the PUF response by between ~70 percent and ~100 percent. Research at the University of Massachusetts Amherst to improve the reliability of SRAM PUF-generated keys proposed an error correction technique to reduce the error rate. Joint reliability–secrecy coding methods based on transform coding are used to obtain significantly higher reliabilities for each bit generated from a PUF, such that low-complexity error-correcting codes such as BCH codes suffice to satisfy a block error probability constraint of 1 bit error out of 1 billion bits. Nested polar codes are used for vector quantization and error correction jointly. Their performance is asymptotically optimal in terms of, for a given blocklength, the maximum number of secret bits generated, the minimum amount of private information leaked about the PUF outputs, and the minimum storage required. The fuzzy commitment scheme and fuzzy extractors are shown to be suboptimal in terms of the minimum storage. == Availability == PUF technology can be licensed from several companies including eMemory, or its subsidiary, PUFsecurity, Enthentica, ICTK, Intrinsic ID, Invia, QuantumTrace, Granite Mountain Technologies and Verayo. PUF technology has been implemented in several hardware platforms including Microsemi SmartFusion2, NXP SmartMX2, Coherent Logix HyperX, InsideSecure MicroXsafe, Altera Stratix 10, Redpine Signals WyzBee and Xilinx Zynq Ultrascale+. == Vulnerabilities == In 2011, university research showed that delay-based PUF implementations are vulnerable to side-channel attacks and recommended that countermeasures be employed in the design to prevent this type of attack. Also, improper implementation of a PUF could introduce "backdoors" into an otherwise secure system. In June 2012, Dominik Merli, a scientist at the Fraunhofer Research Institution for Applied and Integrated Security (AISEC), further claimed that PUFs introduce more entry points for hacking into a cryptographic system and that further investigation into the vulnerabilities of PUFs is required before PUFs can be used in practical security-related applications. The presented attacks are all on PUFs implemented in insecure systems, such as FPGA or Static RAM (SRAM). It is also important to ensure that the environment is suitable for the needed security level, as otherwise attacks taking advantage of temperature and other variations may be possible. In 2015, some studies claimed it is possible to attack certain kinds of PUFs with low-cost equipment in a matter of milliseconds. A team at Ruhr-Universität Bochum, Germany, demonstrated a method to create a model of XOR Arbiter PUFs and thus be able to predict their response to any kind of challenge. Their method requires only 4 CRPs, which even on resource-constrained devices should not take more than about 200 ms to produce. Using this method and a $25 device or an NFC-enabled smartphone, the team was able to successfully clone PUF-based RFID cards stored in the wallets of users while they were in their back pockets. === Provable machine learning attacks === The attacks mentioned above range from invasive to non-invasive attacks. One of the most celebrated types of non-invasive attack is the machine learning (ML) attack. From the beginning of the era of PUFs, it has been doubted whether these primitives are subject to this type of attack. 
In the lack of thorough analysis and mathematical proofs of the security of PUFs, ad hoc attacks against PUFs have been introduced in the literature. Consequently, countermeasures presented to cope with these attacks are less effective. In line with these efforts, it has been conjectured if PUFs can be considered as circuits, being provably hard to break. In response, a mathematical framework has been suggested, where provable ML algorithms against several known families of PUFs have been introduced. Along with this provable ML framework, to assess the security of PUFs against ML attacks, property testing algorithms have been reintroduced in the hardware security community and made publicly accessible. These algorithms trace their roots back to well-established fields of research, namely property testing, machine learning theory, and Boolean analysis. ML attacks can also apply to PUFs because most of the pre and post-processing methods applied until now ignore the effect of correlations between PUF-circuit outputs. For instance, obtaining one bit by comparing two ring oscillator outputs is a method to decrease the correlation. However, this method does not remove all correlations. Therefore, the classic transforms from the signal-processing literature are applied to raw PUF-circuit outputs to decorrelate them before quantizing the outputs in the transform domain to generate bit sequences. Such decorrelation methods can help to overcome the correlation-based information leakages about the PUF outputs even if the ambient temperature and supply voltage change. == Optical PUFs == Optical PUFs rely on a random optical multiple-scattering medium, which serves as a token. Optical PUFs offer a promising approach to developing entity authentication schemes that are robust against many of the aforementioned attacks. However, their security against emulation attacks can be ensured only in the case of quantum readout (see below), or when the database of challenge-response pairs is somehow encrypted. Optical PUFs can be made very easily: a varnish containing glitter, a metallic paint, or a frosted finish obtained by sandblasting a surface, for example, are practically impossible to clone. Their appearance changes depending on the point of view and the lighting. Authentication of an optical PUF requires a photographic acquisition to measure the luminosity of several of its parts and the comparison of this acquisition with another previously made from the same point of view. This acquisition must be supplemented by an additional acquisition either from another point of view, or under different lighting to verify that this results in a modification of the appearance of the PUF. This can be done with a smartphone, without additional equipment, using optical means to determine the position in which the smartphone is in relation to the PUF. Theoretical investigations suggest that optical PUFs with nonlinear multiple-scattering media, may be more robust than their linear counterparts against the potential cloning of the medium. == See also == Hardware Trojan Quantum Readout of PUFs Random number generation Defense strategy (computing) == References == == External links == "Physical Unclonable Functions and Applications", by Srini Devadas and others, MIT Ultra-low-cost true randomness AND physical fingerprinting "Mixed-signal physically unclonable function with CMOS capacitive cells", by Kamal Kamal and Radu Muresan
Wikipedia/Physical_unclonable_function
The DARPA Quantum Network (2002–2007) was the world's first quantum key distribution (QKD) network, operating 10 optical nodes across Boston and Cambridge, Massachusetts. It became fully operational on October 23, 2003 in BBN's laboratories, and in June 2004 was fielded through dark fiber under the streets of Cambridge and Boston, where it ran continuously for over 3 years. The project also created and fielded the world's first superconducting nanowire single-photon detector. It was sponsored by DARPA as part of the QuIST program, and built and operated by BBN Technologies in close collaboration with colleagues at Harvard University and the Boston University Photonics Center. The DARPA Quantum Network was fully compatible with standard Internet technology, and could provide QKD-derived key material to create Virtual Private Networks, to support IPsec or other authentication, or for any other purpose. All control mechanisms and protocols were implemented in the Unix kernel and field-programmable gate arrays. QKD-derived key material was routinely used for video-conferencing or other applications. The DARPA Quantum Network was built in stages. In the project's first year (year 1), BBN designed and built a full QKD system (Alice and Bob), with an attenuated laser source (~ 0.1 mean photon number) running through telecom fiber, phase-modulated via an actively stabilized Mach-Zender interferometer. BBN also implemented a full suite of industrial-strength QKD protocols based on BB84. In year 2, BBN created two 'Mark 2' versions of this system (4 nodes) with commercial-quality InGaAs detectors created by IBM Research. These 4 nodes ran continuously in BBN's laboratory from October 2003, then two were deployed at Harvard and Boston University in June 2004, when the network began running continuously across the metro Boston area, 24x7. In year 3, the network expanded to 8 nodes with the addition of an entanglement-based system (derived from work at Boston University) designed for telecom fibers, and a high-speed atmospheric (freespace) link designed and built by the National Institute of Standards and Technology. In year 4, BBN added a second freespace link to the overall network, using nodes created by Qinetiq, and investigated improved QKD protocols and detectors. Finally, in year 5, BBN added the world's first superconducting nanowire single-photon detector to the operational network. It was created by a collaboration between researchers at BBN, the University of Rochester, and the National Institute of Standards and Technology; that first 100 MHz system ran 20x faster than any existing single-photon detector at telecom wavelengths. In that final year, BBN also collaborated with researchers at the Massachusetts Institute of Technology to implement, and experiment with, a proof-of-concept version of the world's first quantum eavesdropper (Eve). When fully built, the network's 10 nodes were as follows. All ran BBN's quantum key distribution and quantum network protocols so they inter-operated to achieve any-to-any key distribution. 
Alice, Bob – 5 MHz, attenuated laser pulses through telecom fiber, phase-modulated Anna, Boris – 5 MHz, attenuated laser pulses through telecom fiber, phase-modulated Alex, Barb – entanglement based photons through telecom fiber, polarization-modulated Ali, Baba – approximately 400 MHz, attenuated laser pulses through the atmosphere, polarization-modulated Amanda, Brian – attenuated laser pulses through the atmosphere, polarization-modulated The DARPA Quantum Network implemented a variety of quantum key distribution protocols, to explore their properties. All were integrated into a single, production-quality protocol stack. Authentication was based on public keys, shared private keys, or a combination of the two. (The shared private keys could be refreshed by QKD-derived keys.) Privacy amplification was implemented via GF[2n] Universal Hash. Entropy estimation was based on Rényi entropy, and implemented by BBBSS 92, Slutsky, Myers / Pearson, and Shor / Preskill protocols. Error correction was implemented by a BBN variant of the Cascade protocol, or the BBN Niagara protocol which provided efficient, one-pass operation near the Shannon limit via forward error correction based on low-density parity-check codes (LDPC). Sifting was performed either by traditional methods, run-length encoding, or so-called "SARG" sifting. It also implemented two major forms of QKD networking protocols. First, key relay employed "trusted" nodes in the network to relay materials for key distillation between the two endpoints. This approach permitted nodes to agree upon shared key material even if they were implemented via two incompatible technologies; for example, a node based on phase-modulation through fiber could exchange keys with one based on polarization-modulation through the atmosphere. In fact, it even permitted transmitters to share key material with other (compatible or incompatible) transmitters. Furthermore, the raw key material could be routed by multiple "striped" paths through the network (e.g. disjoint paths) and recombined end-to-end, thus erasing the advantage that Eve would gain by controlling one of the network nodes along the way. Second, QKD-aware optical routing protocols enabled nodes to control transparent optical switches within the network, so that multiple QKD systems could share the same optical network infrastructure. == Selected papers == "Building the quantum network", Chip Elliott, in New Journal of Physics, July 2002. "Quantum cryptography in practice", Chip Elliott, David Pearson, Gregory Troxel, ACM SIGCOMM 2002. "Path-length control in an interferometric QKD link", Chip Elliott, Oleksiy Pikalo, John Schlafer, Greg Troxel, Proceedings AeroSense 2003, Volume 5105, Quantum Information and Computation, 2003. "The DARPA Quantum Network", Chip Elliott, December 2004. "Current status of the DARPA Quantum Network", Chip Elliott, Alexander Colvin, David Pearson, Oleksiy Pikalo, John Schlafer, Henry Yeh, SPIE Defense + Commercial Sensing 2005. "Building a QKD Network out of Theories and Devices" (slide presentation), David Pearson, "The DARPA Quantum Network", C. Elliott, in Quantum Communications and Cryptography, edited by Alexander V. Sergienko, CRC Press, 2005. "On the Optimal Mean Photon Number for Quantum Cryptography", David Pearson and Chip Elliott, in Computer Science and Quantum Computing, edited by James E. Stones, Nova Science Publishers, 2007. DARPA Quantum Network Testbed: Final Technical Report, Chip Elliott and Henry Yeh, BBN Technologies, July 2007. 
"The Networking in Quantum Networking", Chip Elliott, 2018. == References ==
Wikipedia/DARPA_Quantum_Network
Spontaneous parametric down-conversion (also known as SPDC, parametric fluorescence or parametric scattering) is a nonlinear instant optical process that converts one photon of higher energy (namely, a pump photon) into a pair of photons (namely, signal and idler photons) of lower energy, in accordance with the laws of energy conservation and momentum conservation. It is an important process in quantum optics, for the generation of entangled photon pairs and of single photons. == Description == A nonlinear crystal is used to produce pairs of photons from a photon beam. In accordance with conservation of energy and momentum, the pairs need to have combined energies and momenta equal to the energy and momentum of the original photon. Because the index of refraction changes with frequency (dispersion), only certain triplets of frequencies will be phase-matched so that simultaneous energy and momentum conservation can be achieved. Phase-matching is most commonly achieved using birefringent nonlinear materials, whose index of refraction changes with polarization. As a result of this, different types of SPDC are categorized by the polarizations of the input photon (pump) and the two output photons (signal and idler). If the signal and idler photons share the same polarization with each other and with the pump photon, it is deemed Type-0 SPDC. If the signal and idler photons share the same polarization with each other, but are orthogonal to the pump polarization, it is Type-I SPDC. If the signal and idler photons have perpendicular polarizations, it is deemed Type II SPDC. The conversion efficiency of SPDC is typically very low, with the highest efficiency obtained being on the order of 4x10−6 pairs per incoming photon for PPLN in waveguides. However, if one half of the pair is detected at any time then its partner is known to be present. The degenerate portion of the output of a Type I down converter is a squeezed vacuum that contains only even photon number terms. The nondegenerate output of the Type II down converter is a two-mode squeezed vacuum. == Example == In a commonly used SPDC apparatus design, a strong laser beam, termed the "pump" beam, is directed at a BBO (beta-barium borate) or lithium niobate crystal. Most of the photons continue straight through the crystal. However, occasionally, some of the photons undergo spontaneous down-conversion with Type II polarization correlation, and the resultant correlated photon pairs have trajectories that are constrained along the sides of two cones whose axes are symmetrically arranged relative to the pump beam. Due to the conservation of momentum, the two photons are always symmetrically located on the sides of the cones, relative to the pump beam. In particular, the trajectories of a small proportion of photon pairs will lie simultaneously on the two lines where the surfaces of the two cones intersect. This results in entanglement of the polarizations of the pairs of photons emerging on those two lines. The photon pairs are in an equal weight quantum superposition of the unentangled states | H ⟩ | V ⟩ {\displaystyle \vert H\rangle \vert V\rangle } and | V ⟩ | H ⟩ {\displaystyle \vert V\rangle \vert H\rangle } , corresponding to the polarizations of the left-hand side photon and the right-hand side photon.: 205  Another crystal is KDP (potassium dihydrogen phosphate), which is mostly used in Type I down conversion, where both photons have the same polarization. 
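Energy conservation fixes the relation between the pump, signal, and idler frequencies: ω_p = ω_s + ω_i, or equivalently 1/λ_p = 1/λ_s + 1/λ_i. The short calculation below, with wavelengths chosen only as a typical illustration, gives the idler wavelength for a 405 nm pump and shows the degenerate case in which both photons emerge at twice the pump wavelength.

```python
def idler_wavelength(lambda_pump_nm, lambda_signal_nm):
    # Energy conservation: 1/lambda_p = 1/lambda_s + 1/lambda_i
    return 1.0 / (1.0 / lambda_pump_nm - 1.0 / lambda_signal_nm)

print(idler_wavelength(405.0, 810.0))   # 810 nm: degenerate down-conversion
print(idler_wavelength(405.0, 780.0))   # ~842 nm: non-degenerate pair
```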
Some of the characteristics of effective parametric down-converting nonlinear crystals include: Nonlinearity: The refractive index of the crystal changes with the intensity of the incident light. This is known as the nonlinear optical response. Periodicity: The crystal has a regular, repeating structure. This is known as the lattice structure, which is responsible for the regular arrangement of the atoms in the crystal. Optical anisotropy (or birefringence): The crystal has different refractive indices along different crystallographic axes. Temperature and pressure sensitivity: The nonlinearity of the crystal can change with temperature and pressure, and thus the crystal should be kept in a stable temperature and pressure environment. High nonlinear coefficient: A large nonlinear coefficient is desirable, as it allows a high number of entangled photon pairs to be generated. High optical damage threshold: A crystal with a high optical damage threshold can endure a high-intensity pump beam. Transparency in the desired wavelength range: It is important for the crystal to be transparent in the wavelength range of the pump beam for efficient nonlinear interactions. High optical quality and low absorption: The crystal should have high optical quality and low absorption to minimize loss of the pump beam and of the generated entangled photons. == History == SPDC was demonstrated as early as 1967 by S. E. Harris, M. K. Oshman, and R. L. Byer, as well as by D. Magde and H. Mahr. It was first applied to experiments related to coherence by two independent pairs of researchers in the late 1980s: Carroll Alley and Yanhua Shih, and Rupamanjari Ghosh and Leonard Mandel. The duality between incoherent (Van Cittert–Zernike theorem) and biphoton emissions was found. == Applications == SPDC allows for the creation of optical fields containing (to a good approximation) a single photon. As of 2005, this is the predominant mechanism for an experimenter to create single photons (also known as Fock states). The single photons as well as the photon pairs are often used in quantum information experiments and applications like quantum cryptography and Bell test experiments. SPDC is widely used to create pairs of entangled photons with a high degree of spatial correlation. Such pairs are used in ghost imaging, in which information is combined from two light detectors: a conventional, multi-pixel detector that does not view the object, and a single-pixel (bucket) detector that does view the object. == Alternatives == The newly observed effect of two-photon emission from electrically driven semiconductors has been proposed as a basis for more efficient sources of entangled photon pairs. Unlike SPDC-generated photon pairs, the photons of a semiconductor-emitted pair usually are not identical but have different energies. Until recently, within the constraints of quantum uncertainty, the pair of emitted photons were assumed to be co-located: they are born from the same location. However, a new nonlocalized mechanism for the production of correlated photon pairs in SPDC has highlighted that occasionally the individual photons that constitute the pair can be emitted from spatially separated points. == See also == Photon upconversion == References ==
Wikipedia/Spontaneous_parametric_down-conversion
In physics, non-Hermitian quantum mechanics describes quantum mechanical systems where Hamiltonians are not Hermitian. == History == The first paper that has "non-Hermitian quantum mechanics" in the title was published in 1996 by Naomichi Hatano and David R. Nelson. The authors mapped a classical statistical model of flux-line pinning by columnar defects in high-Tc superconductors to a quantum model by means of an inverse path-integral mapping and ended up with a non-Hermitian Hamiltonian with an imaginary vector potential in a random scalar potential. They further mapped this into a lattice model and came up with a tight-binding model with asymmetric hopping, which is now widely called the Hatano-Nelson model. The authors showed that there is a region where all eigenvalues are real despite the non-Hermiticity. Parity–time (PT) symmetry was initially studied as a specific system in non-Hermitian quantum mechanics. In 1998, physicist Carl Bender and former graduate student Stefan Boettcher published a paper in which they found that non-Hermitian Hamiltonians endowed with an unbroken PT symmetry (invariance with respect to the simultaneous action of the parity-inversion and time reversal symmetry operators) may also possess a real spectrum. Under a correctly-defined inner product, a PT-symmetric Hamiltonian's eigenfunctions have positive norms and exhibit unitary time evolution, requirements for quantum theories. Bender won the 2017 Dannie Heineman Prize for Mathematical Physics for his work. A closely related concept is that of pseudo-Hermitian operators, which were considered by physicists Paul Dirac, Wolfgang Pauli, and Tsung-Dao Lee and Gian Carlo Wick. Pseudo-Hermitian operators were discovered (or rediscovered) almost simultaneously by mathematicians Mark Krein and collaborators as G-Hamiltonian in the study of linear dynamical systems. The equivalence between pseudo-Hermiticity and G-Hamiltonian is easy to establish. In the early 1960s, Olga Taussky, Michael Drazin, and Emilie Haynsworth demonstrated that the necessary and sufficient criterion for a finite-dimensional matrix to have real eigenvalues is that said matrix is pseudo-Hermitian with a positive-definite metric. In 2002, Ali Mostafazadeh showed that diagonalizable PT-symmetric Hamiltonians belong to the class of pseudo-Hermitian Hamiltonians. In 2003, it was proven that in finite dimensions, PT-symmetry is equivalent to pseudo-Hermiticity regardless of diagonalizability, thereby applying to the physically interesting case of non-diagonalizable Hamiltonians at exceptional points. This indicates that the mechanism of PT-symmetry breaking at exceptional points, where the Hamiltonian is usually not diagonalizable, is the Krein collision between two eigenmodes with opposite signs of action. In 2005, PT symmetry was introduced to the field of optics by the research group of Gonzalo Muga by noting that PT symmetry corresponds to the presence of balanced gain and loss. In 2007, the physicist Demetrios Christodoulides and his collaborators further studied the implications of PT symmetry in optics. The following years saw the first experimental demonstrations of PT symmetry in passive and active systems. PT symmetry has also been applied to classical mechanics, metamaterials, electric circuits, and nuclear magnetic resonance. In 2017, a non-Hermitian PT-symmetric Hamiltonian was proposed by Dorje Brody and Markus Müller that "formally satisfies the conditions of the Hilbert–Pólya conjecture." == References ==
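The reality of the spectrum in the unbroken PT phase can be checked on the standard two-mode gain–loss model H = [[iγ, κ], [κ, −iγ]], whose eigenvalues are ±√(κ² − γ²): real when the coupling κ exceeds the gain/loss rate γ, a complex-conjugate pair when κ < γ, with the exceptional point at κ = γ. The numerical sketch below uses this textbook example; the parameter values are arbitrary.

```python
import numpy as np

def pt_hamiltonian(gamma, kappa):
    # Two coupled modes, one with gain (+i*gamma) and one with loss (-i*gamma)
    return np.array([[1j * gamma, kappa],
                     [kappa, -1j * gamma]])

for gamma, kappa in [(0.5, 1.0),   # unbroken PT symmetry: kappa > gamma
                     (1.0, 0.5)]:  # broken PT symmetry:  kappa < gamma
    ev = np.linalg.eigvals(pt_hamiltonian(gamma, kappa))
    print(f"gamma={gamma}, kappa={kappa}: eigenvalues = {np.round(ev, 3)}")
# First case: both eigenvalues real (+/- sqrt(kappa^2 - gamma^2));
# second case: a complex-conjugate pair (+/- i*sqrt(gamma^2 - kappa^2)).
```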
Wikipedia/Non-Hermitian_quantum_mechanics
In quantum mechanics, a translation operator is defined as an operator which shifts particles and fields by a certain amount in a certain direction. It is a special case of the shift operator from functional analysis. More specifically, for any displacement vector x {\displaystyle \mathbf {x} } , there is a corresponding translation operator T ^ ( x ) {\displaystyle {\hat {T}}(\mathbf {x} )} that shifts particles and fields by the amount x {\displaystyle \mathbf {x} } . For example, if T ^ ( x ) {\displaystyle {\hat {T}}(\mathbf {x} )} acts on a particle located at position r {\displaystyle \mathbf {r} } , the result is a particle at position r + x {\displaystyle \mathbf {r} +\mathbf {x} } . Translation operators are unitary. Translation operators are closely related to the momentum operator; for example, a translation operator that moves by an infinitesimal amount in the y {\displaystyle y} direction has a simple relationship to the y {\displaystyle y} -component of the momentum operator. Because of this relationship, conservation of momentum holds when the translation operators commute with the Hamiltonian, i.e. when laws of physics are translation-invariant. This is an example of Noether's theorem. == Action on position eigenkets and wavefunctions == The translation operator T ^ ( x ) {\displaystyle {\hat {T}}(\mathbf {x} )} moves particles and fields by the amount x {\displaystyle \mathbf {x} } . Therefore, if a particle is in an eigenstate | r ⟩ {\displaystyle |\mathbf {r} \rangle } of the position operator (i.e., precisely located at the position r {\displaystyle \mathbf {r} } ), then after T ^ ( x ) = ∫ d r | r + x ⟩ ⟨ r | {\displaystyle {\hat {T}}(\mathbf {x} )=\int \!d\mathbf {r} ~|\mathbf {r+x} \rangle \langle \mathbf {r} |} acts on it, the particle is at the position r + x {\displaystyle \mathbf {r} +\mathbf {x} } : T ^ ( x ) | r ⟩ = | r + x ⟩ . {\displaystyle {\hat {T}}(\mathbf {x} )|\mathbf {r} \rangle =|\mathbf {r} +\mathbf {x} \rangle .} An alternative (and equivalent) way to describe what the translation operator determines is based on position-space wavefunctions. If a particle has a position-space wavefunction ψ ( r ) {\displaystyle \psi (\mathbf {r} )} , and T ^ ( x ) {\displaystyle {\hat {T}}(\mathbf {x} )} acts on the particle, the new position-space wavefunction is ψ ′ ( r ) = T ^ ( x ) ψ ( r ) {\displaystyle \psi '(\mathbf {r} )={\hat {T}}(\mathbf {x} )\psi (\mathbf {r} )} defined by ψ ′ ( r ) = ψ ( r − x ) . {\displaystyle \psi '(\mathbf {r} )=\psi (\mathbf {r} -\mathbf {x} ).} This relation is easier to remember as ψ ′ ( r + x ) = ψ ( r ) , {\displaystyle \psi '(\mathbf {r} +\mathbf {x} )=\psi (\mathbf {r} ),} which can be read as: "The value of the new wavefunction at the new point equals the value of the old wavefunction at the old point". Here is an example showing that these two descriptions are equivalent. The state | a ⟩ {\displaystyle |\mathbf {a} \rangle } corresponds to the wavefunction ψ ( r ) = δ ( r − a ) {\displaystyle \psi (\mathbf {r} )=\delta (\mathbf {r} -\mathbf {a} )} (where δ {\displaystyle \delta } is the Dirac delta function), while the state T ^ ( x ) | a ⟩ = | a + x ⟩ {\displaystyle {\hat {T}}(\mathbf {x} )|\mathbf {a} \rangle =|\mathbf {a} +\mathbf {x} \rangle } corresponds to the wavefunction ψ ′ ( r ) = δ ( r − ( a + x ) ) . {\displaystyle \psi '(\mathbf {r} )=\delta (\mathbf {r} -(\mathbf {a} +\mathbf {x} )).} These indeed satisfy ψ ′ ( r ) = ψ ( r − x ) . 
{\displaystyle \psi '(\mathbf {r} )=\psi (\mathbf {r} -\mathbf {x} ).} == Momentum as generator of translations == In introductory physics, momentum is usually defined as mass times velocity. However, there is a more fundamental way to define momentum, in terms of translation operators. This is more specifically called canonical momentum, and it is usually but not always equal to mass times velocity. One notable exception pertains to a charged particle in a magnetic field, in which the canonical momentum includes both the usual momentum and a second term proportional to the magnetic vector potential. This definition of momentum is especially important because the law of conservation of momentum applies only to canonical momentum, and is not universally valid if momentum is defined instead as mass times velocity (the so-called "kinetic momentum"), for reasons explained below. The (canonical) momentum operator is defined as the gradient of the translation operators near the origin: p ^ = i ℏ ∇ x T ^ ( x ) | x = 0 , {\displaystyle \mathbf {\hat {p}} =i\hbar \,\nabla _{\mathbf {x} }{\hat {T}}(\mathbf {x} ){\big |}_{\mathbf {x} =\mathbf {0} },} where ℏ {\displaystyle \hbar } is the reduced Planck constant. For example, what is the result when the p ^ x {\displaystyle {\hat {p}}_{x}} operator acts on a quantum state? To find the answer, translate the state by an infinitesimal amount in the x {\displaystyle x} -direction, calculate the rate at which the state is changing, and multiply the result by i ℏ {\displaystyle i\hbar } . For example, if a state does not change at all when it is translated an infinitesimal amount in the x {\displaystyle x} -direction, then its x {\displaystyle x} -component of momentum is 0. More explicitly, p ^ {\displaystyle \mathbf {\hat {p}} } is a vector operator (i.e. an operator consisting of the three components ( p ^ x , p ^ y , p ^ z ) {\displaystyle ({\hat {p}}_{x},{\hat {p}}_{y},{\hat {p}}_{z})} ), whose x {\displaystyle x} component is given by: p ^ x = i ℏ lim a → 0 T ^ ( a x ^ ) − I ^ a {\displaystyle {\hat {p}}_{x}=i\hbar \lim _{a\to 0}{\frac {{\hat {T}}(a\mathbf {\hat {x}} )-{\hat {\mathbb {I} }}}{a}}} where I ^ {\displaystyle {\hat {\mathbb {I} }}} is the identity operator and x ^ {\displaystyle \mathbf {\hat {x}} } is the unit vector in the x {\displaystyle x} -direction. ( p ^ y {\displaystyle {\hat {p}}_{y}} and p ^ z {\displaystyle {\hat {p}}_{z}} are defined analogously.) The equation above is the most general definition of p ^ {\displaystyle \mathbf {\hat {p}} } . In the special case of a single particle with wavefunction ψ ( r ) {\displaystyle \psi (\mathbf {r} )} , p ^ {\displaystyle \mathbf {\hat {p}} } can be written in a more specific and useful form. In one dimension: ( p ^ ψ ) ( r ) = i ℏ lim a → 0 ( T ^ ( a ) ψ ) ( r ) − ψ ( r ) a = i ℏ lim a → 0 ψ ( r − a ) − ψ ( r ) a = − i ℏ ∂ ∂ r ψ ( r ) . {\displaystyle {\begin{aligned}({\hat {p}}\psi )(r)&=i\hbar \lim _{a\to 0}{\frac {({\hat {T}}(a)\psi )(r)-\psi (r)}{a}}\\&=i\hbar \lim _{a\to 0}{\frac {\psi (r-a)-\psi (r)}{a}}\\&=-i\hbar {\frac {\partial }{\partial r}}\psi (r).\end{aligned}}} While in three dimensions, p ^ = − i ℏ ∇ {\displaystyle \mathbf {\hat {p}} =-i\hbar \nabla } as an operator acting on position-space wavefunctions. This is the familiar quantum-mechanical expression for p ^ {\displaystyle \mathbf {\hat {p}} } , but we have derived it here from a more basic starting point. We have now defined p ^ {\displaystyle \mathbf {\hat {p}} } in terms of translation operators. It is also possible to write a translation operator as a function of p ^ {\displaystyle \mathbf {\hat {p}} } .
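To make the finite-difference definition above concrete, here is a minimal numerical sketch (not part of the article; it assumes a Gaussian test wavefunction on a uniform grid and sets ℏ = 1) comparing iℏ(T̂(a)ψ − ψ)/a with −iℏ ∂ψ/∂r for a small shift a:

```python
import numpy as np

# Sketch assumptions: hbar = 1, a Gaussian test wavefunction, a uniform grid.
hbar = 1.0
r = np.linspace(-10.0, 10.0, 2001)

def psi(r):
    return np.exp(-(r - 1.0) ** 2)     # smooth test wavefunction

a = 1e-4                               # small translation in the r-direction
T_a_psi = psi(r - a)                   # (T(a) psi)(r) = psi(r - a)

p_psi_fd = 1j * hbar * (T_a_psi - psi(r)) / a       # i*hbar*(T(a) - 1)/a acting on psi
p_psi_exact = -1j * hbar * np.gradient(psi(r), r)   # -i*hbar d/dr acting on psi

# The two agree up to O(a) and grid-discretization error.
print(np.max(np.abs(p_psi_fd - p_psi_exact)))
```

As the shift a and the grid spacing are reduced, the discrepancy printed at the end shrinks accordingly, illustrating the limit in the definition above.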
The method consists of considering an infinitesimal action on a wavefunction, expanding the transformed wavefunction as a sum of the initial wavefunction and a first-order perturbative correction, then expressing a finite translation as a huge number N {\displaystyle N} of consecutive tiny translations, and then using the fact that infinitesimal translations can be written in terms of p ^ {\displaystyle \mathbf {\hat {p}} } . From what has been stated previously, we know that if T ^ ( d x ) {\displaystyle {\widehat {T}}(d\mathbf {x} )} acts on ψ ( r ) {\displaystyle \psi (\mathbf {r} )} , the result is ψ ′ ( r ) = ψ ( r − d x ) . {\displaystyle {\begin{aligned}\psi '(\mathbf {r} )=\psi (\mathbf {r} -d\mathbf {x} ).\end{aligned}}} The right-hand side may be written as a Taylor series ψ ( r ) − d x ⋅ ∂ ψ ( r ) ∂ r + 1 2 ( d x ⋅ ∂ ∂ r ) 2 ψ ( r ) + ⋯ . {\displaystyle {\begin{aligned}\psi (\mathbf {r} )-d\mathbf {x} \cdot {\frac {\partial \psi (\mathbf {r} )}{\partial \mathbf {r} }}+{\frac {1}{2}}\left(d\mathbf {x} \cdot {\frac {\partial }{\partial \mathbf {r} }}\right)^{2}\psi (\mathbf {r} )+\cdots .\end{aligned}}} We suppose that for an infinitesimal translation the higher-order terms in the series become negligible. From this we write ψ ′ ( r ) = ψ ( r ) − d x ⋅ ∂ ψ ( r ) ∂ r = ( 1 − i d x ⋅ p ^ ℏ ) ψ ( r ) . {\displaystyle {\begin{aligned}\psi '(\mathbf {r} )=\psi (\mathbf {r} )-d\mathbf {x} \cdot {\frac {\partial \psi (\mathbf {r} )}{\partial \mathbf {r} }}=\left(1-{\frac {id\mathbf {x} \cdot {\widehat {\mathbf {p} }}}{\hbar }}\right)\psi (\mathbf {r} ).\end{aligned}}} With this preliminary result, we proceed to write a finite translation as an infinite number of consecutive infinitesimal translations: T ^ ( x ) = lim N → ∞ ( T ^ ( x / N ) ) N = lim N → ∞ [ 1 − i x ⋅ p ^ N ℏ ] N . {\displaystyle {\begin{aligned}{\hat {T}}(\mathbf {x} )&=\lim _{N\to \infty }({\hat {T}}(\mathbf {x} /N))^{N}\\&=\lim _{N\to \infty }\left[1-{\frac {i\mathbf {x} \cdot \mathbf {\hat {p}} }{N\hbar }}\right]^{N}\end{aligned}}.} The right-hand side is precisely the limit that defines an exponential. Hence, T ^ ( x ) = exp ⁡ ( − i x ⋅ p ^ ℏ ) , {\displaystyle {\hat {T}}(\mathbf {x} )=\exp \left(-{\frac {i\mathbf {x} \cdot \mathbf {\hat {p}} }{\hbar }}\right),} where exp {\displaystyle \exp } is the operator exponential, defined by its Taylor series expansion. For very small x {\displaystyle \mathbf {x} } , one can use the approximation: T ^ ( x ) ≈ 1 − i x ⋅ p ^ / ℏ for x → 0 {\displaystyle {\hat {T}}(\mathbf {x} )\approx 1-i\mathbf {x} \cdot \mathbf {\hat {p}} /\hbar ~~{\text{for}}~~\mathbf {x} \to \mathbf {0} } The operator equation exp ⁡ ( − x ⋅ i p ^ ℏ ) ψ ( r ) = ψ ( r − x ) {\displaystyle \exp \left(-\mathbf {x} \cdot {\frac {i\mathbf {\hat {p}} }{\hbar }}\right)\psi (\mathbf {r} )=\psi (\mathbf {r} -\mathbf {x} )\,} is an operator version of Taylor's theorem, and is therefore only valid under caveats about ψ {\displaystyle \psi } being an analytic function. Concentrating on the operator part, it shows that i p ^ ℏ {\displaystyle {\frac {i\mathbf {\hat {p}} }{\hbar }}} is an infinitesimal transformation, generating translations of the real line via the exponential. It is for this reason that the momentum operator is referred to as the generator of translation. A nice way to double-check that these relations are correct is to do a Taylor expansion of the translation operator acting on a position-space wavefunction. Expanding the exponential to all orders, the translation operator generates exactly the full Taylor expansion of a test function: ψ ( r − x ) = T ^ ( x ) ψ ( r ) = exp ⁡ ( − i x ⋅ p ^ ℏ ) ψ ( r ) = ( ∑ n = 0 ∞ 1 n !
( − i ℏ x ⋅ p ^ ) n ) ψ ( r ) = ( ∑ n = 0 ∞ 1 n ! ( − x ⋅ ∇ ) n ) ψ ( r ) = ψ ( r ) − x ⋅ ∇ ψ ( r ) + 1 2 ! ( x ⋅ ∇ ) 2 ψ ( r ) − … {\displaystyle {\begin{aligned}\psi (\mathbf {r} -\mathbf {x} )&={\hat {T}}(\mathbf {x} )\psi (\mathbf {r} )\\&=\exp \left(-{\frac {i\mathbf {x} \cdot \mathbf {\hat {p}} }{\hbar }}\right)\psi (\mathbf {r} )\\&=\left(\sum _{n=0}^{\infty }{\frac {1}{n!}}(-{\frac {i}{\hbar }}\mathbf {x} \cdot \mathbf {\hat {p}} )^{n}\right)\psi (\mathbf {r} )\\&=\left(\sum _{n=0}^{\infty }{\frac {1}{n!}}(-\mathbf {x} \cdot \mathbf {\nabla } )^{n}\right)\psi (\mathbf {r} )\\&=\psi (\mathbf {r} )-\mathbf {x} \cdot \mathbf {\nabla } \psi (\mathbf {r} )+{\frac {1}{2!}}(\mathbf {x} \cdot \mathbf {\nabla } )^{2}\psi (\mathbf {r} )-\dots \end{aligned}}} So every translation operator generates exactly the expected translation on a test function if the function is analytic in some domain of the complex plane. == Properties == === Successive translations === T ^ ( x 1 ) T ^ ( x 2 ) = T ^ ( x 1 + x 2 ) {\displaystyle {\hat {T}}(\mathbf {x} _{1}){\hat {T}}(\mathbf {x} _{2})={\hat {T}}(\mathbf {x} _{1}+\mathbf {x} _{2})} In other words, if particles and fields are moved by the amount x 2 {\displaystyle \mathbf {x} _{2}} and then by the amount x 1 {\displaystyle \mathbf {x} _{1}} , overall they have been moved by the amount x 1 + x 2 {\displaystyle \mathbf {x} _{1}+\mathbf {x} _{2}} . For a mathematical proof, one can look at what these operators do to a particle in a position eigenstate: T ^ ( x 1 ) T ^ ( x 2 ) | r ⟩ = T ^ ( x 1 ) | x 2 + r ⟩ = | x 1 + x 2 + r ⟩ = T ^ ( x 1 + x 2 ) | r ⟩ {\displaystyle {\hat {T}}(\mathbf {x} _{1}){\hat {T}}(\mathbf {x} _{2})|\mathbf {r} \rangle ={\hat {T}}(\mathbf {x} _{1})|\mathbf {x} _{2}+\mathbf {r} \rangle =|\mathbf {x} _{1}+\mathbf {x} _{2}+\mathbf {r} \rangle ={\hat {T}}(\mathbf {x} _{1}+\mathbf {x} _{2})|\mathbf {r} \rangle } Since the operators T ^ ( x 1 ) T ^ ( x 2 ) {\displaystyle {\hat {T}}(\mathbf {x} _{1}){\hat {T}}(\mathbf {x} _{2})} and T ^ ( x 1 + x 2 ) {\displaystyle {\hat {T}}(\mathbf {x} _{1}+\mathbf {x} _{2})} have the same effect on every state in an eigenbasis, it follows that the operators are equal. === Identity translation === The translation T ^ ( 0 ) = I ^ {\displaystyle {\hat {T}}(\mathbf {0} )={\hat {\mathbb {I} }}} , i.e. a translation by a distance of 0 is the same as the identity operator which leaves all states unchanged. === Inverse === The translation operators are invertible, and their inverses are: ( T ^ ( x ) ) − 1 = T ^ ( − x ) {\displaystyle ({\hat {T}}(\mathbf {x} ))^{-1}={\hat {T}}(-\mathbf {x} )} This follows from the "successive translations" property above, and the identity translation. === Translation operators commute with each other === T ^ ( x ) T ^ ( y ) = T ^ ( y ) T ^ ( x ) {\displaystyle {\hat {T}}(\mathbf {x} ){\hat {T}}(\mathbf {y} )={\hat {T}}(\mathbf {y} ){\hat {T}}(\mathbf {x} )} because both sides are equal to T ^ ( x + y ) {\displaystyle {\hat {T}}(\mathbf {x} +\mathbf {y} )} . === Translation operators are unitary === To show that translation operators are unitary, we first must prove that the momentum operator p ^ {\displaystyle {\widehat {p}}} is Hermitian. Then, we can prove that the translation operator meets two criteria that are necessary to be a unitary operator. 
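Before the formal argument, a quick numerical sanity check can be reassuring. The sketch below (my own illustration, not from the article) models translation by one grid spacing on a finite periodic grid as a cyclic permutation matrix and verifies both criteria directly:

```python
import numpy as np

N = 8
# Translation by one site on a periodic grid: (T psi)[n] = psi[n - 1],
# i.e. a cyclic permutation matrix.
T = np.roll(np.eye(N), shift=1, axis=0)

# Boundedness / norm preservation on a random state:
psi = np.random.rand(N) + 1j * np.random.rand(N)
print(np.isclose(np.linalg.norm(T @ psi), np.linalg.norm(psi)))   # True

# Unitarity: T^dagger T = T T^dagger = identity.
print(np.allclose(T.conj().T @ T, np.eye(N)))                     # True
print(np.allclose(T @ T.conj().T, np.eye(N)))                     # True
```

The formal, infinite-dimensional version of this statement is what the remainder of this subsection establishes.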
To begin with, the linear momentum operator p ^ : L 2 ( [ − ∞ , ∞ ] , μ ) → L 2 ( [ − ∞ , ∞ ] , μ ) {\displaystyle {\widehat {p}}:L^{2}([-\infty ,\infty ],\mu )\to L^{2}([-\infty ,\infty ],\mu )} is the rule that assigns to any ψ ( r ) {\displaystyle \psi (r)} in the domain the vector ψ ′ ( r ) = [ p ^ ψ ] ( r ) = [ − i ℏ d d x ψ ] ( r ) {\displaystyle \psi '(r)=\left[{\widehat {p}}\psi \right](r)=\left[-i\hbar {\frac {d}{dx}}\psi \right](r)} in the codomain. Since ⟨ p ^ ψ , ϕ ⟩ = ⟨ ψ , p ^ ϕ ⟩ , for all ψ , ϕ ∈ L 2 ( [ − ∞ , ∞ ] , μ ) {\displaystyle \langle {\widehat {p}}\psi ,\phi \rangle =\langle \psi ,{\widehat {p}}\phi \rangle ,\quad {\text{for all}}~\psi ,\phi \in L^{2}([-\infty ,\infty ],\mu )} the linear momentum operator p ^ {\displaystyle {\widehat {p}}} is, in fact, a Hermitian operator. Detailed proofs of this can be found in many textbooks and online (e.g. https://physics.stackexchange.com/a/832341/194354). Having in hand that the momentum operator is Hermitian, we can prove that the translation operator is a unitary operator. First, it must be shown that the translation operator is a bounded operator. It is sufficient to show that for all ψ ∈ L 2 ( [ a , b ] , μ ) {\displaystyle \psi \in L^{2}([a,b],\mu )} , ‖ [ T ^ x ψ ] ( r ) ‖ L 2 ( [ a , b ] , μ ) = ‖ ψ ( r ) ‖ L 2 ( [ a , b ] , μ ) . {\displaystyle \left\|\left[{\widehat {T}}_{\mathbf {x} }\psi \right]\!(r)\right\|_{L^{2}([a,b],\mu )}=\left\|\psi (r)\right\|_{L^{2}([a,b],\mu )}.} Second, it must be (and can be) shown that T ^ x † T ^ x = T ^ x T ^ x † = I . {\displaystyle {\widehat {T}}_{\mathbf {x} }^{\dagger }{\widehat {T}}_{\mathbf {x} }={\widehat {T}}_{\mathbf {x} }{\widehat {T}}_{\mathbf {x} }^{\dagger }=\mathbb {I} .} A detailed proof can be found in reference https://math.stackexchange.com/a/4990451/309209. === Translation operator operating on a bra === A translation operator T ^ ( x ) = ∫ d r | r + x ⟩ ⟨ r | {\displaystyle {\hat {T}}(\mathbf {x} )=\int d\mathbf {r} ~|\mathbf {r+x} \rangle \langle \mathbf {r} |} operating on a bra in the position eigenbasis gives: ⟨ r | T ^ ( x ) = ⟨ r − x | {\displaystyle \langle \mathbf {r} |{\hat {T}}(\mathbf {x} )=\langle \mathbf {r} -\mathbf {x} |} === Splitting a translation into its components === According to the "successive translations" property above, a translation by the vector x = ( x , y , z ) {\displaystyle \mathbf {x} =(x,y,z)} can be written as the product of translations in the component directions: T ^ ( x ) = T ^ ( x x ^ ) T ^ ( y y ^ ) T ^ ( z z ^ ) {\displaystyle {\hat {T}}(\mathbf {x} )={\hat {T}}(x\mathbf {\hat {x}} )\,{\hat {T}}(y\mathbf {\hat {y}} )\,{\hat {T}}(z\mathbf {\hat {z}} )} where x ^ , y ^ , z ^ {\displaystyle \mathbf {\hat {x}} ,\mathbf {\hat {y}} ,\mathbf {\hat {z}} } are unit vectors. === Commutator with position operator === Suppose | r ⟩ {\displaystyle |\mathbf {r} \rangle } is an eigenvector of the position operator r ^ {\displaystyle \mathbf {\hat {r}} } with eigenvalue r {\displaystyle \mathbf {r} } .
We have T ^ ( x ) r ^ | r ⟩ = T ^ ( x ) r | r ⟩ = r | x + r ⟩ {\displaystyle {\hat {T}}(\mathbf {x} )\mathbf {\hat {r}} |\mathbf {r} \rangle ={\hat {T}}(\mathbf {x} )\mathbf {r} |\mathbf {r} \rangle =\mathbf {r} |\mathbf {x} +\mathbf {r} \rangle } while r ^ T ^ ( x ) | r ⟩ = r ^ | x + r ⟩ = ( x + r ) | x + r ⟩ {\displaystyle \mathbf {\hat {r}} {\hat {T}}(\mathbf {x} )|\mathbf {r} \rangle ={\hat {\mathbf {r} }}|\mathbf {x} +\mathbf {r} \rangle =(\mathbf {x} +\mathbf {r} )|\mathbf {x} +\mathbf {r} \rangle } Therefore, the commutator between a translation operator and the position operator is: [ r ^ , T ^ ( x ) ] ≡ r ^ T ^ ( x ) − T ^ ( x ) r ^ = x T ^ ( x ) {\displaystyle [\mathbf {\hat {r}} ,{\hat {T}}(\mathbf {x} )]\equiv \mathbf {\hat {r}} {\hat {T}}(\mathbf {x} )-{\hat {T}}(\mathbf {x} )\mathbf {\hat {r}} =\mathbf {x} {\hat {T}}(\mathbf {x} )} This can also be written (using the above properties) as: ( T ^ ( x ) ) − 1 r ^ T ^ ( x ) = r ^ + x I ^ {\displaystyle ({\hat {T}}(\mathbf {x} ))^{-1}\mathbf {\hat {r}} {\hat {T}}(\mathbf {x} )=\mathbf {\hat {r}} +\mathbf {x} {\hat {\mathbb {I} }}} where I ^ {\displaystyle {\hat {\mathbb {I} }}} is the identity operator. === Commutator with momentum operator === Since translation operators all commute with each other (see above), and since each component of the momentum operator is a sum of two scaled translation operators (e.g. p ^ y = lim ε → 0 i ℏ ε ( T ^ ( ( 0 , ε , 0 ) ) − T ^ ( ( 0 , 0 , 0 ) ) ) {\displaystyle {\hat {p}}_{y}=\lim _{\varepsilon \to 0}{\frac {i\hbar }{\varepsilon }}\left({\hat {T}}((0,\varepsilon ,0))-{\hat {T}}((0,0,0))\right)} ), it follows that translation operators all commute with the momentum operator, i.e. T ^ ( x ) p ^ = p ^ T ^ ( x ) {\displaystyle {\hat {T}}(\mathbf {x} ){\hat {\mathbf {p} }}={\hat {\mathbf {p} }}{\hat {T}}(\mathbf {x} )} This commutation with the momentum operator holds true generally even if the system is not isolated where energy or momentum may not be conserved. == Translation group == The set T {\displaystyle {\mathfrak {T}}} of translation operators T ^ ( x ) {\displaystyle {\hat {T}}(\mathbf {x} )} for all x {\displaystyle \mathbf {x} } , with the operation of multiplication defined as the result of successive translations (i.e. function composition), satisfies all the axioms of a group: Closure When two translations are done consecutively, the result is a single different translation. (See "successive translations" property above.) Existence of identity A translation by the vector 0 {\displaystyle \mathbf {0} } is the identity operator, i.e. the operator that has no effect on anything. It functions as the identity element of the group. Every element has an inverse As proven above, any translation operator T ^ ( x ) {\displaystyle {\hat {T}}(\mathbf {x} )} is the inverse of the reverse translation T ^ ( − x ) {\displaystyle {\hat {T}}(-\mathbf {x} )} . Associativity This is the claim that T ^ ( x 1 ) ( T ^ ( x 2 ) T ^ ( x 3 ) ) = ( T ^ ( x 1 ) T ^ ( x 2 ) ) T ^ ( x 3 ) {\displaystyle {\hat {T}}(\mathbf {x} _{1})\left({\hat {T}}(\mathbf {x} _{2}){\hat {T}}(\mathbf {x} _{3})\right)=\left({\hat {T}}(\mathbf {x} _{1}){\hat {T}}(\mathbf {x} _{2})\right){\hat {T}}(\mathbf {x} _{3})} . It is true by definition, as is the case for any group based on function composition. Therefore, the set T {\displaystyle {\mathfrak {T}}} of translation operators T ^ ( x ) {\displaystyle {\hat {T}}(\mathbf {x} )} for all x {\displaystyle \mathbf {x} } forms a group. 
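The finite-grid sketch below (again my own illustration, not from the article; it models translation by one grid spacing as a cyclic permutation and tests the commutator on a state localized away from the periodic boundary, where the finite model matches the identities above) checks the successive-translation rule, the inverse, and the commutator [r̂, T̂(x)] = x T̂(x):

```python
import numpy as np

N, dx = 64, 0.5
x = dx * np.arange(N)                  # grid coordinates
X = np.diag(x)                         # position operator on the grid
T = np.roll(np.eye(N), 1, axis=0)      # translation by one grid spacing dx

# Successive translations: T(dx) T(dx) = T(2 dx)
print(np.allclose(T @ T, np.roll(np.eye(N), 2, axis=0)))              # True

# Inverse: T(dx)^(-1) = T(-dx)
print(np.allclose(np.linalg.inv(T), np.roll(np.eye(N), -1, axis=0)))  # True

# Commutator with position, [X, T] = dx * T, tested on a localized state
psi = np.exp(-0.5 * ((x - x[N // 2]) / (3 * dx)) ** 2)
lhs = (X @ T - T @ X) @ psi
rhs = dx * (T @ psi)
print(np.allclose(lhs, rhs))                                          # True
```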
Since there is a continuously infinite number of elements, the translation group is a continuous group. Moreover, the translation operators commute among themselves, i.e. the product of two translations (a translation followed by another) does not depend on their order. Therefore, the translation group is an abelian group. The translation group acting on the Hilbert space of position eigenstates is isomorphic to the group of vector addition in Euclidean space. == Expectation values of position and momentum in the translated state == Consider a single particle in one dimension. Unlike classical mechanics, in quantum mechanics a particle has neither a well-defined position nor a well-defined momentum. In the quantum formulation, the expectation values play the role of the classical variables. For example, if a particle is in a state | ψ ⟩ {\displaystyle |\psi \rangle } , then the expectation value of the position is ⟨ ψ | r ^ | ψ ⟩ {\displaystyle \langle \psi |\mathbf {\hat {r}} |\psi \rangle } , where r ^ {\displaystyle \mathbf {\hat {r}} } is the position operator. If a translation operator T ^ ( x ) {\displaystyle {\hat {T}}(\mathbf {x} )} acts on the state | ψ ⟩ {\displaystyle |\psi \rangle } , creating a new state | ψ 2 ⟩ {\displaystyle |\psi _{2}\rangle } , then the expectation value of position for | ψ 2 ⟩ {\displaystyle |\psi _{2}\rangle } is equal to the expectation value of position for | ψ ⟩ {\displaystyle |\psi \rangle } plus the vector x {\displaystyle \mathbf {x} } . This result is consistent with what one would expect from an operation that shifts the particle by that amount. On the other hand, when the translation operator acts on a state, the expectation value of the momentum is not changed. This can be proven in a similar way to the above, using the fact that translation operators commute with the momentum operator. This result is again consistent with expectations: translating a particle does not change its velocity or mass, so its momentum should not change. == Translational invariance == In quantum mechanics, the Hamiltonian is the operator corresponding to the total energy of a system. For any | r ⟩ {\displaystyle |\mathbf {r} \rangle } in the domain, let the vector | r T ⟩ ≡ T ^ x | r ⟩ {\displaystyle |\mathbf {r} _{T}\rangle \equiv {\hat {T}}_{\mathbf {x} }|\mathbf {r} \rangle } in the codomain be the translated state. If ⟨ r T | H ^ | r T ⟩ = ⟨ r | H ^ | r ⟩ , {\displaystyle \langle \mathbf {r} _{T}|{\hat {H}}|\mathbf {r} _{T}\rangle =\langle \mathbf {r} |{\hat {H}}|\mathbf {r} \rangle ,} then the Hamiltonian is said to be invariant. Since the translation operator is a unitary operator, the antecedent can also be written as ⟨ r | T ^ x − 1 H ^ T ^ x | r ⟩ = ⟨ r | H ^ | r ⟩ . {\displaystyle \langle \mathbf {r} |{\hat {T}}_{\mathbf {x} }^{-1}{\hat {H}}{\hat {T}}_{\mathbf {x} }|\mathbf {r} \rangle =\langle \mathbf {r} |{\hat {H}}|\mathbf {r} \rangle .} Since this holds for any | r ⟩ {\displaystyle |\mathbf {r} \rangle } in the domain, the implication is that T ^ x − 1 H ^ T ^ x = H ^ {\displaystyle {\hat {T}}_{\mathbf {x} }^{-1}{\hat {H}}{\hat {T}}_{\mathbf {x} }={\hat {H}}} or that H ^ T ^ x − T ^ x H ^ = [ H ^ , T ^ x ] = 0. {\displaystyle {\hat {H}}{\hat {T}}_{\mathbf {x} }-{\hat {T}}_{\mathbf {x} }{\hat {H}}=[{\hat {H}},{\hat {T}}_{\mathbf {x} }]=0.} Thus, if the Hamiltonian commutes with the translation operator, then the Hamiltonian is invariant under translation.
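The statements about expectation values above can also be checked directly. The following sketch (my own illustration, not from the article; it assumes a periodic grid, ℏ = 1, and a Gaussian wave packet, and applies T̂(a) = exp(−iap̂/ℏ) in the Fourier representation) shows ⟨x⟩ shifting by a while ⟨p⟩ stays fixed:

```python
import numpy as np

hbar = 1.0
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)           # wavenumber grid (hbar = 1)

# Gaussian wave packet with mean position 0 and mean momentum k0
k0 = 2.0
psi = np.exp(-x**2) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

def translate(psi, a):
    """Apply T(a) = exp(-i a p / hbar) in the Fourier (momentum) representation."""
    return np.fft.ifft(np.exp(-1j * k * a) * np.fft.fft(psi))

def expect_x(psi):
    return np.real(np.sum(np.conj(psi) * x * psi) * dx)

def expect_p(psi):
    phi = np.fft.fft(psi)
    return np.real(np.sum(np.abs(phi)**2 * hbar * k) / np.sum(np.abs(phi)**2))

a = 3.0
psi2 = translate(psi, a)
print(expect_x(psi2) - expect_x(psi))   # ~ 3.0: <x> shifts by a
print(expect_p(psi2) - expect_p(psi))   # ~ 0.0: <p> is unchanged
```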
Loosely speaking, if we translate the system, then measure its energy, then translate it back, it amounts to the same thing as just measuring its energy directly. === Continuous translational symmetry === First, we consider the case where all the translation operators are symmetries of the system. Second, we consider the case where the translation operators are not symmetries of the system. As we will see, only in the first case does the conservation of momentum occur. For example, let H ^ {\displaystyle {\hat {H}}} be the Hamiltonian describing all particles and fields in the universe, and let T ^ ( x ) {\displaystyle {\hat {T}}(\mathbf {x} )} be the continuous translation operator that shifts all particles and fields in the universe simultaneously by the same amount. If we assert the a priori axiom that this translation is a continuous symmetry of the Hamiltonian (i.e., that H ^ {\displaystyle {\hat {H}}} is independent of location), then, as a consequence, conservation of momentum is universally valid. On the other hand, perhaps H ^ {\displaystyle {\hat {H}}} and T ^ ( x ) {\displaystyle {\hat {T}}(\mathbf {x} )} refer to just one particle. Then the translation operators T ^ ( x ) {\displaystyle {\hat {T}}(\mathbf {x} )} are exact symmetries only if the particle is alone in a vacuum. Correspondingly, the momentum of a single particle is not usually conserved (it changes when the particle bumps into other objects or is otherwise deflected by the potential energy fields of the other particles), but it is conserved if the particle is alone in a vacuum. Since the Hamiltonian operator commutes with the translation operator when the Hamiltonian is invariant with respect to translation, we have [ H ^ , T ^ ( x ) ] = 0 . {\displaystyle \left[{\hat {H}},{\hat {T}}(\mathbf {x} )\right]=0\,.} Further, the Hamiltonian operator also commutes with the infinitesimal translation operator [ H ^ , 1 − i x ⋅ p ^ N ℏ ] = 0 ⇒ [ H ^ , p ^ ] = 0 ⇒ d d t ⟨ p ^ ⟩ = i ℏ [ H ^ , p ^ ] = 0. {\displaystyle {\begin{aligned}&\left[{\hat {H}},1-{\frac {i\mathbf {x} \cdot {\hat {\mathbf {p} }}}{N\hbar }}\right]=0\\&\Rightarrow [{\hat {H}},{\hat {\mathbf {p} }}]=0\\&\Rightarrow {\frac {d}{dt}}\langle {\hat {\mathbf {p} }}\rangle ={\frac {i}{\hbar }}[{\hat {H}},{\hat {\mathbf {p} }}]=0.\end{aligned}}} In summary, whenever the Hamiltonian for a system remains invariant under continuous translation, then the system has conservation of momentum, meaning that the expectation value of the momentum operator remains constant. This is an example of Noether's theorem. === Discrete translational symmetry === There is another special case where the Hamiltonian may be translationally invariant. This type of translational symmetry is observed whenever the potential is periodic: V ( r j ± a ) = V ( r j ) {\displaystyle V(r_{j}\pm a)=V(r_{j})} In general, the Hamiltonian is not invariant under any translation represented by T ^ j ( x j ) {\displaystyle {\hat {T}}_{j}(x_{j})} with x j {\displaystyle x_{j}} arbitrary, where T ^ j ( x j ) {\displaystyle {\hat {T}}_{j}(x_{j})} has the property: T ^ j ( x j ) | r j ⟩ = | r j + x j ⟩ {\displaystyle {\hat {T}}_{j}(x_{j})|r_{j}\rangle =|r_{j}+x_{j}\rangle } and, ( T ^ j ( x j ) ) † r ^ j T ^ j ( x j ) = r ^ j + x j I ^ {\displaystyle ({\hat {T}}_{j}(x_{j}))^{\dagger }{\hat {r}}_{j}{\hat {T}}_{j}(x_{j})={\hat {r}}_{j}+x_{j}{\hat {\mathbb {I} }}} (where I ^ {\displaystyle {\hat {\mathbb {I} }}} is the identity operator; see proof above).
But, whenever x j {\displaystyle x_{j}} coincides with the period of the potential a {\displaystyle a} , ( T ^ j ( a ) ) † V ( r ^ j ) T ^ j ( a ) = V ( r ^ j + a I ^ ) = V ( r ^ j ) {\displaystyle ({\hat {T}}_{j}(a))^{\dagger }V({\hat {r}}_{j}){\hat {T}}_{j}(a)=V({\hat {r}}_{j}+a{\hat {\mathbb {I} }})=V({\hat {r}}_{j})} Since the kinetic energy part of the Hamiltonian H ^ {\displaystyle {\hat {H}}} is already invariant under any arbitrary translation, being a function of p ^ {\displaystyle \mathbf {\hat {p}} } , the entire Hamiltonian satisfies, ( T ^ j ( a ) ) † H ^ T ^ j ( a ) = H ^ {\displaystyle ({\hat {T}}_{j}(a))^{\dagger }{\hat {H}}{\hat {T}}_{j}(a)={\hat {H}}} Now, the Hamiltonian commutes with the translation operator, i.e. they can be simultaneously diagonalised. Therefore, the Hamiltonian is invariant under such translation (which no longer remains continuous). The translation becomes discrete with the period of the potential. == Discrete translation in periodic potential: Bloch's theorem == The ions in a perfect crystal are arranged in a regular periodic array. So we are led to the problem of an electron in a potential V ( r ) {\displaystyle V(\mathbf {r} )} with the periodicity of the underlying Bravais lattice V ( r + R ) = V ( r ) {\displaystyle V(\mathbf {r} +\mathbf {R} )=V(\mathbf {r} )} for all Bravais lattice vectors R {\displaystyle \mathbf {R} } . However, perfect periodicity is an idealisation. Real solids are never absolutely pure, and in the neighbourhood of the impurity atoms the solid is not the same as elsewhere in the crystal. Moreover, the ions are not in fact stationary, but continually undergo thermal vibrations about their equilibrium positions. These destroy the perfect translational symmetry of a crystal. To deal with this type of problem, the main problem is artificially divided into two parts: (a) the ideal fictitious perfect crystal, in which the potential is genuinely periodic, and (b) the effects on the properties of a hypothetical perfect crystal of all deviations from perfect periodicity, treated as small perturbations. Although the problem of electrons in a solid is in principle a many-electron problem, in the independent-electron approximation each electron is subjected to a one-electron Schrödinger equation with a periodic potential and is known as a Bloch electron (in contrast to free particles, to which Bloch electrons reduce when the periodic potential is identically zero). For each Bravais lattice vector R {\displaystyle \mathbf {R} } we define a translation operator T ^ R {\displaystyle {\hat {T}}_{\mathbf {R} }} which, when operating on any function f ( r ) {\displaystyle f(\mathbf {r} )} , shifts the argument by R {\displaystyle \mathbf {R} } : T ^ R f ( r ) = f ( r + R ) {\displaystyle {\hat {T}}_{\mathbf {R} }f(\mathbf {r} )=f(\mathbf {r} +\mathbf {R} )} Since all translations form an Abelian group, the result of applying two successive translations does not depend on the order in which they are applied, i.e.
T ^ R 1 T ^ R 2 = T ^ R 2 T ^ R 1 = T ^ R 1 + R 2 {\displaystyle {\hat {T}}_{\mathbf {R} _{1}}{\hat {T}}_{\mathbf {R} _{2}}={\hat {T}}_{\mathbf {R} _{2}}{\hat {T}}_{\mathbf {R} _{1}}={\hat {T}}_{\mathbf {R} _{1}+\mathbf {R} _{2}}} In addition, as the Hamiltonian is periodic, we have, T ^ R H ^ = H ^ T ^ R {\displaystyle {\hat {T}}_{\mathbf {R} }{\hat {H}}={\hat {H}}{\hat {T}}_{\mathbf {R} }} Hence, the T ^ R {\displaystyle {\hat {T}}_{\mathbf {R} }} for all Bravais lattice vectors R {\displaystyle \mathbf {R} } and the Hamiltonian H ^ {\displaystyle {\hat {H}}} form a set of commuting operators. Therefore, the eigenstates of H ^ {\displaystyle {\hat {H}}} can be chosen to be simultaneous eigenstates of all the T ^ R {\displaystyle {\hat {T}}_{\mathbf {R} }} : H ^ ψ = E ψ {\displaystyle {\hat {H}}\psi ={\mathcal {E}}\psi } T ^ R ψ = c ( R ) ψ {\displaystyle {\hat {T}}_{\mathbf {R} }\psi =c(\mathbf {R} )\psi } The eigenvalues c ( R ) {\displaystyle c(\mathbf {R} )} of the translation operators are related because of the condition: T ^ R 1 T ^ R 2 = T ^ R 2 T ^ R 1 = T ^ R 1 + R 2 {\displaystyle {\hat {T}}_{\mathbf {R} _{1}}{\hat {T}}_{\mathbf {R} _{2}}={\hat {T}}_{\mathbf {R} _{2}}{\hat {T}}_{\mathbf {R} _{1}}={\hat {T}}_{\mathbf {R} _{1}+\mathbf {R} _{2}}} We have, T ^ R 1 T ^ R 2 ψ = c ( R 1 ) T ^ R 2 ψ = c ( R 1 ) c ( R 2 ) ψ {\displaystyle {\begin{aligned}{\hat {T}}_{\mathbf {R} _{1}}{\hat {T}}_{\mathbf {R} _{2}}\psi &=c(\mathbf {R} _{1}){\hat {T}}_{\mathbf {R} _{2}}\psi \\&=c(\mathbf {R} _{1})c(\mathbf {R} _{2})\psi \end{aligned}}} And, T ^ R 1 + R 2 ψ = c ( R 1 + R 2 ) ψ {\displaystyle {\hat {T}}_{\mathbf {R_{1}+R_{2}} }\psi =c(\mathbf {R} _{1}+\mathbf {R} _{2})\psi } Therefore, it follows that, c ( R 1 + R 2 ) = c ( R 1 ) c ( R 2 ) {\displaystyle c(\mathbf {R} _{1}+\mathbf {R} _{2})=c(\mathbf {R} _{1})c(\mathbf {R} _{2})} Now let the a i {\displaystyle \mathbf {a} _{i}} 's be the three primitive vectors of the Bravais lattice.
By a suitable choice of x i {\displaystyle x_{i}} , we can always write c ( a i ) {\displaystyle c(\mathbf {a} _{i})} in the form c ( a i ) = e 2 π i x i {\displaystyle c(\mathbf {a} _{i})=e^{2\pi ix_{i}}} If R {\displaystyle \mathbf {R} } is a general Bravais lattice vector, given by R = n 1 a 1 + n 2 a 2 + n 3 a 3 {\displaystyle \mathbf {R} =n_{1}\mathbf {a} _{1}+n_{2}\mathbf {a} _{2}+n_{3}\mathbf {a} _{3}} it follows then, c ( R ) = c ( n 1 a 1 + n 2 a 2 + n 3 a 3 ) = c ( n 1 a 1 ) c ( n 2 a 2 ) c ( n 3 a 3 ) = c ( a 1 ) n 1 c ( a 2 ) n 2 c ( a 3 ) n 3 {\displaystyle {\begin{aligned}c(\mathbf {R} )&=c(n_{1}\mathbf {a} _{1}+n_{2}\mathbf {a} _{2}+n_{3}\mathbf {a} _{3})\\&=c(n_{1}\mathbf {a} _{1})c(n_{2}\mathbf {a} _{2})c(n_{3}\mathbf {a} _{3})\\&=c(\mathbf {a} _{1})^{n_{1}}c(\mathbf {a} _{2})^{n_{2}}c(\mathbf {a} _{3})^{n_{3}}\end{aligned}}} Substituting c ( a i ) = e 2 π i x i {\displaystyle c(\mathbf {a} _{i})=e^{2\pi ix_{i}}} one gets, c ( R ) = e 2 π i ( n 1 x 1 + n 2 x 2 + n 3 x 3 ) = e i k ⋅ R {\displaystyle {\begin{aligned}c(\mathbf {R} )&=e^{2\pi i(n_{1}x_{1}+n_{2}x_{2}+n_{3}x_{3})}\\&=e^{i\mathbf {k} \cdot \mathbf {R} }\end{aligned}}} where k = x 1 b 1 + x 2 b 2 + x 3 b 3 {\displaystyle \mathbf {k} =x_{1}\mathbf {b} _{1}+x_{2}\mathbf {b} _{2}+x_{3}\mathbf {b} _{3}} and the b i {\displaystyle \mathbf {b} _{i}} 's are the reciprocal lattice vectors satisfying the equation b i ⋅ a j = 2 π δ i j {\displaystyle \mathbf {b} _{i}\cdot \mathbf {a} _{j}=2\pi \delta _{ij}} Therefore, one can choose the simultaneous eigenstates ψ {\displaystyle \psi } of the Hamiltonian H ^ {\displaystyle {\hat {H}}} and T ^ R {\displaystyle {\hat {T}}_{\mathbf {R} }} so that for every Bravais lattice vector R {\displaystyle \mathbf {R} } , ψ ( r + R ) = T ^ R ψ ( r ) = c ( R ) ψ ( r ) = e i k ⋅ R ψ ( r ) {\displaystyle {\begin{aligned}\psi (\mathbf {r+R} )&={\hat {T}}_{\mathbf {R} }\psi (\mathbf {r} )\\&=c(\mathbf {R} )\psi (\mathbf {r} )\\&=e^{i\mathbf {k} \cdot \mathbf {R} }\psi (\mathbf {r} )\end{aligned}}} This result is known as Bloch's theorem. == Time evolution and translational invariance == In the passive transformation picture, translational invariance requires, [ T ^ ( x ) , H ^ ] = 0 {\displaystyle [{\hat {T}}(\mathbf {x} ),{\hat {H}}]=0} It follows that [ T ^ ( x ) , U ^ ( t ) ] = 0 {\displaystyle [{\hat {T}}(\mathbf {x} ),{\hat {U}}(t)]=0} where U ^ ( t ) {\displaystyle {\hat {U}}(t)} is the unitary time evolution operator. When the Hamiltonian is time independent, U ^ ( t ) = exp ⁡ ( − i H ^ t ℏ ) {\displaystyle {\hat {U}}(t)=\exp \left({\frac {-i{\hat {H}}t}{\hbar }}\right)} . If the Hamiltonian is time dependent, the above commutation relation is satisfied if p ^ {\displaystyle {\hat {p}}} or T ^ ( x ) {\displaystyle {\hat {T}}(\mathbf {x} )} commutes with H ^ ( t ) {\displaystyle {\hat {H}}(t)} for all t. === Example === Suppose at t = 0 {\displaystyle t=0} two observers A and B prepare identical systems at x = 0 {\displaystyle x=0} and x = a {\displaystyle x=a} , respectively. If | ψ ( 0 ) ⟩ {\displaystyle |\psi (0)\rangle } is the state vector of the system prepared by A, then the state vector of the system prepared by B will be given by T ^ ( a ) | ψ ( 0 ) ⟩ {\displaystyle {\hat {T}}(\mathbf {a} )|\psi (0)\rangle } . Both the systems look identical to the observers who prepared them.
After time t {\displaystyle t} , the state vectors evolve into U ^ ( t ) | ψ ( 0 ) ⟩ {\displaystyle {\hat {U}}(t)|\psi (0)\rangle } and U ^ ( t ) T ^ ( a ) | ψ ( 0 ) ⟩ {\displaystyle {\hat {U}}(t){\hat {T}}(\mathbf {a} )|\psi (0)\rangle } respectively. Using the above-mentioned commutation relation, the latter may be written as, T ^ ( a ) U ^ ( t ) | ψ ( 0 ) ⟩ {\displaystyle {\hat {T}}(\mathbf {a} ){\hat {U}}(t)|\psi (0)\rangle } which is just the translated version of the system prepared by A at time t {\displaystyle t} . Therefore, the two systems, which differed only by a translation at t = 0 {\displaystyle t=0} , differ only by the same translation at any instant of time. The time evolution of both the systems appears the same to the observers who prepared them. It can be concluded that the translational invariance of the Hamiltonian implies that the same experiment repeated at two different places will give the same result (as seen by the local observers). == See also == Bloch state Group Periodic function Shift operator Symmetries in quantum mechanics Time translation symmetry Translational symmetry == References ==
Wikipedia/Translation_operator_(quantum_mechanics)
For small angles, the trigonometric functions sine, cosine, and tangent can be calculated with reasonable accuracy by the following simple approximations: sin ⁡ θ ≈ tan ⁡ θ ≈ θ , cos ⁡ θ ≈ 1 − 1 2 θ 2 ≈ 1 , {\displaystyle {\begin{aligned}\sin \theta &\approx \tan \theta \approx \theta ,\\[5mu]\cos \theta &\approx 1-{\tfrac {1}{2}}\theta ^{2}\approx 1,\end{aligned}}} provided the angle is measured in radians. Angles measured in degrees must first be converted to radians by multiplying them by ⁠ π / 180 {\displaystyle \pi /180} ⁠. These approximations have a wide range of uses in branches of physics and engineering, including mechanics, electromagnetism, optics, cartography, astronomy, and computer science. One reason for this is that they can greatly simplify differential equations that do not need to be answered with absolute precision. There are a number of ways to demonstrate the validity of the small-angle approximations. The most direct method is to truncate the Maclaurin series for each of the trigonometric functions. Depending on the order of the approximation, cos ⁡ θ {\displaystyle \textstyle \cos \theta } is approximated as either 1 {\displaystyle 1} or as 1 − 1 2 θ 2 {\textstyle 1-{\frac {1}{2}}\theta ^{2}} . == Justifications == === Geometric === For a small angle, H and A are almost the same length, and therefore cos θ is nearly 1. The segment d (in red to the right) is the difference between the lengths of the hypotenuse, H, and the adjacent side, A, and has length H − H 2 − O 2 {\displaystyle \textstyle H-{\sqrt {H^{2}-O^{2}}}} , which for small angles is approximately equal to O 2 / 2 H ≈ 1 2 θ 2 H {\displaystyle \textstyle O^{2}\!/2H\approx {\tfrac {1}{2}}\theta ^{2}H} . As a second-order approximation, cos ⁡ θ ≈ 1 − θ 2 2 . {\displaystyle \cos {\theta }\approx 1-{\frac {\theta ^{2}}{2}}.} The opposite leg, O, is approximately equal to the length of the blue arc, s. The arc s has length θA, and by definition sin θ = ⁠O/H⁠ and tan θ = ⁠O/A⁠, and for a small angle, O ≈ s and H ≈ A, which leads to: sin ⁡ θ = O H ≈ O A = tan ⁡ θ = O A ≈ s A = A θ A = θ . {\displaystyle \sin \theta ={\frac {O}{H}}\approx {\frac {O}{A}}=\tan \theta ={\frac {O}{A}}\approx {\frac {s}{A}}={\frac {A\theta }{A}}=\theta .} Or, more concisely, sin ⁡ θ ≈ tan ⁡ θ ≈ θ . {\displaystyle \sin \theta \approx \tan \theta \approx \theta .} === Calculus === Using the squeeze theorem, we can prove that lim θ → 0 sin ⁡ ( θ ) θ = 1 , {\displaystyle \lim _{\theta \to 0}{\frac {\sin(\theta )}{\theta }}=1,} which is a formal restatement of the approximation sin ⁡ ( θ ) ≈ θ {\displaystyle \sin(\theta )\approx \theta } for small values of θ. A more careful application of the squeeze theorem proves that lim θ → 0 tan ⁡ ( θ ) θ = 1 , {\displaystyle \lim _{\theta \to 0}{\frac {\tan(\theta )}{\theta }}=1,} from which we conclude that tan ⁡ ( θ ) ≈ θ {\displaystyle \tan(\theta )\approx \theta } for small values of θ. Finally, L'Hôpital's rule tells us that lim θ → 0 cos ⁡ ( θ ) − 1 θ 2 = lim θ → 0 − sin ⁡ ( θ ) 2 θ = − 1 2 , {\displaystyle \lim _{\theta \to 0}{\frac {\cos(\theta )-1}{\theta ^{2}}}=\lim _{\theta \to 0}{\frac {-\sin(\theta )}{2\theta }}=-{\frac {1}{2}},} which rearranges to cos ⁡ ( θ ) ≈ 1 − θ 2 2 {\textstyle \cos(\theta )\approx 1-{\frac {\theta ^{2}}{2}}} for small values of θ. Alternatively, we can use the double angle formula cos ⁡ 2 A ≡ 1 − 2 sin 2 ⁡ A {\displaystyle \cos 2A\equiv 1-2\sin ^{2}A} . 
By letting θ = 2 A {\displaystyle \theta =2A} , we get that cos ⁡ θ = 1 − 2 sin 2 ⁡ θ 2 ≈ 1 − θ 2 2 {\textstyle \cos \theta =1-2\sin ^{2}{\frac {\theta }{2}}\approx 1-{\frac {\theta ^{2}}{2}}} . === Algebraic === The Taylor series expansions of trigonometric functions sine, cosine, and tangent near zero are: sin ⁡ θ = θ − 1 6 θ 3 + 1 120 θ 5 − ⋯ , cos ⁡ θ = 1 − 1 2 θ 2 + 1 24 θ 4 − ⋯ , tan ⁡ θ = θ + 1 3 θ 3 + 2 15 θ 5 + ⋯ . {\displaystyle {\begin{aligned}\sin \theta &=\theta -{\frac {1}{6}}\theta ^{3}+{\frac {1}{120}}\theta ^{5}-\cdots ,\\[6mu]\cos \theta &=1-{\frac {1}{2}}{\theta ^{2}}+{\frac {1}{24}}\theta ^{4}-\cdots ,\\[6mu]\tan \theta &=\theta +{\frac {1}{3}}\theta ^{3}+{\frac {2}{15}}\theta ^{5}+\cdots .\end{aligned}}} where ⁠ θ {\displaystyle \theta } ⁠ is the angle in radians. For very small angles, higher powers of ⁠ θ {\displaystyle \theta } ⁠ become extremely small, for instance if ⁠ θ = 0.01 {\displaystyle \theta =0.01} ⁠, then ⁠ θ 3 = 0.000 001 {\displaystyle \theta ^{3}=0.000\,001} ⁠, just one ten-thousandth of ⁠ θ {\displaystyle \theta } ⁠. Thus for many purposes it suffices to drop the cubic and higher terms and approximate the sine and tangent of a small angle using the radian measure of the angle, ⁠ sin ⁡ θ ≈ tan ⁡ θ ≈ θ {\displaystyle \sin \theta \approx \tan \theta \approx \theta } ⁠, and drop the quadratic term and approximate the cosine as ⁠ cos ⁡ θ ≈ 1 {\displaystyle \cos \theta \approx 1} ⁠. If additional precision is needed the quadratic and cubic terms can also be included, ⁠ sin ⁡ θ ≈ θ − 1 6 θ 3 {\displaystyle \sin \theta \approx \theta -{\tfrac {1}{6}}\theta ^{3}} ⁠, ⁠ cos ⁡ θ ≈ 1 − 1 2 θ 2 {\displaystyle \cos \theta \approx 1-{\tfrac {1}{2}}\theta ^{2}} ⁠, and ⁠ tan ⁡ θ ≈ θ + 1 3 θ 3 {\displaystyle \tan \theta \approx \theta +{\tfrac {1}{3}}\theta ^{3}} ⁠. ==== Dual numbers ==== One may also use dual numbers, defined as numbers in the form a + b ε {\displaystyle a+b\varepsilon } , with a , b ∈ R {\displaystyle a,b\in \mathbb {R} } and ε {\displaystyle \varepsilon } satisfying by definition ε 2 = 0 {\displaystyle \varepsilon ^{2}=0} and ε ≠ 0 {\displaystyle \varepsilon \neq 0} . By using the MacLaurin series of cosine and sine, one can show that cos ⁡ ( θ ε ) = 1 {\displaystyle \cos(\theta \varepsilon )=1} and sin ⁡ ( θ ε ) = θ ε {\displaystyle \sin(\theta \varepsilon )=\theta \varepsilon } . Furthermore, it is not hard to prove that the Pythagorean identity holds: sin 2 ⁡ ( θ ε ) + cos 2 ⁡ ( θ ε ) = ( θ ε ) 2 + 1 2 = θ 2 ε 2 + 1 = θ 2 ⋅ 0 + 1 = 1 {\displaystyle \sin ^{2}(\theta \varepsilon )+\cos ^{2}(\theta \varepsilon )=(\theta \varepsilon )^{2}+1^{2}=\theta ^{2}\varepsilon ^{2}+1=\theta ^{2}\cdot 0+1=1} == Error of the approximations == Near zero, the relative error of the approximations ⁠ cos ⁡ θ ≈ 1 {\displaystyle \cos \theta \approx 1} ⁠, ⁠ sin ⁡ θ ≈ θ {\displaystyle \sin \theta \approx \theta } ⁠, and ⁠ tan ⁡ θ ≈ θ {\displaystyle \tan \theta \approx \theta } ⁠ is quadratic in ⁠ θ {\displaystyle \theta } ⁠: for each order of magnitude smaller the angle is, the relative error of these approximations shrinks by two orders of magnitude. The approximation ⁠ cos ⁡ θ ≈ 1 − 1 2 θ 2 {\displaystyle \textstyle \cos \theta \approx 1-{\tfrac {1}{2}}\theta ^{2}} ⁠ has relative error which is quartic in ⁠ θ {\displaystyle \theta } ⁠: for each order of magnitude smaller the angle is, the relative error shrinks by four orders of magnitude. Figure 3 shows the relative errors of the small angle approximations. 
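In place of the plot, a few lines of Python (a sketch; the sample angles are arbitrary) reproduce this scaling:

```python
import math

# Relative errors of the small-angle approximations at a few angles (radians).
for theta in (0.5, 0.05, 0.005):
    err_sin = abs(theta - math.sin(theta)) / math.sin(theta)
    err_tan = abs(theta - math.tan(theta)) / math.tan(theta)
    err_cos1 = abs(1 - math.cos(theta)) / math.cos(theta)
    err_cos2 = abs((1 - theta**2 / 2) - math.cos(theta)) / math.cos(theta)
    print(f"theta={theta:7.3f}  sin:{err_sin:.2e}  tan:{err_tan:.2e}  "
          f"cos~1:{err_cos1:.2e}  cos~1-t^2/2:{err_cos2:.2e}")

# Each factor-of-10 decrease in theta shrinks the first three errors by about
# 100x (quadratic behaviour) and the last one by about 10000x (quartic).
```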
The angles at which the relative error exceeds 1% are as follows: ⁠ cos ⁡ θ ≈ 1 {\displaystyle \cos \theta \approx 1} ⁠ at about 0.14 radians (8.1°) ⁠ tan ⁡ θ ≈ θ {\displaystyle \tan \theta \approx \theta } ⁠ at about 0.17 radians (9.9°) ⁠ sin ⁡ θ ≈ θ {\displaystyle \sin \theta \approx \theta } ⁠ at about 0.24 radians (14.0°) ⁠ cos ⁡ θ ≈ 1 − 1 2 θ 2 {\displaystyle \textstyle \cos \theta \approx 1-{\tfrac {1}{2}}\theta ^{2}} ⁠ at about 0.66 radians (37.9°) == Slide-rule approximations == Many slide rules – especially "trig" and higher models – include an "ST" (sines and tangents) or "SRT" (sines, radians, and tangents) scale on the front or back of the slide, for computing with sines and tangents of angles smaller than about 0.1 radian. The right-hand end of the ST or SRT scale cannot be accurate to three decimal places for both arcsine(0.1) = 5.74 degrees and arctangent(0.1) = 5.71 degrees, so sines and tangents of angles near 5 degrees are given with somewhat worse than the usual expected "slide-rule accuracy". Some slide rules, such as the K&E Deci-Lon, are calibrated to be accurate for radian conversion, at 5.73 degrees (off by nearly 0.4% for the tangent and 0.2% for the sine for angles around 5 degrees). Others are calibrated to 5.725 degrees, to balance the sine and tangent errors at below 0.3%. == Angle sum and difference == The angle addition and subtraction theorems reduce to the following when one of the angles is small (β ≈ 0): sin ⁡ ( α ± β ) ≈ sin ⁡ α ± β cos ⁡ α , {\displaystyle \sin(\alpha \pm \beta )\approx \sin \alpha \pm \beta \cos \alpha ,} cos ⁡ ( α ± β ) ≈ cos ⁡ α ∓ β sin ⁡ α , {\displaystyle \cos(\alpha \pm \beta )\approx \cos \alpha \mp \beta \sin \alpha ,} tan ⁡ ( α ± β ) ≈ tan ⁡ α ± β 1 ∓ β tan ⁡ α . {\displaystyle \tan(\alpha \pm \beta )\approx {\frac {\tan \alpha \pm \beta }{1\mp \beta \tan \alpha }}.} == Specific uses == === Astronomy === In astronomy, the angular size or angle subtended by the image of a distant object is often only a few arcseconds (denoted by the symbol ″), so it is well suited to the small angle approximation. The linear size (D) is related to the angular size (X) and the distance from the observer (d) by the simple formula: D = X d 206 265 ″ {\displaystyle D=X{\frac {d}{206\,265{''}}}} where X is measured in arcseconds. The quantity 206265″ is approximately equal to the number of arcseconds in a circle (1296000″), divided by 2π, or equivalently, the number of arcseconds in one radian. The exact formula is D = d tan ⁡ ( X 2 π 1 296 000 ″ ) {\displaystyle D=d\tan \left(X{\frac {2\pi }{1\,296\,000{''}}}\right)} and the above approximation follows when tan X is replaced by X. For example, the parsec is defined as the value of d when D = 1 AU and X = 1 arcsecond; the definition uses the small-angle approximation (the first equation above) rather than the exact formula. === Motion of a pendulum === The second-order cosine approximation is especially useful in calculating the potential energy of a pendulum, which can then be applied with a Lagrangian to find the indirect (energy) equation of motion. When calculating the period of a simple pendulum, the small-angle approximation for sine is used to allow the resulting differential equation to be solved easily by comparison with the differential equation describing simple harmonic motion. === Optics === In optics, the small-angle approximations form the basis of the paraxial approximation.
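Returning to the astronomy formula above, the following sketch (the lunar distance and angular diameter are rough illustrative values, not taken from the article) compares the approximate and exact expressions and shows where the approximation starts to degrade:

```python
import math

ARCSEC_PER_RADIAN = 1296000 / (2 * math.pi)    # ~206265, as quoted above

def linear_size_approx(X_arcsec, d):
    """Small-angle formula D = X * d / 206265''."""
    return X_arcsec * d / ARCSEC_PER_RADIAN

def linear_size_exact(X_arcsec, d):
    """Exact formula D = d * tan(X * 2*pi / 1296000'')."""
    return d * math.tan(X_arcsec * 2 * math.pi / 1296000)

d_moon_km = 384400.0                           # rough Earth-Moon distance
for X in (1.0, 1865.0, 36000.0):               # 1 arcsec, ~0.5 deg, 10 deg
    approx = linear_size_approx(X, d_moon_km)
    exact = linear_size_exact(X, d_moon_km)
    print(f"X = {X:8.1f}''  approx = {approx:12.4f} km  "
          f"relative error = {abs(approx - exact) / exact:.2e}")
```

At one arcsecond the two expressions are indistinguishable in practice; at ten degrees the small-angle formula is already off by about one percent, consistent with the error thresholds listed above.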
=== Wave interference === The sine and tangent small-angle approximations are used in relation to the double-slit experiment or a diffraction grating to develop simplified equations like the following, where y is the distance of a fringe from the center of maximum light intensity, m is the order of the fringe, λ is the wavelength of the light, D is the distance between the slits and projection screen, and d is the distance between the slits: y ≈ m λ D d {\displaystyle y\approx {\frac {m\lambda D}{d}}} === Structural mechanics === The small-angle approximation also appears in structural mechanics, especially in stability and bifurcation analyses (mainly of axially-loaded columns ready to undergo buckling). This leads to significant simplifications, though at a cost in accuracy and insight into the true behavior. === Piloting === The 1 in 60 rule used in air navigation has its basis in the small-angle approximation, plus the fact that one radian is approximately 60 degrees. === Interpolation === The formulas for addition and subtraction involving a small angle may be used for interpolating between trigonometric table values: Example: sin(0.755) sin ⁡ ( 0.755 ) = sin ⁡ ( 0.75 + 0.005 ) ≈ sin ⁡ ( 0.75 ) + ( 0.005 ) cos ⁡ ( 0.75 ) ≈ ( 0.6816 ) + ( 0.005 ) ( 0.7317 ) ≈ 0.6853. {\displaystyle {\begin{aligned}\sin(0.755)&=\sin(0.75+0.005)\\&\approx \sin(0.75)+(0.005)\cos(0.75)\\&\approx (0.6816)+(0.005)(0.7317)\\&\approx 0.6853.\end{aligned}}} where the values for sin(0.75) and cos(0.75) are obtained from a trigonometric table. The result is accurate to the four digits given. == See also == Skinny triangle Versine Exsecant == References ==
Wikipedia/Small_angle_approximation
In physics, dynamics or classical dynamics is the study of forces and their effect on motion. It is a branch of classical mechanics, along with statics and kinematics. The fundamental principle of dynamics is linked to Newton's second law. == Subdivisions == === Rigid bodies === === Fluids === == Applications == Classical dynamics finds many applications: Aerodynamics, the study of the motion of air Brownian dynamics, the occurrence of Langevin dynamics in the motion of particles in solution File dynamics, stochastic motion of particles in a channel Flight dynamics, the science of aircraft and spacecraft design Molecular dynamics, the study of motion on the molecular level Langevin dynamics, a mathematical model for stochastic dynamics Orbital dynamics, the study of the motion of rockets and spacecraft Stellar dynamics, a description of the collective motion of stars Vehicle dynamics, the study of vehicles in motion == Generalizations == Non-classical dynamics include: System dynamics, the study of the behavior of complex systems Quantum dynamics analogue of classical dynamics in a quantum physics context Quantum chromodynamics, a theory of the strong interaction (color force) Quantum electrodynamics, a description of how matter and light interact Relativistic dynamics, a combination of relativistic and quantum concepts Thermodynamics, the study of the relationships between heat and mechanical energy == See also == Analytical dynamics Ballistics Contact dynamics Dynamical simulation Kinetics (physics) Multibody dynamics n-body problem == References ==
Wikipedia/Classical_dynamics
In mathematical analysis a pseudo-differential operator is an extension of the concept of differential operator. Pseudo-differential operators are used extensively in the theory of partial differential equations and quantum field theory, e.g. in mathematical models that include ultrametric pseudo-differential equations in a non-Archimedean space. == History == The study of pseudo-differential operators began in the mid-1960s with the work of Kohn, Nirenberg, Hörmander, Unterberger and Bokobza. They played an influential role in the second proof of the Atiyah–Singer index theorem via K-theory. Atiyah and Singer thanked Hörmander for assistance with understanding the theory of pseudo-differential operators. == Motivation == === Linear differential operators with constant coefficients === Consider a linear differential operator with constant coefficients, P ( D ) := ∑ α a α D α {\displaystyle P(D):=\sum _{\alpha }a_{\alpha }\,D^{\alpha }} which acts on smooth functions u {\displaystyle u} with compact support in Rn. This operator can be written as a composition of a Fourier transform, a simple multiplication by the polynomial function (called the symbol) P ( ξ ) = ∑ α a α ξ α , {\displaystyle P(\xi )=\sum _{\alpha }a_{\alpha }\,\xi ^{\alpha },} and an inverse Fourier transform, in the form: P ( D ) u ( x ) = 1 ( 2 π ) n ∬ e i ( x − y ) ξ P ( ξ ) u ( y ) d y d ξ ( 1 ) {\displaystyle P(D)u(x)={\frac {1}{(2\pi )^{n}}}\iint e^{i(x-y)\xi }P(\xi )u(y)\,dy\,d\xi \qquad (1)} Here, α = ( α 1 , … , α n ) {\displaystyle \alpha =(\alpha _{1},\ldots ,\alpha _{n})} is a multi-index, a α {\displaystyle a_{\alpha }} are complex numbers, and D α = ( − i ∂ 1 ) α 1 ⋯ ( − i ∂ n ) α n {\displaystyle D^{\alpha }=(-i\partial _{1})^{\alpha _{1}}\cdots (-i\partial _{n})^{\alpha _{n}}} is an iterated partial derivative, where ∂j means differentiation with respect to the j-th variable. We introduce the constants − i {\displaystyle -i} to facilitate the calculation of Fourier transforms. Derivation of formula (1) The Fourier transform of a smooth function u, compactly supported in Rn, is u ^ ( ξ ) := ∫ e − i y ξ u ( y ) d y {\displaystyle {\hat {u}}(\xi ):=\int e^{-iy\xi }u(y)\,dy} and Fourier's inversion formula gives u ( x ) = 1 ( 2 π ) n ∫ e i x ξ u ^ ( ξ ) d ξ = 1 ( 2 π ) n ∬ e i ( x − y ) ξ u ( y ) d y d ξ {\displaystyle u(x)={\frac {1}{(2\pi )^{n}}}\int e^{ix\xi }{\hat {u}}(\xi )d\xi ={\frac {1}{(2\pi )^{n}}}\iint e^{i(x-y)\xi }u(y)\,dy\,d\xi } By applying P(D) to this representation of u and using P ( D x ) e i ( x − y ) ξ = e i ( x − y ) ξ P ( ξ ) {\displaystyle P(D_{x})\,e^{i(x-y)\xi }=e^{i(x-y)\xi }\,P(\xi )} one obtains formula (1). === Representation of solutions to partial differential equations === To solve the partial differential equation P ( D ) u = f {\displaystyle P(D)\,u=f} we (formally) apply the Fourier transform on both sides and obtain the algebraic equation P ( ξ ) u ^ ( ξ ) = f ^ ( ξ ) . {\displaystyle P(\xi )\,{\hat {u}}(\xi )={\hat {f}}(\xi ).} If the symbol P(ξ) is never zero when ξ ∈ Rn, then it is possible to divide by P(ξ): u ^ ( ξ ) = 1 P ( ξ ) f ^ ( ξ ) {\displaystyle {\hat {u}}(\xi )={\frac {1}{P(\xi )}}{\hat {f}}(\xi )} By Fourier's inversion formula, a solution is u ( x ) = 1 ( 2 π ) n ∫ e i x ξ 1 P ( ξ ) f ^ ( ξ ) d ξ . {\displaystyle u(x)={\frac {1}{(2\pi )^{n}}}\int e^{ix\xi }{\frac {1}{P(\xi )}}{\hat {f}}(\xi )\,d\xi .} Here it is assumed that: P(D) is a linear differential operator with constant coefficients, its symbol P(ξ) is never zero, both u and ƒ have a well-defined Fourier transform. The last assumption can be weakened by using the theory of distributions. The first two assumptions can be weakened as follows.
In the last formula, write out the Fourier transform of ƒ to obtain u ( x ) = 1 ( 2 π ) n ∬ e i ( x − y ) ξ 1 P ( ξ ) f ( y ) d y d ξ . {\displaystyle u(x)={\frac {1}{(2\pi )^{n}}}\iint e^{i(x-y)\xi }{\frac {1}{P(\xi )}}f(y)\,dy\,d\xi .} This is similar to formula (1), except that 1/P(ξ) is not a polynomial function, but a function of a more general kind. == Definition of pseudo-differential operators == Here we view pseudo-differential operators as a generalization of differential operators. We extend formula (1) as follows. A pseudo-differential operator P(x,D) on Rn is an operator whose value on the function u(x) is the function of x: P ( x , D ) u ( x ) = 1 ( 2 π ) n ∫ R n e i x ξ P ( x , ξ ) u ^ ( ξ ) d ξ , {\displaystyle P(x,D)u(x)={\frac {1}{(2\pi )^{n}}}\int _{\mathbb {R} ^{n}}e^{ix\xi }P(x,\xi ){\hat {u}}(\xi )\,d\xi ,} where u ^ ( ξ ) {\displaystyle {\hat {u}}(\xi )} is the Fourier transform of u and the symbol P(x,ξ) in the integrand belongs to a certain symbol class. For instance, if P(x,ξ) is an infinitely differentiable function on Rn × Rn with the property | ∂ ξ α ∂ x β P ( x , ξ ) | ≤ C α , β ( 1 + | ξ | ) m − | α | {\displaystyle |\partial _{\xi }^{\alpha }\partial _{x}^{\beta }P(x,\xi )|\leq C_{\alpha ,\beta }\,(1+|\xi |)^{m-|\alpha |}} for all x, ξ ∈ Rn, all multi-indices α,β, some constants Cα, β and some real number m, then P belongs to the symbol class S 1 , 0 m {\displaystyle \scriptstyle {S_{1,0}^{m}}} of Hörmander. The corresponding operator P(x,D) is called a pseudo-differential operator of order m and belongs to the class Ψ 1 , 0 m . {\displaystyle \Psi _{1,0}^{m}.} == Properties == Linear differential operators of order m with smooth bounded coefficients are pseudo-differential operators of order m. The composition PQ of two pseudo-differential operators P, Q is again a pseudo-differential operator and the symbol of PQ can be calculated by using the symbols of P and Q. The adjoint and transpose of a pseudo-differential operator are again pseudo-differential operators. If a differential operator of order m is (uniformly) elliptic (of order m) and invertible, then its inverse is a pseudo-differential operator of order −m, and its symbol can be calculated. This means that one can solve linear elliptic differential equations more or less explicitly by using the theory of pseudo-differential operators. Differential operators are local in the sense that one only needs the value of a function in a neighbourhood of a point to determine the effect of the operator. Pseudo-differential operators are pseudo-local, which means informally that when applied to a distribution they do not create a singularity at points where the distribution was already smooth. Just as a differential operator can be expressed in terms of D = −id/dx in the form p ( x , D ) {\displaystyle p(x,D)\,} for a polynomial p in D (which is called the symbol), a pseudo-differential operator has a symbol in a more general class of functions. Often one can reduce a problem in analysis of pseudo-differential operators to a sequence of algebraic problems involving their symbols, and this is the essence of microlocal analysis. == Kernel of pseudo-differential operator == Pseudo-differential operators can be represented by kernels. The singularity of the kernel on the diagonal depends on the degree of the corresponding operator. In fact, if the symbol satisfies the above differential inequalities with m ≤ 0, it can be shown that the kernel is a singular integral kernel. == See also == Differential algebra for a definition of pseudo-differential operators in the context of differential algebras and differential rings.
Fourier transform Fourier integral operator Oscillatory integral operator Sato's fundamental theorem Operational calculus == Footnotes == == References == Stein, Elias (1993), Harmonic Analysis: Real-Variable Methods, Orthogonality and Oscillatory Integrals, Princeton University Press. Atiyah, Michael F.; Singer, Isadore M. (1968), "The Index of Elliptic Operators I", Annals of Mathematics, 87 (3): 484–530, doi:10.2307/1970715, JSTOR 1970715 == Further reading == Nicolas Lerner, Metrics on the phase space and non-selfadjoint pseudo-differential operators. Pseudo-Differential Operators. Theory and Applications, 3. Birkhäuser Verlag, Basel, 2010. Michael E. Taylor, Pseudodifferential Operators, Princeton Univ. Press 1981. ISBN 0-691-08282-0 M. A. Shubin, Pseudodifferential Operators and Spectral Theory, Springer-Verlag 2001. ISBN 3-540-41195-X Francois Treves, Introduction to Pseudo Differential and Fourier Integral Operators, (University Series in Mathematics), Plenum Publ. Co. 1981. ISBN 0-306-40404-4 F. G. Friedlander and M. Joshi, Introduction to the Theory of Distributions, Cambridge University Press 1999. ISBN 0-521-64971-4 Hörmander, Lars (1987). The Analysis of Linear Partial Differential Operators III: Pseudo-Differential Operators. Springer. ISBN 3-540-49937-7. André Unterberger, Pseudo-differential operators and applications: an introduction. Lecture Notes Series, 46. Aarhus Universitet, Matematisk Institut, Aarhus, 1976. == External links == Lectures on Pseudo-differential Operators by Mark S. Joshi on arxiv.org. "Pseudo-differential operator", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
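The Fourier-multiplier picture in the Motivation section above can be made concrete numerically. The sketch below is only an illustration: the operator I − Δ, the periodic grid, the test function, and the use of NumPy are choices made here, not part of the theory. It solves (I − Δ)u = f by taking a Fourier transform, dividing by the symbol P(ξ) = 1 + |ξ|² (which is never zero), and transforming back, then compares against a solution known in closed form.

```python
# Illustrative sketch (grid, operator and test function chosen here): solve
# (I - Δ)u = f on the 2-torus by Fourier transform, division by the symbol
# P(ξ) = 1 + |ξ|², and inverse Fourier transform.
import numpy as np

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

u_exact = np.sin(3 * X) * np.cos(2 * Y)           # chosen test solution
f = (1 + 3**2 + 2**2) * u_exact                   # f = (I - Δ) u_exact

k = np.fft.fftfreq(n, d=1.0 / n)                  # integer frequencies on the torus
KX, KY = np.meshgrid(k, k, indexing="ij")
symbol = 1.0 + KX**2 + KY**2                      # P(ξ) = 1 + |ξ|², never zero

u = np.fft.ifft2(np.fft.fft2(f) / symbol).real    # inverse FT ∘ (1/P) ∘ FT

print(np.max(np.abs(u - u_exact)))                # round-off level: recovers u_exact
```

Replacing the multiplier 1/P(ξ) by a symbol P(x, ξ) that also depends on x gives exactly the kind of operator defined above.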
Wikipedia/Pseudodifferential_operators
In mathematics, a dilation is a function f {\displaystyle f} from a metric space M {\displaystyle M} into itself that satisfies the identity d ( f ( x ) , f ( y ) ) = r d ( x , y ) {\displaystyle d(f(x),f(y))=rd(x,y)} for all points x , y ∈ M {\displaystyle x,y\in M} , where d ( x , y ) {\displaystyle d(x,y)} is the distance from x {\displaystyle x} to y {\displaystyle y} and r {\displaystyle r} is some positive real number. In Euclidean space, such a dilation is a similarity of the space. Dilations change the size but not the shape of an object or figure. Every dilation of a Euclidean space that is not a congruence has a unique fixed point that is called the center of dilation. Some congruences have fixed points and others do not. == See also == Homothety Dilation (operator theory) == References ==
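A quick computational check of the defining identity, with an arbitrarily chosen ratio and center (the values below are illustrative only, not taken from any source):

```python
# Illustrative check: f(x) = c + r(x - c) is a dilation of the Euclidean plane
# with ratio r; it scales every distance by r and fixes its center c.
import numpy as np

r, c = 2.5, np.array([1.0, -2.0])

def f(x):
    return c + r * (x - c)

rng = np.random.default_rng(0)
pts = rng.normal(size=(5, 2))
for x in pts:
    for y in pts:
        assert np.isclose(np.linalg.norm(f(x) - f(y)), r * np.linalg.norm(x - y))

print(f(c))   # [ 1. -2.]: the center is the fixed point of this dilation
```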
Wikipedia/Dilation_theory
In theoretical physics, quantum field theory (QFT) is a theoretical framework that combines field theory and the principle of relativity with ideas behind quantum mechanics.: xi  QFT is used in particle physics to construct physical models of subatomic particles and in condensed matter physics to construct models of quasiparticles. The current standard model of particle physics is based on QFT. == History == Quantum field theory emerged from the work of generations of theoretical physicists spanning much of the 20th century. Its development began in the 1920s with the description of interactions between light and electrons, culminating in the first quantum field theory—quantum electrodynamics. A major theoretical obstacle soon followed with the appearance and persistence of various infinities in perturbative calculations, a problem only resolved in the 1950s with the invention of the renormalization procedure. A second major barrier came with QFT's apparent inability to describe the weak and strong interactions, to the point where some theorists called for the abandonment of the field theoretic approach. The development of gauge theory and the completion of the Standard Model in the 1970s led to a renaissance of quantum field theory. === Theoretical background === Quantum field theory results from the combination of classical field theory, quantum mechanics, and special relativity.: xi  A brief overview of these theoretical precursors follows. The earliest successful classical field theory is one that emerged from Newton's law of universal gravitation, despite the complete absence of the concept of fields from his 1687 treatise Philosophiæ Naturalis Principia Mathematica. The force of gravity as described by Isaac Newton is an "action at a distance"—its effects on faraway objects are instantaneous, no matter the distance. In an exchange of letters with Richard Bentley, however, Newton stated that "it is inconceivable that inanimate brute matter should, without the mediation of something else which is not material, operate upon and affect other matter without mutual contact".: 4  It was not until the 18th century that mathematical physicists discovered a convenient description of gravity based on fields—a numerical quantity (a vector in the case of gravitational field) assigned to every point in space indicating the action of gravity on any particle at that point. However, this was considered merely a mathematical trick.: 18  Fields began to take on an existence of their own with the development of electromagnetism in the 19th century. Michael Faraday coined the English term "field" in 1845. He introduced fields as properties of space (even when it is devoid of matter) having physical effects. He argued against "action at a distance", and proposed that interactions between objects occur via space-filling "lines of force". This description of fields remains to this day.: 301 : 2  The theory of classical electromagnetism was completed in 1864 with Maxwell's equations, which described the relationship between the electric field, the magnetic field, electric current, and electric charge. Maxwell's equations implied the existence of electromagnetic waves, a phenomenon whereby electric and magnetic fields propagate from one spatial point to another at a finite speed, which turns out to be the speed of light. 
Action-at-a-distance was thus conclusively refuted.: 19  Despite the enormous success of classical electromagnetism, it was unable to account for the discrete lines in atomic spectra, nor for the distribution of blackbody radiation in different wavelengths. Max Planck's study of blackbody radiation marked the beginning of quantum mechanics. He treated atoms, which absorb and emit electromagnetic radiation, as tiny oscillators with the crucial property that their energies can only take on a series of discrete, rather than continuous, values. These are known as quantum harmonic oscillators. This process of restricting energies to discrete values is called quantization.: Ch.2  Building on this idea, Albert Einstein proposed in 1905 an explanation for the photoelectric effect, that light is composed of individual packets of energy called photons (the quanta of light). This implied that the electromagnetic radiation, while being waves in the classical electromagnetic field, also exists in the form of particles. In 1913, Niels Bohr introduced the Bohr model of atomic structure, wherein electrons within atoms can only take on a series of discrete, rather than continuous, energies. This is another example of quantization. The Bohr model successfully explained the discrete nature of atomic spectral lines. In 1924, Louis de Broglie proposed the hypothesis of wave–particle duality, that microscopic particles exhibit both wave-like and particle-like properties under different circumstances. Uniting these scattered ideas, a coherent discipline, quantum mechanics, was formulated between 1925 and 1926, with important contributions from Max Planck, Louis de Broglie, Werner Heisenberg, Max Born, Erwin Schrödinger, Paul Dirac, and Wolfgang Pauli.: 22–23  In the same year as his paper on the photoelectric effect, Einstein published his theory of special relativity, built on Maxwell's electromagnetism. New rules, called Lorentz transformations, were given for the way time and space coordinates of an event change under changes in the observer's velocity, and the distinction between time and space was blurred.: 19  It was proposed that all physical laws must be the same for observers at different velocities, i.e. that physical laws be invariant under Lorentz transformations. Two difficulties remained. Observationally, the Schrödinger equation underlying quantum mechanics could explain the stimulated emission of radiation from atoms, where an electron emits a new photon under the action of an external electromagnetic field, but it was unable to explain spontaneous emission, where an electron spontaneously decreases in energy and emits a photon even without the action of an external electromagnetic field. Theoretically, the Schrödinger equation could not describe photons and was inconsistent with the principles of special relativity—it treats time as an ordinary number while promoting spatial coordinates to linear operators. 
=== Quantum electrodynamics === Quantum field theory naturally began with the study of electromagnetic interactions, as the electromagnetic field was the only known classical field as of the 1920s.: 1  Through the works of Born, Heisenberg, and Pascual Jordan in 1925–1926, a quantum theory of the free electromagnetic field (one with no interactions with matter) was developed via canonical quantization by treating the electromagnetic field as a set of quantum harmonic oscillators.: 1  With the exclusion of interactions, however, such a theory was yet incapable of making quantitative predictions about the real world.: 22  In his seminal 1927 paper The quantum theory of the emission and absorption of radiation, Dirac coined the term quantum electrodynamics (QED), a theory that adds upon the terms describing the free electromagnetic field an additional interaction term between electric current density and the electromagnetic vector potential. Using first-order perturbation theory, he successfully explained the phenomenon of spontaneous emission. According to the uncertainty principle in quantum mechanics, quantum harmonic oscillators cannot remain stationary, but they have a non-zero minimum energy and must always be oscillating, even in the lowest energy state (the ground state). Therefore, even in a perfect vacuum, there remains an oscillating electromagnetic field having zero-point energy. It is this quantum fluctuation of electromagnetic fields in the vacuum that "stimulates" the spontaneous emission of radiation by electrons in atoms. Dirac's theory was hugely successful in explaining both the emission and absorption of radiation by atoms; by applying second-order perturbation theory, it was able to account for the scattering of photons, resonance fluorescence and non-relativistic Compton scattering. Nonetheless, the application of higher-order perturbation theory was plagued with problematic infinities in calculations.: 71  In 1928, Dirac wrote down a wave equation that described relativistic electrons: the Dirac equation. It had the following important consequences: the spin of an electron is 1/2; the electron g-factor is 2; it led to the correct Sommerfeld formula for the fine structure of the hydrogen atom; and it could be used to derive the Klein–Nishina formula for relativistic Compton scattering. Although the results were fruitful, the theory also apparently implied the existence of negative energy states, which would cause atoms to be unstable, since they could always decay to lower energy states by the emission of radiation.: 71–72  The prevailing view at the time was that the world was composed of two very different ingredients: material particles (such as electrons) and quantum fields (such as photons). Material particles were considered to be eternal, with their physical state described by the probabilities of finding each particle in any given region of space or range of velocities. On the other hand, photons were considered merely the excited states of the underlying quantized electromagnetic field, and could be freely created or destroyed. It was between 1928 and 1930 that Jordan, Eugene Wigner, Heisenberg, Pauli, and Enrico Fermi discovered that material particles could also be seen as excited states of quantum fields. Just as photons are excited states of the quantized electromagnetic field, so each type of particle had its corresponding quantum field: an electron field, a proton field, etc. Given enough energy, it would now be possible to create material particles. 
Building on this idea, Fermi proposed in 1932 an explanation for beta decay known as Fermi's interaction. Atomic nuclei do not contain electrons per se, but in the process of decay, an electron is created out of the surrounding electron field, analogous to the photon created from the surrounding electromagnetic field in the radiative decay of an excited atom.: 22–23  It was realized in 1929 by Dirac and others that negative energy states implied by the Dirac equation could be removed by assuming the existence of particles with the same mass as electrons but opposite electric charge. This not only ensured the stability of atoms, but it was also the first proposal of the existence of antimatter. Indeed, the evidence for positrons was discovered in 1932 by Carl David Anderson in cosmic rays. With enough energy, such as by absorbing a photon, an electron-positron pair could be created, a process called pair production; the reverse process, annihilation, could also occur with the emission of a photon. This showed that particle numbers need not be fixed during an interaction. Historically, however, positrons were at first thought of as "holes" in an infinite electron sea, rather than a new kind of particle, and this theory was referred to as the Dirac hole theory.: 72 : 23  QFT naturally incorporated antiparticles in its formalism.: 24  === Infinities and renormalization === Robert Oppenheimer showed in 1930 that higher-order perturbative calculations in QED always resulted in infinite quantities, such as the electron self-energy and the vacuum zero-point energy of the electron and photon fields, suggesting that the computational methods at the time could not properly deal with interactions involving photons with extremely high momenta.: 25  It was not until 20 years later that a systematic approach to remove such infinities was developed. A series of papers was published between 1934 and 1938 by Ernst Stueckelberg that established a relativistically invariant formulation of QFT. In 1947, Stueckelberg also independently developed a complete renormalization procedure. Such achievements were not understood and recognized by the theoretical community. Faced with these infinities, John Archibald Wheeler and Heisenberg proposed, in 1937 and 1943 respectively, to supplant the problematic QFT with the so-called S-matrix theory. Since the specific details of microscopic interactions are inaccessible to observations, the theory should only attempt to describe the relationships between a small number of observables (e.g. the energy of an atom) in an interaction, rather than be concerned with the microscopic minutiae of the interaction. In 1945, Richard Feynman and Wheeler daringly suggested abandoning QFT altogether and proposed action-at-a-distance as the mechanism of particle interactions.: 26  In 1947, Willis Lamb and Robert Retherford measured the minute difference in the 2S1/2 and 2P1/2 energy levels of the hydrogen atom, also called the Lamb shift. By ignoring the contribution of photons whose energy exceeds the electron mass, Hans Bethe successfully estimated the numerical value of the Lamb shift.: 28  Subsequently, Norman Myles Kroll, Lamb, James Bruce French, and Victor Weisskopf again confirmed this value using an approach in which infinities cancelled other infinities to result in finite quantities. However, this method was clumsy and unreliable and could not be generalized to other calculations. 
The breakthrough eventually came around 1950 when a more robust method for eliminating infinities was developed by Julian Schwinger, Richard Feynman, Freeman Dyson, and Shinichiro Tomonaga. The main idea is to replace the calculated values of mass and charge, infinite though they may be, by their finite measured values. This systematic computational procedure is known as renormalization and can be applied to arbitrary order in perturbation theory. As Tomonaga said in his Nobel lecture:Since those parts of the modified mass and charge due to field reactions [become infinite], it is impossible to calculate them by the theory. However, the mass and charge observed in experiments are not the original mass and charge but the mass and charge as modified by field reactions, and they are finite. On the other hand, the mass and charge appearing in the theory are… the values modified by field reactions. Since this is so, and particularly since the theory is unable to calculate the modified mass and charge, we may adopt the procedure of substituting experimental values for them phenomenologically... This procedure is called the renormalization of mass and charge… After long, laborious calculations, less skillful than Schwinger's, we obtained a result... which was in agreement with [the] Americans'. By applying the renormalization procedure, calculations were finally made to explain the electron's anomalous magnetic moment (the deviation of the electron g-factor from 2) and vacuum polarization. These results agreed with experimental measurements to a remarkable degree, thus marking the end of a "war against infinities". At the same time, Feynman introduced the path integral formulation of quantum mechanics and Feynman diagrams.: 2  The latter can be used to visually and intuitively organize and to help compute terms in the perturbative expansion. Each diagram can be interpreted as paths of particles in an interaction, with each vertex and line having a corresponding mathematical expression, and the product of these expressions gives the scattering amplitude of the interaction represented by the diagram.: 5  It was with the invention of the renormalization procedure and Feynman diagrams that QFT finally arose as a complete theoretical framework.: 2  === Non-renormalizability === Given the tremendous success of QED, many theorists believed, in the few years after 1949, that QFT could soon provide an understanding of all microscopic phenomena, not only the interactions between photons, electrons, and positrons. Contrary to this optimism, QFT entered yet another period of depression that lasted for almost two decades.: 30  The first obstacle was the limited applicability of the renormalization procedure. In perturbative calculations in QED, all infinite quantities could be eliminated by redefining a small (finite) number of physical quantities (namely the mass and charge of the electron). Dyson proved in 1949 that this is only possible for a small class of theories called "renormalizable theories", of which QED is an example. However, most theories, including the Fermi theory of the weak interaction, are "non-renormalizable". Any perturbative calculation in these theories beyond the first order would result in infinities that could not be removed by redefining a finite number of physical quantities.: 30  The second major problem stemmed from the limited validity of the Feynman diagram method, which is based on a series expansion in perturbation theory. 
In order for the series to converge and low-order calculations to be a good approximation, the coupling constant, in which the series is expanded, must be a sufficiently small number. The coupling constant in QED is the fine-structure constant α ≈ 1/137, which is small enough that only the simplest, lowest order, Feynman diagrams need to be considered in realistic calculations. In contrast, the coupling constant in the strong interaction is roughly of the order of one, making complicated, higher order, Feynman diagrams just as important as simple ones. There was thus no way of deriving reliable quantitative predictions for the strong interaction using perturbative QFT methods.: 31  With these difficulties looming, many theorists began to turn away from QFT. Some focused on symmetry principles and conservation laws, while others picked up the old S-matrix theory of Wheeler and Heisenberg. QFT was used heuristically as guiding principles, but not as a basis for quantitative calculations.: 31  === Source theory === Schwinger, however, took a different route. For more than a decade he and his students had been nearly the only exponents of field theory,: 454  but in 1951 he found a way around the problem of the infinities with a new method using external sources as currents coupled to gauge fields. Motivated by the former findings, Schwinger kept pursuing this approach in order to "quantumly" generalize the classical process of coupling external forces to the configuration space parameters known as Lagrange multipliers. He summarized his source theory in 1966 then expanded the theory's applications to quantum electrodynamics in his three volume-set titled: Particles, Sources, and Fields. Developments in pion physics, in which the new viewpoint was most successfully applied, convinced him of the great advantages of mathematical simplicity and conceptual clarity that its use bestowed. In source theory there are no divergences, and no renormalization. It may be regarded as the calculational tool of field theory, but it is more general. Using source theory, Schwinger was able to calculate the anomalous magnetic moment of the electron, which he had done in 1947, but this time with no ‘distracting remarks’ about infinite quantities.: 467  Schwinger also applied source theory to his QFT theory of gravity, and was able to reproduce all four of Einstein's classic results: gravitational red shift, deflection and slowing of light by gravity, and the perihelion precession of Mercury. The neglect of source theory by the physics community was a major disappointment for Schwinger:The lack of appreciation of these facts by others was depressing, but understandable. -J. SchwingerSee "the shoes incident" between J. Schwinger and S. Weinberg. === Standard model === In 1954, Yang Chen-Ning and Robert Mills generalized the local symmetry of QED, leading to non-Abelian gauge theories (also known as Yang–Mills theories), which are based on more complicated local symmetry groups.: 5  In QED, (electrically) charged particles interact via the exchange of photons, while in non-Abelian gauge theory, particles carrying a new type of "charge" interact via the exchange of massless gauge bosons. Unlike photons, these gauge bosons themselves carry charge.: 32  Sheldon Glashow developed a non-Abelian gauge theory that unified the electromagnetic and weak interactions in 1960. In 1964, Abdus Salam and John Clive Ward arrived at the same theory through a different path. This theory, nevertheless, was non-renormalizable. 
Peter Higgs, Robert Brout, François Englert, Gerald Guralnik, Carl Hagen, and Tom Kibble proposed in their famous Physical Review Letters papers that the gauge symmetry in Yang–Mills theories could be broken by a mechanism called spontaneous symmetry breaking, through which originally massless gauge bosons could acquire mass.: 5–6  By combining the earlier theory of Glashow, Salam, and Ward with the idea of spontaneous symmetry breaking, Steven Weinberg wrote down in 1967 a theory describing electroweak interactions between all leptons and the effects of the Higgs boson. His theory was at first mostly ignored,: 6  until it was brought back to light in 1971 by Gerard 't Hooft's proof that non-Abelian gauge theories are renormalizable. The electroweak theory of Weinberg and Salam was extended from leptons to quarks in 1970 by Glashow, John Iliopoulos, and Luciano Maiani, marking its completion. Harald Fritzsch, Murray Gell-Mann, and Heinrich Leutwyler discovered in 1971 that certain phenomena involving the strong interaction could also be explained by non-Abelian gauge theory. Quantum chromodynamics (QCD) was born. In 1973, David Gross, Frank Wilczek, and Hugh David Politzer showed that non-Abelian gauge theories are "asymptotically free", meaning that under renormalization, the coupling constant of the strong interaction decreases as the interaction energy increases. (Similar discoveries had been made numerous times previously, but they had been largely ignored.) : 11  Therefore, at least in high-energy interactions, the coupling constant in QCD becomes sufficiently small to warrant a perturbative series expansion, making quantitative predictions for the strong interaction possible.: 32  These theoretical breakthroughs brought about a renaissance in QFT. The full theory, which includes the electroweak theory and chromodynamics, is referred to today as the Standard Model of elementary particles. The Standard Model successfully describes all fundamental interactions except gravity, and its many predictions have been met with remarkable experimental confirmation in subsequent decades.: 3  The Higgs boson, central to the mechanism of spontaneous symmetry breaking, was finally detected in 2012 at CERN, marking the complete verification of the existence of all constituents of the Standard Model. === Other developments === The 1970s saw the development of non-perturbative methods in non-Abelian gauge theories. The 't Hooft–Polyakov monopole was discovered theoretically by 't Hooft and Alexander Polyakov, flux tubes by Holger Bech Nielsen and Poul Olesen, and instantons by Polyakov and coauthors. These objects are inaccessible through perturbation theory.: 4  Supersymmetry also appeared in the same period. The first supersymmetric QFT in four dimensions was built by Yuri Golfand and Evgeny Likhtman in 1970, but their result failed to garner widespread interest due to the Iron Curtain. Supersymmetry theories only took off in the theoretical community after the work of Julius Wess and Bruno Zumino in 1973,: 7  but to date have not been widely accepted as part of the Standard Model due to lack of experimental evidence. Among the four fundamental interactions, gravity remains the only one that lacks a consistent QFT description. Various attempts at a theory of quantum gravity led to the development of string theory,: 6  itself a type of two-dimensional QFT with conformal symmetry. Joël Scherk and John Schwarz first proposed in 1974 that string theory could be the quantum theory of gravity. 
=== Condensed-matter-physics === Although quantum field theory arose from the study of interactions between elementary particles, it has been successfully applied to other physical systems, particularly to many-body systems in condensed matter physics. Historically, the Higgs mechanism of spontaneous symmetry breaking was a result of Yoichiro Nambu's application of superconductor theory to elementary particles, while the concept of renormalization came out of the study of second-order phase transitions in matter. Soon after the introduction of photons, Einstein performed the quantization procedure on vibrations in a crystal, leading to the first quasiparticle—phonons. Lev Landau claimed that low-energy excitations in many condensed matter systems could be described in terms of interactions between a set of quasiparticles. The Feynman diagram method of QFT was naturally well suited to the analysis of various phenomena in condensed matter systems. Gauge theory is used to describe the quantization of magnetic flux in superconductors, the resistivity in the quantum Hall effect, as well as the relation between frequency and voltage in the AC Josephson effect. == Principles == For simplicity, natural units are used in the following sections, in which the reduced Planck constant ħ and the speed of light c are both set to one. === Classical fields === A classical field is a function of spatial and time coordinates. Examples include the gravitational field in Newtonian gravity g(x, t) and the electric field E(x, t) and magnetic field B(x, t) in classical electromagnetism. A classical field can be thought of as a numerical quantity assigned to every point in space that changes in time. Hence, it has infinitely many degrees of freedom. Many phenomena exhibiting quantum mechanical properties cannot be explained by classical fields alone. Phenomena such as the photoelectric effect are best explained by discrete particles (photons), rather than a spatially continuous field. The goal of quantum field theory is to describe various quantum mechanical phenomena using a modified concept of fields. Canonical quantization and path integrals are two common formulations of QFT.: 61  To motivate the fundamentals of QFT, an overview of classical field theory follows. The simplest classical field is a real scalar field — a real number at every point in space that changes in time. It is denoted as ϕ(x, t), where x is the position vector, and t is the time. Suppose the Lagrangian of the field, L {\displaystyle L} , is L = ∫ d 3 x L = ∫ d 3 x [ 1 2 ϕ ˙ 2 − 1 2 ( ∇ ϕ ) 2 − 1 2 m 2 ϕ 2 ] , {\displaystyle L=\int d^{3}x\,{\mathcal {L}}=\int d^{3}x\,\left[{\frac {1}{2}}{\dot {\phi }}^{2}-{\frac {1}{2}}(\nabla \phi )^{2}-{\frac {1}{2}}m^{2}\phi ^{2}\right],} where L {\displaystyle {\mathcal {L}}} is the Lagrangian density, ϕ ˙ {\displaystyle {\dot {\phi }}} is the time-derivative of the field, ∇ is the gradient operator, and m is a real parameter (the "mass" of the field). 
Applying the Euler–Lagrange equation on the Lagrangian:: 16  ∂ ∂ t [ ∂ L ∂ ( ∂ ϕ / ∂ t ) ] + ∑ i = 1 3 ∂ ∂ x i [ ∂ L ∂ ( ∂ ϕ / ∂ x i ) ] − ∂ L ∂ ϕ = 0 , {\displaystyle {\frac {\partial }{\partial t}}\left[{\frac {\partial {\mathcal {L}}}{\partial (\partial \phi /\partial t)}}\right]+\sum _{i=1}^{3}{\frac {\partial }{\partial x^{i}}}\left[{\frac {\partial {\mathcal {L}}}{\partial (\partial \phi /\partial x^{i})}}\right]-{\frac {\partial {\mathcal {L}}}{\partial \phi }}=0,} we obtain the equations of motion for the field, which describe the way it varies in time and space: ( ∂ 2 ∂ t 2 − ∇ 2 + m 2 ) ϕ = 0. {\displaystyle \left({\frac {\partial ^{2}}{\partial t^{2}}}-\nabla ^{2}+m^{2}\right)\phi =0.} This is known as the Klein–Gordon equation.: 17  The Klein–Gordon equation is a wave equation, so its solutions can be expressed as a sum of normal modes (obtained via Fourier transform) as follows: ϕ ( x , t ) = ∫ d 3 p ( 2 π ) 3 1 2 ω p ( a p e − i ω p t + i p ⋅ x + a p ∗ e i ω p t − i p ⋅ x ) , {\displaystyle \phi (\mathbf {x} ,t)=\int {\frac {d^{3}p}{(2\pi )^{3}}}{\frac {1}{\sqrt {2\omega _{\mathbf {p} }}}}\left(a_{\mathbf {p} }e^{-i\omega _{\mathbf {p} }t+i\mathbf {p} \cdot \mathbf {x} }+a_{\mathbf {p} }^{*}e^{i\omega _{\mathbf {p} }t-i\mathbf {p} \cdot \mathbf {x} }\right),} where a is a complex number (normalized by convention), * denotes complex conjugation, and ωp is the frequency of the normal mode: ω p = | p | 2 + m 2 . {\displaystyle \omega _{\mathbf {p} }={\sqrt {|\mathbf {p} |^{2}+m^{2}}}.} Thus each normal mode corresponding to a single p can be seen as a classical harmonic oscillator with frequency ωp.: 21,26  === Canonical quantization === The quantization procedure for the above classical field to a quantum operator field is analogous to the promotion of a classical harmonic oscillator to a quantum harmonic oscillator. The displacement of a classical harmonic oscillator is described by x ( t ) = 1 2 ω a e − i ω t + 1 2 ω a ∗ e i ω t , {\displaystyle x(t)={\frac {1}{\sqrt {2\omega }}}ae^{-i\omega t}+{\frac {1}{\sqrt {2\omega }}}a^{*}e^{i\omega t},} where a is a complex number (normalized by convention), and ω is the oscillator's frequency. Note that x is the displacement of a particle in simple harmonic motion from the equilibrium position, not to be confused with the spatial label x of a quantum field. For a quantum harmonic oscillator, x(t) is promoted to a linear operator x ^ ( t ) {\displaystyle {\hat {x}}(t)} : x ^ ( t ) = 1 2 ω a ^ e − i ω t + 1 2 ω a ^ † e i ω t . {\displaystyle {\hat {x}}(t)={\frac {1}{\sqrt {2\omega }}}{\hat {a}}e^{-i\omega t}+{\frac {1}{\sqrt {2\omega }}}{\hat {a}}^{\dagger }e^{i\omega t}.} Complex numbers a and a* are replaced by the annihilation operator a ^ {\displaystyle {\hat {a}}} and the creation operator a ^ † {\displaystyle {\hat {a}}^{\dagger }} , respectively, where † denotes Hermitian conjugation. The commutation relation between the two is [ a ^ , a ^ † ] = 1. {\displaystyle \left[{\hat {a}},{\hat {a}}^{\dagger }\right]=1.} The Hamiltonian of the simple harmonic oscillator can be written as H ^ = ℏ ω a ^ † a ^ + 1 2 ℏ ω . {\displaystyle {\hat {H}}=\hbar \omega {\hat {a}}^{\dagger }{\hat {a}}+{\frac {1}{2}}\hbar \omega .} The vacuum state | 0 ⟩ {\displaystyle |0\rangle } , which is the lowest energy state, is defined by a ^ | 0 ⟩ = 0 {\displaystyle {\hat {a}}|0\rangle =0} and has energy 1 2 ℏ ω . 
{\displaystyle {\frac {1}{2}}\hbar \omega .} One can easily check that [ H ^ , a ^ † ] = ℏ ω a ^ † , {\displaystyle [{\hat {H}},{\hat {a}}^{\dagger }]=\hbar \omega {\hat {a}}^{\dagger },} which implies that a ^ † {\displaystyle {\hat {a}}^{\dagger }} increases the energy of the simple harmonic oscillator by ℏ ω {\displaystyle \hbar \omega } . For example, the state a ^ † | 0 ⟩ {\displaystyle {\hat {a}}^{\dagger }|0\rangle } is an eigenstate of energy 3 ℏ ω / 2 {\displaystyle 3\hbar \omega /2} . Any energy eigenstate state of a single harmonic oscillator can be obtained from | 0 ⟩ {\displaystyle |0\rangle } by successively applying the creation operator a ^ † {\displaystyle {\hat {a}}^{\dagger }} :: 20  and any state of the system can be expressed as a linear combination of the states | n ⟩ ∝ ( a ^ † ) n | 0 ⟩ . {\displaystyle |n\rangle \propto \left({\hat {a}}^{\dagger }\right)^{n}|0\rangle .} A similar procedure can be applied to the real scalar field ϕ, by promoting it to a quantum field operator ϕ ^ {\displaystyle {\hat {\phi }}} , while the annihilation operator a ^ p {\displaystyle {\hat {a}}_{\mathbf {p} }} , the creation operator a ^ p † {\displaystyle {\hat {a}}_{\mathbf {p} }^{\dagger }} and the angular frequency ω p {\displaystyle \omega _{\mathbf {p} }} are now for a particular p: ϕ ^ ( x , t ) = ∫ d 3 p ( 2 π ) 3 1 2 ω p ( a ^ p e − i ω p t + i p ⋅ x + a ^ p † e i ω p t − i p ⋅ x ) . {\displaystyle {\hat {\phi }}(\mathbf {x} ,t)=\int {\frac {d^{3}p}{(2\pi )^{3}}}{\frac {1}{\sqrt {2\omega _{\mathbf {p} }}}}\left({\hat {a}}_{\mathbf {p} }e^{-i\omega _{\mathbf {p} }t+i\mathbf {p} \cdot \mathbf {x} }+{\hat {a}}_{\mathbf {p} }^{\dagger }e^{i\omega _{\mathbf {p} }t-i\mathbf {p} \cdot \mathbf {x} }\right).} Their commutation relations are:: 21  [ a ^ p , a ^ q † ] = ( 2 π ) 3 δ ( p − q ) , [ a ^ p , a ^ q ] = [ a ^ p † , a ^ q † ] = 0 , {\displaystyle \left[{\hat {a}}_{\mathbf {p} },{\hat {a}}_{\mathbf {q} }^{\dagger }\right]=(2\pi )^{3}\delta (\mathbf {p} -\mathbf {q} ),\quad \left[{\hat {a}}_{\mathbf {p} },{\hat {a}}_{\mathbf {q} }\right]=\left[{\hat {a}}_{\mathbf {p} }^{\dagger },{\hat {a}}_{\mathbf {q} }^{\dagger }\right]=0,} where δ is the Dirac delta function. The vacuum state | 0 ⟩ {\displaystyle |0\rangle } is defined by a ^ p | 0 ⟩ = 0 , for all p . {\displaystyle {\hat {a}}_{\mathbf {p} }|0\rangle =0,\quad {\text{for all }}\mathbf {p} .} Any quantum state of the field can be obtained from | 0 ⟩ {\displaystyle |0\rangle } by successively applying creation operators a ^ p † {\displaystyle {\hat {a}}_{\mathbf {p} }^{\dagger }} (or by a linear combination of such states), e.g. : 22  ( a ^ p 3 † ) 3 a ^ p 2 † ( a ^ p 1 † ) 2 | 0 ⟩ . {\displaystyle \left({\hat {a}}_{\mathbf {p} _{3}}^{\dagger }\right)^{3}{\hat {a}}_{\mathbf {p} _{2}}^{\dagger }\left({\hat {a}}_{\mathbf {p} _{1}}^{\dagger }\right)^{2}|0\rangle .} While the state space of a single quantum harmonic oscillator contains all the discrete energy states of one oscillating particle, the state space of a quantum field contains the discrete energy levels of an arbitrary number of particles. The latter space is known as a Fock space, which can account for the fact that particle numbers are not fixed in relativistic quantum systems. 
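The operator algebra above can be checked numerically in a truncated basis. The following sketch is a simplification made here: it keeps only the lowest N harmonic-oscillator levels and sets ħ = ω = 1, then verifies the commutator, the spectrum ω(n + 1/2) of the Hamiltonian, and that â†|0⟩ has energy 3ω/2.

```python
# Sketch in a truncated N-level basis (truncation and units ħ = ω = 1 are
# simplifications made here): check [a, a†] = 1, the spectrum of H, and the
# energy of a†|0>.
import numpy as np

N, omega = 12, 1.0
a = np.diag(np.sqrt(np.arange(1, N)), k=1)         # a|n> = sqrt(n)|n-1>
adag = a.conj().T                                  # creation operator

comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))  # True: [a, a†] = 1 below the cutoff

H = omega * (adag @ a + 0.5 * np.eye(N))           # H = ω(a†a + 1/2)
print(np.diag(H)[:4])                              # [0.5 1.5 2.5 3.5] = ω(n + 1/2)

vac = np.zeros(N); vac[0] = 1.0                    # vacuum state |0>
one = adag @ vac                                   # a†|0>
print(one @ (H @ one) / (one @ one))               # 1.5 = 3ω/2, as stated above
```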
The process of quantizing an arbitrary number of particles instead of a single particle is often also called second quantization.: 19  The foregoing procedure is a direct application of non-relativistic quantum mechanics and can be used to quantize (complex) scalar fields, Dirac fields,: 52  vector fields (e.g. the electromagnetic field), and even strings. However, creation and annihilation operators are only well defined in the simplest theories that contain no interactions (so-called free theory). In the case of the real scalar field, the existence of these operators was a consequence of the decomposition of solutions of the classical equations of motion into a sum of normal modes. To perform calculations on any realistic interacting theory, perturbation theory would be necessary. The Lagrangian of any quantum field in nature would contain interaction terms in addition to the free theory terms. For example, a quartic interaction term could be introduced to the Lagrangian of the real scalar field:: 77  L = 1 2 ( ∂ μ ϕ ) ( ∂ μ ϕ ) − 1 2 m 2 ϕ 2 − λ 4 ! ϕ 4 , {\displaystyle {\mathcal {L}}={\frac {1}{2}}(\partial _{\mu }\phi )\left(\partial ^{\mu }\phi \right)-{\frac {1}{2}}m^{2}\phi ^{2}-{\frac {\lambda }{4!}}\phi ^{4},} where μ is a spacetime index, ∂ 0 = ∂ / ∂ t , ∂ 1 = ∂ / ∂ x 1 {\displaystyle \partial _{0}=\partial /\partial t,\ \partial _{1}=\partial /\partial x^{1}} , etc. The summation over the index μ has been omitted following the Einstein notation. If the parameter λ is sufficiently small, then the interacting theory described by the above Lagrangian can be considered as a small perturbation from the free theory. === Path integrals === The path integral formulation of QFT is concerned with the direct computation of the scattering amplitude of a certain interaction process, rather than the establishment of operators and state spaces. To calculate the probability amplitude for a system to evolve from some initial state | ϕ I ⟩ {\displaystyle |\phi _{I}\rangle } at time t = 0 to some final state | ϕ F ⟩ {\displaystyle |\phi _{F}\rangle } at t = T, the total time T is divided into N small intervals. The overall amplitude is the product of the amplitude of evolution within each interval, integrated over all intermediate states. Let H be the Hamiltonian (i.e. generator of time evolution), then: 10  ⟨ ϕ F | e − i H T | ϕ I ⟩ = ∫ d ϕ 1 ∫ d ϕ 2 ⋯ ∫ d ϕ N − 1 ⟨ ϕ F | e − i H T / N | ϕ N − 1 ⟩ ⋯ ⟨ ϕ 2 | e − i H T / N | ϕ 1 ⟩ ⟨ ϕ 1 | e − i H T / N | ϕ I ⟩ . {\displaystyle \langle \phi _{F}|e^{-iHT}|\phi _{I}\rangle =\int d\phi _{1}\int d\phi _{2}\cdots \int d\phi _{N-1}\,\langle \phi _{F}|e^{-iHT/N}|\phi _{N-1}\rangle \cdots \langle \phi _{2}|e^{-iHT/N}|\phi _{1}\rangle \langle \phi _{1}|e^{-iHT/N}|\phi _{I}\rangle .} Taking the limit N → ∞, the above product of integrals becomes the Feynman path integral:: 282 : 12  ⟨ ϕ F | e − i H T | ϕ I ⟩ = ∫ D ϕ ( t ) exp ⁡ { i ∫ 0 T d t L } , {\displaystyle \langle \phi _{F}|e^{-iHT}|\phi _{I}\rangle =\int {\mathcal {D}}\phi (t)\,\exp \left\{i\int _{0}^{T}dt\,L\right\},} where L is the Lagrangian involving ϕ and its derivatives with respect to spatial and time coordinates, obtained from the Hamiltonian H via Legendre transformation. The initial and final conditions of the path integral are respectively ϕ ( 0 ) = ϕ I , ϕ ( T ) = ϕ F . 
{\displaystyle \phi (0)=\phi _{I},\quad \phi (T)=\phi _{F}.} In other words, the overall amplitude is the sum over the amplitude of every possible path between the initial and final states, where the amplitude of a path is given by the exponential in the integrand. === Two-point correlation function === In calculations, one often encounters expression like ⟨ 0 | T { ϕ ( x ) ϕ ( y ) } | 0 ⟩ or ⟨ Ω | T { ϕ ( x ) ϕ ( y ) } | Ω ⟩ {\displaystyle \langle 0|T\{\phi (x)\phi (y)\}|0\rangle \quad {\text{or}}\quad \langle \Omega |T\{\phi (x)\phi (y)\}|\Omega \rangle } in the free or interacting theory, respectively. Here, x {\displaystyle x} and y {\displaystyle y} are position four-vectors, T {\displaystyle T} is the time ordering operator that shuffles its operands so the time-components x 0 {\displaystyle x^{0}} and y 0 {\displaystyle y^{0}} increase from right to left, and | Ω ⟩ {\displaystyle |\Omega \rangle } is the ground state (vacuum state) of the interacting theory, different from the free ground state | 0 ⟩ {\displaystyle |0\rangle } . This expression represents the probability amplitude for the field to propagate from y to x, and goes by multiple names, like the two-point propagator, two-point correlation function, two-point Green's function or two-point function for short.: 82  The free two-point function, also known as the Feynman propagator, can be found for the real scalar field by either canonical quantization or path integrals to be: 31,288 : 23  ⟨ 0 | T { ϕ ( x ) ϕ ( y ) } | 0 ⟩ ≡ D F ( x − y ) = lim ϵ → 0 ∫ d 4 p ( 2 π ) 4 i p μ p μ − m 2 + i ϵ e − i p μ ( x μ − y μ ) . {\displaystyle \langle 0|T\{\phi (x)\phi (y)\}|0\rangle \equiv D_{F}(x-y)=\lim _{\epsilon \to 0}\int {\frac {d^{4}p}{(2\pi )^{4}}}{\frac {i}{p_{\mu }p^{\mu }-m^{2}+i\epsilon }}e^{-ip_{\mu }(x^{\mu }-y^{\mu })}.} In an interacting theory, where the Lagrangian or Hamiltonian contains terms L I ( t ) {\displaystyle L_{I}(t)} or H I ( t ) {\displaystyle H_{I}(t)} that describe interactions, the two-point function is more difficult to define. However, through both the canonical quantization formulation and the path integral formulation, it is possible to express it through an infinite perturbation series of the free two-point function. In canonical quantization, the two-point correlation function can be written as:: 87  ⟨ Ω | T { ϕ ( x ) ϕ ( y ) } | Ω ⟩ = lim T → ∞ ( 1 − i ϵ ) ⟨ 0 | T { ϕ I ( x ) ϕ I ( y ) exp ⁡ [ − i ∫ − T T d t H I ( t ) ] } | 0 ⟩ ⟨ 0 | T { exp ⁡ [ − i ∫ − T T d t H I ( t ) ] } | 0 ⟩ , {\displaystyle \langle \Omega |T\{\phi (x)\phi (y)\}|\Omega \rangle =\lim _{T\to \infty (1-i\epsilon )}{\frac {\left\langle 0\left|T\left\{\phi _{I}(x)\phi _{I}(y)\exp \left[-i\int _{-T}^{T}dt\,H_{I}(t)\right]\right\}\right|0\right\rangle }{\left\langle 0\left|T\left\{\exp \left[-i\int _{-T}^{T}dt\,H_{I}(t)\right]\right\}\right|0\right\rangle }},} where ε is an infinitesimal number and ϕI is the field operator under the free theory. Here, the exponential should be understood as its power series expansion. For example, in ϕ 4 {\displaystyle \phi ^{4}} -theory, the interacting term of the Hamiltonian is H I ( t ) = ∫ d 3 x λ 4 ! ϕ I ( x ) 4 {\textstyle H_{I}(t)=\int d^{3}x\,{\frac {\lambda }{4!}}\phi _{I}(x)^{4}} ,: 84  and the expansion of the two-point correlator in terms of λ {\displaystyle \lambda } becomes ⟨ Ω | T { ϕ ( x ) ϕ ( y ) } | Ω ⟩ = ∑ n = 0 ∞ ( − i λ ) n ( 4 ! ) n n ! ∫ d 4 z 1 ⋯ ∫ d 4 z n ⟨ 0 | T { ϕ I ( x ) ϕ I ( y ) ϕ I ( z 1 ) 4 ⋯ ϕ I ( z n ) 4 } | 0 ⟩ ∑ n = 0 ∞ ( − i λ ) n ( 4 ! ) n n ! 
∫ d 4 z 1 ⋯ ∫ d 4 z n ⟨ 0 | T { ϕ I ( z 1 ) 4 ⋯ ϕ I ( z n ) 4 } | 0 ⟩ . {\displaystyle \langle \Omega |T\{\phi (x)\phi (y)\}|\Omega \rangle ={\frac {\displaystyle \sum _{n=0}^{\infty }{\frac {(-i\lambda )^{n}}{(4!)^{n}n!}}\int d^{4}z_{1}\cdots \int d^{4}z_{n}\langle 0|T\{\phi _{I}(x)\phi _{I}(y)\phi _{I}(z_{1})^{4}\cdots \phi _{I}(z_{n})^{4}\}|0\rangle }{\displaystyle \sum _{n=0}^{\infty }{\frac {(-i\lambda )^{n}}{(4!)^{n}n!}}\int d^{4}z_{1}\cdots \int d^{4}z_{n}\langle 0|T\{\phi _{I}(z_{1})^{4}\cdots \phi _{I}(z_{n})^{4}\}|0\rangle }}.} This perturbation expansion expresses the interacting two-point function in terms of quantities ⟨ 0 | ⋯ | 0 ⟩ {\displaystyle \langle 0|\cdots |0\rangle } that are evaluated in the free theory. In the path integral formulation, the two-point correlation function can be written: 284  ⟨ Ω | T { ϕ ( x ) ϕ ( y ) } | Ω ⟩ = lim T → ∞ ( 1 − i ϵ ) ∫ D ϕ ϕ ( x ) ϕ ( y ) exp ⁡ [ i ∫ − T T d 4 z L ] ∫ D ϕ exp ⁡ [ i ∫ − T T d 4 z L ] , {\displaystyle \langle \Omega |T\{\phi (x)\phi (y)\}|\Omega \rangle =\lim _{T\to \infty (1-i\epsilon )}{\frac {\int {\mathcal {D}}\phi \,\phi (x)\phi (y)\exp \left[i\int _{-T}^{T}d^{4}z\,{\mathcal {L}}\right]}{\int {\mathcal {D}}\phi \,\exp \left[i\int _{-T}^{T}d^{4}z\,{\mathcal {L}}\right]}},} where L {\displaystyle {\mathcal {L}}} is the Lagrangian density. As in the previous paragraph, the exponential can be expanded as a series in λ, reducing the interacting two-point function to quantities in the free theory. Wick's theorem further reduce any n-point correlation function in the free theory to a sum of products of two-point correlation functions. For example, ⟨ 0 | T { ϕ ( x 1 ) ϕ ( x 2 ) ϕ ( x 3 ) ϕ ( x 4 ) } | 0 ⟩ = ⟨ 0 | T { ϕ ( x 1 ) ϕ ( x 2 ) } | 0 ⟩ ⟨ 0 | T { ϕ ( x 3 ) ϕ ( x 4 ) } | 0 ⟩ + ⟨ 0 | T { ϕ ( x 1 ) ϕ ( x 3 ) } | 0 ⟩ ⟨ 0 | T { ϕ ( x 2 ) ϕ ( x 4 ) } | 0 ⟩ + ⟨ 0 | T { ϕ ( x 1 ) ϕ ( x 4 ) } | 0 ⟩ ⟨ 0 | T { ϕ ( x 2 ) ϕ ( x 3 ) } | 0 ⟩ . {\displaystyle {\begin{aligned}\langle 0|T\{\phi (x_{1})\phi (x_{2})\phi (x_{3})\phi (x_{4})\}|0\rangle &=\langle 0|T\{\phi (x_{1})\phi (x_{2})\}|0\rangle \langle 0|T\{\phi (x_{3})\phi (x_{4})\}|0\rangle \\&+\langle 0|T\{\phi (x_{1})\phi (x_{3})\}|0\rangle \langle 0|T\{\phi (x_{2})\phi (x_{4})\}|0\rangle \\&+\langle 0|T\{\phi (x_{1})\phi (x_{4})\}|0\rangle \langle 0|T\{\phi (x_{2})\phi (x_{3})\}|0\rangle .\end{aligned}}} Since interacting correlation functions can be expressed in terms of free correlation functions, only the latter need to be evaluated in order to calculate all physical quantities in the (perturbative) interacting theory.: 90  This makes the Feynman propagator one of the most important quantities in quantum field theory. === Feynman diagram === Correlation functions in the interacting theory can be written as a perturbation series. Each term in the series is a product of Feynman propagators in the free theory and can be represented visually by a Feynman diagram. For example, the λ1 term in the two-point correlation function in the ϕ4 theory is − i λ 4 ! ∫ d 4 z ⟨ 0 | T { ϕ ( x ) ϕ ( y ) ϕ ( z ) ϕ ( z ) ϕ ( z ) ϕ ( z ) } | 0 ⟩ . {\displaystyle {\frac {-i\lambda }{4!}}\int d^{4}z\,\langle 0|T\{\phi (x)\phi (y)\phi (z)\phi (z)\phi (z)\phi (z)\}|0\rangle .} After applying Wick's theorem, one of the terms is 12 ⋅ − i λ 4 ! ∫ d 4 z D F ( x − z ) D F ( y − z ) D F ( z − z ) . {\displaystyle 12\cdot {\frac {-i\lambda }{4!}}\int d^{4}z\,D_{F}(x-z)D_{F}(y-z)D_{F}(z-z).} This term can instead be obtained from the Feynman diagram . 
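The numerical factor 12 in this term can be confirmed by brute-force combinatorics before turning to the diagram itself. The short script below is an illustration added here, not a step of the derivation: it enumerates all complete pairings of the six fields φ(x)φ(y)φ(z)φ(z)φ(z)φ(z) and counts those that contract into the pattern D_F(x−z) D_F(y−z) D_F(z−z); it also confirms that there are 15 = 5!! pairings in total, as Wick's theorem predicts for six fields.

```python
# Combinatorial check (illustration added here): count Wick contractions of
# φ(x) φ(y) φ(z) φ(z) φ(z) φ(z) that produce D_F(x-z) D_F(y-z) D_F(z-z).
labels = ["x", "y", "z", "z", "z", "z"]

def pairings(items):
    """Yield every complete pairing (perfect matching) of a list of indices."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i in range(len(rest)):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + tail

target = sorted([("x", "z"), ("y", "z"), ("z", "z")])
total = matches = 0
for matching in pairings(list(range(6))):
    total += 1
    pattern = sorted(tuple(sorted((labels[i], labels[j]))) for i, j in matching)
    if pattern == target:
        matches += 1

print(total, matches)   # 15 12: fifteen pairings in all, twelve give the term above
```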
The diagram consists of external vertices connected with one edge and represented by dots (here labeled x {\displaystyle x} and y {\displaystyle y} ). internal vertices connected with four edges and represented by dots (here labeled z {\displaystyle z} ). edges connecting the vertices and represented by lines. Every vertex corresponds to a single ϕ {\displaystyle \phi } field factor at the corresponding point in spacetime, while the edges correspond to the propagators between the spacetime points. The term in the perturbation series corresponding to the diagram is obtained by writing down the expression that follows from the so-called Feynman rules: For every internal vertex z i {\displaystyle z_{i}} , write down a factor − i λ ∫ d 4 z i {\textstyle -i\lambda \int d^{4}z_{i}} . For every edge that connects two vertices z i {\displaystyle z_{i}} and z j {\displaystyle z_{j}} , write down a factor D F ( z i − z j ) {\displaystyle D_{F}(z_{i}-z_{j})} . Divide by the symmetry factor of the diagram. With the symmetry factor 2 {\displaystyle 2} , following these rules yields exactly the expression above. By Fourier transforming the propagator, the Feynman rules can be reformulated from position space into momentum space.: 91–94  In order to compute the n-point correlation function to the k-th order, list all valid Feynman diagrams with n external points and k or fewer vertices, and then use Feynman rules to obtain the expression for each term. To be precise, ⟨ Ω | T { ϕ ( x 1 ) ⋯ ϕ ( x n ) } | Ω ⟩ {\displaystyle \langle \Omega |T\{\phi (x_{1})\cdots \phi (x_{n})\}|\Omega \rangle } is equal to the sum of (expressions corresponding to) all connected diagrams with n external points. (Connected diagrams are those in which every vertex is connected to an external point through lines. Components that are totally disconnected from external lines are sometimes called "vacuum bubbles".) In the ϕ4 interaction theory discussed above, every vertex must have four legs.: 98  In realistic applications, the scattering amplitude of a certain interaction or the decay rate of a particle can be computed from the S-matrix, which itself can be found using the Feynman diagram method.: 102–115  Feynman diagrams devoid of "loops" are called tree-level diagrams, which describe the lowest-order interaction processes; those containing n loops are referred to as n-loop diagrams, which describe higher-order contributions, or radiative corrections, to the interaction.: 44  Lines whose end points are vertices can be thought of as the propagation of virtual particles.: 31  === Renormalization === Feynman rules can be used to directly evaluate tree-level diagrams. However, naïve computation of loop diagrams such as the one shown above will result in divergent momentum integrals, which seems to imply that almost all terms in the perturbative expansion are infinite. The renormalisation procedure is a systematic process for removing such infinities. Parameters appearing in the Lagrangian, such as the mass m and the coupling constant λ, have no physical meaning — m, λ, and the field strength ϕ are not experimentally measurable quantities and are referred to here as the bare mass, bare coupling constant, and bare field, respectively. The physical mass and coupling constant are measured in some interaction process and are generally different from the bare quantities. 
While computing physical quantities from this interaction process, one may limit the domain of divergent momentum integrals to be below some momentum cut-off Λ, obtain expressions for the physical quantities, and then take the limit Λ → ∞. This is an example of regularization, a class of methods to treat divergences in QFT, with Λ being the regulator. The approach illustrated above is called bare perturbation theory, as calculations involve only the bare quantities such as mass and coupling constant. A different approach, called renormalized perturbation theory, is to use physically meaningful quantities from the very beginning. In the case of ϕ4 theory, the field strength is first redefined: ϕ = Z 1 / 2 ϕ r , {\displaystyle \phi =Z^{1/2}\phi _{r},} where ϕ is the bare field, ϕr is the renormalized field, and Z is a constant to be determined. The Lagrangian density becomes: L = 1 2 ( ∂ μ ϕ r ) ( ∂ μ ϕ r ) − 1 2 m r 2 ϕ r 2 − λ r 4 ! ϕ r 4 + 1 2 δ Z ( ∂ μ ϕ r ) ( ∂ μ ϕ r ) − 1 2 δ m ϕ r 2 − δ λ 4 ! ϕ r 4 , {\displaystyle {\mathcal {L}}={\frac {1}{2}}(\partial _{\mu }\phi _{r})(\partial ^{\mu }\phi _{r})-{\frac {1}{2}}m_{r}^{2}\phi _{r}^{2}-{\frac {\lambda _{r}}{4!}}\phi _{r}^{4}+{\frac {1}{2}}\delta _{Z}(\partial _{\mu }\phi _{r})(\partial ^{\mu }\phi _{r})-{\frac {1}{2}}\delta _{m}\phi _{r}^{2}-{\frac {\delta _{\lambda }}{4!}}\phi _{r}^{4},} where mr and λr are the experimentally measurable, renormalized, mass and coupling constant, respectively, and δ Z = Z − 1 , δ m = m 2 Z − m r 2 , δ λ = λ Z 2 − λ r {\displaystyle \delta _{Z}=Z-1,\quad \delta _{m}=m^{2}Z-m_{r}^{2},\quad \delta _{\lambda }=\lambda Z^{2}-\lambda _{r}} are constants to be determined. The first three terms are the ϕ4 Lagrangian density written in terms of the renormalized quantities, while the latter three terms are referred to as "counterterms". As the Lagrangian now contains more terms, so the Feynman diagrams should include additional elements, each with their own Feynman rules. The procedure is outlined as follows. First select a regularization scheme (such as the cut-off regularization introduced above or dimensional regularization); call the regulator Λ. Compute Feynman diagrams, in which divergent terms will depend on Λ. Then, define δZ, δm, and δλ such that Feynman diagrams for the counterterms will exactly cancel the divergent terms in the normal Feynman diagrams when the limit Λ → ∞ is taken. In this way, meaningful finite quantities are obtained.: 323–326  It is only possible to eliminate all infinities to obtain a finite result in renormalizable theories, whereas in non-renormalizable theories infinities cannot be removed by the redefinition of a small number of parameters. 
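How a momentum cut-off regulates a divergence can be illustrated with a simple Euclidean integral; the particular integral and mass values below are chosen here for illustration and are not taken from the text. Each regulated value grows like log Λ, yet the difference between two masses approaches a finite limit as Λ → ∞, the kind of finite combination that renormalization isolates.

```python
# Illustration (integral and masses chosen here): cutoff regularization of
# I(m, Λ) = ∫_{|k|<Λ} d⁴k/(2π)⁴ 1/(k² + m²)² in Euclidean momentum space.
# The angular integral gives 2π²; the radial integral has an elementary
# antiderivative, used in closed form below.
import numpy as np

def regulated(m, cutoff):
    radial = 0.5 * (np.log((cutoff**2 + m**2) / m**2) - cutoff**2 / (cutoff**2 + m**2))
    return 2 * np.pi**2 / (2 * np.pi)**4 * radial

m1, m2 = 1.0, 2.0
for cutoff in (1e2, 1e4, 1e6):
    a, b = regulated(m1, cutoff), regulated(m2, cutoff)
    print(f"Λ = {cutoff:.0e}:  I(m1) = {a:.5f},  I(m2) = {b:.5f},  I(m1) - I(m2) = {a - b:.6f}")
# Both values diverge logarithmically with Λ, but their difference tends to
# ln(m2²/m1²)/(16π²) ≈ 0.00878.
```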
The Standard Model of elementary particles is a renormalizable QFT,: 719–727  while quantum gravity is non-renormalizable.: 798 : 421  ==== Renormalization group ==== The renormalization group, developed by Kenneth Wilson, is a mathematical apparatus used to study the changes in physical parameters (coefficients in the Lagrangian) as the system is viewed at different scales.: 393  The way in which each parameter changes with scale is described by its β function.: 417  Correlation functions, which underlie quantitative physical predictions, change with scale according to the Callan–Symanzik equation.: 410–411  As an example, the coupling constant in QED, namely the elementary charge e, has the following β function: β ( e ) ≡ 1 Λ d e d Λ = e 3 12 π 2 + O ( e 5 ) , {\displaystyle \beta (e)\equiv {\frac {1}{\Lambda }}{\frac {de}{d\Lambda }}={\frac {e^{3}}{12\pi ^{2}}}+O{\mathord {\left(e^{5}\right)}},} where Λ is the energy scale under which the measurement of e is performed. This differential equation implies that the observed elementary charge increases as the scale increases. The renormalized coupling constant, which changes with the energy scale, is also called the running coupling constant.: 420  The coupling constant g in quantum chromodynamics, a non-Abelian gauge theory based on the symmetry group SU(3), has the following β function: β ( g ) ≡ 1 Λ d g d Λ = g 3 16 π 2 ( − 11 + 2 3 N f ) + O ( g 5 ) , {\displaystyle \beta (g)\equiv {\frac {1}{\Lambda }}{\frac {dg}{d\Lambda }}={\frac {g^{3}}{16\pi ^{2}}}\left(-11+{\frac {2}{3}}N_{f}\right)+O{\mathord {\left(g^{5}\right)}},} where Nf is the number of quark flavours. In the case where Nf ≤ 16 (the Standard Model has Nf = 6), the coupling constant g decreases as the energy scale increases. Hence, while the strong interaction is strong at low energies, it becomes very weak in high-energy interactions, a phenomenon known as asymptotic freedom.: 531  Conformal field theories (CFTs) are special QFTs that admit conformal symmetry. They are insensitive to changes in the scale, as all their coupling constants have vanishing β function. (The converse is not true, however — the vanishing of all β functions does not imply conformal symmetry of the theory.) Examples include string theory and N = 4 supersymmetric Yang–Mills theory. According to Wilson's picture, every QFT is fundamentally accompanied by its energy cut-off Λ, i.e. that the theory is no longer valid at energies higher than Λ, and all degrees of freedom above the scale Λ are to be omitted. For example, the cut-off could be the inverse of the atomic spacing in a condensed matter system, and in elementary particle physics it could be associated with the fundamental "graininess" of spacetime caused by quantum fluctuations in gravity. The cut-off scale of theories of particle interactions lies far beyond current experiments. Even if the theory were very complicated at that scale, as long as its couplings are sufficiently weak, it must be described at low energies by a renormalizable effective field theory.: 402–403  The difference between renormalizable and non-renormalizable theories is that the former are insensitive to details at high energies, whereas the latter do depend on them.: 2  According to this view, non-renormalizable theories are to be seen as low-energy effective theories of a more fundamental theory. 
The failure to remove the cut-off Λ from calculations in such a theory merely indicates that new physical phenomena appear at scales above Λ, where a new theory is necessary.: 156  === Other theories === The quantization and renormalization procedures outlined in the preceding sections are performed for the free theory and ϕ4 theory of the real scalar field. A similar process can be done for other types of fields, including the complex scalar field, the vector field, and the Dirac field, as well as other types of interaction terms, including the electromagnetic interaction and the Yukawa interaction. As an example, quantum electrodynamics contains a Dirac field ψ representing the electron field and a vector field Aμ representing the electromagnetic field (photon field). (Despite its name, the quantum electromagnetic "field" actually corresponds to the classical electromagnetic four-potential, rather than the classical electric and magnetic fields.) The full QED Lagrangian density is: L = ψ ¯ ( i γ μ ∂ μ − m ) ψ − 1 4 F μ ν F μ ν − e ψ ¯ γ μ ψ A μ , {\displaystyle {\mathcal {L}}={\bar {\psi }}\left(i\gamma ^{\mu }\partial _{\mu }-m\right)\psi -{\frac {1}{4}}F_{\mu \nu }F^{\mu \nu }-e{\bar {\psi }}\gamma ^{\mu }\psi A_{\mu },} where γμ are Dirac matrices, ψ ¯ = ψ † γ 0 {\displaystyle {\bar {\psi }}=\psi ^{\dagger }\gamma ^{0}} , and F μ ν = ∂ μ A ν − ∂ ν A μ {\displaystyle F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }} is the electromagnetic field strength. The parameters in this theory are the (bare) electron mass m and the (bare) elementary charge e. The first and second terms in the Lagrangian density correspond to the free Dirac field and free vector fields, respectively. The last term describes the interaction between the electron and photon fields, which is treated as a perturbation from the free theories.: 78  Shown above is an example of a tree-level Feynman diagram in QED. It describes an electron and a positron annihilating, creating an off-shell photon, and then decaying into a new pair of electron and positron. Time runs from left to right. Arrows pointing forward in time represent the propagation of electrons, while those pointing backward in time represent the propagation of positrons. A wavy line represents the propagation of a photon. Each vertex in QED Feynman diagrams must have an incoming and an outgoing fermion (positron/electron) leg as well as a photon leg. ==== Gauge symmetry ==== If the following transformation to the fields is performed at every spacetime point x (a local transformation), then the QED Lagrangian remains unchanged, or invariant: ψ ( x ) → e i α ( x ) ψ ( x ) , A μ ( x ) → A μ ( x ) + i e − 1 e − i α ( x ) ∂ μ e i α ( x ) , {\displaystyle \psi (x)\to e^{i\alpha (x)}\psi (x),\quad A_{\mu }(x)\to A_{\mu }(x)+ie^{-1}e^{-i\alpha (x)}\partial _{\mu }e^{i\alpha (x)},} where α(x) is any function of spacetime coordinates. If a theory's Lagrangian (or more precisely the action) is invariant under a certain local transformation, then the transformation is referred to as a gauge symmetry of the theory.: 482–483  Gauge symmetries form a group at every spacetime point. In the case of QED, the successive application of two different local symmetry transformations e i α ( x ) {\displaystyle e^{i\alpha (x)}} and e i α ′ ( x ) {\displaystyle e^{i\alpha '(x)}} is yet another symmetry transformation e i [ α ( x ) + α ′ ( x ) ] {\displaystyle e^{i[\alpha (x)+\alpha '(x)]}} . 
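The invariance can be verified symbolically for the field strength. The SymPy sketch below (the generic functions A_μ and α are placeholders introduced here) applies the transformation A_μ → A_μ + i e⁻¹ e^{−iα} ∂_μ e^{iα}, which works out to A_μ − (1/e) ∂_μ α, and confirms that F_{μν} = ∂_μ A_ν − ∂_ν A_μ is unchanged, since the added piece is a pure gradient.

```python
# Symbolic sketch (generic placeholder functions introduced here): F_{μν} built
# from A_μ and from the gauge-transformed field agree identically.
import sympy as sp

coords = sp.symbols("t x y z", real=True)
e = sp.Symbol("e", positive=True)
alpha = sp.Function("alpha")(*coords)                      # arbitrary gauge function

A = [sp.Function(f"A{mu}")(*coords) for mu in range(4)]    # generic photon field
A_gauged = [A[mu] + sp.I / e * sp.exp(-sp.I * alpha) * sp.diff(sp.exp(sp.I * alpha), coords[mu])
            for mu in range(4)]

def F(field, mu, nu):
    return sp.diff(field[nu], coords[mu]) - sp.diff(field[mu], coords[nu])

print(all(sp.simplify(F(A, mu, nu) - F(A_gauged, mu, nu)) == 0
          for mu in range(4) for nu in range(4)))          # True: F_{μν} is gauge invariant
```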
For any α(x), e i α ( x ) {\displaystyle e^{i\alpha (x)}} is an element of the U(1) group, thus QED is said to have U(1) gauge symmetry.: 496  The photon field Aμ may be referred to as the U(1) gauge boson. U(1) is an Abelian group, meaning that the result is the same regardless of the order in which its elements are applied. QFTs can also be built on non-Abelian groups, giving rise to non-Abelian gauge theories (also known as Yang–Mills theories).: 489  Quantum chromodynamics, which describes the strong interaction, is a non-Abelian gauge theory with an SU(3) gauge symmetry. It contains three Dirac fields ψi, i = 1,2,3 representing quark fields as well as eight vector fields Aa,μ, a = 1,...,8 representing gluon fields, which are the SU(3) gauge bosons.: 547  The QCD Lagrangian density is:: 490–491  L = i ψ ¯ i γ μ ( D μ ) i j ψ j − 1 4 F μ ν a F a , μ ν − m ψ ¯ i ψ i , {\displaystyle {\mathcal {L}}=i{\bar {\psi }}^{i}\gamma ^{\mu }(D_{\mu })^{ij}\psi ^{j}-{\frac {1}{4}}F_{\mu \nu }^{a}F^{a,\mu \nu }-m{\bar {\psi }}^{i}\psi ^{i},} where Dμ is the gauge covariant derivative: D μ = ∂ μ − i g A μ a t a , {\displaystyle D_{\mu }=\partial _{\mu }-igA_{\mu }^{a}t^{a},} where g is the coupling constant, ta are the eight generators of SU(3) in the fundamental representation (3×3 matrices), F μ ν a = ∂ μ A ν a − ∂ ν A μ a + g f a b c A μ b A ν c , {\displaystyle F_{\mu \nu }^{a}=\partial _{\mu }A_{\nu }^{a}-\partial _{\nu }A_{\mu }^{a}+gf^{abc}A_{\mu }^{b}A_{\nu }^{c},} and fabc are the structure constants of SU(3). Repeated indices i,j,a are implicitly summed over following Einstein notation. This Lagrangian is invariant under the transformation: ψ i ( x ) → U i j ( x ) ψ j ( x ) , A μ a ( x ) t a → U ( x ) [ A μ a ( x ) t a + i g − 1 ∂ μ ] U † ( x ) , {\displaystyle \psi ^{i}(x)\to U^{ij}(x)\psi ^{j}(x),\quad A_{\mu }^{a}(x)t^{a}\to U(x)\left[A_{\mu }^{a}(x)t^{a}+ig^{-1}\partial _{\mu }\right]U^{\dagger }(x),} where U(x) is an element of SU(3) at every spacetime point x: U ( x ) = e i α ( x ) a t a . {\displaystyle U(x)=e^{i\alpha (x)^{a}t^{a}}.} The preceding discussion of symmetries is on the level of the Lagrangian. In other words, these are "classical" symmetries. After quantization, some theories will no longer exhibit their classical symmetries, a phenomenon called anomaly. For instance, in the path integral formulation, despite the invariance of the Lagrangian density L [ ϕ , ∂ μ ϕ ] {\displaystyle {\mathcal {L}}[\phi ,\partial _{\mu }\phi ]} under a certain local transformation of the fields, the measure ∫ D ϕ {\textstyle \int {\mathcal {D}}\phi } of the path integral may change.: 243  For a theory describing nature to be consistent, it must not contain any anomaly in its gauge symmetry. The Standard Model of elementary particles is a gauge theory based on the group SU(3) × SU(2) × U(1), in which all anomalies exactly cancel.: 705–707  The theoretical foundation of general relativity, the equivalence principle, can also be understood as a form of gauge symmetry, making general relativity a gauge theory based on the Lorentz group. Noether's theorem states that every continuous symmetry, i.e. the parameter in the symmetry transformation being continuous rather than discrete, leads to a corresponding conservation law.: 17–18 : 73  For example, the U(1) symmetry of QED implies charge conservation. Gauge-transformations do not relate distinct quantum states. Rather, it relates two equivalent mathematical descriptions of the same quantum state. 
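A small symbolic check (an illustration, not from the article) makes the U(1) redundancy concrete for the pure photon sector: for a commuting gauge function α(x), the transformation A_μ → A_μ + ie⁻¹e^(−iα)∂_μe^(iα) reduces to A_μ → A_μ − (1/e)∂_μα, and the field strength F_μν, hence the −¼F_μνF^μν term of the QED Lagrangian, is unchanged. A minimal sympy sketch:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = (t, x, y, z)
e = sp.symbols('e', positive=True)

alpha = sp.Function('alpha')(*coords)                      # arbitrary gauge function alpha(x)
A = [sp.Function(f'A{mu}')(*coords) for mu in range(4)]    # photon field components

# For a commuting (c-number) alpha the transformation in the text reduces to A_mu - (1/e) d_mu alpha.
A_prime = [A[mu] - sp.diff(alpha, coords[mu]) / e for mu in range(4)]

def field_strength(Afield):
    # F_{mu nu} = d_mu A_nu - d_nu A_mu
    return [[sp.diff(Afield[nu], coords[mu]) - sp.diff(Afield[mu], coords[nu])
             for nu in range(4)] for mu in range(4)]

F, Fp = field_strength(A), field_strength(A_prime)

# The difference vanishes because the partial derivatives of alpha commute,
# so the -(1/4) F_{mu nu} F^{mu nu} term is gauge invariant.
print(all(sp.simplify(Fp[mu][nu] - F[mu][nu]) == 0 for mu in range(4) for nu in range(4)))
```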
As an example, the photon field Aμ, being a four-vector, has four apparent degrees of freedom, but the actual state of a photon is described by its two degrees of freedom corresponding to the polarization. The remaining two degrees of freedom are said to be "redundant" — apparently different ways of writing Aμ can be related to each other by a gauge transformation and in fact describe the same state of the photon field. In this sense, gauge invariance is not a "real" symmetry, but a reflection of the "redundancy" of the chosen mathematical description.: 168  To account for the gauge redundancy in the path integral formulation, one must perform the so-called Faddeev–Popov gauge fixing procedure. In non-Abelian gauge theories, such a procedure introduces new fields called "ghosts". Particles corresponding to the ghost fields are called ghost particles, which cannot be detected externally.: 512–515  A more rigorous generalization of the Faddeev–Popov procedure is given by BRST quantization.: 517  ==== Spontaneous symmetry-breaking ==== Spontaneous symmetry breaking is a mechanism whereby the symmetry of the Lagrangian is violated by the system described by it.: 347  To illustrate the mechanism, consider a linear sigma model containing N real scalar fields, described by the Lagrangian density: L = 1 2 ( ∂ μ ϕ i ) ( ∂ μ ϕ i ) + 1 2 μ 2 ϕ i ϕ i − λ 4 ( ϕ i ϕ i ) 2 , {\displaystyle {\mathcal {L}}={\frac {1}{2}}\left(\partial _{\mu }\phi ^{i}\right)\left(\partial ^{\mu }\phi ^{i}\right)+{\frac {1}{2}}\mu ^{2}\phi ^{i}\phi ^{i}-{\frac {\lambda }{4}}\left(\phi ^{i}\phi ^{i}\right)^{2},} where μ and λ are real parameters. The theory admits an O(N) global symmetry: ϕ i → R i j ϕ j , R ∈ O ( N ) . {\displaystyle \phi ^{i}\to R^{ij}\phi ^{j},\quad R\in \mathrm {O} (N).} The lowest energy state (ground state or vacuum state) of the classical theory is any uniform field ϕ0 satisfying ϕ 0 i ϕ 0 i = μ 2 λ . {\displaystyle \phi _{0}^{i}\phi _{0}^{i}={\frac {\mu ^{2}}{\lambda }}.} Without loss of generality, let the ground state be in the N-th direction: ϕ 0 i = ( 0 , ⋯ , 0 , μ λ ) . {\displaystyle \phi _{0}^{i}=\left(0,\cdots ,0,{\frac {\mu }{\sqrt {\lambda }}}\right).} The original N fields can be rewritten as: ϕ i ( x ) = ( π 1 ( x ) , ⋯ , π N − 1 ( x ) , μ λ + σ ( x ) ) , {\displaystyle \phi ^{i}(x)=\left(\pi ^{1}(x),\cdots ,\pi ^{N-1}(x),{\frac {\mu }{\sqrt {\lambda }}}+\sigma (x)\right),} and the original Lagrangian density as: L = 1 2 ( ∂ μ π k ) ( ∂ μ π k ) + 1 2 ( ∂ μ σ ) ( ∂ μ σ ) − 1 2 ( 2 μ 2 ) σ 2 − λ μ σ 3 − λ μ π k π k σ − λ 2 π k π k σ 2 − λ 4 ( π k π k ) 2 , {\displaystyle {\mathcal {L}}={\frac {1}{2}}\left(\partial _{\mu }\pi ^{k}\right)\left(\partial ^{\mu }\pi ^{k}\right)+{\frac {1}{2}}\left(\partial _{\mu }\sigma \right)\left(\partial ^{\mu }\sigma \right)-{\frac {1}{2}}\left(2\mu ^{2}\right)\sigma ^{2}-{\sqrt {\lambda }}\mu \sigma ^{3}-{\sqrt {\lambda }}\mu \pi ^{k}\pi ^{k}\sigma -{\frac {\lambda }{2}}\pi ^{k}\pi ^{k}\sigma ^{2}-{\frac {\lambda }{4}}\left(\pi ^{k}\pi ^{k}\right)^{2},} where k = 1, ..., N − 1. The original O(N) global symmetry is no longer manifest, leaving only the subgroup O(N − 1). The larger symmetry before spontaneous symmetry breaking is said to be "hidden" or spontaneously broken.: 349–350  Goldstone's theorem states that under spontaneous symmetry breaking, every broken continuous global symmetry leads to a massless field called the Goldstone boson. 
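The rewriting of the Lagrangian density above, and the masslessness of the π fields that Goldstone's theorem predicts, can be checked symbolically. The sketch below is only an illustration, with N = 3 chosen for concreteness; it expands the potential part of the Lagrangian around the chosen vacuum and reads off the quadratic terms: the σ field acquires the mass term ½(2μ²)σ², the π fields have no mass term, and no term linear in σ survives, confirming that the expansion is about a minimum.

```python
import sympy as sp

mu, lam = sp.symbols('mu lambda', positive=True)
pi1, pi2, sigma = sp.symbols('pi1 pi2 sigma')

v = mu / sp.sqrt(lam)                    # vacuum value, placed along the N-th direction
phi = sp.Matrix([pi1, pi2, v + sigma])   # N = 3 chosen for concreteness (an assumption)

phi2 = (phi.T * phi)[0]
# Potential part of the Lagrangian density, sign-flipped: V = -1/2 mu^2 phi.phi + lambda/4 (phi.phi)^2
V = sp.expand(-sp.Rational(1, 2) * mu**2 * phi2 + sp.Rational(1, 4) * lam * phi2**2)

# Pure sigma^2 coefficient -> mu^2, i.e. the mass term 1/2 (2 mu^2) sigma^2 of the rewritten Lagrangian
print(sp.simplify(V.coeff(sigma, 2).subs({pi1: 0, pi2: 0})))
# Pure pi1^2 coefficient -> 0: the pi fields remain massless (Goldstone bosons)
print(sp.simplify(V.coeff(pi1, 2).subs({pi2: 0, sigma: 0})))
# Coefficient of the term linear in sigma -> 0: the expansion is about a minimum of the potential
print(sp.simplify(V.coeff(sigma, 1).subs({pi1: 0, pi2: 0})))
```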
In the above example, O(N) has N(N − 1)/2 continuous symmetries (the dimension of its Lie algebra), while O(N − 1) has (N − 1)(N − 2)/2. The number of broken symmetries is their difference, N − 1, which corresponds to the N − 1 massless fields πk.: 351  On the other hand, when a gauge (as opposed to global) symmetry is spontaneously broken, the resulting Goldstone boson is "eaten" by the corresponding gauge boson by becoming an additional degree of freedom for the gauge boson. The Goldstone boson equivalence theorem states that at high energy, the amplitude for emission or absorption of a longitudinally polarized massive gauge boson becomes equal to the amplitude for emission or absorption of the Goldstone boson that was eaten by the gauge boson.: 743–744  In the QFT of ferromagnetism, spontaneous symmetry breaking can explain the alignment of magnetic dipoles at low temperatures.: 199  In the Standard Model of elementary particles, the W and Z bosons, which would otherwise be massless as a result of gauge symmetry, acquire mass through spontaneous symmetry breaking of the Higgs boson, a process called the Higgs mechanism.: 690  ==== Supersymmetry ==== All experimentally known symmetries in nature relate bosons to bosons and fermions to fermions. Theorists have hypothesized the existence of a type of symmetry, called supersymmetry, that relates bosons and fermions.: 795 : 443  The Standard Model obeys Poincaré symmetry, whose generators are the spacetime translations Pμ and the Lorentz transformations Jμν.: 58–60  In addition to these generators, supersymmetry in (3+1)-dimensions includes additional generators Qα, called supercharges, which themselves transform as Weyl fermions.: 795 : 444  The symmetry group generated by all these generators is known as the super-Poincaré group. In general there can be more than one set of supersymmetry generators, QαI, I = 1, ..., N, which generate the corresponding N = 1 supersymmetry, N = 2 supersymmetry, and so on.: 795 : 450  Supersymmetry can also be constructed in other dimensions, most notably in (1+1) dimensions for its application in superstring theory. The Lagrangian of a supersymmetric theory must be invariant under the action of the super-Poincaré group.: 448  Examples of such theories include: Minimal Supersymmetric Standard Model (MSSM), N = 4 supersymmetric Yang–Mills theory,: 450  and superstring theory. In a supersymmetric theory, every fermion has a bosonic superpartner and vice versa.: 444  If supersymmetry is promoted to a local symmetry, then the resultant gauge theory is an extension of general relativity called supergravity. Supersymmetry is a potential solution to many current problems in physics. For example, the hierarchy problem of the Standard Model—why the mass of the Higgs boson is not radiatively corrected (under renormalization) to a very high scale such as the grand unified scale or the Planck scale—can be resolved by relating the Higgs field and its super-partner, the Higgsino. Radiative corrections due to Higgs boson loops in Feynman diagrams are cancelled by corresponding Higgsino loops. Supersymmetry also offers answers to the grand unification of all gauge coupling constants in the Standard Model as well as the nature of dark matter.: 796–797  Nevertheless, experiments have yet to provide evidence for the existence of supersymmetric particles. 
If supersymmetry were a true symmetry of nature, then it must be a broken symmetry, and the energy of symmetry breaking must be higher than those achievable by present-day experiments.: 797 : 443  ==== Other spacetimes ==== The ϕ4 theory, QED, QCD, as well as the whole Standard Model all assume a (3+1)-dimensional Minkowski space (3 spatial and 1 time dimensions) as the background on which the quantum fields are defined. However, QFT a priori imposes no restriction on the number of dimensions nor the geometry of spacetime. In condensed matter physics, QFT is used to describe (2+1)-dimensional electron gases. In high-energy physics, string theory is a type of (1+1)-dimensional QFT,: 452  while Kaluza–Klein theory uses gravity in extra dimensions to produce gauge theories in lower dimensions.: 428–429  In Minkowski space, the flat metric ημν is used to raise and lower spacetime indices in the Lagrangian, e.g. A μ A μ = η μ ν A μ A ν , ∂ μ ϕ ∂ μ ϕ = η μ ν ∂ μ ϕ ∂ ν ϕ , {\displaystyle A_{\mu }A^{\mu }=\eta _{\mu \nu }A^{\mu }A^{\nu },\quad \partial _{\mu }\phi \partial ^{\mu }\phi =\eta ^{\mu \nu }\partial _{\mu }\phi \partial _{\nu }\phi ,} where ημν is the inverse of ημν satisfying ημρηρν = δμν. For QFTs in curved spacetime on the other hand, a general metric (such as the Schwarzschild metric describing a black hole) is used: A μ A μ = g μ ν A μ A ν , ∂ μ ϕ ∂ μ ϕ = g μ ν ∂ μ ϕ ∂ ν ϕ , {\displaystyle A_{\mu }A^{\mu }=g_{\mu \nu }A^{\mu }A^{\nu },\quad \partial _{\mu }\phi \partial ^{\mu }\phi =g^{\mu \nu }\partial _{\mu }\phi \partial _{\nu }\phi ,} where gμν is the inverse of gμν. For a real scalar field, the Lagrangian density in a general spacetime background is L = | g | ( 1 2 g μ ν ∇ μ ϕ ∇ ν ϕ − 1 2 m 2 ϕ 2 ) , {\displaystyle {\mathcal {L}}={\sqrt {|g|}}\left({\frac {1}{2}}g^{\mu \nu }\nabla _{\mu }\phi \nabla _{\nu }\phi -{\frac {1}{2}}m^{2}\phi ^{2}\right),} where g = det(gμν), and ∇μ denotes the covariant derivative. The Lagrangian of a QFT, hence its calculational results and physical predictions, depends on the geometry of the spacetime background. ==== Topological quantum field theory ==== The correlation functions and physical predictions of a QFT depend on the spacetime metric gμν. For a special class of QFTs called topological quantum field theories (TQFTs), all correlation functions are independent of continuous changes in the spacetime metric.: 36  QFTs in curved spacetime generally change according to the geometry (local structure) of the spacetime background, while TQFTs are invariant under spacetime diffeomorphisms but are sensitive to the topology (global structure) of spacetime. This means that all calculational results of TQFTs are topological invariants of the underlying spacetime. Chern–Simons theory is an example of TQFT and has been used to construct models of quantum gravity. Applications of TQFT include the fractional quantum Hall effect and topological quantum computers.: 1–5  The world line trajectory of fractionalized particles (known as anyons) can form a link configuration in the spacetime, which relates the braiding statistics of anyons in physics to the link invariants in mathematics. Topological quantum field theories (TQFTs) applicable to the frontier research of topological quantum matters include Chern-Simons-Witten gauge theories in 2+1 spacetime dimensions, other new exotic TQFTs in 3+1 spacetime dimensions and beyond. 
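Returning to the curved-background Lagrangian density given above, the dependence on the metric can be made concrete with a short symbolic sketch. The example below is an illustration only: it assumes the Schwarzschild metric mentioned in the text as the background, works in units with c = G = 1, and uses the fact that for a scalar field the covariant derivative reduces to the partial derivative, so the only new ingredients relative to flat spacetime are g^μν and the volume factor √|g|.

```python
import sympy as sp

t, r, th, ph, rs, m = sp.symbols('t r theta phi r_s m', positive=True)
coords = (t, r, th, ph)
Phi = sp.Function('Phi')(*coords)       # a real scalar field on this background

# Schwarzschild metric in units c = G = 1 (an assumed example of a curved background)
g = sp.diag(-(1 - rs / r), 1 / (1 - rs / r), r**2, r**2 * sp.sin(th)**2)
g_inv = g.inv()
sqrt_abs_g = sp.sqrt(sp.Abs(g.det()))   # volume factor sqrt|g|; here r**2 |sin(theta)|

# L = sqrt|g| ( 1/2 g^{mu nu} d_mu Phi d_nu Phi - 1/2 m^2 Phi^2 );
# for a scalar field the covariant derivative is just the partial derivative.
kinetic = sp.Rational(1, 2) * sum(
    g_inv[a, b] * sp.diff(Phi, coords[a]) * sp.diff(Phi, coords[b])
    for a in range(4) for b in range(4))
L = sqrt_abs_g * (kinetic - sp.Rational(1, 2) * m**2 * Phi**2)

print(sp.simplify(sqrt_abs_g))
print(sp.simplify(L))
```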
=== Perturbative and non-perturbative methods === Using perturbation theory, the total effect of a small interaction term can be approximated order by order by a series expansion in the number of virtual particles participating in the interaction. Every term in the expansion may be understood as one possible way for (physical) particles to interact with each other via virtual particles, expressed visually using a Feynman diagram. The electromagnetic force between two electrons in QED is represented (to first order in perturbation theory) by the propagation of a virtual photon. In a similar manner, the W and Z bosons carry the weak interaction, while gluons carry the strong interaction. The interpretation of an interaction as a sum of intermediate states involving the exchange of various virtual particles only makes sense in the framework of perturbation theory. In contrast, non-perturbative methods in QFT treat the interacting Lagrangian as a whole without any series expansion. Instead of particles that carry interactions, these methods have spawned such concepts as 't Hooft–Polyakov monopole, domain wall, flux tube, and instanton. Examples of QFTs that are completely solvable non-perturbatively include minimal models of conformal field theory and the Thirring model. == Mathematical rigor == In spite of its overwhelming success in particle physics and condensed matter physics, QFT itself lacks a formal mathematical foundation. For example, according to Haag's theorem, there does not exist a well-defined interaction picture for QFT, which implies that perturbation theory of QFT, which underlies the entire Feynman diagram method, is fundamentally ill-defined. However, perturbative quantum field theory, which only requires that quantities be computable as a formal power series without any convergence requirements, can be given a rigorous mathematical treatment. In particular, Kevin Costello's monograph Renormalization and Effective Field Theory provides a rigorous formulation of perturbative renormalization that combines both the effective-field theory approaches of Kadanoff, Wilson, and Polchinski, together with the Batalin-Vilkovisky approach to quantizing gauge theories. Furthermore, perturbative path-integral methods, typically understood as formal computational methods inspired from finite-dimensional integration theory, can be given a sound mathematical interpretation from their finite-dimensional analogues. Since the 1950s, theoretical physicists and mathematicians have attempted to organize all QFTs into a set of axioms, in order to establish the existence of concrete models of relativistic QFT in a mathematically rigorous way and to study their properties. This line of study is called constructive quantum field theory, a subfield of mathematical physics,: 2  which has led to such results as CPT theorem, spin–statistics theorem, and Goldstone's theorem, and also to mathematically rigorous constructions of many interacting QFTs in two and three spacetime dimensions, e.g. two-dimensional scalar field theories with arbitrary polynomial interactions, the three-dimensional scalar field theories with a quartic interaction, etc. Compared to ordinary QFT, topological quantum field theory and conformal field theory are better supported mathematically — both can be classified in the framework of representations of cobordisms. 
Algebraic quantum field theory is another approach to the axiomatization of QFT, in which the fundamental objects are local operators and the algebraic relations between them. Axiomatic systems following this approach include Wightman axioms and Haag–Kastler axioms.: 2–3  One way to construct theories satisfying Wightman axioms is to use Osterwalder–Schrader axioms, which give the necessary and sufficient conditions for a real time theory to be obtained from an imaginary time theory by analytic continuation (Wick rotation).: 10  Yang–Mills existence and mass gap, one of the Millennium Prize Problems, concerns the well-defined existence of Yang–Mills theories as set out by the above axioms. The full problem statement is as follows. Prove that for any compact simple gauge group G, a non-trivial quantum Yang–Mills theory exists on R 4 {\displaystyle \mathbb {R} ^{4}} and has a mass gap Δ > 0. Existence includes establishing axiomatic properties at least as strong as those cited in Streater & Wightman (1964), Osterwalder & Schrader (1973) and Osterwalder & Schrader (1975). == See also == == References == Bibliography Streater, R.; Wightman, A. (1964). PCT, Spin and Statistics and all That. W. A. Benjamin. Osterwalder, K.; Schrader, R. (1973). "Axioms for Euclidean Green's functions". Communications in Mathematical Physics. 31 (2): 83–112. Bibcode:1973CMaPh..31...83O. doi:10.1007/BF01645738. S2CID 189829853. Osterwalder, K.; Schrader, R. (1975). "Axioms for Euclidean Green's functions II". Communications in Mathematical Physics. 42 (3): 281–305. Bibcode:1975CMaPh..42..281O. doi:10.1007/BF01608978. S2CID 119389461. == Further reading == General readers Pais, A. (1994) [1986]. Inward Bound: Of Matter and Forces in the Physical World (reprint ed.). Oxford, New York, Toronto: Oxford University Press. ISBN 978-0198519973. Schweber, S. S. (1994). QED and the Men Who Made It: Dyson, Feynman, Schwinger, and Tomonaga. Princeton University Press. ISBN 9780691033273. Feynman, R.P. (2001) [1964]. The Character of Physical Law. MIT Press. ISBN 978-0-262-56003-0. Feynman, R.P. (2006) [1985]. QED: The Strange Theory of Light and Matter. Princeton University Press. ISBN 978-0-691-12575-6. Gribbin, J. (1998). Q is for Quantum: Particle Physics from A to Z. Weidenfeld & Nicolson. ISBN 978-0-297-81752-9. Carroll, Sean (2024). The Biggest Ideas in the Universe : quanta and fields. Dutton. ISBN 978-0-593-18660-2. Introductory text Pierre van Baal (2016). A Course in Field Theory. CRC Press. ISBN 9780429073601. McMahon, D. (2008). Quantum Field Theory. McGraw-Hill. ISBN 978-0-07-154382-8. Bogolyubov, N.; Shirkov, D. (1982). Quantum Fields. Benjamin Cummings. ISBN 978-0-8053-0983-6. Frampton, P.H. (2000). Gauge Field Theories. Frontiers in Physics (2nd ed.). Wiley.; Frampton, Paul H. (22 September 2008). 2008, 3rd edition. John Wiley & Sons. ISBN 978-3527408351. Greiner, W.; Müller, B. (2000). Gauge Theory of Weak Interactions. Springer. ISBN 978-3-540-67672-0. Itzykson, C.; Zuber, J.-B. (1980). Quantum Field Theory. McGraw-Hill. ISBN 978-0-07-032071-0. Kane, G.L. (1987). Modern Elementary Particle Physics. Perseus Group. ISBN 978-0-201-11749-3. Kleinert, H.; Schulte-Frohlinde, Verena (2001). Critical Properties of φ4-Theories. World Scientific. ISBN 978-981-02-4658-7. Kleinert, H. (2008). Multivalued Fields in Condensed Matter, Electrodynamics, and Gravitation (PDF). World Scientific. ISBN 978-981-279-170-2. Lancaster, Tom; Blundell, Stephen (2014). Quantum field theory for the gifted amateur. 
Oxford: Oxford University Press. ISBN 978-0-19-969933-9. OCLC 859651399. Loudon, R. (1983). The Quantum Theory of Light. Oxford University Press. ISBN 978-0-19-851155-7. Mandl, F.; Shaw, G. (1993). Quantum Field Theory. John Wiley & Sons. ISBN 978-0-471-94186-6. Ryder, L.H. (1985). Quantum Field Theory. Cambridge University Press. ISBN 978-0-521-33859-2. Schwartz, M.D. (2014). Quantum Field Theory and the Standard Model. Cambridge University Press. ISBN 978-1107034730. Archived from the original on 2018-03-22. Retrieved 2020-05-13. Ynduráin, F.J. (1996). Relativistic Quantum Mechanics and Introduction to Field Theory (1st ed.). Springer. Bibcode:1996rqmi.book.....Y. doi:10.1007/978-3-642-61057-8. ISBN 978-3-540-60453-2. Greiner, W.; Reinhardt, J. (1996). Field Quantization. Springer. ISBN 978-3-540-59179-5. Peskin, M.; Schroeder, D. (1995). An Introduction to Quantum Field Theory. Westview Press. ISBN 978-0-201-50397-5. Scharf, Günter (2014) [1989]. Finite Quantum Electrodynamics: The Causal Approach (third ed.). Dover Publications. ISBN 978-0486492735. Srednicki, M. (2007). Quantum Field Theory. Cambridge University Press. ISBN 978-0521-8644-97. Tong, David (2015). "Lectures on Quantum Field Theory". Retrieved 2016-02-09. Williams, A.G. (2022). Introduction to Quantum Field Theory: Classical Mechanics to Gauge Field Theories. Cambridge University Press. ISBN 978-1108470902. Zee, Anthony (2010). Quantum Field Theory in a Nutshell (2nd ed.). Princeton University Press. ISBN 978-0691140346. Advanced texts Heitler, W. (1953). The Quantum Theory of Radiation. Dover Publications, Inc. ISBN 0-486-64558-4. Umezawa, H. (1956) Quantum Field Theory. North Holland Puplishing. Barton, G. (1963). Introduction to Advanced Field Theory. Intescience Publishers. Brown, Lowell S. (1994). Quantum Field Theory. Cambridge University Press. ISBN 978-0-521-46946-3. Bogoliubov, N.; Logunov, A.A.; Oksak, A.I.; Todorov, I.T. (1990). General Principles of Quantum Field Theory. Kluwer Academic Publishers. ISBN 978-0-7923-0540-8. Weinberg, S. (1995). The Quantum Theory of Fields. Vol. 1. Cambridge University Press. ISBN 978-0521550017. == External links == Media related to Quantum field theory at Wikimedia Commons "Quantum field theory", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Stanford Encyclopedia of Philosophy: "Quantum Field Theory", by Meinard Kuhlmann. Siegel, Warren, 2005. Fields. arXiv:hep-th/9912205 . Quantum Field Theory by P. J. Mulders
Wikipedia/Quantum_Field_Theory
In quantum mechanics, the Wigner–Weyl transform or Weyl–Wigner transform (after Hermann Weyl and Eugene Wigner) is the invertible mapping between functions in the quantum phase space formulation and Hilbert space operators in the Schrödinger picture. Often the mapping from functions on phase space to operators is called the Weyl transform or Weyl quantization, whereas the inverse mapping, from operators to functions on phase space, is called the Wigner transform. This mapping was originally devised by Hermann Weyl in 1927 in an attempt to map symmetrized classical phase space functions to operators, a procedure known as Weyl quantization. It is now understood that Weyl quantization does not satisfy all the properties one would require for consistent quantization and therefore sometimes yields unphysical answers. On the other hand, some of the nice properties described below suggest that if one seeks a single consistent procedure mapping functions on the classical phase space to operators, the Weyl quantization is the best option: a sort of normal coordinates of such maps. (Groenewold's theorem asserts that no such map can have all the ideal properties one would desire.) Regardless, the Weyl–Wigner transform is a well-defined integral transform between the phase-space and operator representations, and yields insight into the workings of quantum mechanics. Most importantly, the Wigner quasi-probability distribution is the Wigner transform of the quantum density matrix, and, conversely, the density matrix is the Weyl transform of the Wigner function. In contrast to Weyl's original intentions in seeking a consistent quantization scheme, this map merely amounts to a change of representation within quantum mechanics; it need not connect "classical" with "quantum" quantities. For example, the phase-space function may depend explicitly on the reduced Planck constant ħ, as it does in some familiar cases involving angular momentum. This invertible representation change then allows one to express quantum mechanics in phase space, as was appreciated in the 1940s by Hilbrand J. Groenewold and José Enrique Moyal. In more generality, Weyl quantization is studied in cases where the phase space is a symplectic manifold, or possibly a Poisson manifold. Related structures include the Poisson–Lie groups and Kac–Moody algebras. == Definition of the Weyl quantization of a general observable == The following explains the Weyl transformation on the simplest, two-dimensional Euclidean phase space. Let the coordinates on phase space be (q,p), and let f be a function defined everywhere on phase space. In what follows, we fix operators P and Q satisfying the canonical commutation relations, such as the usual position and momentum operators in the Schrödinger representation. We assume that the exponentiated operators e i a Q {\displaystyle e^{iaQ}} and e i b P {\displaystyle e^{ibP}} constitute an irreducible representation of the Weyl relations, so that the Stone–von Neumann theorem (guaranteeing uniqueness of the canonical commutation relations) holds. === Basic formula === The Weyl transform (or Weyl quantization) of the function f is given by the following operator in Hilbert space, Φ [ f ] = 1 ( 2 π ) 2 ∬ ∬ f ( q , p ) e i ( a ( Q − q ) + b ( P − p ) ) d q d p d a d b . {\displaystyle \Phi [f]={\frac {1}{(2\pi )^{2}}}\iint \!\!\!\iint f(q,p)\,e^{i(a(Q-q)+b(P-p))}\,dq\,dp\,da\,db.} Throughout, ħ is the reduced Planck constant. It is instructive to perform the p and q integrals in the above formula first, which has the effect of computing the ordinary Fourier transform f ~ {\displaystyle {\tilde {f}}} of the function f, while leaving the operator e i ( a Q + b P ) {\displaystyle e^{i(aQ+bP)}} . 
In that case, the Weyl transform can be written as Φ [ f ] = 1 ( 2 π ) 2 ∬ f ~ ( a , b ) e i a Q + i b P d a d b {\displaystyle \Phi [f]={\frac {1}{(2\pi )^{2}}}\iint {\tilde {f}}(a,b)e^{iaQ+ibP}\,da\,db} . We may therefore think of the Weyl map as follows: We take the ordinary Fourier transform of the function f ( p , q ) {\displaystyle f(p,q)} , but then when applying the Fourier inversion formula, we substitute the quantum operators P {\displaystyle P} and Q {\displaystyle Q} for the original classical variables p and q, thus obtaining a "quantum version of f." A less symmetric form, but handy for applications, is the following, Φ [ f ] = 2 ( 2 π ℏ ) 3 / 2 ∬ ∬ d q d p d x ~ d p ~ e i ℏ ( x ~ p ~ − 2 ( p ~ − p ) ( x ~ − q ) ) f ( q , p ) | x ~ ⟩ ⟨ p ~ | . {\displaystyle \Phi [f]={\frac {2}{(2\pi \hbar )^{3/2}}}\iint \!\!\!\iint \!\!dq\,dp\,d{\tilde {x}}\,d{\tilde {p}}\ e^{{\frac {i}{\hbar }}({\tilde {x}}{\tilde {p}}-2({\tilde {p}}-p)({\tilde {x}}-q))}~f(q,p)~|{\tilde {x}}\rangle \langle {\tilde {p}}|.} === In the position representation === The Weyl map may then also be expressed in terms of the integral kernel matrix elements of this operator, ⟨ x | Φ [ f ] | y ⟩ = ∫ − ∞ ∞ d p h e i p ( x − y ) / ℏ f ( x + y 2 , p ) . {\displaystyle \langle x|\Phi [f]|y\rangle =\int _{-\infty }^{\infty }{{\text{d}}p \over h}~e^{ip(x-y)/\hbar }~f\left({x+y \over 2},p\right).} === Inverse map === The inverse of the above Weyl map is the Wigner map (or Wigner transform), introduced by Eugene Wigner, which takes the operator Φ back to the original phase-space kernel function f, f ( q , p ) = ∫ − ∞ ∞ d y e − i p y / ℏ ⟨ q + y 2 | Φ | q − y 2 ⟩ . {\displaystyle f(q,p)=\int _{-\infty }^{\infty }dy\,e^{-ipy/\hbar }\left\langle q+{\frac {y}{2}}\right|\Phi \left|q-{\frac {y}{2}}\right\rangle .} For example, the Wigner map of the oscillator thermal distribution operator exp ⁡ ( − β ( P 2 + Q 2 ) / 2 ) {\displaystyle \exp(-\beta (P^{2}+Q^{2})/2)} is exp ⋆ ⁡ ( − β ( p 2 + q 2 ) / 2 ) = ( cosh ⁡ ( β ℏ 2 ) ) − 1 exp ⁡ ( − 2 ℏ tanh ⁡ ( β ℏ 2 ) ( p 2 + q 2 ) / 2 ) . {\displaystyle \exp _{\star }\left(-\beta (p^{2}+q^{2})/2\right)=\left(\cosh \left({\frac {\beta \hbar }{2}}\right)\right)^{-1}\exp \left({\frac {-2}{\hbar }}\tanh \left({\frac {\beta \hbar }{2}}\right)(p^{2}+q^{2})/2\right).} If one replaces Φ [ f ] {\displaystyle \Phi [f]} in the above expression with an arbitrary operator, the resulting function f may depend on the reduced Planck constant ħ, and may well describe quantum-mechanical processes, provided it is properly composed through the star product, below. In turn, the Weyl map of the Wigner map is summarized by Groenewold's formula, Φ [ f ] = h ∬ d a d b e i a Q + i b P Tr ⁡ ( e − i a Q − i b P Φ ) . {\displaystyle \Phi [f]=h\iint \,da\,db~e^{iaQ+ibP}\operatorname {Tr} (e^{-iaQ-ibP}\Phi ).} === Weyl quantization of polynomial observables === While the above formulas give a nice understanding of the Weyl quantization of a very general observable on phase space, they are not very convenient for computing on simple observables, such as those that are polynomials in q {\displaystyle q} and p {\displaystyle p} . In later sections, we will see that on such polynomials, the Weyl quantization represents the totally symmetric ordering of the noncommuting operators Q {\displaystyle Q} and P {\displaystyle P} . For example, the Wigner map of the quantum angular-momentum-squared operator L2 is not just the classical angular momentum squared, but it further contains an offset term −3ħ2/2, which accounts for the nonvanishing angular momentum of the ground-state Bohr orbit. 
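The position-representation kernel above is straightforward to evaluate numerically. The sketch below is an illustration only, with ħ = 1 and an assumed Gaussian phase-space function; it builds the matrix elements ⟨x|Φ[f]|y⟩ on a grid and checks that the resulting operator is Hermitian, consistent with the self-adjointness of Φ[f] for real f noted in the Properties section below.

```python
import numpy as np

hbar = 1.0
h = 2 * np.pi * hbar

# Assumed toy phase-space function (real and rapidly decaying, so the integrals converge):
def f(q, p):
    return np.exp(-q**2 - p**2)

x = np.linspace(-4, 4, 81)        # position grid for the matrix elements <x|Phi[f]|y>
p = np.linspace(-8, 8, 401)       # momentum grid for the p-integral
dp = p[1] - p[0]

# <x|Phi[f]|y> = (1/h) * integral dp  exp(i p (x - y)/hbar) f((x + y)/2, p)
X, Y = np.meshgrid(x, x, indexing='ij')
kernel = np.zeros((len(x), len(x)), dtype=complex)
for pk in p:
    kernel += np.exp(1j * pk * (X - Y) / hbar) * f((X + Y) / 2, pk) * dp / h

# For a real-valued f the Weyl image should be self-adjoint (Hermitian):
print(np.allclose(kernel, kernel.conj().T))
```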
== Properties == === Weyl quantization of polynomials === The action of the Weyl quantization on polynomial functions of q {\displaystyle q} and p {\displaystyle p} is completely determined by the following symmetric formula: ( a q + b p ) n ⟼ ( a Q + b P ) n {\displaystyle (aq+bp)^{n}\longmapsto (aQ+bP)^{n}} for all complex numbers a {\displaystyle a} and b {\displaystyle b} . From this formula, it is not hard to show that the Weyl quantization on a function of the form q k p l {\displaystyle q^{k}p^{l}} gives the average of all possible orderings of k {\displaystyle k} factors of Q {\displaystyle Q} and l {\displaystyle l} factors of P {\displaystyle P} : ∏ j = 1 N ξ k j ⟼ 1 N ! ∑ σ ∈ S N ∏ j = 1 N Ξ k σ ( j ) {\displaystyle \prod _{j=1}^{N}\xi _{k_{j}}~~\longmapsto ~~{\frac {1}{N!}}\sum _{\sigma \in S_{N}}\prod _{j=1}^{N}\Xi _{k_{\sigma (j)}}} where ξ j = q j , ξ n + j = p j {\displaystyle \xi _{j}=q_{j},\xi _{n+j}=p_{j}} , and S N {\displaystyle S_{N}} is the set of permutations on N elements. For example, we have 6 p 2 q 2 ⟼ P 2 Q 2 + Q 2 P 2 + P Q P Q + P Q 2 P + Q P Q P + Q P 2 Q . {\displaystyle 6p^{2}q^{2}~~\longmapsto ~~P^{2}Q^{2}+Q^{2}P^{2}+PQPQ+PQ^{2}P+QPQP+QP^{2}Q.} While this result is conceptually natural, it is not convenient for computations when k {\displaystyle k} and l {\displaystyle l} are large. In such cases, we can use instead McCoy's formula p m q n ⟼ 1 2 n ∑ r = 0 n ( n r ) Q r P m Q n − r = 1 2 m ∑ s = 0 m ( m s ) P s Q n P m − s . {\displaystyle p^{m}q^{n}~~\longmapsto ~~{1 \over 2^{n}}\sum _{r=0}^{n}{n \choose r}Q^{r}P^{m}Q^{n-r}={1 \over 2^{m}}\sum _{s=0}^{m}{m \choose s}P^{s}Q^{n}P^{m-s}.} This expression gives an apparently different answer for the case of p 2 q 2 {\displaystyle p^{2}q^{2}} from the totally symmetric expression above. There is no contradiction, however, since the canonical commutation relations allow for more than one expression for the same operator. (The reader may find it instructive to use the commutation relations to rewrite the totally symmetric formula for the case of p 2 q 2 {\displaystyle p^{2}q^{2}} in terms of the operators P 2 Q 2 {\displaystyle P^{2}Q^{2}} , Q P 2 Q {\displaystyle QP^{2}Q} , and Q 2 P 2 {\displaystyle Q^{2}P^{2}} and verify the first expression in McCoy's formula with m = n = 2 {\displaystyle m=n=2} .) It is widely thought that the Weyl quantization, among all quantization schemes, comes as close as possible to mapping the Poisson bracket on the classical side to the commutator on the quantum side. (An exact correspondence is impossible, in light of Groenewold's theorem.) For example, Moyal showed the Theorem: If f ( q , p ) {\displaystyle f(q,p)} is a polynomial of degree at most 2 and g ( q , p ) {\displaystyle g(q,p)} is an arbitrary polynomial, then we have Φ ( { f , g } ) = 1 i ℏ [ Φ ( f ) , Φ ( g ) ] {\displaystyle \Phi (\{f,g\})={\frac {1}{i\hbar }}[\Phi (f),\Phi (g)]} . === Weyl quantization of general functions === If f is a real-valued function, then its Weyl-map image Φ[f] is self-adjoint. If f is an element of Schwartz space, then Φ[f] is trace-class. More generally, Φ[f] is a densely defined unbounded operator. The map Φ[f] is one-to-one on the Schwartz space (as a subspace of the square-integrable functions). == See also == == References == Hall, Brian C. (2013), Quantum Theory for Mathematicians, Graduate Texts in Mathematics, vol. 267, Springer, Bibcode:2013qtm..book.....H, ISBN 978-1461471158 == Further reading == Case, William B. (October 2008). 
"Wigner functions and Weyl transforms for pedestrians". American Journal of Physics. 76 (10): 937–946. Bibcode:2008AmJPh..76..937C. doi:10.1119/1.2957889. (Sections I to IV of this article provide an overview over the Wigner–Weyl transform, the Wigner quasiprobability distribution, the phase space formulation of quantum mechanics and the example of the quantum harmonic oscillator.) "Weyl quantization", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Terence Tao's 2012 notes on Weyl ordering
Wikipedia/Wigner–Weyl_transform
The stress–energy tensor, sometimes called the stress–energy–momentum tensor or the energy–momentum tensor, is a tensor physical quantity that describes the density and flux of energy and momentum in spacetime, generalizing the stress tensor of Newtonian physics. It is an attribute of matter, radiation, and non-gravitational force fields. This density and flux of energy and momentum are the sources of the gravitational field in the Einstein field equations of general relativity, just as mass density is the source of such a field in Newtonian gravity. == Definition == The stress–energy tensor involves the use of superscripted variables (not exponents; see Tensor index notation and Einstein summation notation). If Cartesian coordinates in SI units are used, then the components of the position four-vector x are given by: [ x0, x1, x2, x3 ]. In traditional Cartesian coordinates these are instead customarily written [ t, x, y, z ], where t is coordinate time, and x, y, and z are coordinate distances. The stress–energy tensor is defined as the tensor Tαβ of order two that gives the flux of the αth component of the momentum vector across a surface with constant xβ coordinate. In the theory of relativity, this momentum vector is taken as the four-momentum. In general relativity, the stress–energy tensor is symmetric, T α β = T β α . {\displaystyle T^{\alpha \beta }=T^{\beta \alpha }.} In some alternative theories like Einstein–Cartan theory, the stress–energy tensor may not be perfectly symmetric because of a nonzero spin tensor, which geometrically corresponds to a nonzero torsion tensor. == Components == Because the stress–energy tensor is of order 2, its components can be displayed in 4 × 4 matrix form: T μ ν = ( T 00 T 01 T 02 T 03 T 10 T 11 T 12 T 13 T 20 T 21 T 22 T 23 T 30 T 31 T 32 T 33 ) , {\displaystyle T^{\mu \nu }={\begin{pmatrix}T^{00}&T^{01}&T^{02}&T^{03}\\T^{10}&T^{11}&T^{12}&T^{13}\\T^{20}&T^{21}&T^{22}&T^{23}\\T^{30}&T^{31}&T^{32}&T^{33}\end{pmatrix}}\,,} where the indices μ and ν take on the values 0, 1, 2, 3. In the following, k and ℓ range from 1 through 3: In solid state physics and fluid mechanics, the stress tensor is defined to be the spatial components of the stress–energy tensor in the proper frame of reference. In other words, the stress–energy tensor in engineering differs from the relativistic stress–energy tensor by a momentum-convective term. === Covariant and mixed forms === Most of this article works with the contravariant form, Tμν of the stress–energy tensor. However, it is often convenient to work with the covariant form, T μ ν = T α β g α μ g β ν , {\displaystyle T_{\mu \nu }=T^{\alpha \beta }g_{\alpha \mu }g_{\beta \nu },} or the mixed form, T μ ν = T μ α g α ν . {\displaystyle T^{\mu }{}_{\nu }=T^{\mu \alpha }g_{\alpha \nu }.} This article uses the spacelike sign convention (− + + +) for the metric signature. == Conservation law == === In special relativity === The stress–energy tensor is the conserved Noether current associated with spacetime translations. The divergence of the non-gravitational stress–energy is zero. In other words, non-gravitational energy and momentum are conserved, 0 = T μ ν ; ν ≡ ∇ ν T μ ν . {\displaystyle 0=T^{\mu \nu }{}_{;\nu }\ \equiv \ \nabla _{\nu }T^{\mu \nu }{}~.} When gravity is negligible and using a Cartesian coordinate system for spacetime, this may be expressed in terms of partial derivatives as 0 = T μ ν , ν ≡ ∂ ν T μ ν . 
{\displaystyle 0=T^{\mu \nu }{}_{,\nu }\ \equiv \ \partial _{\nu }T^{\mu \nu }~.} The integral form of the non-covariant formulation is 0 = ∫ ∂ N T μ ν d 3 s ν {\displaystyle 0=\int _{\partial N}T^{\mu \nu }\mathrm {d} ^{3}s_{\nu }} where N is any compact four-dimensional region of spacetime; ∂ N {\textstyle \partial N} is its boundary, a three-dimensional hypersurface; and d 3 s ν {\textstyle \mathrm {d} ^{3}s_{\nu }} is an element of the boundary regarded as the outward pointing normal. In flat spacetime and using Cartesian coordinates, if one combines this with the symmetry of the stress–energy tensor, one can show that angular momentum is also conserved: 0 = ( x α T μ ν − x μ T α ν ) , ν . {\displaystyle 0=(x^{\alpha }T^{\mu \nu }-x^{\mu }T^{\alpha \nu })_{,\nu }\,.} === In general relativity === When gravity is non-negligible or when using arbitrary coordinate systems, the divergence of the stress–energy still vanishes. But in this case, a coordinate-free definition of the divergence is used which incorporates the covariant derivative 0 = div ⁡ T = T μ ν ; ν = ∇ ν T μ ν = T μ ν , ν + Γ μ σ ν T σ ν + Γ ν σ ν T μ σ {\displaystyle 0=\operatorname {div} T=T^{\mu \nu }{}_{;\nu }=\nabla _{\nu }T^{\mu \nu }=T^{\mu \nu }{}_{,\nu }+\Gamma ^{\mu }{}_{\sigma \nu }T^{\sigma \nu }+\Gamma ^{\nu }{}_{\sigma \nu }T^{\mu \sigma }} where Γ μ σ ν {\textstyle \Gamma ^{\mu }{}_{\sigma \nu }} is the Christoffel symbol, which is the gravitational force field. Consequently, if ξ μ {\textstyle \xi ^{\mu }} is any Killing vector field, then the conservation law associated with the symmetry generated by the Killing vector field may be expressed as 0 = ∇ ν ( ξ μ T ν μ ) = 1 − g ∂ ν ( − g ξ μ T μ ν ) {\displaystyle 0=\nabla _{\nu }\left(\xi ^{\mu }T^{\nu }{}_{\mu }\right)={\frac {1}{\sqrt {-g}}}\partial _{\nu }\left({\sqrt {-g}}\ \xi ^{\mu }T_{\mu }^{\nu }\right)} The integral form of this is 0 = ∫ ∂ N ξ μ T ν μ − g d 3 s ν . {\displaystyle 0=\int _{\partial N}\xi ^{\mu }T^{\nu }{}_{\mu }{\sqrt {-g}}\ \mathrm {d} ^{3}s_{\nu }\,.} == In special relativity == In special relativity, the stress–energy tensor contains information about the energy and momentum densities of a given system, in addition to the momentum and energy flux densities. Given a Lagrangian density L {\textstyle {\mathcal {L}}} that is a function of a set of fields ϕ α {\textstyle \phi _{\alpha }} and their derivatives, but explicitly not of any of the spacetime coordinates, we can construct the canonical stress–energy tensor by looking at the total derivative with respect to one of the generalized coordinates of the system. 
So, with our condition ∂ L ∂ x ν = 0 {\displaystyle {\frac {\partial {\mathcal {L}}}{\partial x^{\nu }}}=0} By using the chain rule, we then have d L d x ν = d ν L = ∂ L ∂ ( ∂ μ ϕ α ) ∂ ( ∂ μ ϕ α ) ∂ x ν + ∂ L ∂ ϕ α ∂ ϕ α ∂ x ν {\displaystyle {\frac {d{\mathcal {L}}}{dx^{\nu }}}=d_{\nu }{\mathcal {L}}={\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi _{\alpha })}}{\frac {\partial (\partial _{\mu }\phi _{\alpha })}{\partial x^{\nu }}}+{\frac {\partial {\mathcal {L}}}{\partial \phi _{\alpha }}}{\frac {\partial \phi _{\alpha }}{\partial x^{\nu }}}} Written in useful shorthand, d ν L = ∂ L ∂ ( ∂ μ ϕ α ) ∂ ν ∂ μ ϕ α + ∂ L ∂ ϕ α ∂ ν ϕ α {\displaystyle d_{\nu }{\mathcal {L}}={\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi _{\alpha })}}\partial _{\nu }\partial _{\mu }\phi _{\alpha }+{\frac {\partial {\mathcal {L}}}{\partial \phi _{\alpha }}}\partial _{\nu }\phi _{\alpha }} Then, we can use the Euler–Lagrange Equation: ∂ μ ( ∂ L ∂ ( ∂ μ ϕ α ) ) = ∂ L ∂ ϕ α {\displaystyle \partial _{\mu }\left({\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi _{\alpha })}}\right)={\frac {\partial {\mathcal {L}}}{\partial \phi _{\alpha }}}} And then use the fact that partial derivatives commute so that we now have d ν L = ∂ L ∂ ( ∂ μ ϕ α ) ∂ μ ∂ ν ϕ α + ∂ μ ( ∂ L ∂ ( ∂ μ ϕ α ) ) ∂ ν ϕ α {\displaystyle d_{\nu }{\mathcal {L}}={\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi _{\alpha })}}\partial _{\mu }\partial _{\nu }\phi _{\alpha }+\partial _{\mu }\left({\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi _{\alpha })}}\right)\partial _{\nu }\phi _{\alpha }} We can recognize the right hand side as a product rule. Writing it as the derivative of a product of functions tells us that d ν L = ∂ μ [ ∂ L ∂ ( ∂ μ ϕ α ) ∂ ν ϕ α ] {\displaystyle d_{\nu }{\mathcal {L}}=\partial _{\mu }\left[{\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi _{\alpha })}}\partial _{\nu }\phi _{\alpha }\right]} Now, in flat space, one can write d ν L = ∂ μ [ δ ν μ L ] {\textstyle d_{\nu }{\mathcal {L}}=\partial _{\mu }[\delta _{\nu }^{\mu }{\mathcal {L}}]} . Doing this and moving it to the other side of the equation tells us that ∂ μ [ ∂ L ∂ ( ∂ μ ϕ α ) ∂ ν ϕ α ] − ∂ μ ( δ ν μ L ) = 0 {\displaystyle \partial _{\mu }\left[{\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi _{\alpha })}}\partial _{\nu }\phi _{\alpha }\right]-\partial _{\mu }\left(\delta _{\nu }^{\mu }{\mathcal {L}}\right)=0} And upon regrouping terms, ∂ μ [ ∂ L ∂ ( ∂ μ ϕ α ) ∂ ν ϕ α − δ ν μ L ] = 0 {\displaystyle \partial _{\mu }\left[{\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi _{\alpha })}}\partial _{\nu }\phi _{\alpha }-\delta _{\nu }^{\mu }{\mathcal {L}}\right]=0} This is to say that the divergence of the tensor in the brackets is 0. Indeed, with this, we define the stress–energy tensor: T μ ν ≡ ∂ L ∂ ( ∂ μ ϕ α ) ∂ ν ϕ α − δ ν μ L {\displaystyle T^{\mu }{}_{\nu }\equiv {\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi _{\alpha })}}\partial _{\nu }\phi _{\alpha }-\delta _{\nu }^{\mu }{\mathcal {L}}} By construction it has the property that ∂ μ T μ ν = 0 {\displaystyle \partial _{\mu }T^{\mu }{}_{\nu }=0} Note that this divergenceless property of this tensor is equivalent to four continuity equations. That is, fields have at least four sets of quantities that obey the continuity equation. 
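A minimal symbolic check of this construction (an illustration, not part of the article) applies the definition to a free real scalar field, with the derivatives ∂_μφ represented by independent symbols; it shows that the T⁰₀ component reproduces the familiar energy (Hamiltonian) density, anticipating the remark in the next paragraph.

```python
import sympy as sp

phi, m = sp.symbols('phi m')
d = sp.symbols('d0 d1 d2 d3')          # d[mu] stands for the partial derivative d_mu phi
eta = sp.diag(1, -1, -1, -1)           # flat metric, signature (+,-,-,-) (an assumed convention)

# Free real scalar field: L = 1/2 eta^{mu nu} d_mu phi d_nu phi - 1/2 m^2 phi^2
L = sp.Rational(1, 2) * sum(eta[a, b] * d[a] * d[b] for a in range(4) for b in range(4)) \
    - sp.Rational(1, 2) * m**2 * phi**2

# Canonical tensor: T^mu_nu = dL/d(d_mu phi) * d_nu phi - delta^mu_nu L
T = sp.zeros(4, 4)
for mu in range(4):
    for nu in range(4):
        T[mu, nu] = sp.diff(L, d[mu]) * d[nu] - (1 if mu == nu else 0) * L

# T^0_0 reproduces the energy density 1/2 (phidot^2 + |grad phi|^2 + m^2 phi^2):
energy_density = sp.Rational(1, 2) * (d[0]**2 + d[1]**2 + d[2]**2 + d[3]**2) \
    + sp.Rational(1, 2) * m**2 * phi**2
print(sp.simplify(T[0, 0] - energy_density))          # -> 0
# Trace T^mu_mu, with delta^mu_mu = 4 as in the article:
print(sp.simplify(sum(T[mu, mu] for mu in range(4))))
```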
As an example, it can be seen that T 0 0 {\textstyle T^{0}{}_{0}} is the energy density of the system and that it is thus possible to obtain the Hamiltonian density from the stress–energy tensor. Indeed, since this is the case, observing that ∂ μ T μ 0 = 0 {\textstyle \partial _{\mu }T^{\mu }{}_{0}=0} , we then have ∂ H ∂ t + ∇ ⋅ ( ∂ L ∂ ∇ ϕ α ϕ ˙ α ) = 0 {\displaystyle {\frac {\partial {\mathcal {H}}}{\partial t}}+\nabla \cdot \left({\frac {\partial {\mathcal {L}}}{\partial \nabla \phi _{\alpha }}}{\dot {\phi }}_{\alpha }\right)=0} We can then conclude that the terms of ∂ L ∂ ∇ ϕ α ϕ ˙ α {\textstyle {\frac {\partial {\mathcal {L}}}{\partial \nabla \phi _{\alpha }}}{\dot {\phi }}_{\alpha }} represent the energy flux density of the system. === Trace === The trace of the stress–energy tensor is defined to be ⁠ T μ μ {\displaystyle T^{\mu }{}_{\mu }} ⁠, so T μ μ = ∂ L ∂ ( ∂ μ ϕ α ) ∂ μ ϕ α − δ μ μ L . {\displaystyle T^{\mu }{}_{\mu }={\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi _{\alpha })}}\partial _{\mu }\phi _{\alpha }-\delta _{\mu }^{\mu }{\mathcal {L}}.} Since ⁠ δ μ μ = 4 {\displaystyle \delta _{\mu }^{\mu }=4} ⁠, T μ μ = ∂ L ∂ ( ∂ μ ϕ α ) ∂ μ ϕ α − 4 L . {\displaystyle T^{\mu }{}_{\mu }={\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi _{\alpha })}}\partial _{\mu }\phi _{\alpha }-4{\mathcal {L}}.} == In general relativity == In general relativity, the symmetric stress–energy tensor acts as the source of spacetime curvature, and is the current density associated with gauge transformations of gravity which are general curvilinear coordinate transformations. (If there is torsion, then the tensor is no longer symmetric. This corresponds to the case with a nonzero spin tensor in Einstein–Cartan gravity theory.) In general relativity, the partial derivatives used in special relativity are replaced by covariant derivatives. What this means is that the continuity equation no longer implies that the non-gravitational energy and momentum expressed by the tensor are absolutely conserved, i.e. the gravitational field can do work on matter and vice versa. In the classical limit of Newtonian gravity, this has a simple interpretation: kinetic energy is being exchanged with gravitational potential energy, which is not included in the tensor, and momentum is being transferred through the field to other bodies. In general relativity the Landau–Lifshitz pseudotensor is a unique way to define the gravitational field energy and momentum densities. Any such stress–energy pseudotensor can be made to vanish locally by a coordinate transformation. In curved spacetime, the spacelike integral now depends on the spacelike slice, in general. There is in fact no way to define a global energy–momentum vector in a general curved spacetime. 
=== Einstein field equations === In general relativity, the stress–energy tensor is studied in the context of the Einstein field equations which are often written as G μ ν + Λ g μ ν = κ T μ ν , {\displaystyle G_{\mu \nu }+\Lambda g_{\mu \nu }=\kappa T_{\mu \nu },} where G μ ν = R μ ν − 1 2 R g μ ν {\textstyle G_{\mu \nu }=R_{\mu \nu }-{\tfrac {1}{2}}R\,g_{\mu \nu }} is the Einstein tensor, R μ ν {\textstyle R_{\mu \nu }} is the Ricci tensor, R = g α β R α β {\textstyle R=g^{\alpha \beta }R_{\alpha \beta }} is the scalar curvature, g μ ν {\textstyle g_{\mu \nu }\,} is the metric tensor, Λ is the cosmological constant (negligible at the scale of a galaxy or smaller), and κ = 8 π G / c 4 {\textstyle \kappa =8\pi G/c^{4}} is the Einstein gravitational constant. == Stress–energy in special situations == === Isolated particle === In special relativity, the stress–energy of a non-interacting particle with rest mass m and trajectory x p ( t ) {\textstyle \mathbf {x} _{\text{p}}(t)} is: T α β ( x , t ) = m v α ( t ) v β ( t ) 1 − ( v / c ) 2 δ ( x − x p ( t ) ) = E c 2 v α ( t ) v β ( t ) δ ( x − x p ( t ) ) {\displaystyle T^{\alpha \beta }(\mathbf {x} ,t)={\frac {m\,v^{\alpha }(t)v^{\beta }(t)}{\sqrt {1-(v/c)^{2}}}}\;\,\delta \left(\mathbf {x} -\mathbf {x} _{\text{p}}(t)\right)={\frac {E}{c^{2}}}\;v^{\alpha }(t)v^{\beta }(t)\;\,\delta (\mathbf {x} -\mathbf {x} _{\text{p}}(t))} where v α {\textstyle v^{\alpha }} is the velocity vector (which should not be confused with four-velocity, since it is missing a γ {\textstyle \gamma } ) v α = ( 1 , d x p d t ( t ) ) , {\displaystyle v^{\alpha }=\left(1,{\frac {d\mathbf {x} _{\text{p}}}{dt}}(t)\right)\,,} δ {\textstyle \delta } is the Dirac delta function and E = p 2 c 2 + m 2 c 4 {\textstyle E={\sqrt {p^{2}c^{2}+m^{2}c^{4}}}} is the energy of the particle. Written in the language of classical physics, the stress–energy tensor would be (relativistic mass, momentum, the dyadic product of momentum and velocity) ( E c 2 , p , p v ) . {\displaystyle \left({\frac {E}{c^{2}}},\,\mathbf {p} ,\,\mathbf {p} \,\mathbf {v} \right)\,.} === Stress–energy of a fluid in equilibrium === For a perfect fluid in thermodynamic equilibrium, the stress–energy tensor takes on a particularly simple form T α β = ( ρ + p c 2 ) u α u β + p g α β {\displaystyle T^{\alpha \beta }\,=\left(\rho +{p \over c^{2}}\right)u^{\alpha }u^{\beta }+pg^{\alpha \beta }} where ρ {\textstyle \rho } is the mass–energy density (kilograms per cubic meter), p {\textstyle p} is the hydrostatic pressure (pascals), u α {\textstyle u^{\alpha }} is the fluid's four-velocity, and g α β {\textstyle g^{\alpha \beta }} is the matrix inverse of the metric tensor. Therefore, the trace is given by T α α = g α β T β α = 3 p − ρ c 2 . {\displaystyle T^{\alpha }{}_{\,\alpha }=g_{\alpha \beta }T^{\beta \alpha }=3p-\rho c^{2}\,.} The four-velocity satisfies u α u β g α β = − c 2 . {\displaystyle u^{\alpha }u^{\beta }g_{\alpha \beta }=-c^{2}\,.} In an inertial frame of reference comoving with the fluid, better known as the fluid's proper frame of reference, the four-velocity is u α = ( 1 , 0 , 0 , 0 ) , {\displaystyle u^{\alpha }=(1,0,0,0)\,,} the matrix inverse of the metric tensor is simply g α β = ( − 1 c 2 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ) {\displaystyle g^{\alpha \beta }\,=\left({\begin{matrix}-{\frac {1}{c^{2}}}&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{matrix}}\right)} and the stress–energy tensor is a diagonal matrix T α β = ( ρ 0 0 0 0 p 0 0 0 0 p 0 0 0 0 p ) . 
{\displaystyle T^{\alpha \beta }=\left({\begin{matrix}\rho &0&0&0\\0&p&0&0\\0&0&p&0\\0&0&0&p\end{matrix}}\right).} === Electromagnetic stress–energy tensor === The Hilbert stress–energy tensor of a source-free electromagnetic field is T μ ν = 1 μ 0 ( F μ α g α β F ν β − 1 4 g μ ν F δ γ F δ γ ) {\displaystyle T^{\mu \nu }={\frac {1}{\mu _{0}}}\left(F^{\mu \alpha }g_{\alpha \beta }F^{\nu \beta }-{\frac {1}{4}}g^{\mu \nu }F_{\delta \gamma }F^{\delta \gamma }\right)} where F μ ν {\textstyle F_{\mu \nu }} is the electromagnetic field tensor. === Scalar field === The stress–energy tensor for a complex scalar field ϕ {\textstyle \phi } that satisfies the Klein–Gordon equation is T μ ν = ℏ 2 m ( g μ α g ν β + g μ β g ν α − g μ ν g α β ) ∂ α ϕ ¯ ∂ β ϕ − g μ ν m c 2 ϕ ¯ ϕ , {\displaystyle T^{\mu \nu }={\frac {\hbar ^{2}}{m}}\left(g^{\mu \alpha }g^{\nu \beta }+g^{\mu \beta }g^{\nu \alpha }-g^{\mu \nu }g^{\alpha \beta }\right)\partial _{\alpha }{\bar {\phi }}\partial _{\beta }\phi -g^{\mu \nu }mc^{2}{\bar {\phi }}\phi ,} and when the metric is flat (Minkowski in Cartesian coordinates) its components work out to be: T 00 = ℏ 2 m c 4 ( ∂ 0 ϕ ¯ ∂ 0 ϕ + c 2 ∂ k ϕ ¯ ∂ k ϕ ) + m ϕ ¯ ϕ , T 0 i = T i 0 = − ℏ 2 m c 2 ( ∂ 0 ϕ ¯ ∂ i ϕ + ∂ i ϕ ¯ ∂ 0 ϕ ) , a n d T i j = ℏ 2 m ( ∂ i ϕ ¯ ∂ j ϕ + ∂ j ϕ ¯ ∂ i ϕ ) − δ i j ( ℏ 2 m η α β ∂ α ϕ ¯ ∂ β ϕ + m c 2 ϕ ¯ ϕ ) . {\displaystyle {\begin{aligned}T^{00}&={\frac {\hbar ^{2}}{mc^{4}}}\left(\partial _{0}{\bar {\phi }}\partial _{0}\phi +c^{2}\partial _{k}{\bar {\phi }}\partial _{k}\phi \right)+m{\bar {\phi }}\phi ,\\T^{0i}=T^{i0}&=-{\frac {\hbar ^{2}}{mc^{2}}}\left(\partial _{0}{\bar {\phi }}\partial _{i}\phi +\partial _{i}{\bar {\phi }}\partial _{0}\phi \right),\ \mathrm {and} \\T^{ij}&={\frac {\hbar ^{2}}{m}}\left(\partial _{i}{\bar {\phi }}\partial _{j}\phi +\partial _{j}{\bar {\phi }}\partial _{i}\phi \right)-\delta _{ij}\left({\frac {\hbar ^{2}}{m}}\eta ^{\alpha \beta }\partial _{\alpha }{\bar {\phi }}\partial _{\beta }\phi +mc^{2}{\bar {\phi }}\phi \right).\end{aligned}}} == Variant definitions of stress–energy == There are a number of inequivalent definitions of non-gravitational stress–energy: === Hilbert stress–energy tensor === The Hilbert stress–energy tensor is defined as the functional derivative T μ ν = − 2 − g δ S m a t t e r δ g μ ν = − 2 − g ∂ ( − g L m a t t e r ) ∂ g μ ν = − 2 ∂ L m a t t e r ∂ g μ ν + g μ ν L m a t t e r , {\displaystyle T_{\mu \nu }={\frac {-2}{\sqrt {-g}}}{\frac {\delta S_{\mathrm {matter} }}{\delta g^{\mu \nu }}}={\frac {-2}{\sqrt {-g}}}{\frac {\partial \left({\sqrt {-g}}{\mathcal {L}}_{\mathrm {matter} }\right)}{\partial g^{\mu \nu }}}=-2{\frac {\partial {\mathcal {L}}_{\mathrm {matter} }}{\partial g^{\mu \nu }}}+g_{\mu \nu }{\mathcal {L}}_{\mathrm {matter} },} where S m a t t e r {\textstyle S_{\mathrm {matter} }} is the nongravitational part of the action, L m a t t e r {\textstyle {\mathcal {L}}_{\mathrm {matter} }} is the nongravitational part of the Lagrangian density, and the Euler–Lagrange equation has been used. This is symmetric and gauge-invariant. See Einstein–Hilbert action for more information. === Canonical stress–energy tensor === Noether's theorem implies that there is a conserved current associated with translations through space and time; for details see the section above on the stress–energy tensor in special relativity. This is called the canonical stress–energy tensor. 
Generally, this is not symmetric and if we have some gauge theory, it may not be gauge invariant because space-dependent gauge transformations do not commute with spatial translations. In general relativity, the translations are with respect to the coordinate system and as such, do not transform covariantly. See the section below on the gravitational stress–energy pseudotensor. === Belinfante–Rosenfeld stress–energy tensor === In the presence of spin or other intrinsic angular momentum, the canonical Noether stress–energy tensor fails to be symmetric. The Belinfante–Rosenfeld stress–energy tensor is constructed from the canonical stress–energy tensor and the spin current in such a way as to be symmetric and still conserved. In general relativity, this modified tensor agrees with the Hilbert stress–energy tensor. == Gravitational stress–energy == By the equivalence principle, gravitational stress–energy will always vanish locally at any chosen point in some chosen frame, therefore gravitational stress–energy cannot be expressed as a non-zero tensor; instead we have to use a pseudotensor. In general relativity, there are many possible distinct definitions of the gravitational stress–energy–momentum pseudotensor. These include the Einstein pseudotensor and the Landau–Lifshitz pseudotensor. The Landau–Lifshitz pseudotensor can be reduced to zero at any event in spacetime by choosing an appropriate coordinate system. == See also == == Notes == == References == == Further reading == Wyss, Walter (14 July 2005). "The energy–momentum tensor in classical field theory" (PDF). Universal Journal of Physics and Applications. Old and New Concepts of Physics [prior journal name]. II (3–4): 295–310. ISSN 2331-6543. ... classical field theory and in particular in the role that a divergence term plays in a lagrangian ... == External links == Lecture, Stephan Waner Caltech Tutorial on Relativity — A simple discussion of the relation between the stress–energy tensor of general relativity and the metric
Wikipedia/Stress-energy_tensor
In physics, specifically general relativity, the Mathisson–Papapetrou–Dixon equations describe the motion of a massive spinning body moving in a gravitational field. Other equations with similar names and mathematical forms are the Mathisson–Papapetrou equations and Papapetrou–Dixon equations. All three sets of equations describe the same physics. These equations are named after Myron Mathisson, William Graham Dixon, and Achilles Papapetrou, who worked on them. Throughout, this article uses the natural units c = G = 1, and tensor index notation. == Mathisson–Papapetrou–Dixon equations == The Mathisson–Papapetrou–Dixon (MPD) equations for a mass m {\displaystyle m} spinning body are D k ν D τ + 1 2 S λ μ R λ μ ν ρ V ρ = 0 , D S λ μ D τ + V λ k μ − V μ k λ = 0. {\displaystyle {\begin{aligned}{\frac {Dk_{\nu }}{D\tau }}+{\frac {1}{2}}S^{\lambda \mu }R_{\lambda \mu \nu \rho }V^{\rho }&=0,\\{\frac {DS^{\lambda \mu }}{D\tau }}+V^{\lambda }k^{\mu }-V^{\mu }k^{\lambda }&=0.\end{aligned}}} Here τ {\displaystyle \tau } is the proper time along the trajectory, k ν {\displaystyle k_{\nu }} is the body's four-momentum k ν = ∫ t = const T 0 ν g d 3 x , {\displaystyle k_{\nu }=\int _{t={\text{const}}}{T^{0}}_{\nu }{\sqrt {g}}d^{3}x,} the vector V μ {\displaystyle V^{\mu }} is the four-velocity of some reference point X μ {\displaystyle X^{\mu }} in the body, and the skew-symmetric tensor S μ ν {\displaystyle S^{\mu \nu }} is the angular momentum S μ ν = ∫ t = const { ( x μ − X μ ) T 0 ν − ( x ν − X ν ) T 0 μ } g d 3 x {\displaystyle S^{\mu \nu }=\int _{t={\text{const}}}\left\{\left(x^{\mu }-X^{\mu }\right)T^{0\nu }-\left(x^{\nu }-X^{\nu }\right)T^{0\mu }\right\}{\sqrt {g}}d^{3}x} of the body about this point. In the time-slice integrals we are assuming that the body is compact enough that we can use flat coordinates within the body where the energy-momentum tensor T μ ν {\displaystyle T^{\mu \nu }} is non-zero. As they stand, there are only ten equations to determine thirteen quantities. These quantities are the six components of S λ μ {\displaystyle S^{\lambda \mu }} , the four components of k ν {\displaystyle k_{\nu }} and the three independent components of V μ {\displaystyle V^{\mu }} . The equations must therefore be supplemented by three additional constraints which serve to determine which point in the body has velocity V μ {\displaystyle V^{\mu }} . Mathison and Pirani originally chose to impose the condition V μ S μ ν = 0 {\displaystyle V^{\mu }S_{\mu \nu }=0} which, although involving four components, contains only three constraints because V μ S μ ν V ν {\displaystyle V^{\mu }S_{\mu \nu }V^{\nu }} is identically zero. This condition, however, does not lead to a unique solution and can give rise to the mysterious "helical motions". The Tulczyjew–Dixon condition k μ S μ ν = 0 {\displaystyle k_{\mu }S^{\mu \nu }=0} does lead to a unique solution as it selects the reference point X μ {\displaystyle X^{\mu }} to be the body's center of mass in the frame in which its momentum is ( k 0 , k 1 , k 2 , k 3 ) = ( m , 0 , 0 , 0 ) {\displaystyle (k_{0},k_{1},k_{2},k_{3})=(m,0,0,0)} . 
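For numerical work the MPD equations are usually integrated as a coupled system for k_ν and S^λμ along the worldline. The sketch below is only structural: it encodes the two right-hand sides, while the Riemann tensor, the closure relating V^μ to k and S, the raising and lowering of indices, and the Christoffel transport terms hidden in D/Dτ are all placeholder assumptions (here flat spacetime and the trivial closure V^μ = k^μ/m), so it is not a ready-to-use integrator.

```python
import numpy as np

def mpd_rhs(k, S, V, riemann):
    """Right-hand sides of the MPD equations in a local frame.

    k: four-momentum k_nu, shape (4,);  S: spin tensor S^{lambda mu}, shape (4, 4);
    V: four-velocity V^rho of the reference point, shape (4,);
    riemann: R_{lambda mu nu rho}, shape (4, 4, 4, 4).
    The covariant derivatives D/Dtau are written as plain tau-derivatives,
    i.e. the Christoffel transport terms are omitted here (an assumption).
    """
    dk = -0.5 * np.einsum('lm,lmnr,r->n', S, riemann, V)   # Dk_nu/Dtau
    dS = -(np.outer(V, k) - np.outer(k, V))                # DS^{lambda mu}/Dtau
    return dk, dS

# Trivial illustration: flat spacetime (R = 0) with the closure V^mu = k^mu / m;
# the momentum is then constant and the spin tensor is transported unchanged.
m = 1.0
k = np.array([m, 0.0, 0.0, 0.0])
S = np.zeros((4, 4)); S[1, 2], S[2, 1] = 0.5, -0.5
V = k / m
R_flat = np.zeros((4, 4, 4, 4))
dk, dS = mpd_rhs(k, S, V, R_flat)
print(np.allclose(dk, 0), np.allclose(dS, 0))
```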
Accepting the Tulczyjew–Dixon condition k μ S μ ν = 0 {\displaystyle k_{\mu }S^{\mu \nu }=0} , we can manipulate the second of the MPD equations into the form D S λ μ D τ + 1 m 2 ( S λ ρ k μ D k ρ D τ + S ρ μ k λ D k ρ D τ ) = 0 , {\displaystyle {\frac {DS_{\lambda \mu }}{D\tau }}+{\frac {1}{m^{2}}}\left(S_{\lambda \rho }k_{\mu }{\frac {Dk^{\rho }}{D\tau }}+S_{\rho \mu }k_{\lambda }{\frac {Dk^{\rho }}{D\tau }}\right)=0,} This is a form of Fermi–Walker transport of the spin tensor along the trajectory – but one preserving orthogonality to the momentum vector k μ {\displaystyle k^{\mu }} rather than to the tangent vector V μ = d X μ / d τ {\displaystyle V^{\mu }=dX^{\mu }/d\tau } . Dixon calls this M-transport. == See also == Introduction to the mathematics of general relativity Geodesic equation Pauli–Lubanski pseudovector Test particle Relativistic angular momentum Center of mass (relativistic) == References == === Notes === === Selected papers === C. Chicone; B. Mashhoon; B. Punsly (2005). "Relativistic motion of spinning particles in a gravitational field". Physics Letters A. 343 (1–3): 1–7. arXiv:gr-qc/0504146. Bibcode:2005PhLA..343....1C. doi:10.1016/j.physleta.2005.05.072. hdl:10355/8357. S2CID 56132009. N. Messios (2007). "Spinning Particles in Spacetimes with Torsion". International Journal of Theoretical Physics. General Relativity and Gravitation. 46 (3). Springer: 562–575. Bibcode:2007IJTP...46..562M. doi:10.1007/s10773-006-9146-8. S2CID 119514028. D. Singh (2008). "An analytic perturbation approach for classical spinning particle dynamics". International Journal of Theoretical Physics. General Relativity and Gravitation. 40 (6). Springer: 1179–1192. arXiv:0706.0928. Bibcode:2008GReGr..40.1179S. doi:10.1007/s10714-007-0597-x. S2CID 7255389. L. F. O. Costa; J. Natário; M. Zilhão (2012). "Mathisson's helical motions demystified". AIP Conf. Proc. AIP Conference Proceedings. 1458: 367–370. arXiv:1206.7093. Bibcode:2012AIPC.1458..367C. doi:10.1063/1.4734436. S2CID 119306409. R. M. Plyatsko (1985). "Addition of the Pirani condition to the Mathisson-Papapetrou equations in a Schwarzschild field". Soviet Physics Journal. 28 (7). Springer: 601–604. Bibcode:1985SvPhJ..28..601P. doi:10.1007/BF00896195. S2CID 121704297. R.R. Lompay (2005). "Deriving Mathisson-Papapetrou equations from relativistic pseudomechanics". arXiv:gr-qc/0503054. R. Plyatsko (2011). "Can Mathisson-Papapetrou equations give clue to some problems in astrophysics?". arXiv:1110.2386 [gr-qc]. M. Leclerc (2005). "Mathisson-Papapetrou equations in metric and gauge theories of gravity in a Lagrangian formulation". Classical and Quantum Gravity. 22 (16): 3203–3221. arXiv:gr-qc/0505021. Bibcode:2005CQGra..22.3203L. doi:10.1088/0264-9381/22/16/006. S2CID 2569951. R. Plyatsko; O. Stefanyshyn; M. Fenyk (2011). "Mathisson-Papapetrou-Dixon equations in the Schwarzschild and Kerr backgrounds". Classical and Quantum Gravity. 28 (19): 195025. arXiv:1110.1967. Bibcode:2011CQGra..28s5025P. doi:10.1088/0264-9381/28/19/195025. S2CID 119213540. R. Plyatsko; O. Stefanyshyn (2008). "On common solutions of Mathisson equations under different conditions". arXiv:0803.0121. Bibcode:2008arXiv0803.0121P. {{cite journal}}: Cite journal requires |journal= (help) R. M. Plyatsko; A. L. Vynar; Ya. N. Pelekh (1985). "Conditions for the appearance of gravitational ultrarelativistic spin-orbital interaction". Soviet Physics Journal. 28 (10). Springer: 773–776. Bibcode:1985SvPhJ..28..773P. doi:10.1007/BF00897946. S2CID 119799125. K. Svirskas; K. 
Pyragas (1991). "The spherically-symmetrical trajectories of spin particles in the Schwarzschild field". Astrophysics and Space Science. 179 (2). Springer: 275–283. Bibcode:1991Ap&SS.179..275S. doi:10.1007/BF00646947. S2CID 120108333.
Wikipedia/Mathisson-Papapetrou-Dixon_equations
The universal wavefunction or the wavefunction of the universe is the wavefunction or quantum state of the entire universe. It is regarded as the basic physical entity in the many-worlds interpretation of quantum mechanics, and finds applications in quantum cosmology. It evolves deterministically according to a wave equation. The concept of universal wavefunction was introduced by Hugh Everett in his 1956 PhD thesis draft The Theory of the Universal Wave Function. It later received investigation from James Hartle and Stephen Hawking who derived the Hartle–Hawking solution to the Wheeler–DeWitt equation to explain the initial conditions of the Big Bang cosmology. == Role of observers == Hugh Everett's universal wavefunction supports the idea that observed and observer are all mixed together: If we try to limit the applicability so as to exclude the measuring apparatus, or in general systems of macroscopic size, we are faced with the difficulty of sharply defining the region of validity. For what n might a group of n particles be construed as forming a measuring device so that the quantum description fails? And to draw the line at human or animal observers, i.e., to assume that all mechanical apparata obey the usual laws, but that they are not valid for living observers, does violence to the so-called principle of psycho-physical parallelism. Eugene Wigner and John Archibald Wheeler take issue with this stance. Wigner wrote: The state vector of my mind, even if it were completely known, would not give its impressions. A translation from state vector to impressions would be necessary; without such a translation the state vector would be meaningless. Wheeler wrote: One is led to recognize that a wave function 'encompassing the whole universe' is an idealization, formalistically perhaps a convenient idealization, but an idealization so strained that it can be used only in part in any forecast of correlations that makes physical sense. For making sense it seems essential most of all to 'leave the observer out of the wave function'. == See also == Heisenberg cut == References ==
Wikipedia/Universal_wavefunction
Quantum characteristics are phase-space trajectories that arise in the phase space formulation of quantum mechanics through the Wigner transform of Heisenberg operators of canonical coordinates and momenta. These trajectories obey the Hamilton equations in quantum form and play the role of characteristics in terms of which time-dependent Weyl's symbols of quantum operators can be expressed. In the classical limit, quantum characteristics reduce to classical trajectories. The knowledge of quantum characteristics is equivalent to the knowledge of quantum dynamics. == Weyl–Wigner association rule == In Hamiltonian dynamics, classical systems with n {\displaystyle n} degrees of freedom are described by 2 n {\displaystyle 2n} canonical coordinates and momenta ξ i = ( x 1 , … , x n , p 1 , … , p n ) ∈ R 2 n , {\displaystyle \xi ^{i}=(x^{1},\ldots ,x^{n},p_{1},\ldots ,p_{n})\in \mathbb {R} ^{2n},} that form a coordinate system in the phase space. These variables satisfy the Poisson bracket relations { ξ k , ξ l } = − I k l . {\displaystyle \{\xi ^{k},\xi ^{l}\}=-I^{kl}.} The skew-symmetric matrix I k l {\displaystyle I^{kl}} , ‖ I ‖ = ‖ 0 − E n E n 0 ‖ , {\displaystyle \left\|I\right\|={\begin{Vmatrix}0&-E_{n}\\E_{n}&0\end{Vmatrix}},} where E n {\displaystyle E_{n}} is the n × n {\displaystyle n\times n} identity matrix, defines nondegenerate 2-form in the phase space. The phase space acquires thereby the structure of a symplectic manifold. The phase space is not metric space, so distance between two points is not defined. The Poisson bracket of two functions can be interpreted as the oriented area of a parallelogram whose adjacent sides are gradients of these functions. Rotations in Euclidean space leave the distance between two points invariant. Canonical transformations in symplectic manifold leave the areas invariant. In quantum mechanics, the canonical variables ξ {\displaystyle \xi } are associated to operators of canonical coordinates and momenta ξ ^ i = ( x ^ 1 , … , x ^ n , p ^ 1 , … , p ^ n ) ∈ Op ⁡ ( L 2 ( R n ) ) . {\displaystyle {\hat {\xi }}^{i}=({\hat {x}}^{1},\ldots ,{\hat {x}}^{n},{\hat {p}}_{1},\ldots ,{\hat {p}}_{n})\in \operatorname {Op} (L^{2}(\mathbb {R} ^{n})).} These operators act in Hilbert space and obey commutation relations [ ξ ^ k , ξ ^ l ] = − i ℏ I k l . {\displaystyle [{\hat {\xi }}^{k},{\hat {\xi }}^{l}]=-i\hbar I^{kl}.} Weyl’s association rule extends the correspondence ξ i → ξ ^ i {\displaystyle \xi ^{i}\rightarrow {\hat {\xi }}^{i}} to arbitrary phase-space functions and operators. === Taylor expansion === A one-sided association rule f ( ξ ) → f ^ {\displaystyle f(\xi )\to {\hat {f}}} was formulated by Weyl initially with the help of Taylor expansion of functions of operators of the canonical variables f ^ = f ( ξ ^ ) ≡ ∑ s = 0 ∞ 1 s ! ∂ s f ( 0 ) ∂ ξ i 1 … ∂ ξ i s ξ ^ i 1 … ξ ^ i s . {\displaystyle {\hat {f}}=f({\hat {\xi }})\equiv \sum _{s=0}^{\infty }{\frac {1}{s!}}{\frac {\partial ^{s}f(0)}{\partial \xi ^{i_{1}}\ldots \partial \xi ^{i_{s}}}}{\hat {\xi }}^{i_{1}}\ldots {\hat {\xi }}^{i_{s}}.} The operators ξ ^ {\displaystyle {\hat {\xi }}} do not commute, so the Taylor expansion is not defined uniquely. The above prescription uses the symmetrized products of the operators. The real functions correspond to the Hermitian operators. The function f ( ξ ) {\displaystyle f(\xi )} is called Weyl's symbol of operator f ^ {\displaystyle {\hat {f}}} . 
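The symmetrized products entering this prescription can be generated explicitly. The short sketch below (the helper name weyl_ordered is an invented convenience) uses noncommutative SymPy symbols to build the Weyl-ordered counterpart of the monomial x^m p^n by averaging over all distinct orderings of the factors; for m = n = 1 it reproduces the symmetric combination of the two operator orderings.

```python
from itertools import permutations
from sympy import Symbol

# Noncommutative placeholders for the operators x-hat and p-hat.
X = Symbol('X', commutative=False)
P = Symbol('P', commutative=False)

def weyl_ordered(m, n):
    """Weyl (fully symmetrized) image of the monomial x^m p^n: the average
    over all distinct arrangements of m copies of X and n copies of P."""
    words = set(permutations(('X',) * m + ('P',) * n))
    total = 0
    for word in words:
        prod = 1
        for letter in word:
            prod = prod * (X if letter == 'X' else P)
        total += prod
    return total / len(words)

print(weyl_ordered(1, 1))   # X*P/2 + P*X/2
print(weyl_ordered(2, 1))   # the three orderings of x^2 p, each with weight 1/3
```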
Under the reverse association f ( ξ ) ← f ^ {\displaystyle f(\xi )\leftarrow {\hat {f}}} , the density matrix turns to the Wigner function. Wigner functions have numerous applications in quantum many-body physics, kinetic theory, collision theory, quantum chemistry. A refined version of the Weyl–Wigner association rule was proposed by Groenewold and Stratonovich. === Operator basis === The set of operators acting in the Hilbert space is closed under multiplication of operators by c {\displaystyle c} -numbers and summation. Such a set constitutes a vector space V {\displaystyle \mathbb {V} } . The association rule formulated with the use of the Taylor expansion preserves operations on the operators. The correspondence can be illustrated with the following diagram: f ( ξ ) ⟷ f ^ g ( ξ ) ⟷ g ^ c × f ( ξ ) ⟷ c × f ^ f ( ξ ) + g ( ξ ) ⟷ f ^ + g ^ } vector space V f ( ξ ) ⋆ g ( ξ ) ⟷ f ^ g ^ } algebra {\displaystyle \left.{\begin{array}{c}{\begin{array}{c}\left.{\begin{array}{ccc}f(\xi )&\longleftrightarrow &{\hat {f}}\\g(\xi )&\longleftrightarrow &{\hat {g}}\\c\times f(\xi )&\longleftrightarrow &c\times {\hat {f}}\\f(\xi )+g(\xi )&\longleftrightarrow &{\hat {f}}+{\hat {g}}\end{array}}\right\}\;{\text{vector space}}\;\;\mathbb {V} \end{array}}\\{\begin{array}{ccc}{f(\xi )\star g(\xi )}&{\longleftrightarrow }&\;\;{{\hat {f}}{\hat {g}}}\end{array}}\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\end{array}}\right\}{\text{algebra}}} Here, f ( ξ ) {\displaystyle f(\xi )} and g ( ξ ) {\displaystyle g(\xi )} are functions and f ^ {\displaystyle {\hat {f}}} and g ^ {\displaystyle {\hat {g}}} are the associated operators. The elements of basis of V {\displaystyle \mathbb {V} } are labelled by canonical variables ξ i ∈ ( − ∞ , + ∞ ) {\displaystyle \xi ^{i}\in (-\infty ,+\infty )} . The commonly used Groenewold-Stratonovich basis looks like B ^ ( ξ ) = ∫ d 2 n η ( 2 π ℏ ) n exp ⁡ ( − i ℏ η k ( ξ − ξ ^ ) k ) ∈ V . {\displaystyle {\hat {B}}(\xi )=\int {\frac {d^{2n}\eta }{(2\pi \hbar )^{n}}}\exp(-{\frac {i}{\hbar }}\eta _{k}(\xi -{\hat {\xi }})^{k})\in \mathbb {V} .} The Weyl–Wigner two-sided association rule for function f ( ξ ) {\displaystyle f(\xi )} and operator f ^ {\displaystyle {\hat {f}}} has the form f ( ξ ) = Tr ⁡ [ B ^ ( ξ ) f ^ ] , {\displaystyle f(\xi )=\operatorname {Tr} [{\hat {B}}(\xi ){\hat {f}}],} f ^ = ∫ d 2 n ξ ( 2 π ℏ ) n f ( ξ ) B ^ ( ξ ) . {\displaystyle {\hat {f}}=\int {\frac {d^{2n}\xi }{(2\pi \hbar )^{n}}}f(\xi ){\hat {B}}(\xi ).} The function f ( ξ ) {\displaystyle f(\xi )} provides coordinates of the operator f ^ {\displaystyle {\hat {f}}} in the basis B ^ ( ξ ) {\displaystyle {\hat {B}}(\xi )} . The basis is complete and orthogonal: ∫ d 2 n ξ ( 2 π ℏ ) n B ^ ( ξ ) Tr ⁡ [ B ^ ( ξ ) f ^ ] = f ^ , {\displaystyle \int {\frac {d^{2n}\xi }{(2\pi \hbar )^{n}}}{\hat {B}}(\xi )\operatorname {Tr} [{\hat {B}}(\xi ){\hat {f}}]={\hat {f}},} Tr ⁡ [ B ^ ( ξ ) B ^ ( ξ ′ ) ] = ( 2 π ℏ ) n δ 2 n ( ξ − ξ ′ ) . {\displaystyle \operatorname {Tr} [{\hat {B}}(\xi ){\hat {B}}(\xi ^{\prime })]=(2\pi \hbar )^{n}\delta ^{2n}(\xi -\xi ^{\prime }).} Alternative operator bases are discussed also. The freedom in choice of the operator basis is better known as the operator ordering problem. The coordinates of particle trajectories in phase space depend on the operator basis. == Star-product == The set of operators Op(L2(Rn)) is closed under the multiplication of operators. The vector space V {\displaystyle \mathbb {V} } is endowed thereby with an associative algebra structure. 
Given two functions f ( ξ ) = T r [ B ^ ( ξ ) f ^ ] a n d g ( ξ ) = T r [ B ^ ( ξ ) g ^ ] , {\displaystyle f(\xi )=\mathrm {Tr} [{\hat {B}}(\xi ){\hat {f}}]~~\mathrm {and} ~~g(\xi )=\mathrm {Tr} [{\hat {B}}(\xi ){\hat {g}}],} one can construct a third function, f ( ξ ) ⋆ g ( ξ ) = T r [ B ^ ( ξ ) f ^ g ^ ] {\displaystyle f(\xi )\star g(\xi )=\mathrm {Tr} [{\hat {B}}(\xi ){\hat {f}}{\hat {g}}]} called the ⋆ {\displaystyle \star } -product. It is given explicitly by f ( ξ ) ⋆ g ( ξ ) = f ( ξ ) exp ⁡ ( i ℏ 2 P ) g ( ξ ) , {\displaystyle f(\xi )\star g(\xi )=f(\xi )\exp({\frac {i\hbar }{2}}{\mathcal {P}})g(\xi ),} where P = − I k l ∂ ∂ ξ k ← ∂ ∂ ξ l → {\displaystyle {\mathcal {P}}=-{I}^{kl}{\overleftarrow {\frac {\partial }{\partial \xi ^{k}}}}{\overrightarrow {\frac {\partial }{\partial \xi ^{l}}}}} is the Poisson operator. The ⋆ {\displaystyle \star } -product splits into symmetric and skew-symmetric parts, f ⋆ g = f ∘ g + i ℏ 2 f ∧ g . {\displaystyle f\star g=f\circ g+{\frac {i\hbar }{2}}f\wedge g.} In the classical limit, the ∘ {\displaystyle \circ } -product becomes the dot product. The skew-symmetric part f ∧ g {\displaystyle f\wedge g} is known as the Moyal bracket. This is the Weyl symbol of the commutator. In the classical limit, the Moyal bracket becomes the Poisson bracket. The Moyal bracket is a quantum deformation of the Poisson bracket. The ⋆ {\displaystyle \star } -product is associative, whereas the ∘ {\displaystyle \circ } -product and the Moyal bracket are not associative. == Quantum characteristics == The correspondence ξ ↔ ξ ^ {\displaystyle \xi \leftrightarrow {\hat {\xi }}} shows that coordinate transformations in the phase space are accompanied by transformations of operators of the canonical coordinates and momenta and vice versa. Let U ^ {\displaystyle \mathbf {\hat {U}} } be the evolution operator, U ^ = exp ⁡ ( − i ℏ H ^ τ ) , {\displaystyle {\hat {U}}=\exp {\Bigl (}-{\frac {i}{\hbar }}{\hat {H}}\tau {\Bigr )},} and H ^ {\displaystyle {\hat {H}}} be the Hamiltonian. Consider the following scheme, ξ ⟶ q ξ ´ ↕ ↕ ξ ^ ⟶ U ^ ξ ^ ´ {\displaystyle {\begin{aligned}&{}\,\xi {\stackrel {q}{\longrightarrow }}\,{\acute {\xi }}\\&{}\updownarrow \;\;\;\;\;\;\updownarrow \\&{}\,{\hat {\xi }}{\stackrel {\hat {U}}{\longrightarrow }}{\acute {\hat {\xi }}}\end{aligned}}} Quantum evolution transforms vectors in the Hilbert space and, under the Wigner association map, coordinates in the phase space. In the Heisenberg representation, the operators of the canonical variables transform as ξ ^ i → ξ ^ i ´ = U ^ † ξ ^ i U ^ . {\displaystyle {\hat {\xi }}^{i}\rightarrow {\acute {{\hat {\xi }}^{i}}}={\hat {U}}^{\dagger }{\hat {\xi }}^{i}{\hat {U}}.} The phase-space coordinates ξ ´ i {\displaystyle {\acute {\xi }}^{i}} that correspond to new operators ξ ^ i ´ {\displaystyle {\acute {{\hat {\xi }}^{i}}}} in the old basis B ^ ( ξ ) {\displaystyle {\hat {B}}(\xi )} are given by ξ i → ξ ´ i = q i ( ξ , τ ) = T r [ B ^ ( ξ ) U ^ † ξ ^ i U ^ ] , {\displaystyle \xi ^{i}\rightarrow {\acute {\xi }}^{i}=q^{i}(\xi ,\tau )=\mathrm {Tr} [{\hat {B}}(\xi ){\hat {U}}^{\dagger }{\hat {\xi }}^{i}{\hat {U}}],} with the initial conditions q i ( ξ , 0 ) = ξ i . {\displaystyle q^{i}(\xi ,0)=\xi ^{i}.} The functions q i ( ξ , τ ) {\displaystyle q^{i}(\xi ,\tau )} specify the quantum phase flow. In the general case, it is canonical to first order in τ. 
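For polynomial symbols the exponential series defining the ⋆-product terminates, so it can be evaluated exactly with a few lines of computer algebra. The sketch below treats one degree of freedom with the convention {f, g} = ∂f/∂x ∂g/∂p − ∂f/∂p ∂g/∂x (the function name moyal_star is an invented convenience) and checks that x ⋆ p and p ⋆ x differ by iħ, i.e. that the Moyal bracket of x and p reproduces their Poisson bracket.

```python
from sympy import symbols, I, diff, binomial, factorial, expand, simplify

x, p, hbar = symbols('x p hbar', real=True)

def d(f, var, k):
    """k-th derivative, with the k = 0 case handled explicitly."""
    return f if k == 0 else diff(f, var, k)

def moyal_star(f, g, max_order=10):
    """Moyal star product f * g for polynomial symbols f(x, p), g(x, p).

    Expands f exp(i hbar/2 P) g with the bidifferential operator
    P^n(f, g) = sum_k (-1)^k C(n, k) (d_x^{n-k} d_p^k f)(d_p^{n-k} d_x^k g);
    the series terminates once n exceeds the polynomial degree."""
    result = 0
    for n in range(max_order + 1):
        term = sum((-1)**k * binomial(n, k)
                   * d(d(f, x, n - k), p, k)
                   * d(d(g, p, n - k), x, k)
                   for k in range(n + 1))
        result += (I*hbar/2)**n / factorial(n) * term
    return expand(result)

print(moyal_star(x, p))   # x*p + I*hbar/2
print(moyal_star(p, x))   # x*p - I*hbar/2

# The Moyal bracket of x and p equals their Poisson bracket {x, p} = 1.
print(simplify((moyal_star(x, p) - moyal_star(p, x)) / (I*hbar)))   # 1
```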
=== Star-functions === The set of operators of canonical variables is complete in the sense that any operator can be represented as a function of operators ξ ^ {\displaystyle {\hat {\xi }}} . Transformations f ^ → f ^ ´ = U ^ † f ^ U ^ {\displaystyle {\hat {f}}\rightarrow {\acute {\hat {f}}}={\hat {U}}^{\dagger }{\hat {f}}{\hat {U}}} induce, under the Wigner association rule, transformations of phase-space functions, f ( ξ ) ⟶ q f ´ ( ξ ) = T r [ B ^ ( ξ ) U ^ † f ^ U ^ ] ↕ ↕ f ^ ⟶ U ^ f ^ ´ = U ^ † f ^ U ^ {\displaystyle {\begin{aligned}&{}f(\xi ){\stackrel {q}{\longrightarrow }}{\acute {f}}(\xi )=\mathrm {Tr} [{\hat {B}}(\xi ){\hat {U}}^{\dagger }{\hat {f}}{\hat {U}}]\\&{}\updownarrow \;\;\;\;\;\;\;\;\;\;\,\updownarrow \\&{}{\hat {f}}\;\;\;\;{\stackrel {\hat {U}}{\longrightarrow }}\,{\acute {\hat {f}}}\;\;\;\;\;={\hat {U}}^{\dagger }{\hat {f}}{\hat {U}}\end{aligned}}} Using the Taylor expansion, the transformation of function f ( ξ ) {\displaystyle f(\xi )} under evolution can be found to be f ( ξ ) → f ´ ( ξ ) ≡ T r [ B ^ ( ξ ) U † ^ f ( ξ ^ ) U ^ ] = ∑ s = 0 ∞ 1 s ! ∂ s f ( 0 ) ∂ ξ i 1 … ∂ ξ i s q i 1 ( ξ , τ ) ⋆ … ⋆ q i s ( ξ , τ ) ≡ f ( ⋆ q ( ξ , τ ) ) . {\displaystyle f(\xi )\rightarrow {\acute {f}}(\xi )\equiv \mathrm {Tr} [{\hat {B}}(\xi ){\hat {U^{\dagger }}}f({\hat {\xi }}){\hat {U}}]=\sum _{s=0}^{\infty }{\frac {1}{s!}}{\frac {\partial ^{s}f(0)}{\partial \xi ^{i_{1}}\ldots \partial \xi ^{i_{s}}}}q^{i_{1}}(\xi ,\tau )\star \ldots \star q^{i_{s}}(\xi ,\tau )\equiv f(\star q(\xi ,\tau )).} The composite function defined in such a way is called ⋆ {\displaystyle \star } -function. The composition law differs from the classical one. However, the semiclassical expansion of f ( ⋆ q ( ξ , τ ) ) {\displaystyle f(\star q(\xi ,\tau ))} around f ( q ( ξ , τ ) ) {\displaystyle f(q(\xi ,\tau ))} is formally well defined and involves even powers of ℏ {\displaystyle \hbar } only. This equation shows that, given how quantum characteristics are constructed, the physical observables can be found without further reference to the Hamiltonian. The functions q i ( ξ , τ ) {\displaystyle q^{i}(\xi ,\tau )} play the role of characteristics, similarly to the classical characteristics used to solve the classical Liouville equation. === The quantum Liouville equation === The Wigner transform of the evolution equation for the density matrix in the Schrödinger representation leads to a quantum Liouville equation for the Wigner function. The Wigner transform of the evolution equation for operators in the Heisenberg representation, ∂ ∂ τ f ^ = − i ℏ [ f ^ , H ^ ] , {\displaystyle {\frac {\partial }{\partial \tau }}{\hat {f}}=-{\frac {i}{\hbar }}[{\hat {f}},{\hat {H}}],} leads to the same equation with the opposite (plus) sign in the right-hand side: ∂ ∂ τ f ( ξ , τ ) = f ( ξ , τ ) ∧ H ( ξ ) . {\displaystyle {\frac {\partial }{\partial \tau }}f(\xi ,\tau )=f(\xi ,\tau )\wedge H(\xi ).} ⋆ {\displaystyle \star } -function solves this equation in terms of quantum characteristics: f ( ξ , τ ) = f ( ⋆ q ( ξ , τ ) , 0 ) . {\displaystyle f(\xi ,\tau )=f(\star q(\xi ,\tau ),0).} Similarly, the evolution of the Wigner function in the Schrödinger representation is given by W ( ξ , τ ) = W ( ⋆ q ( ξ , − τ ) , 0 ) . {\displaystyle W(\xi ,\tau )=W(\star q(\xi ,-\tau ),0).} The Liouville theorem of classical mechanics fails, to the extent that, locally, the phase space volume is not preserved in time. 
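A standard special case in which the formula W(ξ, τ) = W(⋆q(ξ, −τ), 0) can be checked directly is the harmonic oscillator: for quadratic Hamiltonians the Moyal and Poisson brackets coincide, so the quantum characteristics reduce to the classical rotation of phase space and the ⋆-composition reduces to ordinary composition. The sketch below (units with m = ω = ħ = 1; the grid and the coherent-state initial condition are illustrative choices) evolves a Gaussian Wigner function in this way and confirms that its normalization and rigidly rotating mean behave as expected; genuinely quantum corrections to the flow, discussed next, appear only for anharmonic Hamiltonians.

```python
import numpy as np

# Phase-space grid (illustrative resolution and extent).
xs = np.linspace(-6, 6, 241)
ps = np.linspace(-6, 6, 241)
X, P = np.meshgrid(xs, ps, indexing='ij')
dx = xs[1] - xs[0]
dp = ps[1] - ps[0]

def wigner0(x, p):
    """Wigner function of a coherent state centred at (x0, p0) = (2, 0)
    for hbar = 1: a normalized Gaussian with integral dx dp W = 1."""
    return np.exp(-((x - 2.0)**2 + p**2)) / np.pi

def evolve(W0, tau):
    """W(x, p, tau) = W0(q(x, p, -tau)) for H = (p^2 + x^2)/2:
    the backward classical flow is a rotation of phase space by -tau."""
    x_back = X*np.cos(tau) - P*np.sin(tau)
    p_back = P*np.cos(tau) + X*np.sin(tau)
    return W0(x_back, p_back)

for tau in (0.0, 1.0, np.pi/2):
    W = evolve(wigner0, tau)
    norm = W.sum() * dx * dp
    x_mean = (X * W).sum() * dx * dp
    p_mean = (P * W).sum() * dx * dp
    print(f"tau={tau:5.3f}  norm={norm:.4f}  <x>={x_mean:+.3f}  <p>={p_mean:+.3f}")
# The packet rotates rigidly: at tau = pi/2 the mean has moved from
# (x, p) = (2, 0) to approximately (0, -2), while the norm stays 1.
```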
In fact, the quantum phase flow does not preserve all differential forms ω 2 s {\displaystyle \omega ^{2s}} defined by exterior powers of ω 2 = I k l d ξ k ⋏ d ξ l {\displaystyle \omega ^{2}=I^{kl}d\xi _{k}\curlywedge d\xi _{l}} . The Wigner function represents a quantum system in a more general form than the wave function. Wave functions describe pure states, while the Wigner function characterizes ensembles of quantum states. Any Hermitian operator can be diagonalized: f ^ = ∑ s λ s | s ⟩ ⟨ s | {\displaystyle {\hat {f}}=\sum _{s}\lambda _{s}|s\rangle \langle s|} . Those operators whose eigenvalues λ s {\displaystyle \lambda _{s}} are non-negative and sum to a finite number can be mapped to density matrices, i.e., to some physical states. The Wigner function is an image of the density matrix, so the Wigner functions admit a similar decomposition: W ( ξ ) = ∑ s λ s W s ( ξ ) , {\displaystyle W(\xi )=\sum _{s}\lambda _{s}W_{s}(\xi ),} with λ s ≥ 0 {\displaystyle \lambda _{s}\geq 0} and W s ( ξ ) ⋆ W r ( ξ ) = δ s r W s ( ξ ) {\displaystyle W_{s}(\xi )\star W_{r}(\xi )=\delta _{sr}W_{s}(\xi )} . === Quantum Hamilton's equations === The Quantum Hamilton's equations can be obtained applying the Wigner transform to the evolution equations for Heisenberg operators of canonical coordinates and momenta, ∂ ∂ τ q i ( ξ , τ ) = { ζ i , H ( ζ ) } | ζ = ⋆ q ( ξ , τ ) . {\displaystyle {\frac {\partial }{\partial \tau }}q^{i}(\xi ,\tau )=\{\zeta ^{i},H(\zeta )\}|_{\zeta =\star q(\xi ,\tau )}.} The right-hand side is calculated like in the classical mechanics. The composite function is, however, ⋆ {\displaystyle \star } -function. The ⋆ {\displaystyle \star } -product violates canonicity of the phase flow beyond the first order in τ {\displaystyle \tau } . === Conservation of Moyal bracket === The antisymmetrized products of even number of operators of canonical variables are c-numbers as a consequence of the commutation relations. These products are left invariant by unitary transformations, which leads, in particular, to the relation q i ( ξ , τ ) ∧ q j ( ξ , τ ) = ξ i ∧ ξ j = − I i j . {\displaystyle q^{i}(\xi ,\tau )\wedge q^{j}(\xi ,\tau )=\xi ^{i}\wedge \xi ^{j}=-I^{ij}.} In general, the antisymmetrized product q [ i 1 ( ξ , τ ) ⋆ q i 2 ( ξ , τ ) ⋆ … ⋆ q i 2 s ] ( ξ , τ ) {\displaystyle q^{[i_{1}}(\xi ,\tau )\star q^{i_{2}}(\xi ,\tau )\star \ldots \star q^{i_{2s}]}(\xi ,\tau )} is also invariant, that is, it does not depend on time, and moreover does not depend on the coordinate. Phase-space transformations induced by the evolution operator preserve the Moyal bracket and do not preserve the Poisson bracket, so the evolution map ξ → ξ ´ = q ( ξ , τ ) , {\displaystyle \xi \rightarrow {\acute {\xi }}=q(\xi ,\tau ),} is not canonical beyond O(τ). The first order in τ defines the algebra of the transformation group. As previously noted, the algebra of canonical transformations of classical mechanics coincides with the algebra of unitary transformations of quantum mechanics. These two groups, however, are different because the multiplication operations in classical and quantum mechanics are different. Transformation properties of canonical variables and phase-space functions under unitary transformations in the Hilbert space have important distinctions from the case of canonical transformations in the phase space. === Composition law === Quantum characteristics can hardly be treated visually as trajectories along which physical particles move. 
The reason lies in the star-composition law q ( ξ , τ 1 + τ 2 ) = q ( ⋆ q ( ξ , τ 1 ) , τ 2 ) , {\displaystyle q(\xi ,\tau _{1}+\tau _{2})=q(\star q(\xi ,\tau _{1}),\tau _{2}),} which is non-local and is distinct from the dot-composition law of classical mechanics. === Energy conservation === The energy conservation implies H ( ξ ) = H ( ⋆ q ( ξ , τ ) ) , {\displaystyle H(\xi )=H(\star q(\xi ,\tau )),} where H ( ξ ) = T r [ B ^ ( ξ ) H ^ ] {\displaystyle H(\xi )=\mathrm {Tr} [{\hat {B}}(\xi ){\hat {H}}]} is Hamilton's function. In the usual geometric sense, H ( ξ ) {\displaystyle H(\xi )} is not conserved along quantum characteristics. == Summary == The origin of the method of characteristics can be traced back to Heisenberg’s matrix mechanics. Suppose that we have solved in the matrix mechanics the evolution equations for the operators of the canonical coordinates and momenta in the Heisenberg representation. These operators evolve according to ξ ^ i → ξ ^ i ( τ ) = U ^ † ξ ^ i U ^ . {\displaystyle {\hat {\xi }}^{i}\rightarrow {\hat {\xi }}^{i}(\tau )={\hat {U}}^{\dagger }{\hat {\xi }}^{i}{\hat {U}}.} It is known that for any operator f ^ {\displaystyle {\hat {f}}} one can find a function f (ξ) through which f ^ {\displaystyle {\hat {f}}} is represented in the form f ( ξ ^ ) {\displaystyle f({\hat {\xi }})} . The same operator f ^ {\displaystyle {\hat {f}}} at time τ is equal to f ^ ( τ ) = U ^ † f ^ U ^ = U ^ † f ( ξ ^ ) U ^ = f ( U ^ † ξ ^ U ^ ) = f ( ξ ^ ( τ ) ) . {\displaystyle {\hat {f}}(\tau )={\hat {U}}^{\dagger }{\hat {f}}{\hat {U}}={\hat {U}}^{\dagger }f({\hat {\xi }}){\hat {U}}=f({\hat {U}}^{\dagger }{\hat {\xi }}{\hat {U}})=f({\hat {\xi }}(\tau )).} This equation shows that ξ ^ ( τ ) {\displaystyle {\hat {\xi }}(\tau )} are characteristics that determine the evolution for all of the operators in Op(L2(Rn)). This property is fully transferred to the phase space upon deformation quantization and, in the limit of ħ → 0, to the classical mechanics. Table compares properties of characteristics in classical and quantum mechanics. PDE and ODE indicate partial differential equations and ordinary differential equations, respectively. The quantum Liouville equation is the Weyl–Wigner transform of the von Neumann evolution equation for the density matrix in the Schrödinger representation. The quantum Hamilton equations are the Weyl–Wigner transforms of the evolution equations for operators of the canonical coordinates and momenta in the Heisenberg representation. In classical systems, characteristics c i ( ξ , τ ) {\displaystyle c^{i}(\xi ,\tau )} usually satisfy first-order ODEs, e.g., classical Hamilton's equations, and solve first-order PDEs, e.g., the classical Liouville equation. Functions q i ( ξ , τ ) {\displaystyle q^{i}(\xi ,\tau )} are also characteristics, despite both q i ( ξ , τ ) {\displaystyle q^{i}(\xi ,\tau )} and f ( ξ , τ ) {\displaystyle f(\xi ,\tau )} obeying infinite-order PDEs. The quantum phase flow contains all of the information about the quantum evolution. Semiclassical expansion of quantum characteristics and ⋆ {\displaystyle \star } -functions of quantum characteristics in a power series in ħ allows calculation of the average values of time-dependent physical observables by solving a finite-order coupled system of ODEs for phase space trajectories and Jacobi fields. The order of the system of ODEs depends on the truncation of the power series. The tunneling effect is nonperturbative in ħ and is not captured by the expansion. 
The density of the quantum probability fluid is not preserved in phase space, as the quantum fluid diffuses. Quantum characteristics must be distinguished from the trajectories of the de Broglie–Bohm theory, the trajectories of the path-integral method in phase space for the amplitudes and the Wigner function, and the Wigner trajectories. Thus far, only a few quantum systems have been explicitly solved using the method of quantum characteristics. == See also == Method of characteristics Wigner–Weyl transform Deformation theory Wigner distribution function Modified Wigner distribution function Wigner quasiprobability distribution Negative probability == References == == Textbooks == H. Weyl, The Theory of Groups and Quantum Mechanics, (Dover Publications, New York Inc., 1931). V. I. Arnold, Mathematical Methods of Classical Mechanics, (2nd ed. Springer-Verlag, New York Inc., 1989). M. V. Karasev and V. P. Maslov, Nonlinear Poisson brackets. Geometry and quantization. Translations of Mathematical Monographs, 119. (American Mathematical Society, Providence, RI, 1993).
Wikipedia/Method_of_quantum_characteristics
In probability theory and statistics, the conditional probability distribution is a probability distribution that describes the probability of an outcome given the occurrence of a particular event. Given two jointly distributed random variables X {\displaystyle X} and Y {\displaystyle Y} , the conditional probability distribution of Y {\displaystyle Y} given X {\displaystyle X} is the probability distribution of Y {\displaystyle Y} when X {\displaystyle X} is known to be a particular value; in some cases the conditional probabilities may be expressed as functions containing the unspecified value x {\displaystyle x} of X {\displaystyle X} as a parameter. When both X {\displaystyle X} and Y {\displaystyle Y} are categorical variables, a conditional probability table is typically used to represent the conditional probability. The conditional distribution contrasts with the marginal distribution of a random variable, which is its distribution without reference to the value of the other variable. If the conditional distribution of Y {\displaystyle Y} given X {\displaystyle X} is a continuous distribution, then its probability density function is known as the conditional density function. The properties of a conditional distribution, such as the moments, are often referred to by corresponding names such as the conditional mean and conditional variance. More generally, one can refer to the conditional distribution of a subset of a set of more than two variables; this conditional distribution is contingent on the values of all the remaining variables, and if more than one variable is included in the subset then this conditional distribution is the conditional joint distribution of the included variables. == Conditional discrete distributions == For discrete random variables, the conditional probability mass function of Y {\displaystyle Y} given X = x {\displaystyle X=x} can be written according to its definition as: Due to the occurrence of P ( X = x ) {\displaystyle P(X=x)} in the denominator, this is defined only for non-zero (hence strictly positive) P ( X = x ) . {\displaystyle P(X=x).} The relation with the probability distribution of X {\displaystyle X} given Y {\displaystyle Y} is: P ( Y = y ∣ X = x ) P ( X = x ) = P ( { X = x } ∩ { Y = y } ) = P ( X = x ∣ Y = y ) P ( Y = y ) . {\displaystyle P(Y=y\mid X=x)P(X=x)=P(\{X=x\}\cap \{Y=y\})=P(X=x\mid Y=y)P(Y=y).} === Example === Consider the roll of a fair die and let X = 1 {\displaystyle X=1} if the number is even (i.e., 2, 4, or 6) and X = 0 {\displaystyle X=0} otherwise. Furthermore, let Y = 1 {\displaystyle Y=1} if the number is prime (i.e., 2, 3, or 5) and Y = 0 {\displaystyle Y=0} otherwise. Then the unconditional probability that X = 1 {\displaystyle X=1} is 3/6 = 1/2 (since there are six possible rolls of the dice, of which three are even), whereas the probability that X = 1 {\displaystyle X=1} conditional on Y = 1 {\displaystyle Y=1} is 1/3 (since there are three possible prime number rolls—2, 3, and 5—of which one is even). == Conditional continuous distributions == Similarly for continuous random variables, the conditional probability density function of Y {\displaystyle Y} given the occurrence of the value x {\displaystyle x} of X {\displaystyle X} can be written as where f X , Y ( x , y ) {\displaystyle f_{X,Y}(x,y)} gives the joint density of X {\displaystyle X} and Y {\displaystyle Y} , while f X ( x ) {\displaystyle f_{X}(x)} gives the marginal density for X {\displaystyle X} . 
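Numerically, the conditional density is just the pointwise ratio of the joint density to the marginal, f_{Y|X}(y|x) = f_{X,Y}(x,y)/f_X(x). The sketch below uses a standard bivariate normal purely as a convenient joint density (the correlation value and the grid are arbitrary illustrative choices), computes the conditional density for a few fixed values of x, and checks that each one integrates to 1 over y and has the conditional mean and variance expected for that example.

```python
import numpy as np

rho = 0.6   # illustrative correlation for a standard bivariate normal joint density

def f_joint(x, y):
    z = (x**2 - 2*rho*x*y + y**2) / (1 - rho**2)
    return np.exp(-z/2) / (2*np.pi*np.sqrt(1 - rho**2))

ys = np.linspace(-8, 8, 1601)
dy = ys[1] - ys[0]

for x0 in (-1.0, 0.0, 2.0):
    joint_slice = f_joint(x0, ys)        # f_{X,Y}(x0, y) as a function of y
    f_x = joint_slice.sum() * dy         # marginal f_X(x0), integrating out y
    cond = joint_slice / f_x             # conditional density f_{Y|X}(y | x0)
    mean = (ys * cond).sum() * dy
    var = ((ys - mean)**2 * cond).sum() * dy
    print(f"x0={x0:+.1f}  integral={cond.sum()*dy:.4f}  "
          f"E[Y|X]={mean:+.3f} (rho*x0={rho*x0:+.3f})  "
          f"Var[Y|X]={var:.3f} (1-rho^2={1-rho**2:.3f})")
```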
Also in this case it is necessary that f X ( x ) > 0 {\displaystyle f_{X}(x)>0} . The relation with the probability distribution of X {\displaystyle X} given Y {\displaystyle Y} is given by: f Y ∣ X ( y ∣ x ) f X ( x ) = f X , Y ( x , y ) = f X | Y ( x ∣ y ) f Y ( y ) . {\displaystyle f_{Y\mid X}(y\mid x)f_{X}(x)=f_{X,Y}(x,y)=f_{X|Y}(x\mid y)f_{Y}(y).} The concept of the conditional distribution of a continuous random variable is not as intuitive as it might seem: Borel's paradox shows that conditional probability density functions need not be invariant under coordinate transformations. === Example === The graph shows a bivariate normal joint density for random variables X {\displaystyle X} and Y {\displaystyle Y} . To see the distribution of Y {\displaystyle Y} conditional on X = 70 {\displaystyle X=70} , one can first visualize the line X = 70 {\displaystyle X=70} in the X , Y {\displaystyle X,Y} plane, and then visualize the plane containing that line and perpendicular to the X , Y {\displaystyle X,Y} plane. The intersection of that plane with the joint normal density, once rescaled to give unit area under the intersection, is the relevant conditional density of Y {\displaystyle Y} . Y ∣ X = 70 ∼ N ( μ Y + σ Y σ X ρ ( 70 − μ X ) , ( 1 − ρ 2 ) σ Y 2 ) . {\displaystyle Y\mid X=70\ \sim \ {\mathcal {N}}\left(\mu _{Y}+{\frac {\sigma _{Y}}{\sigma _{X}}}\rho (70-\mu _{X}),\,(1-\rho ^{2})\sigma _{Y}^{2}\right).} == Relation to independence == Random variables X {\displaystyle X} , Y {\displaystyle Y} are independent if and only if the conditional distribution of Y {\displaystyle Y} given X {\displaystyle X} is, for all possible realizations of X {\displaystyle X} , equal to the unconditional distribution of Y {\displaystyle Y} . For discrete random variables this means P ( Y = y | X = x ) = P ( Y = y ) {\displaystyle P(Y=y|X=x)=P(Y=y)} for all possible y {\displaystyle y} and x {\displaystyle x} with P ( X = x ) > 0 {\displaystyle P(X=x)>0} . For continuous random variables X {\displaystyle X} and Y {\displaystyle Y} , having a joint density function, it means f Y ( y | X = x ) = f Y ( y ) {\displaystyle f_{Y}(y|X=x)=f_{Y}(y)} for all possible y {\displaystyle y} and x {\displaystyle x} with f X ( x ) > 0 {\displaystyle f_{X}(x)>0} . == Properties == Seen as a function of y {\displaystyle y} for given x {\displaystyle x} , P ( Y = y | X = x ) {\displaystyle P(Y=y|X=x)} is a probability mass function and so the sum over all y {\displaystyle y} (or integral if it is a conditional probability density) is 1. Seen as a function of x {\displaystyle x} for given y {\displaystyle y} , it is a likelihood function, so that the sum (or integral) over all x {\displaystyle x} need not be 1. Additionally, a marginal of a joint distribution can be expressed as the expectation of the corresponding conditional distribution. For instance, p X ( x ) = E Y [ p X | Y ( x | Y ) ] {\displaystyle p_{X}(x)=E_{Y}[p_{X|Y}(x\ |\ Y)]} . == Measure-theoretic formulation == Let ( Ω , F , P ) {\displaystyle (\Omega ,{\mathcal {F}},P)} be a probability space, G ⊆ F {\displaystyle {\mathcal {G}}\subseteq {\mathcal {F}}} a σ {\displaystyle \sigma } -field in F {\displaystyle {\mathcal {F}}} . 
Given A ∈ F {\displaystyle A\in {\mathcal {F}}} , the Radon-Nikodym theorem implies that there is a G {\displaystyle {\mathcal {G}}} -measurable random variable P ( A ∣ G ) : Ω → R {\displaystyle P(A\mid {\mathcal {G}}):\Omega \to \mathbb {R} } , called the conditional probability, such that ∫ G P ( A ∣ G ) ( ω ) d P ( ω ) = P ( A ∩ G ) {\displaystyle \int _{G}P(A\mid {\mathcal {G}})(\omega )dP(\omega )=P(A\cap G)} for every G ∈ G {\displaystyle G\in {\mathcal {G}}} , and such a random variable is uniquely defined up to sets of probability zero. A conditional probability is called regular if P ⁡ ( ⋅ ∣ G ) ( ω ) {\displaystyle \operatorname {P} (\cdot \mid {\mathcal {G}})(\omega )} is a probability measure on ( Ω , F ) {\displaystyle (\Omega ,{\mathcal {F}})} for all ω ∈ Ω {\displaystyle \omega \in \Omega } a.e. Special cases: For the trivial sigma algebra G = { ∅ , Ω } {\displaystyle {\mathcal {G}}=\{\emptyset ,\Omega \}} , the conditional probability is the constant function P ( A ∣ { ∅ , Ω } ) = P ⁡ ( A ) . {\displaystyle \operatorname {P} \!\left(A\mid \{\emptyset ,\Omega \}\right)=\operatorname {P} (A).} If A ∈ G {\displaystyle A\in {\mathcal {G}}} , then P ⁡ ( A ∣ G ) = 1 A {\displaystyle \operatorname {P} (A\mid {\mathcal {G}})=1_{A}} , the indicator function (defined below). Let X : Ω → E {\displaystyle X:\Omega \to E} be a ( E , E ) {\displaystyle (E,{\mathcal {E}})} -valued random variable. For each B ∈ E {\displaystyle B\in {\mathcal {E}}} , define μ X | G ( B | G ) = P ( X − 1 ( B ) | G ) . {\displaystyle \mu _{X\,|\,{\mathcal {G}}}(B\,|\,{\mathcal {G}})=\mathrm {P} (X^{-1}(B)\,|\,{\mathcal {G}}).} For any ω ∈ Ω {\displaystyle \omega \in \Omega } , the function μ X | G ( ⋅ | G ) ( ω ) : E → R {\displaystyle \mu _{X\,|{\mathcal {G}}}(\cdot \,|{\mathcal {G}})(\omega ):{\mathcal {E}}\to \mathbb {R} } is called the conditional probability distribution of X {\displaystyle X} given G {\displaystyle {\mathcal {G}}} . If it is a probability measure on ( E , E ) {\displaystyle (E,{\mathcal {E}})} , then it is called regular. For a real-valued random variable (with respect to the Borel σ {\displaystyle \sigma } -field R 1 {\displaystyle {\mathcal {R}}^{1}} on R {\displaystyle \mathbb {R} } ), every conditional probability distribution is regular. In this case, E [ X ∣ G ] = ∫ − ∞ ∞ x μ X ∣ G ( d x , ⋅ ) {\displaystyle E[X\mid {\mathcal {G}}]=\int _{-\infty }^{\infty }x\,\mu _{X\mid {\mathcal {G}}}(dx,\cdot )} almost surely. === Relation to conditional expectation === For any event A ∈ F {\displaystyle A\in {\mathcal {F}}} , define the indicator function: 1 A ( ω ) = { 1 if ω ∈ A , 0 if ω ∉ A , {\displaystyle \mathbf {1} _{A}(\omega )={\begin{cases}1\;&{\text{if }}\omega \in A,\\0\;&{\text{if }}\omega \notin A,\end{cases}}} which is a random variable. Note that the expectation of this random variable is equal to the probability of A itself: E ⁡ ( 1 A ) = P ⁡ ( A ) . 
{\displaystyle \operatorname {E} (\mathbf {1} _{A})=\operatorname {P} (A).\;} Given a σ {\displaystyle \sigma } -field G ⊆ F {\displaystyle {\mathcal {G}}\subseteq {\mathcal {F}}} , the conditional probability P ⁡ ( A ∣ G ) {\displaystyle \operatorname {P} (A\mid {\mathcal {G}})} is a version of the conditional expectation of the indicator function for A {\displaystyle A} : P ⁡ ( A ∣ G ) = E ⁡ ( 1 A ∣ G ) {\displaystyle \operatorname {P} (A\mid {\mathcal {G}})=\operatorname {E} (\mathbf {1} _{A}\mid {\mathcal {G}})\;} An expectation of a random variable with respect to a regular conditional probability is equal to its conditional expectation. === Interpretation of conditioning on a Sigma Field === Consider the probability space ( Ω , F , P ) {\displaystyle (\Omega ,{\mathcal {F}},\mathbb {P} )} and a sub-sigma field A ⊂ F {\displaystyle {\mathcal {A}}\subset {\mathcal {F}}} . The sub-sigma field A {\displaystyle {\mathcal {A}}} can be loosely interpreted as containing a subset of the information in F {\displaystyle {\mathcal {F}}} . For example, we might think of P ( B | A ) {\displaystyle \mathbb {P} (B|{\mathcal {A}})} as the probability of the event B {\displaystyle B} given the information in A {\displaystyle {\mathcal {A}}} . Also recall that an event B {\displaystyle B} is independent of a sub-sigma field A {\displaystyle {\mathcal {A}}} if P ( B | A ) = P ( B ) {\displaystyle \mathbb {P} (B|A)=\mathbb {P} (B)} for all A ∈ A {\displaystyle A\in {\mathcal {A}}} . It is incorrect to conclude in general that the information in A {\displaystyle {\mathcal {A}}} does not tell us anything about the probability of event B {\displaystyle B} occurring. This can be shown with a counter-example: Consider a probability space on the unit interval, Ω = [ 0 , 1 ] {\displaystyle \Omega =[0,1]} . Let G {\displaystyle {\mathcal {G}}} be the sigma-field of all countable sets and sets whose complement is countable. So each set in G {\displaystyle {\mathcal {G}}} has measure 0 {\displaystyle 0} or 1 {\displaystyle 1} and so is independent of each event in F {\displaystyle {\mathcal {F}}} . However, notice that G {\displaystyle {\mathcal {G}}} also contains all the singleton events in F {\displaystyle {\mathcal {F}}} (those sets which contain only a single ω ∈ Ω {\displaystyle \omega \in \Omega } ). So knowing which of the events in G {\displaystyle {\mathcal {G}}} occurred is equivalent to knowing exactly which ω ∈ Ω {\displaystyle \omega \in \Omega } occurred! So in one sense, G {\displaystyle {\mathcal {G}}} contains no information about F {\displaystyle {\mathcal {F}}} (it is independent of it), and in another sense it contains all the information in F {\displaystyle {\mathcal {F}}} . == See also == Conditioning (probability) Conditional probability Regular conditional probability Bayes' theorem == References == === Citations === === Sources ===
Wikipedia/Conditional_probability_density_function
Eternal inflation is a hypothetical inflationary universe model, which is itself an outgrowth or extension of the Big Bang theory. According to eternal inflation, the inflationary phase of the universe's expansion lasts forever throughout most of the universe. Because the regions expand exponentially rapidly, most of the volume of the universe at any given time is inflating. Eternal inflation, therefore, produces a hypothetically infinite multiverse, in which only an insignificant fractal volume ends inflation. Paul Steinhardt, one of the original researchers of the inflationary model, introduced the first example of eternal inflation in 1983, and Alexander Vilenkin showed that it is generic. Alan Guth's 2007 paper, "Eternal inflation and its implications", states that under reasonable assumptions "Although inflation is generically eternal into the future, it is not eternal into the past". Guth detailed what was known about the subject at the time, and demonstrated that eternal inflation was still considered the likely outcome of inflation, more than 20 years after eternal inflation was first introduced by Steinhardt. == Overview == === Development of the theory === Inflation, or the inflationary universe theory, was originally developed as a way to overcome the few remaining problems with what was otherwise considered a successful theory of cosmology, the Big Bang model. In 1979, Alan Guth introduced the inflationary model of the universe to explain why the universe is flat and homogeneous (which refers to the smooth distribution of matter and radiation on a large scale). The basic idea was that the universe underwent a period of rapidly accelerating expansion a few instants after the Big Bang. He offered a mechanism for causing the inflation to begin: false vacuum energy. Guth coined the term "inflation," and was the first to discuss the theory with other scientists worldwide. Guth's original formulation was problematic, as there was no consistent way to bring an end to the inflationary epoch and end up with the hot, isotropic, homogeneous universe observed today. Although the false vacuum could decay into empty "bubbles" of "true vacuum" that expanded at the speed of light, the empty bubbles could not coalesce to reheat the universe, because they could not keep up with the remaining inflating universe. In 1982, this "graceful exit problem" was solved independently by Andrei Linde and by Andreas Albrecht and Paul J. Steinhardt, who showed how to end inflation without making empty bubbles and, instead, end up with a hot expanding universe. The basic idea was to have a continuous "slow-roll" or slow evolution from false vacuum to true without making any bubbles. The improved model was called "new inflation." In 1983, Paul Steinhardt was the first to show that this "new inflation" does not have to end everywhere. Instead, it might only end in a finite patch or a hot bubble full of matter and radiation, and that inflation continues in most of the universe while producing hot bubble after hot bubble along the way. Alexander Vilenkin showed that when quantum effects are properly included, this is actually generic to all new inflation models. Using ideas introduced by Steinhardt and Vilenkin, Andrei Linde published an alternative model of inflation in 1986 which used these ideas to provide a detailed description of what has become known as the Chaotic Inflation theory or eternal inflation. 
=== Quantum fluctuations === New inflation does not produce a perfectly symmetric universe due to quantum fluctuations during inflation. The fluctuations cause the energy and matter density to be different at different points in space. Quantum fluctuations in the hypothetical inflaton field produce changes in the rate of expansion that are responsible for eternal inflation. Those regions with a higher rate of inflation expand faster and dominate the universe, despite the natural tendency of inflation to end in other regions. This allows inflation to continue forever, to produce future-eternal inflation. As a simplified example, suppose that during inflation, the natural decay rate of the inflaton field is slow compared to the effect of quantum fluctuation. When a mini-universe inflates and "self-reproduces" into, say, twenty causally-disconnected mini-universes of equal size to the original mini-universe, perhaps nine of the new mini-universes will have a larger, rather than smaller, average inflaton field value than the original mini-universe, because they inflated from regions of the original mini-universe where quantum fluctuation pushed the inflaton value up more than the slow inflation decay rate brought the inflaton value down. Originally there was one mini-universe with a given inflaton value; now there are nine mini-universes that have a slightly larger inflaton value. (Of course, there are also eleven mini-universes where the inflaton value is slightly lower than it originally was.) Each mini-universe with the larger inflaton field value restarts a similar round of approximate self-reproduction within itself. (The mini-universes with lower inflaton values may also reproduce, unless its inflaton value is small enough that the region drops out of inflation and ceases self-reproduction.) This process continues indefinitely; nine high-inflaton mini-universes might become 81, then 729... Thus, there is eternal inflation. In 1980, quantum fluctuations were suggested by Viatcheslav Mukhanov and Gennady Chibisov in the context of a model of modified gravity by Alexei Starobinsky to be possible seeds for forming galaxies. In the context of inflation, quantum fluctuations were first analyzed at the three-week 1982 Nuffield Workshop on the Very Early Universe at Cambridge University. The average strength of the fluctuations was first calculated by four groups working separately over the course of the workshop: Stephen Hawking; Starobinsky; Guth and So-Young Pi; and James M. Bardeen, Paul Steinhardt and Michael Turner. The early calculations derived at the Nuffield Workshop only focused on the average fluctuations, whose magnitude is too small to affect inflation. However, beginning with the examples presented by Steinhardt and Vilenkin, the same quantum physics was later shown to produce occasional large fluctuations that increase the rate of inflation and keep inflation going eternally. === Further developments === In analyzing the Planck Satellite data from 2013, Anna Ijjas and Paul Steinhardt showed that the simplest textbook inflationary models were eliminated and that the remaining models require exponentially more tuned starting conditions, more parameters to be adjusted, and less inflation. Later Planck observations reported in 2015 confirmed these conclusions. A 2014 paper by Kohli and Haslam called into question the viability of the eternal inflation theory, by analyzing Linde's chaotic inflation theory in which the quantum fluctuations are modeled as Gaussian white noise. 
They showed that in this popular scenario, eternal inflation in fact cannot be eternal, and the random noise leads to spacetime being filled with singularities. This was demonstrated by showing that solutions to the Einstein field equations diverge in a finite time. Their paper therefore concluded that the theory of eternal inflation based on random quantum fluctuations would not be a viable theory, and the resulting existence of a multiverse is "still very much an open question that will require much deeper investigation". == Inflation, eternal inflation, and the multiverse == In 1983, it was shown that inflation could be eternal, leading to a multiverse in which space is broken up into bubbles or patches whose properties differ from patch to patch spanning all physical possibilities. Paul Steinhardt, who produced the first example of eternal inflation, eventually became a strong and vocal opponent of the theory. He argued that the multiverse represented a breakdown of the inflationary theory, because, in a multiverse, any outcome is equally possible, so inflation makes no predictions and, hence, is untestable. Consequently, he argued, inflation fails a key condition for a scientific theory. Both Linde and Guth, however, continued to support the inflationary theory and the multiverse. Guth declared: It's hard to build models of inflation that don't lead to a multiverse. It's not impossible, so I think there's still certainly research that needs to be done. But most models of inflation do lead to a multiverse, and evidence for inflation will be pushing us in the direction of taking the idea of a multiverse seriously. According to Linde, "It's possible to invent models of inflation that do not allow a multiverse, but it's difficult. Every experiment that brings better credence to inflationary theory brings us much closer to hints that the multiverse is real." In 2018 the late Stephen Hawking and Thomas Hertog published a paper in which the need for an infinite multiverse vanishes as Hawking says their theory gives universes which are "reasonably smooth and globally finite". The theory uses the holographic principle to define an 'exit plane' from the timeless state of eternal inflation. The universes which are generated on the plane are described using a redefinition of the no-boundary wavefunction - in fact the theory requires a boundary at the beginning of time. Stated simply Hawking says that their findings "imply a significant reduction of the multiverse" which as the University of Cambridge points out, makes the theory "predictive and testable" using gravitational wave astronomy. == See also == Astrophysics Cosmology False vacuum Fractal cosmology Inflation (cosmology) Measure problem (cosmology) Physical cosmology Shape of the universe == References == == External links == 'Multiverse' theory suggested by microwave background, BBC News, 3 August 2011 about testing eternal inflation.
Wikipedia/Chaotic_inflation_theory
In mathematical physics, the Duffin–Kemmer–Petiau (DKP) algebra, introduced by R.J. Duffin, Nicholas Kemmer and G. Petiau, is the algebra which is generated by the Duffin–Kemmer–Petiau matrices. These matrices form part of the Duffin–Kemmer–Petiau equation that provides a relativistic description of spin-0 and spin-1 particles. The DKP algebra is also referred to as the meson algebra. == Defining relations == The Duffin–Kemmer–Petiau matrices have the defining relation β a β b β c + β c β b β a = β a η b c + β c η b a {\displaystyle \beta ^{a}\beta ^{b}\beta ^{c}+\beta ^{c}\beta ^{b}\beta ^{a}=\beta ^{a}\eta ^{bc}+\beta ^{c}\eta ^{ba}} where η a b {\displaystyle \eta ^{ab}} stand for a constant diagonal matrix. The Duffin–Kemmer–Petiau matrices β {\displaystyle \beta } for which η a b {\displaystyle \eta ^{ab}} consists in diagonal elements (+1,-1,...,-1) form part of the Duffin–Kemmer–Petiau equation. Five-dimensional DKP matrices can be represented as: β 0 = ( 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ) {\displaystyle \beta ^{0}={\begin{pmatrix}0&1&0&0&0\\1&0&0&0&0\\0&0&0&0&0\\0&0&0&0&0\\0&0&0&0&0\end{pmatrix}}} , β 1 = ( 0 0 − 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ) {\displaystyle \quad \beta ^{1}={\begin{pmatrix}0&0&-1&0&0\\0&0&0&0&0\\1&0&0&0&0\\0&0&0&0&0\\0&0&0&0&0\end{pmatrix}}} , β 2 = ( 0 0 0 − 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 ) {\displaystyle \quad \beta ^{2}={\begin{pmatrix}0&0&0&-1&0\\0&0&0&0&0\\0&0&0&0&0\\1&0&0&0&0\\0&0&0&0&0\end{pmatrix}}} , β 3 = ( 0 0 0 0 − 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 ) {\displaystyle \quad \beta ^{3}={\begin{pmatrix}0&0&0&0&-1\\0&0&0&0&0\\0&0&0&0&0\\0&0&0&0&0\\1&0&0&0&0\end{pmatrix}}} These five-dimensional DKP matrices represent spin-0 particles. The DKP matrices for spin-1 particles are 10-dimensional. The DKP-algebra can be reduced to a direct sum of irreducible subalgebras for spin‐0 and spin‐1 bosons, the subalgebras being defined by multiplication rules for the linearly independent basis elements. == Duffin–Kemmer–Petiau equation == The Duffin–Kemmer–Petiau (DKP) equation, also known as Kemmer equation, is a relativistic wave equation which describes spin-0 and spin-1 particles in the description of the standard model. For particles with nonzero mass, the DKP equation is ( i ℏ β a ∂ a − m c ) ψ = 0 {\displaystyle (i\hbar \beta ^{a}\partial _{a}-mc)\psi =0} where β a {\displaystyle \beta ^{a}} are Duffin–Kemmer–Petiau matrices, m {\displaystyle m} is the particle's mass, ψ {\displaystyle \psi } its wavefunction, ℏ {\displaystyle \hbar } the reduced Planck constant, c {\displaystyle c} the speed of light. For massless particles, the term m c {\displaystyle mc} is replaced by a singular matrix γ {\displaystyle \gamma } that obeys the relations β a γ + γ β a = β a {\displaystyle \beta ^{a}\gamma +\gamma \beta ^{a}=\beta ^{a}} and γ 2 = γ {\displaystyle \gamma ^{2}=\gamma } . The DKP equation for spin-0 is closely linked to the Klein–Gordon equation and the equation for spin-1 to the Proca equations. It suffers the same drawback as the Klein–Gordon equation in that it calls for negative probabilities. Also the De Donder–Weyl covariant Hamiltonian field equations can be formulated in terms of DKP matrices. == History == The Duffin–Kemmer–Petiau algebra was introduced in the 1930s by R.J. Duffin, N. Kemmer and G. Petiau. == Further reading == Fernandes, M. C. B.; Vianna, J. D. M. (1999). "On the generalized phase space approach to Duffin–Kemmer–Petiau particles". Foundations of Physics. 29 (2). 
Springer Science and Business Media LLC: 201–219. doi:10.1023/a:1018869505031. ISSN 0015-9018. S2CID 118277218. Fernandes, Marco Cezar B.; Vianna, J. David M. (1998). "On the Duffin-Kemmer-Petiau algebra and the generalized phase space". Brazilian Journal of Physics. 28 (4). FapUNIFESP (SciELO): 00. doi:10.1590/s0103-97331998000400024. ISSN 0103-9733. Sharp, Robert T.; Winternitz, Pavel (2004). "Bhabha and Duffin–Kemmer–Petiau equations: spin zero and spin one". Symmetry in physics : in memory of Robert T. Sharp. Providence, R.I.: American Mathematical Society. p. 50 ff. ISBN 0-8218-3409-6. OCLC 53953715. Fainberg, V.Ya.; Pimentel, B.M. (2000). "Duffin–Kemmer–Petiau and Klein–Gordon–Fock equations for electromagnetic, Yang–Mills and external gravitational field interactions: proof of equivalence". Physics Letters A. 271 (1–2). Elsevier BV: 16–25. arXiv:hep-th/0003283. doi:10.1016/s0375-9601(00)00330-3. ISSN 0375-9601. S2CID 9595290. == References ==
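The five-dimensional representation quoted above can be verified directly. The following sketch (numpy; the matrix entries are transcribed from the representation given in this article, with the diagonal metric (+1, -1, -1, -1) stated for the DKP equation) checks the defining trilinear relation for all index triples.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # metric entering the defining relation

# Five-dimensional (spin-0) DKP matrices as listed above.
beta = np.zeros((4, 5, 5))
beta[0, 0, 1] = beta[0, 1, 0] = 1.0
for a in (1, 2, 3):
    beta[a, 0, a + 1] = -1.0
    beta[a, a + 1, 0] = 1.0

# Check  beta^a beta^b beta^c + beta^c beta^b beta^a
#      = beta^a eta^{bc} + beta^c eta^{ba}   for every a, b, c.
ok = True
for a in range(4):
    for b in range(4):
        for c in range(4):
            lhs = beta[a] @ beta[b] @ beta[c] + beta[c] @ beta[b] @ beta[a]
            rhs = beta[a] * eta[b, c] + beta[c] * eta[b, a]
            ok &= np.allclose(lhs, rhs)
print("Defining DKP relation satisfied:", ok)   # expected: True
```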
Wikipedia/Duffin–Kemmer–Petiau_equation
Philosophy of science is the branch of philosophy concerned with the foundations, methods, and implications of science. Amongst its central questions are the difference between science and non-science, the reliability of scientific theories, and the ultimate purpose and meaning of science as a human endeavour. Philosophy of science focuses on metaphysical, epistemic and semantic aspects of scientific practice, and overlaps with metaphysics, ontology, logic, and epistemology, for example, when it explores the relationship between science and the concept of truth. Philosophy of science is both a theoretical and empirical discipline, relying on philosophical theorising as well as meta-studies of scientific practice. Ethical issues such as bioethics and scientific misconduct are often considered ethics or science studies rather than the philosophy of science. Many of the central problems concerned with the philosophy of science lack contemporary consensus, including whether science can infer truth about unobservable entities and whether inductive reasoning can be justified as yielding definite scientific knowledge. Philosophers of science also consider philosophical problems within particular sciences (such as biology, physics and social sciences such as economics and psychology). Some philosophers of science also use contemporary results in science to reach conclusions about philosophy itself. While philosophical thought pertaining to science dates back at least to the time of Aristotle, the general philosophy of science emerged as a distinct discipline only in the 20th century following the logical positivist movement, which aimed to formulate criteria for ensuring all philosophical statements' meaningfulness and objectively assessing them. Karl Popper criticized logical positivism and helped establish a modern set of standards for scientific methodology. Thomas Kuhn's 1962 book The Structure of Scientific Revolutions was also formative, challenging the view of scientific progress as the steady, cumulative acquisition of knowledge based on a fixed method of systematic experimentation and instead arguing that any progress is relative to a "paradigm", the set of questions, concepts, and practices that define a scientific discipline in a particular historical period. Subsequently, the coherentist approach to science, in which a theory is validated if it makes sense of observations as part of a coherent whole, became prominent due to W. V. Quine and others. Some thinkers such as Stephen Jay Gould seek to ground science in axiomatic assumptions, such as the uniformity of nature. A vocal minority of philosophers, and Paul Feyerabend in particular, argue against the existence of the "scientific method", so all approaches to science should be allowed, including explicitly supernatural ones. Another approach to thinking about science involves studying how knowledge is created from a sociological perspective, an approach represented by scholars like David Bloor and Barry Barnes. Finally, a tradition in continental philosophy approaches science from the perspective of a rigorous analysis of human experience. Philosophies of the particular sciences range from questions about the nature of time raised by Einstein's general relativity, to the implications of economics for public policy. A central theme is whether the terms of one scientific theory can be intra- or intertheoretically reduced to the terms of another. Can chemistry be reduced to physics, or can sociology be reduced to individual psychology? 
The general questions of philosophy of science also arise with greater specificity in some particular sciences. For instance, the question of the validity of scientific reasoning is seen in a different guise in the foundations of statistics. The question of what counts as science and what should be excluded arises as a life-or-death matter in the philosophy of medicine. Additionally, the philosophies of biology, psychology, and the social sciences explore whether the scientific studies of human nature can achieve objectivity or are inevitably shaped by values and by social relations. == Introduction == === Defining science === Distinguishing between science and non-science is referred to as the demarcation problem. For example, should psychoanalysis, creation science, and historical materialism be considered pseudosciences? Karl Popper called this the central question in the philosophy of science. However, no unified account of the problem has won acceptance among philosophers, and some regard the problem as unsolvable or uninteresting. Martin Gardner has argued for the use of a Potter Stewart standard ("I know it when I see it") for recognizing pseudoscience. Early attempts by the logical positivists grounded science in observation while non-science was non-observational and hence meaningless. Popper argued that the central property of science is falsifiability. That is, every genuinely scientific claim is capable of being proven false, at least in principle. An area of study or speculation that masquerades as science in an attempt to claim a legitimacy that it would not otherwise be able to achieve is referred to as pseudoscience, fringe science, or junk science. Physicist Richard Feynman coined the term "cargo cult science" for cases in which researchers believe they are doing science because their activities have the outward appearance of it but actually lack the "kind of utter honesty" that allows their results to be rigorously evaluated. === Scientific explanation === A closely related question is what counts as a good scientific explanation. In addition to providing predictions about future events, society often takes scientific theories to provide explanations for events that occur regularly or have already occurred. Philosophers have investigated the criteria by which a scientific theory can be said to have successfully explained a phenomenon, as well as what it means to say a scientific theory has explanatory power. One early and influential account of scientific explanation is the deductive-nomological model. It says that a successful scientific explanation must deduce the occurrence of the phenomena in question from a scientific law. This view has been subjected to substantial criticism, resulting in several widely acknowledged counterexamples to the theory. It is especially challenging to characterize what is meant by an explanation when the thing to be explained cannot be deduced from any law because it is a matter of chance, or otherwise cannot be perfectly predicted from what is known. Wesley Salmon developed a model in which a good scientific explanation must be statistically relevant to the outcome to be explained. Others have argued that the key to a good explanation is unifying disparate phenomena or providing a causal mechanism. 
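The deductive-nomological model is often displayed as a schematic argument; the rendering below is a standard textbook reconstruction (following Hempel's presentation) rather than a formula taken from this article's sources.

```latex
\begin{array}{ll}
L_1, L_2, \ldots, L_r & \text{(general laws)} \\
C_1, C_2, \ldots, C_k & \text{(statements of antecedent conditions)} \\
\hline
E & \text{(description of the phenomenon to be explained)}
\end{array}
```

For instance, from the law that metals expand when heated together with the condition that a particular rail was heated, one deduces that the rail expanded. A classic counterexample in this literature is that the height of a flagpole can be deduced, via the laws of optics and geometry, from the length of its shadow and the elevation of the sun, yet the shadow does not intuitively explain the height; such cases motivate the criticisms and the causal and unificationist alternatives mentioned above.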
=== Justifying science === Although it is often taken for granted, it is not at all clear how one can infer the validity of a general statement from a number of specific instances or infer the truth of a theory from a series of successful tests. For example, a chicken observes that each morning the farmer comes and gives it food, for hundreds of days in a row. The chicken may therefore use inductive reasoning to infer that the farmer will bring food every morning. However, one morning, the farmer comes and kills the chicken. How is scientific reasoning more trustworthy than the chicken's reasoning? One approach is to acknowledge that induction cannot achieve certainty, but observing more instances of a general statement can at least make the general statement more probable. So the chicken would be right to conclude from all those mornings that it is likely the farmer will come with food again the next morning, even if it cannot be certain. However, there remain difficult questions about the process of interpreting any given evidence into a probability that the general statement is true. One way out of these particular difficulties is to declare that all beliefs about scientific theories are subjective, or personal, and correct reasoning is merely about how evidence should change one's subjective beliefs over time. Some argue that what scientists do is not inductive reasoning at all but rather abductive reasoning, or inference to the best explanation. In this account, science is not about generalizing specific instances but rather about hypothesizing explanations for what is observed. As discussed in the previous section, it is not always clear what is meant by the "best explanation". Ockham's razor, which counsels choosing the simplest available explanation, thus plays an important role in some versions of this approach. To return to the example of the chicken, would it be simpler to suppose that the farmer cares about it and will continue taking care of it indefinitely or that the farmer is fattening it up for slaughter? Philosophers have tried to make this heuristic principle more precise regarding theoretical parsimony or other measures. Yet, although various measures of simplicity have been brought forward as potential candidates, it is generally accepted that there is no such thing as a theory-independent measure of simplicity. In other words, there appear to be as many different measures of simplicity as there are theories themselves, and the task of choosing between measures of simplicity appears to be every bit as problematic as the job of choosing between theories. Nicholas Maxwell has argued for some decades that unity rather than simplicity is the key non-empirical factor in influencing the choice of theory in science, persistent preference for unified theories in effect committing science to the acceptance of a metaphysical thesis concerning unity in nature. In order to improve this problematic thesis, it needs to be represented in the form of a hierarchy of theses, each thesis becoming more insubstantial as one goes up the hierarchy. === Observation inseparable from theory === When making observations, scientists look through telescopes, study images on electronic screens, record meter readings, and so on. Generally, on a basic level, they can agree on what they see, e.g., the thermometer shows 37.9 degrees C. 
But, if these scientists have different ideas about the theories that have been developed to explain these basic observations, they may disagree about what they are observing. For example, before Albert Einstein's general theory of relativity, observers would have likely interpreted an image of the Einstein cross as five different objects in space. In light of that theory, however, astronomers will tell you that there are actually only two objects, one in the center and four different images of a second object around the sides. Alternatively, if other scientists suspect that something is wrong with the telescope and only one object is actually being observed, they are operating under yet another theory. Observations that cannot be separated from theoretical interpretation are said to be theory-laden. All observation involves both perception and cognition. That is, one does not make an observation passively, but rather is actively engaged in distinguishing the phenomenon being observed from surrounding sensory data. Therefore, observations are affected by one's underlying understanding of the way in which the world functions, and that understanding may influence what is perceived, noticed, or deemed worthy of consideration. In this sense, it can be argued that all observation is theory-laden. === The purpose of science === Should science aim to determine ultimate truth, or are there questions that science cannot answer? Scientific realists claim that science aims at truth and that one ought to regard scientific theories as true, approximately true, or likely true. Conversely, scientific anti-realists argue that science does not aim (or at least does not succeed) at truth, especially truth about unobservables like electrons or other universes. Instrumentalists argue that scientific theories should only be evaluated on whether they are useful. In their view, whether theories are true or not is beside the point, because the purpose of science is to make predictions and enable effective technology. Realists often point to the success of recent scientific theories as evidence for the truth (or near truth) of current theories. Antirealists point to either the many false theories in the history of science, epistemic morals, the success of false modeling assumptions, or widely termed postmodern criticisms of objectivity as evidence against scientific realism. Antirealists attempt to explain the success of scientific theories without reference to truth. Some antirealists claim that scientific theories aim at being accurate only about observable objects and argue that their success is primarily judged by that criterion. ==== Real patterns ==== The notion of real patterns has been propounded, notably by philosopher Daniel C. Dennett, as an intermediate position between strong realism and eliminative materialism. This concept delves into the investigation of patterns observed in scientific phenomena to ascertain whether they signify underlying truths or are mere constructs of human interpretation. Dennett provides a unique ontological account concerning real patterns, examining the extent to which these recognized patterns have predictive utility and allow for efficient compression of information. The discourse on real patterns extends beyond philosophical circles, finding relevance in various scientific domains. 
For example, in biology, inquiries into real patterns seek to elucidate the nature of biological explanations, exploring how recognized patterns contribute to a comprehensive understanding of biological phenomena. Similarly, in chemistry, debates around the reality of chemical bonds as real patterns continue. Evaluation of real patterns also holds significance in broader scientific inquiries. Researchers, like Tyler Millhouse, propose criteria for evaluating the realness of a pattern, particularly in the context of universal patterns and the human propensity to perceive patterns, even where there might be none. This evaluation is pivotal in advancing research in diverse fields, from climate change to machine learning, where recognition and validation of real patterns in scientific models play a crucial role. === Values and science === Values intersect with science in different ways. There are epistemic values that mainly guide the scientific research. The scientific enterprise is embedded in particular culture and values through individual practitioners. Values emerge from science, both as product and process and can be distributed among several cultures in the society. When it comes to the justification of science in the sense of general public participation by single practitioners, science plays the role of a mediator between evaluating the standards and policies of society and its participating individuals, wherefore science indeed falls victim to vandalism and sabotage adapting the means to the end. If it is unclear what counts as science, how the process of confirming theories works, and what the purpose of science is, there is considerable scope for values and other social influences to shape science. Indeed, values can play a role ranging from determining which research gets funded to influencing which theories achieve scientific consensus. For example, in the 19th century, cultural values held by scientists about race shaped research on evolution, and values concerning social class influenced debates on phrenology (considered scientific at the time). Feminist philosophers of science, sociologists of science, and others explore how social values affect science. == History == === Pre-modern === The origins of philosophy of science trace back to Plato and Aristotle, who distinguished the forms of approximate and exact reasoning, set out the threefold scheme of abductive, deductive, and inductive inference, and also analyzed reasoning by analogy. The eleventh century Arab polymath Ibn al-Haytham (known in Latin as Alhazen) conducted his research in optics by way of controlled experimental testing and applied geometry, especially in his investigations into the images resulting from the reflection and refraction of light. Roger Bacon (1214–1294), an English thinker and experimenter heavily influenced by al-Haytham, is recognized by many to be the father of modern scientific method. His view that mathematics was essential to a correct understanding of natural philosophy is considered to have been 400 years ahead of its time. === Modern === Francis Bacon (no direct relation to Roger Bacon, who lived 300 years earlier) was a seminal figure in philosophy of science at the time of the Scientific Revolution. In his work Novum Organum (1620)—an allusion to Aristotle's Organon—Bacon outlined a new system of logic to improve upon the old philosophical process of syllogism. Bacon's method relied on experimental histories to eliminate alternative theories. 
In 1637, René Descartes established a new framework for grounding scientific knowledge in his treatise, Discourse on Method, advocating the central role of reason as opposed to sensory experience. By contrast, in 1713, the 2nd edition of Isaac Newton's Philosophiae Naturalis Principia Mathematica argued that "... hypotheses ... have no place in experimental philosophy. In this philosophy[,] propositions are deduced from the phenomena and rendered general by induction." This passage influenced a "later generation of philosophically-inclined readers to pronounce a ban on causal hypotheses in natural philosophy". In particular, later in the 18th century, David Hume would famously articulate skepticism about the ability of science to determine causality and gave a definitive formulation of the problem of induction, though both theses would be contested by the end of the 18th century by Immanuel Kant in his Critique of Pure Reason and Metaphysical Foundations of Natural Science. In 19th century Auguste Comte made a major contribution to the theory of science. The 19th century writings of John Stuart Mill are also considered important in the formation of current conceptions of the scientific method, as well as anticipating later accounts of scientific explanation. === Logical positivism === Instrumentalism became popular among physicists around the turn of the 20th century, after which logical positivism defined the field for several decades. Logical positivism accepts only testable statements as meaningful, rejects metaphysical interpretations, and embraces verificationism (a set of theories of knowledge that combines logicism, empiricism, and linguistics to ground philosophy on a basis consistent with examples from the empirical sciences). Seeking to overhaul all of philosophy and convert it to a new scientific philosophy, the Berlin Circle and the Vienna Circle propounded logical positivism in the late 1920s. Interpreting Ludwig Wittgenstein's early philosophy of language, logical positivists identified a verifiability principle or criterion of cognitive meaningfulness. From Bertrand Russell's logicism they sought reduction of mathematics to logic. They also embraced Russell's logical atomism, Ernst Mach's phenomenalism—whereby the mind knows only actual or potential sensory experience, which is the content of all sciences, whether physics or psychology—and Percy Bridgman's operationalism. Thereby, only the verifiable was scientific and cognitively meaningful, whereas the unverifiable was unscientific, cognitively meaningless "pseudostatements"—metaphysical, emotive, or such—not worthy of further review by philosophers, who were newly tasked to organize knowledge rather than develop new knowledge. Logical positivism is commonly portrayed as taking the extreme position that scientific language should never refer to anything unobservable—even the seemingly core notions of causality, mechanism, and principles—but that is an exaggeration. Talk of such unobservables could be allowed as metaphorical—direct observations viewed in the abstract—or at worst metaphysical or emotional. Theoretical laws would be reduced to empirical laws, while theoretical terms would garner meaning from observational terms via correspondence rules. Mathematics in physics would reduce to symbolic logic via logicism, while rational reconstruction would convert ordinary language into standardized equivalents, all networked and united by a logical syntax. 
A scientific theory would be stated with its method of verification, whereby a logical calculus or empirical operation could verify its falsity or truth. In the late 1930s, logical positivists fled Germany and Austria for Britain and America. By then, many had replaced Mach's phenomenalism with Otto Neurath's physicalism, and Rudolf Carnap had sought to replace verification with simply confirmation. With World War II's close in 1945, logical positivism became milder, logical empiricism, led in America largely by Carl Hempel, who expounded the covering law model of scientific explanation as a way of identifying the logical form of explanations without any reference to the suspect notion of "causation". The logical positivist movement became a major underpinning of analytic philosophy, and dominated Anglosphere philosophy, including philosophy of science, while influencing sciences, into the 1960s. Yet the movement failed to resolve its central problems, and its doctrines were increasingly assaulted. Nevertheless, it brought about the establishment of philosophy of science as a distinct subdiscipline of philosophy, with Carl Hempel playing a key role. === Thomas Kuhn === In the 1962 book The Structure of Scientific Revolutions, Thomas Kuhn argued that the process of observation and evaluation takes place within a "paradigm", which he describes as "universally recognized achievements that for a time provide model problems and solutions to a community of practitioners." A paradigm implicitly identifies the objects and relations under study and suggests what experiments, observations or theoretical improvements need to be carried out to produce a useful result. He characterized normal science as the process of observation and "puzzle solving" which takes place within a paradigm, whereas revolutionary science occurs when one paradigm overtakes another in a paradigm shift. Kuhn was a historian of science, and his ideas were inspired by the study of older paradigms that have been discarded, such as Aristotelian mechanics or aether theory. These had often been portrayed by historians as using "unscientific" methods or beliefs. But careful examination showed that they were no less "scientific" than modern paradigms. Both were based on valid evidence, and both failed to answer every possible question. A paradigm shift occurred when a significant number of observational anomalies arose in the old paradigm and efforts to resolve them within the paradigm were unsuccessful. A new paradigm was available that handled the anomalies with less difficulty and yet still covered (most of) the previous results. Over a period of time, often as long as a generation, more practitioners began working within the new paradigm and eventually the old paradigm was abandoned. For Kuhn, acceptance or rejection of a paradigm is a social process as much as a logical process. Kuhn's position, however, is not one of relativism; he wrote "terms like 'subjective' and 'intuitive' cannot be applied to [paradigms]." Paradigms are grounded in objective, observable evidence, but our use of them is psychological and our acceptance of them is social. == Current approaches == === Naturalism's axiomatic assumptions === According to Robert Priddy, all scientific study inescapably builds on at least some essential assumptions that cannot be tested by scientific processes; that is, that scientists must start with some assumptions as to the ultimate analysis of the facts with which they deal. 
These assumptions would then be justified partly by their adherence to the types of occurrence of which we are directly conscious, and partly by their success in representing the observed facts with a certain generality, devoid of ad hoc suppositions. Kuhn also claims that all science is based on assumptions about the character of the universe, rather than merely on empirical facts. These assumptions – a paradigm – comprise a collection of beliefs, values and techniques that are held by a given scientific community, which legitimize their systems and set the limitations to their investigation. For naturalists, nature is the only reality, the "correct" paradigm, and there is no such thing as supernatural, i.e. anything above, beyond, or outside of nature. The scientific method is to be used to investigate all reality, including the human spirit. Some claim that naturalism is the implicit philosophy of working scientists, and that the following basic assumptions are needed to justify the scientific method: That there is an objective reality shared by all rational observers. "The basis for rationality is acceptance of an external objective reality." "Objective reality is clearly an essential thing if we are to develop a meaningful perspective of the world. Nevertheless its very existence is assumed." "Our belief that objective reality exists is an assumption that it arises from a real world outside of ourselves. As infants we made this assumption unconsciously. People are happy to make this assumption, which adds meaning to our sensations and feelings, rather than live with solipsism." "Without this assumption, there would be only the thoughts and images in our own mind (which would be the only existing mind) and there would be no need of science, or anything else." That this objective reality is governed by natural laws; "Science, at least today, assumes that the universe obeys knowable principles that don't depend on time or place, nor on subjective parameters such as what we think, know or how we behave." Hugh Gauch argues that science presupposes that "the physical world is orderly and comprehensible." That reality can be discovered by means of systematic observation and experimentation. Stanley Sobottka said: "The assumption of external reality is necessary for science to function and to flourish. For the most part, science is the discovering and explaining of the external world." "Science attempts to produce knowledge that is as universal and objective as possible within the realm of human understanding." That Nature has uniformity of laws and most if not all things in nature must have at least a natural cause. Biologist Stephen Jay Gould referred to these two closely related propositions as the constancy of nature's laws and the operation of known processes. Simpson agrees that the axiom of uniformity of law, an unprovable postulate, is necessary in order for scientists to extrapolate inductive inference into the unobservable past in order to meaningfully study it. "The assumption of spatial and temporal invariance of natural laws is by no means unique to geology since it amounts to a warrant for inductive inference which, as Bacon showed nearly four hundred years ago, is the basic mode of reasoning in empirical science. Without assuming this spatial and temporal invariance, we have no basis for extrapolating from the known to the unknown and, therefore, no way of reaching general conclusions from a finite number of observations. 
(Since the assumption is itself vindicated by induction, it can in no way "prove" the validity of induction — an endeavor virtually abandoned after Hume demonstrated its futility two centuries ago)." Gould also notes that natural processes such as Lyell's "uniformity of process" are an assumption: "As such, it is another a priori assumption shared by all scientists and not a statement about the empirical world." According to R. Hooykaas: "The principle of uniformity is not a law, not a rule established after comparison of facts, but a principle, preceding the observation of facts ... It is the logical principle of parsimony of causes and of economy of scientific notions. By explaining past changes by analogy with present phenomena, a limit is set to conjecture, for there is only one way in which two things are equal, but there are an infinity of ways in which they could be supposed different." That experimental procedures will be done satisfactorily without any deliberate or unintentional mistakes that will influence the results. That experimenters won't be significantly biased by their presumptions. That random sampling is representative of the entire population. A simple random sample (SRS) is the most basic probabilistic option used for creating a sample from a population. The benefit of SRS is that the investigator is guaranteed to choose a sample that is representative of the population, which ensures statistically valid conclusions. === Coherentism === In contrast to the view that science rests on foundational assumptions, coherentism asserts that statements are justified by being a part of a coherent system. Or, rather, individual statements cannot be validated on their own: only coherent systems can be justified. A prediction of a transit of Venus is justified by its being coherent with broader beliefs about celestial mechanics and earlier observations. As explained above, observation is a cognitive act. That is, it relies on a pre-existing understanding, a systematic set of beliefs. An observation of a transit of Venus requires a huge range of auxiliary beliefs, such as those that describe the optics of telescopes, the mechanics of the telescope mount, and an understanding of celestial mechanics. If the prediction fails and a transit is not observed, that is likely to occasion an adjustment in the system, a change in some auxiliary assumption, rather than a rejection of the theoretical system. According to the Duhem–Quine thesis, after Pierre Duhem and W.V. Quine, it is impossible to test a theory in isolation. One must always add auxiliary hypotheses in order to make testable predictions. For example, to test Newton's Law of Gravitation in the solar system, one needs information about the masses and positions of the Sun and all the planets. Famously, the failure to predict the orbit of Uranus in the 19th century led not to the rejection of Newton's Law but rather to the rejection of the hypothesis that the Solar System comprises only seven planets. The investigations that followed led to the discovery of an eighth planet, Neptune. If a test fails, something is wrong. But there is a problem in figuring out what that something is: a missing planet, badly calibrated test equipment, an unsuspected curvature of space, or something else. One consequence of the Duhem–Quine thesis is that one can make any theory compatible with any empirical observation by the addition of a sufficient number of suitable ad hoc hypotheses. Karl Popper accepted this thesis, leading him to reject naïve falsification. 
Instead, he favored a "survival of the fittest" view in which the most falsifiable scientific theories are to be preferred. === Anything goes methodology === Paul Feyerabend (1924–1994) argued that no description of scientific method could possibly be broad enough to include all the approaches and methods used by scientists, and that there are no useful and exception-free methodological rules governing the progress of science. He argued that "the only principle that does not inhibit progress is: anything goes". Feyerabend said that science started as a liberating movement, but that over time it had become increasingly dogmatic and rigid and had some oppressive features, and thus had become increasingly an ideology. Because of this, he said it was impossible to come up with an unambiguous way to distinguish science from religion, magic, or mythology. He saw the exclusive dominance of science as a means of directing society as authoritarian and ungrounded. Promulgation of this epistemological anarchism earned Feyerabend the title of "the worst enemy of science" from his detractors. === Sociology of scientific knowledge methodology === According to Kuhn, science is an inherently communal activity which can only be done as part of a community. For him, the fundamental difference between science and other disciplines is the way in which the communities function. Others, especially Feyerabend and some post-modernist thinkers, have argued that there is insufficient difference between social practices in science and other disciplines to maintain this distinction. For them, social factors play an important and direct role in scientific method, but they do not serve to differentiate science from other disciplines. On this account, science is socially constructed, though this does not necessarily imply the more radical notion that reality itself is a social construct. Michel Foucault sought to analyze and uncover how disciplines within the social sciences developed and adopted the methodologies used by their practitioners. In works like The Archaeology of Knowledge, he used the term human sciences. The human sciences do not comprise mainstream academic disciplines; they are rather an interdisciplinary space for the reflection on man who is the subject of more mainstream scientific knowledge, taken now as an object, sitting between these more conventional areas, and of course associating with disciplines such as anthropology, psychology, sociology, and even history. Rejecting the realist view of scientific inquiry, Foucault argued throughout his work that scientific discourse is not simply an objective study of phenomena, as both natural and social scientists like to believe, but is rather the product of systems of power relations struggling to construct scientific disciplines and knowledge within given societies. With the advances of scientific disciplines, such as psychology and anthropology, the need to separate, categorize, normalize and institutionalize populations into constructed social identities became a staple of the sciences. Constructions of what were considered "normal" and "abnormal" stigmatized and ostracized groups of people, like the mentally ill and sexual and gender minorities. However, some (such as Quine) do maintain that scientific reality is a social construct: Physical objects are conceptually imported into the situation as convenient intermediaries not by definition in terms of experience, but simply as irreducible posits comparable, epistemologically, to the gods of Homer ... 
For my part I do, qua lay physicist, believe in physical objects and not in Homer's gods; and I consider it a scientific error to believe otherwise. But in point of epistemological footing, the physical objects and the gods differ only in degree and not in kind. Both sorts of entities enter our conceptions only as cultural posits. The public backlash of scientists against such views, particularly in the 1990s, became known as the science wars. A major development in recent decades has been the study of the formation, structure, and evolution of scientific communities by sociologists and anthropologists – including David Bloor, Harry Collins, Bruno Latour, Ian Hacking and Anselm Strauss. Concepts and methods (such as rational choice, social choice or game theory) from economics have also been applied for understanding the efficiency of scientific communities in the production of knowledge. This interdisciplinary field has come to be known as science and technology studies. Here the approach to the philosophy of science is to study how scientific communities actually operate. === Continental philosophy === Philosophers in the continental philosophical tradition are not traditionally categorized as philosophers of science. However, they have much to say about science, some of which has anticipated themes in the analytical tradition. For example, in The Genealogy of Morals (1887) Friedrich Nietzsche advanced the thesis that the motive for the search for truth in sciences is a kind of ascetic ideal. In general, continental philosophy views science from a world-historical perspective. Philosophers such as Pierre Duhem (1861–1916) and Gaston Bachelard (1884–1962) wrote their works with this world-historical approach to science, predating Kuhn's 1962 work by a generation or more. All of these approaches involve a historical and sociological turn to science, with a priority on lived experience (a kind of Husserlian "life-world"), rather than a progress-based or anti-historical approach as emphasised in the analytic tradition. One can trace this continental strand of thought through the phenomenology of Edmund Husserl (1859–1938), the late works of Merleau-Ponty (Nature: Course Notes from the Collège de France, 1956–1960), and the hermeneutics of Martin Heidegger (1889–1976). The largest effect on the continental tradition with respect to science came from Martin Heidegger's critique of the theoretical attitude in general, which of course includes the scientific attitude. For this reason, the continental tradition has remained much more skeptical of the importance of science in human life and in philosophical inquiry. Nonetheless, there have been a number of important works: especially those of a Kuhnian precursor, Alexandre Koyré (1892–1964). Another important development was that of Michel Foucault's analysis of historical and scientific thought in The Order of Things (1966) and his study of power and corruption within the "science" of madness. Post-Heideggerian authors contributing to continental philosophy of science in the second half of the 20th century include Jürgen Habermas (e.g., Truth and Justification, 1998), Carl Friedrich von Weizsäcker (The Unity of Nature, 1980; German: Die Einheit der Natur (1971)), and Wolfgang Stegmüller (Probleme und Resultate der Wissenschaftstheorie und Analytischen Philosophie, 1973–1986). == Other topics == === Reductionism === Analysis involves breaking an observation or theory down into simpler concepts in order to understand it. 
Reductionism can refer to one of several philosophical positions related to this approach. One type of reductionism suggests that phenomena are amenable to scientific explanation at lower levels of analysis and inquiry. Perhaps a historical event might be explained in sociological and psychological terms, which in turn might be described in terms of human physiology, which in turn might be described in terms of chemistry and physics. Daniel Dennett distinguishes legitimate reductionism from what he calls greedy reductionism, which denies real complexities and leaps too quickly to sweeping generalizations. === Social accountability === A broad issue affecting the neutrality of science concerns the areas which science chooses to explore—that is, what part of the world and of humankind are studied by science. Philip Kitcher in his Science, Truth, and Democracy argues that scientific studies that attempt to show one segment of the population as being less intelligent, less successful, or emotionally backward compared to others have a political feedback effect which further excludes such groups from access to science. Thus such studies undermine the broad consensus required for good science by excluding certain people, and so proving themselves in the end to be unscientific. == Philosophy of particular sciences == There is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination. In addition to addressing the general questions regarding science and induction, many philosophers of science are occupied by investigating foundational problems in particular sciences. They also examine the implications of particular sciences for broader philosophical questions. The late 20th and early 21st centuries have seen a rise in the number of practitioners of philosophy of a particular science. === Philosophy of statistics === The problem of induction discussed above is seen in another form in debates over the foundations of statistics. The standard approach to statistical hypothesis testing avoids claims about whether evidence supports a hypothesis or makes it more probable. Instead, the typical test yields a p-value, which is the probability of the evidence being such as it is, under the assumption that the null hypothesis is true. If the p-value is too low, the null hypothesis is rejected, in a way analogous to falsification. In contrast, Bayesian inference seeks to assign probabilities to hypotheses; a minimal numerical sketch contrasting the two approaches is given below, after the philosophy of mathematics discussion. Related topics in philosophy of statistics include probability interpretations, overfitting, and the difference between correlation and causation. === Philosophy of mathematics === Philosophy of mathematics is concerned with the philosophical foundations and implications of mathematics. The central questions are whether numbers, triangles, and other mathematical entities exist independently of the human mind and what the nature of mathematical propositions is. Is asking whether "1 + 1 = 2" is true fundamentally different from asking whether a ball is red? Was calculus invented or discovered? A related question is whether learning mathematics requires experience or reason alone. What does it mean to prove a mathematical theorem and how does one know whether a mathematical proof is correct? Philosophers of mathematics also aim to clarify the relationships between mathematics and logic, human capabilities such as intuition, and the material universe. 
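Returning to the philosophy of statistics discussion above, the contrast between the frequentist and Bayesian approaches can be made concrete with a minimal sketch. The experiment (60 heads in 100 tosses of a possibly biased coin), the uniform prior, and the use of SciPy are illustrative assumptions introduced here, not material from the article.

```python
from scipy import stats

# Illustrative data: 60 heads in 100 tosses; null hypothesis: the coin is fair (p = 0.5).
heads, tosses = 60, 100

# Frequentist route: a two-sided binomial test yields a p-value, the probability of
# data at least this extreme if the null hypothesis is true. A sufficiently low
# p-value leads to rejection of the null, loosely analogous to falsification.
p_value = stats.binomtest(heads, tosses, p=0.5, alternative="two-sided").pvalue

# Bayesian route: place a prior over the coin's bias and update it with the data.
# With a uniform Beta(1, 1) prior, the posterior is Beta(1 + heads, 1 + tails),
# and one can ask directly how probable it is that the coin favours heads.
posterior = stats.beta(1 + heads, 1 + tosses - heads)
prob_bias_towards_heads = 1 - posterior.cdf(0.5)

print(f"p-value under the null of a fair coin: {p_value:.3f}")
print(f"posterior probability that the bias exceeds 0.5: {prob_bias_towards_heads:.3f}")
```

The point of the contrast is that the p-value says nothing directly about how probable the hypothesis itself is, whereas the posterior does, at the price of having to choose a prior.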
=== Philosophy of physics === Philosophy of physics is the study of the fundamental, philosophical questions underlying modern physics, the study of matter and energy and how they interact. The main questions concern the nature of space and time, atoms and atomism. Also included are the predictions of cosmology, the interpretation of quantum mechanics, the foundations of statistical mechanics, causality, determinism, and the nature of physical laws. Classically, several of these questions were studied as part of metaphysics (for example, those about causality, determinism, and space and time). === Philosophy of chemistry === Philosophy of chemistry is the philosophical study of the methodology and content of the science of chemistry. It is explored by philosophers, chemists, and philosopher-chemist teams. It includes research on general philosophy of science issues as applied to chemistry. For example, can all chemical phenomena be explained by quantum mechanics or is it not possible to reduce chemistry to physics? For another example, chemists have discussed the philosophy of how theories are confirmed in the context of confirming reaction mechanisms. Determining reaction mechanisms is difficult because they cannot be observed directly. Chemists can use a number of indirect measures as evidence to rule out certain mechanisms, but they are often unsure if the remaining mechanism is correct because there are many other possible mechanisms that they have not tested or even thought of. Philosophers have also sought to clarify the meaning of chemical concepts which do not refer to specific physical entities, such as chemical bonds. === Philosophy of astronomy === The philosophy of astronomy seeks to understand and analyze the methodologies and technologies used by experts in the discipline, focusing on how observations made about space and astrophysical phenomena can be studied. Given that astronomers rely on and use theories and formulas from other scientific disciplines, such as chemistry and physics, a main point of inquiry is how knowledge about the cosmos can be obtained, how the Earth and the Solar System figure in humanity's view of its place in the universe, and how facts about space can be scientifically analyzed and reconciled with other established knowledge. === Philosophy of Earth sciences === The philosophy of Earth science is concerned with how humans obtain and verify knowledge of the workings of the Earth system, including the atmosphere, hydrosphere, and geosphere (solid earth). Earth scientists' ways of knowing and habits of mind share important commonalities with other sciences, but also have distinctive attributes that emerge from the complex, heterogeneous, unique, long-lived, and non-manipulatable nature of the Earth system. === Philosophy of biology === Philosophy of biology deals with epistemological, metaphysical, and ethical issues in the biological and biomedical sciences. Although philosophers of science and philosophers generally have long been interested in biology (e.g., Aristotle, Descartes, Leibniz and even Kant), philosophy of biology only emerged as an independent field of philosophy in the 1960s and 1970s. Philosophers of science began to pay increasing attention to developments in biology, from the rise of the modern synthesis in the 1930s and 1940s to the discovery of the structure of deoxyribonucleic acid (DNA) in 1953 to more recent advances in genetic engineering. 
Other key ideas such as the reduction of all life processes to biochemical reactions as well as the incorporation of psychology into a broader neuroscience are also addressed. Research in current philosophy of biology includes investigation of the foundations of evolutionary theory (such as Peter Godfrey-Smith's work), and the role of viruses as persistent symbionts in host genomes. As a consequence, the evolution of genetic content order is seen as the result of competent genome editors in contrast to former narratives in which error replication events (mutations) dominated. === Philosophy of medicine === Beyond medical ethics and bioethics, the philosophy of medicine is a branch of philosophy that includes the epistemology and ontology/metaphysics of medicine. Within the epistemology of medicine, evidence-based medicine (EBM) (or evidence-based practice (EBP)) has attracted attention, most notably the roles of randomisation, blinding and placebo controls. Related to these areas of investigation, ontologies of specific interest to the philosophy of medicine include Cartesian dualism, the monogenetic conception of disease and the conceptualization of 'placebos' and 'placebo effects'. There is also a growing interest in the metaphysics of medicine, particularly the idea of causation. Philosophers of medicine might not only be interested in how medical knowledge is generated, but also in the nature of such phenomena. Causation is of interest because the purpose of much medical research is to establish causal relationships, e.g. what causes disease, or what causes people to get better. === Philosophy of psychiatry === Philosophy of psychiatry explores philosophical questions relating to psychiatry and mental illness. The philosopher of science and medicine Dominic Murphy identifies three areas of exploration in the philosophy of psychiatry. The first concerns the examination of psychiatry as a science, using the tools of the philosophy of science more broadly. The second entails the examination of the concepts employed in discussion of mental illness, including the experience of mental illness, and the normative questions it raises. The third area concerns the links and discontinuities between the philosophy of mind and psychopathology. === Philosophy of psychology === Philosophy of psychology refers to issues at the theoretical foundations of modern psychology. Some of these issues are epistemological concerns about the methodology of psychological investigation. For example, is the best method for studying psychology to focus only on the response of behavior to external stimuli or should psychologists focus on mental perception and thought processes? If the latter, an important question is how the internal experiences of others can be measured. Self-reports of feelings and beliefs may not be reliable because, even in cases in which there is no apparent incentive for subjects to intentionally deceive in their answers, self-deception or selective memory may affect their responses. Then even in the case of accurate self-reports, how can responses be compared across individuals? Even if two individuals respond with the same answer on a Likert scale, they may be experiencing very different things. Other issues in philosophy of psychology are philosophical questions about the nature of mind, brain, and cognition, and are perhaps more commonly thought of as part of cognitive science, or philosophy of mind. For example, are humans rational creatures? 
Is there any sense in which they have free will, and how does that relate to the experience of making choices? Philosophy of psychology also closely monitors contemporary work conducted in cognitive neuroscience, psycholinguistics, and artificial intelligence, questioning what they can and cannot explain in psychology. Philosophy of psychology is a relatively young field, because psychology only became a discipline of its own in the late 1800s. In particular, neurophilosophy has just recently become its own field with the works of Paul Churchland and Patricia Churchland. Philosophy of mind, by contrast, has been a well-established discipline since before psychology was a field of study at all. It is concerned with questions about the very nature of mind, the qualities of experience, and particular issues like the debate between dualism and monism. === Philosophy of social science === The philosophy of social science is the study of the logic and method of the social sciences, such as sociology and cultural anthropology. Philosophers of social science are concerned with the differences and similarities between the social and the natural sciences, causal relationships between social phenomena, the possible existence of social laws, and the ontological significance of structure and agency. The French philosopher, Auguste Comte (1798–1857), established the epistemological perspective of positivism in The Course in Positivist Philosophy, a series of texts published between 1830 and 1842. The first three volumes of the Course dealt chiefly with the natural sciences already in existence (geoscience, astronomy, physics, chemistry, biology), whereas the latter two emphasised the inevitable coming of social science: "sociologie". For Comte, the natural sciences had to necessarily arrive first, before humanity could adequately channel its efforts into the most challenging and complex "Queen science" of human society itself. Comte offers an evolutionary system proposing that society undergoes three phases in its quest for the truth according to a general 'law of three stages'. These are (1) the theological, (2) the metaphysical, and (3) the positive. Comte's positivism established the initial philosophical foundations for formal sociology and social research. Durkheim, Marx, and Weber are more typically cited as the fathers of contemporary social science. In psychology, a positivistic approach has historically been favoured in behaviourism. Positivism has also been espoused by 'technocrats' who believe in the inevitability of social progress through science and technology. The positivist perspective has been associated with 'scientism'; the view that the methods of the natural sciences may be applied to all areas of investigation, be it philosophical, social scientific, or otherwise. Among most social scientists and historians, orthodox positivism has long since lost popular support. Today, practitioners of both social and physical sciences instead take into account the distorting effect of observer bias and structural limitations. This scepticism has been facilitated by a general weakening of deductivist accounts of science by philosophers such as Thomas Kuhn, and new philosophical movements such as critical realism and neopragmatism. The philosopher-sociologist Jürgen Habermas has critiqued pure instrumental rationality as meaning that scientific-thinking becomes something akin to ideology itself. 
=== Philosophy of technology === The philosophy of technology is a sub-field of philosophy that studies the nature of technology. Specific research topics include study of the role of tacit and explicit knowledge in creating and using technology, the nature of functions in technological artifacts, the role of values in design, and ethics related to technology. Technology and engineering can both involve the application of scientific knowledge. The philosophy of engineering is an emerging sub-field of the broader philosophy of technology. == See also == == References == === Sources === == Further reading == == External links == Philosophy of science at PhilPapers Philosophy of science at the Indiana Philosophy Ontology Project "Philosophy of science". Internet Encyclopedia of Philosophy.
Wikipedia/Philosopher_of_science
In physics, the hydrodynamic quantum analogs refer to experimentally-observed phenomena involving bouncing fluid droplets over a vibrating fluid bath that behave analogously to several quantum-mechanical systems. The experimental evidence for diffraction through slits has been disputed, however. Although the diffraction pattern of walking droplets is not exactly the same as in quantum physics, it does appear clearly in the high memory parameter regime (at high forcing of the bath), where all the quantum-like effects are strongest. A droplet can be made to bounce indefinitely in a stationary position on a vibrating fluid surface. This is possible due to an intervening air layer that prevents the drop from coalescing into the bath. For certain combinations of bath surface acceleration, droplet size, and vibration frequency, a bouncing droplet will no longer stay in a stationary position but will instead “walk” in a rectilinear motion on top of the fluid bath. Walking droplet systems have been found to mimic several quantum mechanical phenomena including particle diffraction, quantum tunneling, quantized orbits, the Zeeman effect, and the quantum corral. Besides being an interesting means to visualise phenomena that are typical of the quantum-mechanical world, floating droplets on a vibrating bath have interesting analogies with the pilot wave theory, one of the many interpretations of quantum mechanics in its early stages of conception and development. The theory was initially proposed by Louis de Broglie in 1927. It suggests that all particles in motion are actually borne on a wave-like motion, similar to how an object moves on a tide. In this theory, it is the evolution of the carrier wave that is given by the Schrödinger equation. It is a deterministic theory and is entirely nonlocal. It is an example of a hidden variable theory, and all non-relativistic quantum mechanics can be accounted for in this theory. The theory was abandoned by de Broglie in 1932 and gave way to the Copenhagen interpretation, but it was revived by David Bohm in 1952 as the De Broglie–Bohm theory. The Copenhagen interpretation does not use the concept of the carrier wave or the idea that a particle moves along a definite path until a measurement is made. == Physics of bouncing and walking droplets == === History === Floating droplets on a vibrating bath were first described in writing by Jearl Walker in a 1978 article in Scientific American. In 2005, Yves Couder and his lab were the first to systematically study the dynamics of bouncing droplets and discovered most of the quantum mechanical analogs. John Bush and his lab expanded upon Couder's work and studied the system in greater detail. In 2015 three separate groups, including John Bush, attempted to reproduce the double-slit diffraction effect and were unsuccessful. === Stationary bouncing droplet === A fluid droplet can float or bounce over a vibrating fluid bath because of the presence of an air layer between the droplet and the bath surface. The behavior of the droplet depends on the acceleration of the bath surface. Below a critical acceleration, the droplet will take successively smaller bounces before the intervening air layer eventually drains from underneath, causing the droplet to coalesce. Above the bouncing threshold, the intervening air layer replenishes during each bounce so the droplet never touches the bath surface. Near the bath surface, the droplet experiences equilibrium between inertial forces, gravity, and a reaction force due to the interaction with the air layer above the bath surface. 
This reaction force serves to launch the droplet back into the air like a trampoline. Molacek and Bush proposed two different models for the reaction force. === Walking droplet === For a small range of frequencies and drop sizes, a fluid droplet on a vibrating bath can be made to “walk” on the surface if the surface acceleration is sufficiently high (but still below the Faraday instability). That is, the droplet does not simply bounce in a stationary position but instead wanders in a straight line or in a chaotic trajectory. When a droplet interacts with the surface, it creates a transient wave that propagates from the point of impact. These waves usually decay, and stabilizing forces keep the droplet from drifting. However, when the surface acceleration is high, the transient waves created upon impact do not decay as quickly, deforming the surface such that the stabilizing forces are not enough to keep the droplet stationary. Thus, the droplet begins to “walk.” == Quantum phenomena on a macroscopic scale == A walking droplet on a vibrating fluid bath was found to behave analogously to several different quantum mechanical systems, namely particle diffraction, quantum tunneling, quantized orbits, the Zeeman effect, and the quantum corral. === Single and double slit diffraction === It has been known since the early 19th century that when light is shone through one or two small slits, a diffraction pattern appears on a screen far from the slits. Light has wave-like behavior, and interferes with itself through the slits, creating a pattern of alternating high and low intensity. Single electrons also exhibit wave-like behavior as a result of wave-particle duality. When electrons are fired through small slits, the probability of the electron striking the screen at a specific point shows an interference pattern as well. In 2006, Couder and Fort demonstrated that walking droplets passing through one or two slits exhibit similar interference behavior. They used a square-shaped vibrating fluid bath with a constant depth (aside from the walls). The “walls” were regions of much lower depth, where the droplets would be stopped or reflected away. When the droplets were placed in the same initial location, they would pass through the slits and be scattered, seemingly randomly. However, by plotting a histogram of the droplets based on scattering angle, the researchers found that the scattering angle was not random, but that droplets had preferred directions that followed the same pattern as light or electrons. In this way, the droplet may mimic the behavior of a quantum particle as it passes through the slit. Despite that research, in 2015 three teams (Bohr and Andersen's group in Denmark, Bush's team at MIT, and a team led by the quantum physicist Herman Batelaan at the University of Nebraska) set out to repeat Couder and Fort's bouncing-droplet double-slit experiment. Even with their experimental setups perfected, none of the teams saw the interference-like pattern reported by Couder and Fort. Droplets went through the slits in almost straight lines, and no stripes appeared. It has since been shown that droplet trajectories are sensitive to interactions with container boundaries, air currents, and other parameters. Though the diffraction pattern of walking droplets is not exactly the same as in quantum physics, and is not expected to show a Fraunhofer-like dependence of the number of peaks on the slit width, the diffraction pattern does appear clearly in the high memory regime (at high forcing of the bath). 
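As a point of comparison for the scattering-angle histograms described above, the Fraunhofer single-slit pattern from wave optics, I(θ)/I(0) = sinc²(a sin θ / λ), can be computed directly. The sketch below uses made-up values for the slit width and wavelength and is only the standard optical reference pattern, not a model of the droplet experiments.

```python
import numpy as np

def fraunhofer_single_slit(theta, slit_width, wavelength):
    """Normalized far-field single-slit intensity I(theta)/I(0).

    numpy's sinc is the normalized sinc, sinc(x) = sin(pi x)/(pi x), so passing
    slit_width*sin(theta)/wavelength reproduces [sin(beta)/beta]^2 with
    beta = pi*slit_width*sin(theta)/wavelength.
    """
    return np.sinc(slit_width * np.sin(theta) / wavelength) ** 2

# Illustrative numbers only: a slit three wavelengths wide.
theta = np.linspace(-np.pi / 2, np.pi / 2, 1001)
intensity = fraunhofer_single_slit(theta, slit_width=3.0, wavelength=1.0)

# Minima occur where slit_width*sin(theta) is a nonzero integer multiple of the wavelength.
print("central peak:", intensity[theta.size // 2])  # 1.0 at theta = 0
print("near first minimum (sin(theta) = 1/3):",
      intensity[np.argmin(np.abs(np.sin(theta) - 1 / 3))])
```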
=== Quantum tunneling === Quantum tunneling is the quantum mechanical phenomenon where a quantum particle passes through a potential barrier. In classical mechanics, a classical particle cannot pass through a potential barrier if it does not have enough energy, so the tunneling effect is confined to the quantum realm. For example, a rolling ball would not reach the top of a steep hill without adequate energy. However, a quantum particle, acting as a wave, can undergo both reflection and transmission at a potential barrier. This can be shown as a solution to the time-dependent Schrödinger equation. There is a finite, but usually small, probability of finding the electron at a location past the barrier. This probability decreases exponentially with increasing barrier width. The macroscopic analogy using fluid droplets was first demonstrated in 2009. Researchers set up a square vibrating bath surrounded by walls on its perimeter. These “walls” were regions of lower depth, where a walking droplet may be reflected away. When the walking droplets were allowed to move around in the domain, they usually were reflected away from the barriers. However, surprisingly, sometimes the walking droplet would bounce past the barrier, similar to a quantum particle undergoing tunneling. In fact, the crossing probability was also found to decrease exponentially with increasing width of the barrier, exactly analogous to a quantum tunneling particle (a numerical sketch of this exponential dependence is given below, after the Zeeman effect discussion). === Quantized orbits === When two atomic particles interact and form a bound state, such as the hydrogen atom, the energy spectrum is discrete. That is, the energy levels of the bound state are not continuous and only exist in discrete quantities, forming “quantized orbits.” In the case of a hydrogen atom, the quantized orbits are characterized by atomic orbitals, whose shapes are functions of discrete quantum numbers. On the macroscopic level, two walking fluid droplets can interact on a vibrating surface. It was found that the droplets would orbit each other in a stable configuration with a fixed distance apart. The stable distances came in discrete values. The stable orbiting droplets analogously represent a bound state in the quantum mechanical system. The discrete values of the distance between droplets are analogous to discrete energy levels as well. === Zeeman effect === When an external magnetic field is applied to a hydrogen atom, for example, the energy levels are shifted to values slightly above or below the original level. The direction of shift depends on the sign of the z-component of the total angular momentum. This phenomenon is known as the Zeeman effect. In the context of walking droplets, an analogous Zeeman effect can be demonstrated by observing orbiting droplets in a vibrating fluid bath. The bath is also brought to rotate at a constant angular velocity. In the rotating bath, the equilibrium distance between droplets shifts slightly farther or closer. The direction of shift depends on whether the orbiting drops rotate in the same direction as the bath or in opposite directions. The analogy to the quantum effect is clear. The bath rotation is analogous to an externally applied magnetic field, and the distance between droplets is analogous to energy levels. The distance shifts under an applied bath rotation, just as the energy levels shift under an applied magnetic field. 
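The exponential decrease of crossing probability with barrier width noted in the quantum tunneling section above mirrors the standard opaque-barrier estimate from quantum mechanics, T ≈ exp(−2κL) with κ = √(2m(V₀ − E))/ħ. The sketch below evaluates that textbook formula for an electron with arbitrary illustrative numbers; the analogy to the droplet system lies only in the exponential form, not in these constants.

```python
import numpy as np

HBAR = 1.054_571_817e-34   # reduced Planck constant, J*s
M_E = 9.109_383_7015e-31   # electron mass, kg
EV = 1.602_176_634e-19     # one electronvolt in joules

def tunneling_probability(barrier_width_m, barrier_height_ev, energy_ev):
    """Opaque-barrier estimate T ~ exp(-2*kappa*L), valid for E < V0 and 2*kappa*L >> 1."""
    kappa = np.sqrt(2 * M_E * (barrier_height_ev - energy_ev) * EV) / HBAR
    return np.exp(-2 * kappa * barrier_width_m)

# Illustrative values: a 1 eV electron meeting a 2 eV barrier of increasing width.
for width_nm in (0.2, 0.4, 0.6, 0.8):
    t = tunneling_probability(width_nm * 1e-9, barrier_height_ev=2.0, energy_ev=1.0)
    print(f"width = {width_nm:.1f} nm  ->  T ~ {t:.2e}")
```

Each added 0.2 nm of barrier multiplies the transmission by roughly the same factor, which is the exponential signature the droplet experiments reproduce with their own constants.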
=== Quantum corral === Researchers have found that a walking droplet placed in a circular bath does not wander randomly, but rather there are specific locations where the droplet is more likely to be found. Specifically, the probability of finding the walking droplet as a function of the distance from the center is non-uniform, with several peaks of higher probability. This probability distribution mimics that of an electron confined to a quantum corral. == See also == Pilot-wave models De Broglie–Bohm theory Superfluid vacuum theory Quantum hydrodynamics == References == == External links == Research on hydrodynamic quantum analogues Prof. John Bush (MIT) Wired "Have We Been Interpreting Quantum Mechanics Wrong This Whole Time?" 2014
Wikipedia/Fluid_analogs_in_quantum_mechanics
In physics, the Hamilton–Jacobi equation, named after William Rowan Hamilton and Carl Gustav Jacob Jacobi, is an alternative formulation of classical mechanics, equivalent to other formulations such as Newton's laws of motion, Lagrangian mechanics and Hamiltonian mechanics. The Hamilton–Jacobi equation is a formulation of mechanics in which the motion of a particle can be represented as a wave. In this sense, it fulfilled a long-held goal of theoretical physics (dating at least to Johann Bernoulli in the eighteenth century) of finding an analogy between the propagation of light and the motion of a particle. The wave equation followed by mechanical systems is similar to, but not identical with, the Schrödinger equation, as described below; for this reason, the Hamilton–Jacobi equation is considered the "closest approach" of classical mechanics to quantum mechanics. The qualitative form of this connection is called Hamilton's optico-mechanical analogy. In mathematics, the Hamilton–Jacobi equation is a necessary condition describing extremal geometry in generalizations of problems from the calculus of variations. It can be understood as a special case of the Hamilton–Jacobi–Bellman equation from dynamic programming. == Overview == The Hamilton–Jacobi equation is a first-order, non-linear partial differential equation for a system of particles at coordinates ⁠ q {\displaystyle \mathbf {q} } ⁠. The function H {\displaystyle H} is the system's Hamiltonian giving the system's energy. The solution of this equation is the action, ⁠ S {\displaystyle S} ⁠, called Hamilton's principal function.: 291  The solution can be related to the system Lagrangian L {\displaystyle \ {\mathcal {L}}\ } by an indefinite integral of the form used in the principle of least action:: 431  S = ∫ L d t + s o m e c o n s t a n t {\displaystyle \ S=\int {\mathcal {L}}\ \mathrm {d} t+~{\mathsf {some\ constant}}~} Geometrical surfaces of constant action are perpendicular to system trajectories, creating a wavefront-like view of the system dynamics. This property of the Hamilton–Jacobi equation connects classical mechanics to quantum mechanics.: 175  == Mathematical formulation == === Notation === Boldface variables such as q {\displaystyle \mathbf {q} } represent a list of N {\displaystyle N} generalized coordinates, q = ( q 1 , q 2 , … , q N − 1 , q N ) {\displaystyle \mathbf {q} =(q_{1},q_{2},\ldots ,q_{N-1},q_{N})} A dot over a variable or list signifies the time derivative (see Newton's notation). For example, q ˙ = d q d t . {\displaystyle {\dot {\mathbf {q} }}={\frac {d\mathbf {q} }{dt}}.} The dot product notation between two lists of the same number of coordinates is a shorthand for the sum of the products of corresponding components, such as p ⋅ q = ∑ k = 1 N p k q k . {\displaystyle \mathbf {p} \cdot \mathbf {q} =\sum _{k=1}^{N}p_{k}q_{k}.} === The action functional (a.k.a. Hamilton's principal function) === ==== Definition ==== Let the Hessian matrix H L ( q , q ˙ , t ) = { ∂ 2 L / ∂ q ˙ i ∂ q ˙ j } i j {\textstyle H_{\mathcal {L}}(\mathbf {q} ,\mathbf {\dot {q}} ,t)=\left\{\partial ^{2}{\mathcal {L}}/\partial {\dot {q}}^{i}\partial {\dot {q}}^{j}\right\}_{ij}} be invertible. 
The relation d d t ∂ L ∂ q ˙ i = ∑ j = 1 n ( ∂ 2 L ∂ q ˙ i ∂ q ˙ j q ¨ j + ∂ 2 L ∂ q ˙ i ∂ q j q ˙ j ) + ∂ 2 L ∂ q ˙ i ∂ t , i = 1 , … , n , {\displaystyle {\frac {d}{dt}}{\frac {\partial {\mathcal {L}}}{\partial {\dot {q}}^{i}}}=\sum _{j=1}^{n}\left({\frac {\partial ^{2}{\mathcal {L}}}{\partial {\dot {q}}^{i}\partial {\dot {q}}^{j}}}{\ddot {q}}^{j}+{\frac {\partial ^{2}{\mathcal {L}}}{\partial {\dot {q}}^{i}\partial {q}^{j}}}{\dot {q}}^{j}\right)+{\frac {\partial ^{2}{\mathcal {L}}}{\partial {\dot {q}}^{i}\partial t}},\qquad i=1,\ldots ,n,} shows that the Euler–Lagrange equations form a n × n {\displaystyle n\times n} system of second-order ordinary differential equations. Inverting the matrix H L {\displaystyle H_{\mathcal {L}}} transforms this system into q ¨ i = F i ( q , q ˙ , t ) , i = 1 , … , n . {\displaystyle {\ddot {q}}^{i}=F_{i}(\mathbf {q} ,\mathbf {\dot {q}} ,t),\ i=1,\ldots ,n.} Let a time instant t 0 {\displaystyle t_{0}} and a point q 0 ∈ M {\displaystyle \mathbf {q} _{0}\in M} in the configuration space be fixed. The existence and uniqueness theorems guarantee that, for every v 0 , {\displaystyle \mathbf {v} _{0},} the initial value problem with the conditions γ | τ = t 0 = q 0 {\displaystyle \gamma |_{\tau =t_{0}}=\mathbf {q} _{0}} and γ ˙ | τ = t 0 = v 0 {\displaystyle {\dot {\gamma }}|_{\tau =t_{0}}=\mathbf {v} _{0}} has a locally unique solution γ = γ ( τ ; t 0 , q 0 , v 0 ) . {\displaystyle \gamma =\gamma (\tau ;t_{0},\mathbf {q} _{0},\mathbf {v} _{0}).} Additionally, let there be a sufficiently small time interval ( t 0 , t 1 ) {\displaystyle (t_{0},t_{1})} such that extremals with different initial velocities v 0 {\displaystyle \mathbf {v} _{0}} would not intersect in M × ( t 0 , t 1 ) . {\displaystyle M\times (t_{0},t_{1}).} The latter means that, for any q ∈ M {\displaystyle \mathbf {q} \in M} and any t ∈ ( t 0 , t 1 ) , {\displaystyle t\in (t_{0},t_{1}),} there can be at most one extremal γ = γ ( τ ; t , t 0 , q , q 0 ) {\displaystyle \gamma =\gamma (\tau ;t,t_{0},\mathbf {q} ,\mathbf {q} _{0})} for which γ | τ = t 0 = q 0 {\displaystyle \gamma |_{\tau =t_{0}}=\mathbf {q} _{0}} and γ | τ = t = q . {\displaystyle \gamma |_{\tau =t}=\mathbf {q} .} Substituting γ = γ ( τ ; t , t 0 , q , q 0 ) {\displaystyle \gamma =\gamma (\tau ;t,t_{0},\mathbf {q} ,\mathbf {q} _{0})} into the action functional results in the Hamilton's principal function (HPF) where γ = γ ( τ ; t , t 0 , q , q 0 ) , {\displaystyle \gamma =\gamma (\tau ;t,t_{0},\mathbf {q} ,\mathbf {q} _{0}),} γ | τ = t 0 = q 0 , {\displaystyle \gamma |_{\tau =t_{0}}=\mathbf {q} _{0},} γ | τ = t = q . {\displaystyle \gamma |_{\tau =t}=\mathbf {q} .} === Formula for the momenta === The momenta are defined as the quantities p i ( q , q ˙ , t ) = ∂ L / ∂ q ˙ i . {\textstyle p_{i}(\mathbf {q} ,\mathbf {\dot {q}} ,t)=\partial {\mathcal {L}}/\partial {\dot {q}}^{i}.} This section shows that the dependency of p i {\displaystyle p_{i}} on q ˙ {\displaystyle \mathbf {\dot {q}} } disappears, once the HPF is known. Indeed, let a time instant t 0 {\displaystyle t_{0}} and a point q 0 {\displaystyle \mathbf {q} _{0}} in the configuration space be fixed. For every time instant t {\displaystyle t} and a point q , {\displaystyle \mathbf {q} ,} let γ = γ ( τ ; t , t 0 , q , q 0 ) {\displaystyle \gamma =\gamma (\tau ;t,t_{0},\mathbf {q} ,\mathbf {q} _{0})} be the (unique) extremal from the definition of the Hamilton's principal function ⁠ S {\displaystyle S} ⁠. 
Call v = def γ ˙ ( τ ; t , t 0 , q , q 0 ) | τ = t {\displaystyle \mathbf {v} \,{\stackrel {\text{def}}{=}}\,{\dot {\gamma }}(\tau ;t,t_{0},\mathbf {q} ,\mathbf {q} _{0})|_{\tau =t}} the velocity at τ = t {\displaystyle \tau =t} . Then p i ( q , v , t ) = ∂ S ∂ q i ( q , t ) , i = 1 , … , n . {\displaystyle p_{i}(\mathbf {q} ,\mathbf {v} ,t)={\frac {\partial S}{\partial q^{i}}}(\mathbf {q} ,t),\qquad i=1,\ldots ,n.} That is, once Hamilton's principal function is known, the momenta follow from its coordinate derivatives alone, with no reference to q ˙ {\displaystyle \mathbf {\dot {q}} } . === Formula === Given the Hamiltonian H ( q , p , t ) {\displaystyle H(\mathbf {q} ,\mathbf {p} ,t)} of a mechanical system, the Hamilton–Jacobi equation is a first-order, non-linear partial differential equation for Hamilton's principal function S {\displaystyle S} , − ∂ S ∂ t = H ( q , ∂ S ∂ q , t ) . {\displaystyle -{\frac {\partial S}{\partial t}}=H{\left(\mathbf {q} ,{\frac {\partial S}{\partial \mathbf {q} }},t\right)}.} Alternatively, as described below, the Hamilton–Jacobi equation may be derived from Hamiltonian mechanics by treating S {\displaystyle S} as the generating function for a canonical transformation of the classical Hamiltonian H = H ( q 1 , q 2 , … , q N ; p 1 , p 2 , … , p N ; t ) . {\displaystyle H=H(q_{1},q_{2},\ldots ,q_{N};p_{1},p_{2},\ldots ,p_{N};t).} The conjugate momenta correspond to the first derivatives of S {\displaystyle S} with respect to the generalized coordinates p k = ∂ S ∂ q k . {\displaystyle p_{k}={\frac {\partial S}{\partial q_{k}}}.} As a solution to the Hamilton–Jacobi equation, the principal function contains N + 1 {\displaystyle N+1} undetermined constants, the first N {\displaystyle N} of them denoted as α 1 , α 2 , … , α N {\displaystyle \alpha _{1},\,\alpha _{2},\dots ,\alpha _{N}} , and the last one coming from the integration of ∂ S ∂ t {\displaystyle {\frac {\partial S}{\partial t}}} . The relationship between p {\displaystyle \mathbf {p} } and q {\displaystyle \mathbf {q} } then describes the orbit in phase space in terms of these constants of motion. Furthermore, the quantities β k = ∂ S ∂ α k , k = 1 , 2 , … , N {\displaystyle \beta _{k}={\frac {\partial S}{\partial \alpha _{k}}},\quad k=1,2,\ldots ,N} are also constants of motion, and these equations can be inverted to find q {\displaystyle \mathbf {q} } as a function of all the α {\displaystyle \alpha } and β {\displaystyle \beta } constants and time. == Comparison with other formulations of mechanics == The Hamilton–Jacobi equation is a single, first-order partial differential equation for the function of the N {\displaystyle N} generalized coordinates q 1 , q 2 , … , q N {\displaystyle q_{1},\,q_{2},\dots ,q_{N}} and the time t {\displaystyle t} . The generalized momenta do not appear, except as derivatives of S {\displaystyle S} , the classical action. For comparison, in the equivalent Euler–Lagrange equations of motion of Lagrangian mechanics, the conjugate momenta also do not appear; however, those equations are a system of N {\displaystyle N} generally second-order equations for the time evolution of the generalized coordinates. Similarly, Hamilton's equations of motion are another system of 2N first-order equations for the time evolution of the generalized coordinates and their conjugate momenta p 1 , p 2 , … , p N {\displaystyle p_{1},\,p_{2},\dots ,p_{N}} . Since the HJE is an equivalent expression of an integral minimization problem such as Hamilton's principle, the HJE can be useful in other problems of the calculus of variations and, more generally, in other branches of mathematics and physics, such as dynamical systems, symplectic geometry and quantum chaos. For example, the Hamilton–Jacobi equations can be used to determine the geodesics on a Riemannian manifold, an important variational problem in Riemannian geometry. 
However, as a computational tool, the partial differential equations are notoriously complicated to solve except when it is possible to separate the independent variables; in this case the HJE becomes computationally useful.: 444  == Derivation using a canonical transformation == Any canonical transformation involving a type-2 generating function G 2 ( q , P , t ) {\displaystyle G_{2}(\mathbf {q} ,\mathbf {P} ,t)} leads to the relations p = ∂ G 2 ∂ q , Q = ∂ G 2 ∂ P , K ( Q , P , t ) = H ( q , p , t ) + ∂ G 2 ∂ t {\displaystyle {\begin{aligned}&\mathbf {p} ={\frac {\partial G_{2}}{\partial \mathbf {q} }},\quad \mathbf {Q} ={\frac {\partial G_{2}}{\partial \mathbf {P} }},\quad \\&K(\mathbf {Q} ,\mathbf {P} ,t)=H(\mathbf {q} ,\mathbf {p} ,t)+{\frac {\partial G_{2}}{\partial t}}\end{aligned}}} and Hamilton's equations in terms of the new variables P , Q {\displaystyle \mathbf {P} ,\,\mathbf {Q} } and new Hamiltonian K {\displaystyle K} have the same form: P ˙ = − ∂ K ∂ Q , Q ˙ = + ∂ K ∂ P . {\displaystyle {\dot {\mathbf {P} }}=-{\partial K \over \partial \mathbf {Q} },\quad {\dot {\mathbf {Q} }}=+{\partial K \over \partial \mathbf {P} }.} To derive the HJE, a generating function G 2 ( q , P , t ) {\displaystyle G_{2}(\mathbf {q} ,\mathbf {P} ,t)} is chosen in such a way that it will make the new Hamiltonian K = 0 {\displaystyle K=0} . Hence, all its derivatives are also zero, and the transformed Hamilton's equations become trivial P ˙ = Q ˙ = 0 {\displaystyle {\dot {\mathbf {P} }}={\dot {\mathbf {Q} }}=0} so the new generalized coordinates and momenta are constants of motion. As they are constants, in this context the new generalized momenta P {\displaystyle \mathbf {P} } are usually denoted α 1 , α 2 , … , α N {\displaystyle \alpha _{1},\,\alpha _{2},\dots ,\alpha _{N}} , i.e. P m = α m {\displaystyle P_{m}=\alpha _{m}} and the new generalized coordinates Q {\displaystyle \mathbf {Q} } are typically denoted as β 1 , β 2 , … , β N {\displaystyle \beta _{1},\,\beta _{2},\dots ,\beta _{N}} , so Q m = β m {\displaystyle Q_{m}=\beta _{m}} . Setting the generating function equal to Hamilton's principal function, plus an arbitrary constant A {\displaystyle A} : G 2 ( q , α , t ) = S ( q , t ) + A , {\displaystyle G_{2}(\mathbf {q} ,{\boldsymbol {\alpha }},t)=S(\mathbf {q} ,t)+A,} the HJE automatically arises p = ∂ G 2 ∂ q = ∂ S ∂ q → H ( q , p , t ) + ∂ G 2 ∂ t = 0 → H ( q , ∂ S ∂ q , t ) + ∂ S ∂ t = 0. {\displaystyle {\begin{aligned}&\mathbf {p} ={\frac {\partial G_{2}}{\partial \mathbf {q} }}={\frac {\partial S}{\partial \mathbf {q} }}\\[1ex]\rightarrow {}&H(\mathbf {q} ,\mathbf {p} ,t)+{\partial G_{2} \over \partial t}=0\\[1ex]\rightarrow {}&H{\left(\mathbf {q} ,{\frac {\partial S}{\partial \mathbf {q} }},t\right)}+{\partial S \over \partial t}=0.\end{aligned}}} When solved for S ( q , α , t ) {\displaystyle S(\mathbf {q} ,{\boldsymbol {\alpha }},t)} , these also give us the useful equations Q = β = ∂ S ∂ α , {\displaystyle \mathbf {Q} ={\boldsymbol {\beta }}={\partial S \over \partial {\boldsymbol {\alpha }}},} or written in components for clarity Q m = β m = ∂ S ( q , α , t ) ∂ α m . {\displaystyle Q_{m}=\beta _{m}={\frac {\partial S(\mathbf {q} ,{\boldsymbol {\alpha }},t)}{\partial \alpha _{m}}}.} Ideally, these N equations can be inverted to find the original generalized coordinates q {\displaystyle \mathbf {q} } as a function of the constants α , β , {\displaystyle {\boldsymbol {\alpha }},\,{\boldsymbol {\beta }},} and t {\displaystyle t} , thus solving the original problem. 
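As a minimal illustration of the inversion just described (a standard example, not taken from the references of this article), consider a single free particle of mass m in one dimension, with Hamiltonian H = p^2/(2m). The Hamilton–Jacobi equation and a complete solution are

\[
\frac{1}{2m}\left(\frac{\partial S}{\partial q}\right)^{2}+\frac{\partial S}{\partial t}=0,
\qquad
S(q,\alpha ,t)=\alpha q-\frac{\alpha ^{2}}{2m}\,t ,
\]

so that p = \partial S/\partial q = \alpha is a constant of motion and \beta = \partial S/\partial \alpha = q - (\alpha/m)\,t is likewise constant. Inverting the last relation gives q(t) = \beta + (\alpha/m)\,t, the expected uniform motion, with \alpha and \beta fixed by the initial momentum and position.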
== Separation of variables == When the problem allows additive separation of variables, the HJE leads directly to constants of motion. For example, the time t can be separated if the Hamiltonian does not depend on time explicitly. In that case, the time derivative ∂ S ∂ t {\displaystyle {\frac {\partial S}{\partial t}}} in the HJE must be a constant, usually denoted ( − E {\displaystyle -E} ), giving the separated solution S = W ( q 1 , q 2 , … , q N ) − E t {\displaystyle S=W(q_{1},q_{2},\ldots ,q_{N})-Et} where the time-independent function W ( q ) {\displaystyle W(\mathbf {q} )} is sometimes called the abbreviated action or Hamilton's characteristic function : 434  and sometimes: 607  written S 0 {\displaystyle S_{0}} (see action principle names). The reduced Hamilton–Jacobi equation can then be written H ( q , ∂ S ∂ q ) = E . {\displaystyle H{\left(\mathbf {q} ,{\frac {\partial S}{\partial \mathbf {q} }}\right)}=E.} To illustrate separability for other variables, a certain generalized coordinate q k {\displaystyle q_{k}} and its derivative ∂ S ∂ q k {\displaystyle {\frac {\partial S}{\partial q_{k}}}} are assumed to appear together as a single function ψ ( q k , ∂ S ∂ q k ) {\displaystyle \psi {\left(q_{k},{\frac {\partial S}{\partial q_{k}}}\right)}} in the Hamiltonian H = H ( q 1 , q 2 , … , q k − 1 , q k + 1 , … , q N ; p 1 , p 2 , … , p k − 1 , p k + 1 , … , p N ; ψ ; t ) . {\displaystyle H=H(q_{1},q_{2},\ldots ,q_{k-1},q_{k+1},\ldots ,q_{N};p_{1},p_{2},\ldots ,p_{k-1},p_{k+1},\ldots ,p_{N};\psi ;t).} In that case, the function S can be partitioned into two functions, one that depends only on qk and another that depends only on the remaining generalized coordinates S = S k ( q k ) + S rem ( q 1 , … , q k − 1 , q k + 1 , … , q N , t ) . {\displaystyle S=S_{k}(q_{k})+S_{\text{rem}}(q_{1},\ldots ,q_{k-1},q_{k+1},\ldots ,q_{N},t).} Substitution of these formulae into the Hamilton–Jacobi equation shows that the function ψ must be a constant (denoted here as Γ k {\displaystyle \Gamma _{k}} ), yielding a first-order ordinary differential equation for S k ( q k ) , {\displaystyle S_{k}(q_{k}),} ψ ( q k , d S k d q k ) = Γ k . {\displaystyle \psi {\left(q_{k},{\frac {dS_{k}}{dq_{k}}}\right)}=\Gamma _{k}.} In fortunate cases, the function S {\displaystyle S} can be separated completely into N {\displaystyle N} functions S m ( q m ) , {\displaystyle S_{m}(q_{m}),} S = S 1 ( q 1 ) + S 2 ( q 2 ) + ⋯ + S N ( q N ) − E t . {\displaystyle S=S_{1}(q_{1})+S_{2}(q_{2})+\cdots +S_{N}(q_{N})-Et.} In such a case, the problem devolves to N {\displaystyle N} ordinary differential equations. The separability of S depends both on the Hamiltonian and on the choice of generalized coordinates. For orthogonal coordinates and Hamiltonians that have no time dependence and are quadratic in the generalized momenta, S {\displaystyle S} will be completely separable if the potential energy is additively separable in each coordinate, where the potential energy term for each coordinate is multiplied by the coordinate-dependent factor in the corresponding momentum term of the Hamiltonian (the Staeckel conditions). For illustration, several examples in orthogonal coordinates are worked in the next sections. === Examples in various coordinate systems === ==== Spherical coordinates ==== In spherical coordinates the Hamiltonian of a free particle moving in a conservative potential U can be written H = 1 2 m [ p r 2 + p θ 2 r 2 + p ϕ 2 r 2 sin 2 ⁡ θ ] + U ( r , θ , ϕ ) . 
{\displaystyle H={\frac {1}{2m}}\left[p_{r}^{2}+{\frac {p_{\theta }^{2}}{r^{2}}}+{\frac {p_{\phi }^{2}}{r^{2}\sin ^{2}\theta }}\right]+U(r,\theta ,\phi ).} The Hamilton–Jacobi equation is completely separable in these coordinates provided that there exist functions U r ( r ) , U θ ( θ ) , U ϕ ( ϕ ) {\displaystyle U_{r}(r),U_{\theta }(\theta ),U_{\phi }(\phi )} such that U {\displaystyle U} can be written in the analogous form U ( r , θ , ϕ ) = U r ( r ) + U θ ( θ ) r 2 + U ϕ ( ϕ ) r 2 sin 2 ⁡ θ . {\displaystyle U(r,\theta ,\phi )=U_{r}(r)+{\frac {U_{\theta }(\theta )}{r^{2}}}+{\frac {U_{\phi }(\phi )}{r^{2}\sin ^{2}\theta }}.} Substitution of the completely separated solution S = S r ( r ) + S θ ( θ ) + S ϕ ( ϕ ) − E t {\displaystyle S=S_{r}(r)+S_{\theta }(\theta )+S_{\phi }(\phi )-Et} into the HJE yields 1 2 m ( d S r d r ) 2 + U r ( r ) + 1 2 m r 2 [ ( d S θ d θ ) 2 + 2 m U θ ( θ ) ] + 1 2 m r 2 sin 2 ⁡ θ [ ( d S ϕ d ϕ ) 2 + 2 m U ϕ ( ϕ ) ] = E . {\displaystyle {\frac {1}{2m}}\left({\frac {dS_{r}}{dr}}\right)^{2}+U_{r}(r)+{\frac {1}{2mr^{2}}}\left[\left({\frac {dS_{\theta }}{d\theta }}\right)^{2}+2mU_{\theta }(\theta )\right]+{\frac {1}{2mr^{2}\sin ^{2}\theta }}\left[\left({\frac {dS_{\phi }}{d\phi }}\right)^{2}+2mU_{\phi }(\phi )\right]=E.} This equation may be solved by successive integrations of ordinary differential equations, beginning with the equation for ϕ {\displaystyle \phi } ( d S ϕ d ϕ ) 2 + 2 m U ϕ ( ϕ ) = Γ ϕ {\displaystyle \left({\frac {dS_{\phi }}{d\phi }}\right)^{2}+2mU_{\phi }(\phi )=\Gamma _{\phi }} where Γ ϕ {\displaystyle \Gamma _{\phi }} is a constant of the motion that eliminates the ϕ {\displaystyle \phi } dependence from the Hamilton–Jacobi equation 1 2 m ( d S r d r ) 2 + U r ( r ) + 1 2 m r 2 [ 1 sin 2 ⁡ θ ( d S θ d θ ) 2 + 2 m sin 2 ⁡ θ U θ ( θ ) + Γ ϕ ] = E . {\displaystyle {\frac {1}{2m}}\left({\frac {dS_{r}}{dr}}\right)^{2}+U_{r}(r)+{\frac {1}{2mr^{2}}}\left[{\frac {1}{\sin ^{2}\theta }}\left({\frac {dS_{\theta }}{d\theta }}\right)^{2}+{\frac {2m}{\sin ^{2}\theta }}U_{\theta }(\theta )+\Gamma _{\phi }\right]=E.} The next ordinary differential equation involves the θ {\displaystyle \theta } generalized coordinate 1 sin 2 ⁡ θ ( d S θ d θ ) 2 + 2 m sin 2 ⁡ θ U θ ( θ ) + Γ ϕ = Γ θ {\displaystyle {\frac {1}{\sin ^{2}\theta }}\left({\frac {dS_{\theta }}{d\theta }}\right)^{2}+{\frac {2m}{\sin ^{2}\theta }}U_{\theta }(\theta )+\Gamma _{\phi }=\Gamma _{\theta }} where Γ θ {\displaystyle \Gamma _{\theta }} is again a constant of the motion that eliminates the θ {\displaystyle \theta } dependence and reduces the HJE to the final ordinary differential equation 1 2 m ( d S r d r ) 2 + U r ( r ) + Γ θ 2 m r 2 = E {\displaystyle {\frac {1}{2m}}\left({\frac {dS_{r}}{dr}}\right)^{2}+U_{r}(r)+{\frac {\Gamma _{\theta }}{2mr^{2}}}=E} whose integration completes the solution for S {\displaystyle S} . ==== Elliptic cylindrical coordinates ==== The Hamiltonian in elliptic cylindrical coordinates can be written H = p μ 2 + p ν 2 2 m a 2 ( sinh 2 ⁡ μ + sin 2 ⁡ ν ) + p z 2 2 m + U ( μ , ν , z ) {\displaystyle H={\frac {p_{\mu }^{2}+p_{\nu }^{2}}{2ma^{2}\left(\sinh ^{2}\mu +\sin ^{2}\nu \right)}}+{\frac {p_{z}^{2}}{2m}}+U(\mu ,\nu ,z)} where the foci of the ellipses are located at ± a {\displaystyle \pm a} on the x {\displaystyle x} -axis. 
The Hamilton–Jacobi equation is completely separable in these coordinates provided that U {\displaystyle U} has an analogous form U ( μ , ν , z ) = U μ ( μ ) + U ν ( ν ) sinh 2 ⁡ μ + sin 2 ⁡ ν + U z ( z ) {\displaystyle U(\mu ,\nu ,z)={\frac {U_{\mu }(\mu )+U_{\nu }(\nu )}{\sinh ^{2}\mu +\sin ^{2}\nu }}+U_{z}(z)} where U μ ( μ ) {\displaystyle U_{\mu }(\mu )} , U ν ( ν ) {\displaystyle U_{\nu }(\nu )} and U z ( z ) {\displaystyle U_{z}(z)} are arbitrary functions. Substitution of the completely separated solution S = S μ ( μ ) + S ν ( ν ) + S z ( z ) − E t {\displaystyle S=S_{\mu }(\mu )+S_{\nu }(\nu )+S_{z}(z)-Et} into the HJE yields 1 2 m ( d S z d z ) 2 + 1 2 m a 2 ( sinh 2 ⁡ μ + sin 2 ⁡ ν ) [ ( d S μ d μ ) 2 + ( d S ν d ν ) 2 ] + U z ( z ) + 1 sinh 2 ⁡ μ + sin 2 ⁡ ν [ U μ ( μ ) + U ν ( ν ) ] = E . {\displaystyle {\begin{aligned}{\frac {1}{2m}}\left({\frac {dS_{z}}{dz}}\right)^{2}+{\frac {1}{2ma^{2}\left(\sinh ^{2}\mu +\sin ^{2}\nu \right)}}\left[\left({\frac {dS_{\mu }}{d\mu }}\right)^{2}+\left({\frac {dS_{\nu }}{d\nu }}\right)^{2}\right]&\\{}+U_{z}(z)+{\frac {1}{\sinh ^{2}\mu +\sin ^{2}\nu }}\left[U_{\mu }(\mu )+U_{\nu }(\nu )\right]&=E.\end{aligned}}} Separating the first ordinary differential equation 1 2 m ( d S z d z ) 2 + U z ( z ) = Γ z {\displaystyle {\frac {1}{2m}}\left({\frac {dS_{z}}{dz}}\right)^{2}+U_{z}(z)=\Gamma _{z}} yields the reduced Hamilton–Jacobi equation (after re-arrangement and multiplication of both sides by the denominator) ( d S μ d μ ) 2 + ( d S ν d ν ) 2 + 2 m a 2 U μ ( μ ) + 2 m a 2 U ν ( ν ) = 2 m a 2 ( sinh 2 ⁡ μ + sin 2 ⁡ ν ) ( E − Γ z ) {\displaystyle \left({\frac {dS_{\mu }}{d\mu }}\right)^{2}+\left({\frac {dS_{\nu }}{d\nu }}\right)^{2}+2ma^{2}U_{\mu }(\mu )+2ma^{2}U_{\nu }(\nu )=2ma^{2}\left(\sinh ^{2}\mu +\sin ^{2}\nu \right)\left(E-\Gamma _{z}\right)} which itself may be separated into two independent ordinary differential equations ( d S μ d μ ) 2 + 2 m a 2 U μ ( μ ) + 2 m a 2 ( Γ z − E ) sinh 2 ⁡ μ = Γ μ ( d S ν d ν ) 2 + 2 m a 2 U ν ( ν ) + 2 m a 2 ( Γ z − E ) sin 2 ⁡ ν = Γ ν {\displaystyle {\begin{alignedat}{4}\left({\frac {dS_{\mu }}{d\mu }}\right)^{2}&\,+\,&2ma^{2}U_{\mu }(\mu )&\,+\,&2ma^{2}\left(\Gamma _{z}-E\right)\sinh ^{2}\mu &=\,&\Gamma _{\mu }\\\left({\frac {dS_{\nu }}{d\nu }}\right)^{2}&\,+\,&2ma^{2}U_{\nu }(\nu )&\,+\,&2ma^{2}\left(\Gamma _{z}-E\right)\sin ^{2}\nu &=\,&\Gamma _{\nu }\end{alignedat}}} that, when solved, provide a complete solution for S {\displaystyle S} . ==== Parabolic cylindrical coordinates ==== The Hamiltonian in parabolic cylindrical coordinates can be written H = p σ 2 + p τ 2 2 m ( σ 2 + τ 2 ) + p z 2 2 m + U ( σ , τ , z ) . {\displaystyle H={\frac {p_{\sigma }^{2}+p_{\tau }^{2}}{2m\left(\sigma ^{2}+\tau ^{2}\right)}}+{\frac {p_{z}^{2}}{2m}}+U(\sigma ,\tau ,z).} The Hamilton–Jacobi equation is completely separable in these coordinates provided that U {\displaystyle U} has an analogous form U ( σ , τ , z ) = U σ ( σ ) + U τ ( τ ) σ 2 + τ 2 + U z ( z ) {\displaystyle U(\sigma ,\tau ,z)={\frac {U_{\sigma }(\sigma )+U_{\tau }(\tau )}{\sigma ^{2}+\tau ^{2}}}+U_{z}(z)} where U σ ( σ ) {\displaystyle U_{\sigma }(\sigma )} , U τ ( τ ) {\displaystyle U_{\tau }(\tau )} , and U z ( z ) {\displaystyle U_{z}(z)} are arbitrary functions. 
Substitution of the completely separated solution S = S σ ( σ ) + S τ ( τ ) + S z ( z ) − E t + constant {\displaystyle S=S_{\sigma }(\sigma )+S_{\tau }(\tau )+S_{z}(z)-Et+{\text{constant}}} into the HJE yields 1 2 m ( d S z d z ) 2 + 1 2 m ( σ 2 + τ 2 ) [ ( d S σ d σ ) 2 + ( d S τ d τ ) 2 ] + U z ( z ) + 1 σ 2 + τ 2 [ U σ ( σ ) + U τ ( τ ) ] = E . {\displaystyle {\begin{aligned}{\frac {1}{2m}}\left({\frac {dS_{z}}{dz}}\right)^{2}+{\frac {1}{2m\left(\sigma ^{2}+\tau ^{2}\right)}}\left[\left({\frac {dS_{\sigma }}{d\sigma }}\right)^{2}+\left({\frac {dS_{\tau }}{d\tau }}\right)^{2}\right]&\\{}+U_{z}(z)+{\frac {1}{\sigma ^{2}+\tau ^{2}}}\left[U_{\sigma }(\sigma )+U_{\tau }(\tau )\right]&=E.\end{aligned}}} Separating the first ordinary differential equation 1 2 m ( d S z d z ) 2 + U z ( z ) = Γ z {\displaystyle {\frac {1}{2m}}\left({\frac {dS_{z}}{dz}}\right)^{2}+U_{z}(z)=\Gamma _{z}} yields the reduced Hamilton–Jacobi equation (after re-arrangement and multiplication of both sides by the denominator) ( d S σ d σ ) 2 + ( d S τ d τ ) 2 + 2 m [ U σ ( σ ) + U τ ( τ ) ] = 2 m ( σ 2 + τ 2 ) ( E − Γ z ) {\displaystyle \left({\frac {dS_{\sigma }}{d\sigma }}\right)^{2}+\left({\frac {dS_{\tau }}{d\tau }}\right)^{2}+2m\left[U_{\sigma }(\sigma )+U_{\tau }(\tau )\right]=2m\left(\sigma ^{2}+\tau ^{2}\right)\left(E-\Gamma _{z}\right)} which itself may be separated into two independent ordinary differential equations ( d S σ d σ ) 2 + 2 m U σ ( σ ) + 2 m σ 2 ( Γ z − E ) = Γ σ ( d S τ d τ ) 2 + 2 m U τ ( τ ) + 2 m τ 2 ( Γ z − E ) = Γ τ {\displaystyle {\begin{alignedat}{4}\left({\frac {dS_{\sigma }}{d\sigma }}\right)^{2}&+\,&2mU_{\sigma }(\sigma )&+\,&2m\sigma ^{2}\left(\Gamma _{z}-E\right)&=\,&\Gamma _{\sigma }\\\left({\frac {dS_{\tau }}{d\tau }}\right)^{2}&+\,&2mU_{\tau }(\tau )&+\,&2m\tau ^{2}\left(\Gamma _{z}-E\right)&=\,&\Gamma _{\tau }\end{alignedat}}} that, when solved, provide a complete solution for S {\displaystyle S} . == Waves and particles == === Optical wave fronts and trajectories === The HJE establishes a duality between trajectories and wavefronts. For example, in geometrical optics, light can be considered either as “rays” or waves. The wave front can be defined as the surface C t {\textstyle {\mathcal {C}}_{t}} that the light emitted at time t = 0 {\textstyle t=0} has reached at time t {\textstyle t} . Light rays and wave fronts are dual: if one is known, the other can be deduced. More precisely, geometrical optics is a variational problem where the “action” is the travel time T {\textstyle T} along a path, T = 1 c ∫ A B n d s {\displaystyle T={\frac {1}{c}}\int _{A}^{B}n\,ds} where n {\textstyle n} is the medium's index of refraction and d s {\textstyle ds} is an infinitesimal arc length. From the above formulation, one can compute the ray paths using the Euler–Lagrange formulation; alternatively, one can compute the wave fronts by solving the Hamilton–Jacobi equation. Knowing one leads to knowing the other. The above duality is very general and applies to all systems that derive from a variational principle: either compute the trajectories using Euler–Lagrange equations or the wave fronts by using Hamilton–Jacobi equation. The wave front at time t {\textstyle t} , for a system initially at q 0 {\textstyle \mathbf {q} _{0}} at time t 0 {\textstyle t_{0}} , is defined as the collection of points q {\textstyle \mathbf {q} } such that S ( q , t ) = const {\textstyle S(\mathbf {q} ,t)={\text{const}}} . If S ( q , t ) {\textstyle S(\mathbf {q} ,t)} is known, the momentum is immediately deduced. 
p = ∂ S ∂ q . {\displaystyle \mathbf {p} ={\frac {\partial S}{\partial \mathbf {q} }}.} Once p {\textstyle \mathbf {p} } is known, tangents to the trajectories q ˙ {\textstyle {\dot {\mathbf {q} }}} are computed by solving the equation ∂ L ∂ q ˙ = p {\displaystyle {\frac {\partial {\mathcal {L}}}{\partial {\dot {\mathbf {q} }}}}={\boldsymbol {p}}} for q ˙ {\textstyle {\dot {\mathbf {q} }}} , where L {\textstyle {\mathcal {L}}} is the Lagrangian. The trajectories are then recovered from the knowledge of q ˙ {\textstyle {\dot {\mathbf {q} }}} . === Relationship to the Schrödinger equation === The isosurfaces of the function S ( q , t ) {\displaystyle S(\mathbf {q} ,t)} can be determined at any time t. The motion of an S {\displaystyle S} -isosurface as a function of time is defined by the motions of the particles beginning at the points q {\displaystyle \mathbf {q} } on the isosurface. The motion of such an isosurface can be thought of as a wave moving through q {\displaystyle \mathbf {q} } -space, although it does not obey the wave equation exactly. To show this, let S represent the phase of a wave ψ = ψ 0 e i S / ℏ {\displaystyle \psi =\psi _{0}e^{iS/\hbar }} where ℏ {\displaystyle \hbar } is a constant (the Planck constant) introduced to make the exponential argument dimensionless; changes in the amplitude of the wave can be represented by having S {\displaystyle S} be a complex number. The Hamilton–Jacobi equation is then rewritten as ℏ 2 2 m ∇ 2 ψ − U ψ = ℏ i ∂ ψ ∂ t {\displaystyle {\frac {\hbar ^{2}}{2m}}\nabla ^{2}\psi -U\psi ={\frac {\hbar }{i}}{\frac {\partial \psi }{\partial t}}} which is the Schrödinger equation. Conversely, starting with the Schrödinger equation and our ansatz for ψ {\displaystyle \psi } , it can be deduced that 1 2 m ( ∇ S ) 2 + U + ∂ S ∂ t = i ℏ 2 m ∇ 2 ψ 0 ψ 0 . {\displaystyle {\frac {1}{2m}}\left(\nabla S\right)^{2}+U+{\frac {\partial S}{\partial t}}={\frac {i\hbar }{2m}}{\frac {\nabla ^{2}\psi _{0}}{\psi _{0}}}.} The classical limit ( ℏ → 0 {\displaystyle \hbar \rightarrow 0} ) of the Schrödinger equation above becomes identical to the following variant of the Hamilton–Jacobi equation, 1 2 m ( ∇ S ) 2 + U + ∂ S ∂ t = 0. {\displaystyle {\frac {1}{2m}}\left(\nabla S\right)^{2}+U+{\frac {\partial S}{\partial t}}=0.} == Applications == === HJE in a gravitational field === Using the energy–momentum relation in the form g α β P α P β − ( m c ) 2 = 0 {\displaystyle g^{\alpha \beta }P_{\alpha }P_{\beta }-(mc)^{2}=0} for a particle of rest mass m {\displaystyle m} travelling in curved space, where g α β {\displaystyle g^{\alpha \beta }} are the contravariant coordinates of the metric tensor (i.e., the inverse metric) solved from the Einstein field equations, and c {\displaystyle c} is the speed of light. Setting the four-momentum P α {\displaystyle P_{\alpha }} equal to the four-gradient of the action S {\displaystyle S} , P α = − ∂ S ∂ x α {\displaystyle P_{\alpha }=-{\frac {\partial S}{\partial x^{\alpha }}}} gives the Hamilton–Jacobi equation in the geometry determined by the metric g {\displaystyle g} : g α β ∂ S ∂ x α ∂ S ∂ x β − ( m c ) 2 = 0 , {\displaystyle g^{\alpha \beta }{\frac {\partial S}{\partial x^{\alpha }}}{\frac {\partial S}{\partial x^{\beta }}}-(mc)^{2}=0,} in other words, in a gravitational field. 
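As a consistency check (assuming the signature convention implicit in the relation g^{αβ}P_αP_β = (mc)^2 above, namely (+, −, −, −)), specializing to the flat Minkowski metric reduces this equation to the Hamilton–Jacobi equation of a free relativistic particle,

\[
\frac{1}{c^{2}}\left(\frac{\partial S}{\partial t}\right)^{2}-\left(\nabla S\right)^{2}-m^{2}c^{2}=0 ,
\]

and the ansatz S = -Et + \mathbf{p}\cdot\mathbf{x} recovers the familiar energy–momentum relation E^2 = p^2 c^2 + m^2 c^4.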
=== HJE in electromagnetic fields === For a particle of rest mass m {\displaystyle m} and electric charge e {\displaystyle e} moving in electromagnetic field with four-potential A i = ( ϕ , A ) {\displaystyle A_{i}=(\phi ,\mathrm {A} )} in vacuum, the Hamilton–Jacobi equation in geometry determined by the metric tensor g i k = g i k {\displaystyle g^{ik}=g_{ik}} has a form g i k ( ∂ S ∂ x i + e c A i ) ( ∂ S ∂ x k + e c A k ) = m 2 c 2 {\displaystyle g^{ik}\left({\frac {\partial S}{\partial x^{i}}}+{\frac {e}{c}}A_{i}\right)\left({\frac {\partial S}{\partial x^{k}}}+{\frac {e}{c}}A_{k}\right)=m^{2}c^{2}} and can be solved for the Hamilton principal action function S {\displaystyle S} to obtain further solution for the particle trajectory and momentum: x = − e c γ ∫ A z d ξ , y = − e c γ ∫ A y d ξ , z = − e 2 2 c 2 γ 2 ∫ ( A 2 − A 2 ¯ ) d ξ , ξ = c t − e 2 2 γ 2 c 2 ∫ ( A 2 − A 2 ¯ ) d ξ , p x = − e c A x , p y = − e c A y , p z = e 2 2 γ c ( A 2 − A 2 ¯ ) , E = c γ + e 2 2 γ c ( A 2 − A 2 ¯ ) , {\displaystyle {\begin{aligned}x&=-{\frac {e}{c\gamma }}\int A_{z}\,d\xi ,&y&=-{\frac {e}{c\gamma }}\int A_{y}\,d\xi ,\\[1ex]z&=-{\frac {e^{2}}{2c^{2}\gamma ^{2}}}\int \left(\mathrm {A} ^{2}-{\overline {\mathrm {A} ^{2}}}\right)\,d\xi ,&\xi &=ct-{\frac {e^{2}}{2\gamma ^{2}c^{2}}}\int \left(\mathrm {A} ^{2}-{\overline {\mathrm {A} ^{2}}}\right)\,d\xi ,\\[1ex]p_{x}&=-{\frac {e}{c}}A_{x},&p_{y}&=-{\frac {e}{c}}A_{y},\\[1ex]p_{z}&={\frac {e^{2}}{2\gamma c}}\left(\mathrm {A} ^{2}-{\overline {\mathrm {A} ^{2}}}\right),&{\mathcal {E}}&=c\gamma +{\frac {e^{2}}{2\gamma c}}\left(\mathrm {A} ^{2}-{\overline {\mathrm {A} ^{2}}}\right),\end{aligned}}} where ξ = c t − z {\displaystyle \xi =ct-z} and γ 2 = m 2 c 2 + e 2 c 2 A ¯ 2 {\displaystyle \gamma ^{2}=m^{2}c^{2}+{\frac {e^{2}}{c^{2}}}{\overline {A}}^{2}} with A ¯ {\displaystyle {\overline {\mathbf {A} }}} the cycle average of the vector potential. ==== A circularly polarized wave ==== In the case of circular polarization, E x = E 0 sin ⁡ ω ξ 1 , E y = E 0 cos ⁡ ω ξ 1 , A x = c E 0 ω cos ⁡ ω ξ 1 , A y = − c E 0 ω sin ⁡ ω ξ 1 . {\displaystyle {\begin{aligned}E_{x}&=E_{0}\sin \omega \xi _{1},&E_{y}&=E_{0}\cos \omega \xi _{1},\\[1ex]A_{x}&={\frac {cE_{0}}{\omega }}\cos \omega \xi _{1},&A_{y}&=-{\frac {cE_{0}}{\omega }}\sin \omega \xi _{1}.\end{aligned}}} Hence x = − e c E 0 ω sin ⁡ ω ξ 1 , y = − e c E 0 ω cos ⁡ ω ξ 1 , p x = − e E 0 ω cos ⁡ ω ξ 1 , p y = e E 0 ω sin ⁡ ω ξ 1 , {\displaystyle {\begin{aligned}x&=-{\frac {ecE_{0}}{\omega }}\sin \omega \xi _{1},&y&=-{\frac {ecE_{0}}{\omega }}\cos \omega \xi _{1},\\[1ex]p_{x}&=-{\frac {eE_{0}}{\omega }}\cos \omega \xi _{1},&p_{y}&={\frac {eE_{0}}{\omega }}\sin \omega \xi _{1},\end{aligned}}} where ξ 1 = ξ / c {\displaystyle \xi _{1}=\xi /c} , implying the particle moving along a circular trajectory with a permanent radius e c E 0 / γ ω 2 {\displaystyle ecE_{0}/\gamma \omega ^{2}} and an invariable value of momentum e E 0 / ω 2 {\displaystyle eE_{0}/\omega ^{2}} directed along a magnetic field vector. 
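The parametric solution for the circularly polarized wave can be checked numerically. The sketch below uses arbitrary illustrative values for e, c, E0 and ω (not physical constants), purely as a self-consistency check of the formulas quoted above: it verifies that the transverse coordinates trace a circle of fixed radius and that the transverse momentum has constant magnitude.

import numpy as np

# Arbitrary illustrative values (not physical constants).
e, c, E0, omega = 1.0, 1.0, 0.5, 2.0
xi1 = np.linspace(0.0, 10.0, 1000)

# Trajectory and momentum from the circular-polarization solution quoted above.
x = -(e * c * E0 / omega) * np.sin(omega * xi1)
y = -(e * c * E0 / omega) * np.cos(omega * xi1)
px = -(e * E0 / omega) * np.cos(omega * xi1)
py = (e * E0 / omega) * np.sin(omega * xi1)

r = np.hypot(x, y)            # distance from the axis in the transverse plane
p_perp = np.hypot(px, py)     # transverse momentum magnitude

print("radius min/max:", r.min(), r.max())            # identical values: circular trajectory
print("|p|    min/max:", p_perp.min(), p_perp.max())  # identical values: constant momentum magnitude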
==== A monochromatic linearly polarized plane wave ==== For the flat, monochromatic, linearly polarized wave with a field E {\displaystyle E} directed along the axis y {\displaystyle y} E y = E 0 cos ⁡ ω ξ 1 , A y = − c E 0 ω sin ⁡ ω ξ 1 , {\displaystyle {\begin{aligned}E_{y}&=E_{0}\cos \omega \xi _{1},&A_{y}&=-{\frac {cE_{0}}{\omega }}\sin \omega \xi _{1},\end{aligned}}} hence x = const , y = y 0 cos ⁡ ω ξ 1 , y 0 = − e c E 0 γ ω 2 , z = C z y 0 sin ⁡ 2 ω ξ 1 , C z = e E 0 8 γ ω , γ 2 = m 2 c 2 + e 2 E 0 2 2 ω 2 , {\displaystyle {\begin{aligned}x&={\text{const}},\\[1ex]y&=y_{0}\cos \omega \xi _{1},&y_{0}&=-{\frac {ecE_{0}}{\gamma \omega ^{2}}},\\[1ex]z&=C_{z}y_{0}\sin 2\omega \xi _{1},&C_{z}&={\frac {eE_{0}}{8\gamma \omega }},\\[1ex]\gamma ^{2}&=m^{2}c^{2}+{\frac {e^{2}E_{0}^{2}}{2\omega ^{2}}},\end{aligned}}} p x = 0 , p y = p y , 0 sin ⁡ ω ξ 1 , p y , 0 = e E 0 ω , p z = − 2 C z p y , 0 cos ⁡ 2 ω ξ 1 {\displaystyle {\begin{aligned}p_{x}&=0,\\[1ex]p_{y}&=p_{y,0}\sin \omega \xi _{1},&p_{y,0}&={\frac {eE_{0}}{\omega }},\\[1ex]p_{z}&=-2C_{z}p_{y,0}\cos 2\omega \xi _{1}\end{aligned}}} implying the particle figure-8 trajectory with a long its axis oriented along the electric field E {\displaystyle E} vector. ==== An electromagnetic wave with a solenoidal magnetic field ==== For the electromagnetic wave with axial (solenoidal) magnetic field: E = E ϕ = ω ρ 0 c B 0 cos ⁡ ω ξ 1 , {\displaystyle E=E_{\phi }={\frac {\omega \rho _{0}}{c}}B_{0}\cos \omega \xi _{1},} A ϕ = − ρ 0 B 0 sin ⁡ ω ξ 1 = − L s π ρ 0 N s I 0 sin ⁡ ω ξ 1 , {\displaystyle A_{\phi }=-\rho _{0}B_{0}\sin \omega \xi _{1}=-{\frac {L_{s}}{\pi \rho _{0}N_{s}}}I_{0}\sin \omega \xi _{1},} hence x = constant , y = y 0 cos ⁡ ω ξ 1 , y 0 = − e ρ 0 B 0 γ ω , z = C z y 0 sin ⁡ 2 ω ξ 1 , C z = e ρ 0 B 0 8 c γ , γ 2 = m 2 c 2 + e 2 ρ 0 2 B 0 2 2 c 2 , {\displaystyle {\begin{aligned}x&={\text{constant}},\\y&=y_{0}\cos \omega \xi _{1},&y_{0}&=-{\frac {e\rho _{0}B_{0}}{\gamma \omega }},\\z&=C_{z}y_{0}\sin 2\omega \xi _{1},&C_{z}&={\frac {e\rho _{0}B_{0}}{8c\gamma }},\\\gamma ^{2}&=m^{2}c^{2}+{\frac {e^{2}\rho _{0}^{2}B_{0}^{2}}{2c^{2}}},\end{aligned}}} p x = 0 , p y = p y , 0 sin ⁡ ω ξ 1 , p y , 0 = e ρ 0 B 0 c , p z = − 2 C z p y , 0 cos ⁡ 2 ω ξ 1 , {\displaystyle {\begin{aligned}p_{x}&=0,\\p_{y}&=p_{y,0}\sin \omega \xi _{1},&p_{y,0}&={\frac {e\rho _{0}B_{0}}{c}},\\p_{z}&=-2C_{z}p_{y,0}\cos 2\omega \xi _{1},\end{aligned}}} where B 0 {\displaystyle B_{0}} is the magnetic field magnitude in a solenoid with the effective radius ρ 0 {\displaystyle \rho _{0}} , inductivity L s {\displaystyle L_{s}} , number of windings N s {\displaystyle N_{s}} , and an electric current magnitude I 0 {\displaystyle I_{0}} through the solenoid windings. The particle motion occurs along the figure-8 trajectory in y z {\displaystyle yz} plane set perpendicular to the solenoid axis with arbitrary azimuth angle φ {\displaystyle \varphi } due to axial symmetry of the solenoidal magnetic field. == See also == == References == == Further reading == Arnold, V.I. (1989). Mathematical Methods of Classical Mechanics (2 ed.). New York: Springer. ISBN 0-387-96890-3. Hamilton, W. (1833). "On a General Method of Expressing the Paths of Light, and of the Planets, by the Coefficients of a Characteristic Function" (PDF). Dublin University Review: 795–826. Hamilton, W. (1834). "On the Application to Dynamics of a General Mathematical Method previously Applied to Optics" (PDF). British Association Report: 513–518. Fetter, A. & Walecka, J. (2003). 
Theoretical Mechanics of Particles and Continua. Dover Books. ISBN 978-0-486-43261-8. Landau, L. D.; Lifshitz, E. M. (1975). Mechanics. Amsterdam: Elsevier. Sakurai, J. J. (1985). Modern Quantum Mechanics. Benjamin/Cummings Publishing. ISBN 978-0-8053-7501-5. Jacobi, C. G. J. (1884), Vorlesungen über Dynamik, C. G. J. Jacobi's Gesammelte Werke (in German), Berlin: G. Reimer, OL 14009561M Nakane, Michiyo; Fraser, Craig G. (2002). "The Early History of Hamilton-Jacobi Dynamics". Centaurus. 44 (3–4): 161–227. doi:10.1111/j.1600-0498.2002.tb00613.x. PMID 17357243.
Wikipedia/Hamilton-Jacobi_equations