In particle physics, the electroweak interaction or electroweak force is the unified description of two of the fundamental interactions of nature: electromagnetism (the electromagnetic interaction) and the weak interaction. Although these two forces appear very different at everyday low energies, the theory models them as two different aspects of the same force. Above the unification energy, on the order of 246 GeV, they would merge into a single force. Thus, if the temperature is high enough – approximately 10¹⁵ K – the electromagnetic force and the weak force merge into a combined electroweak force. During the quark epoch (shortly after the Big Bang), the electroweak force split into the electromagnetic and weak forces. It is thought that the required temperature of 10¹⁵ K has not been seen widely throughout the universe since before the quark epoch; currently the highest human-made temperature in thermal equilibrium is around 5.5×10¹² K (from the Large Hadron Collider).

Sheldon Glashow, Abdus Salam, and Steven Weinberg were awarded the 1979 Nobel Prize in Physics for their contributions to the unification of the weak and electromagnetic interactions between elementary particles, known as the Weinberg–Salam theory. The existence of the electroweak interaction was experimentally established in two stages: first, the discovery of neutral currents in neutrino scattering by the Gargamelle collaboration in 1973; and second, the discovery of the W and Z gauge bosons in proton–antiproton collisions at the converted Super Proton Synchrotron by the UA1 and UA2 collaborations in 1983. In 1999, Gerardus 't Hooft and Martinus Veltman were awarded the Nobel Prize for showing that the electroweak theory is renormalizable.

== History ==

After the Wu experiment discovered parity violation in the weak interaction in 1956, a search began for a way to relate the weak and electromagnetic interactions. Extending the work of his doctoral advisor Julian Schwinger, Sheldon Glashow first experimented with introducing two different symmetries, one chiral and one achiral, and combined them such that their overall symmetry was unbroken. This did not yield a renormalizable theory, and its gauge symmetry had to be broken by hand, as no spontaneous mechanism was known, but it predicted a new particle, the Z boson. This received little notice, as it matched no experimental finding.

In 1964, Salam and John Clive Ward had the same idea, but predicted a massless photon and three massive gauge bosons with a manually broken symmetry. Later, around 1967, while investigating spontaneous symmetry breaking, Weinberg found a set of symmetries predicting a massless, neutral gauge boson. Initially rejecting such a particle as useless, he later realized his symmetries produced the electroweak force, and he proceeded to predict rough masses for the W and Z bosons. Significantly, he suggested this new theory was renormalizable. In 1971, Gerardus 't Hooft proved that spontaneously broken gauge symmetries are renormalizable even with massive gauge bosons.

== Formulation ==

Mathematically, electromagnetism is unified with the weak interactions as a Yang–Mills field with an SU(2) × U(1) gauge group, which describes the formal operations that can be applied to the electroweak gauge fields without changing the dynamics of the system. These fields are the weak isospin fields $W_1$, $W_2$, and $W_3$, and the weak hypercharge field $B$. This invariance is known as electroweak symmetry.
The generators of SU(2) and U(1) are given the names weak isospin (labeled $T$) and weak hypercharge (labeled $Y$) respectively. These give rise to the gauge bosons that mediate the electroweak interactions – the three W bosons of weak isospin ($W_1$, $W_2$, and $W_3$) and the B boson of weak hypercharge – all of which are "initially" massless. These are not physical fields yet, before spontaneous symmetry breaking and the associated Higgs mechanism.

In the Standard Model, the observed physical particles – the W± and Z0 bosons and the photon – are produced through the spontaneous symmetry breaking of the electroweak symmetry SU(2) × U(1)_Y to U(1)_em, effected by the Higgs mechanism (see also Higgs boson), an elaborate quantum-field-theoretic phenomenon that "spontaneously" alters the realization of the symmetry and rearranges degrees of freedom. The electric charge arises as the particular (nontrivial) linear combination of the weak hypercharge $Y_W$ and the $T_3$ component of weak isospin,
$$Q = T_3 + \tfrac{1}{2} Y_W,$$
that does not couple to the Higgs boson. That is to say: the Higgs and the electromagnetic field have no effect on each other at the level of the fundamental forces ("tree level"), while any other combination of the hypercharge and the weak isospin must interact with the Higgs. This causes an apparent separation between the weak force, which interacts with the Higgs, and electromagnetism, which does not. Mathematically, the electric charge is this specific combination of the hypercharge and $T_3$. U(1)_em (the symmetry group of electromagnetism only) is defined to be the group generated by this special linear combination, and the symmetry described by the U(1)_em group is unbroken, since it does not directly interact with the Higgs.

The above spontaneous symmetry breaking makes the $W_3$ and $B$ bosons coalesce into two different physical bosons with different masses – the Z0 boson and the photon (γ):
$$\begin{pmatrix}\gamma \\ Z^0\end{pmatrix} = \begin{pmatrix}\cos\theta_W & \sin\theta_W \\ -\sin\theta_W & \cos\theta_W\end{pmatrix}\begin{pmatrix}B \\ W_3\end{pmatrix},$$
where $\theta_W$ is the weak mixing angle. The axes representing the particles have essentially just been rotated, in the $(W_3, B)$ plane, by the angle $\theta_W$. This also introduces a mismatch between the mass of the Z0 and the mass of the W± particles (denoted $m_Z$ and $m_W$, respectively),
$$m_Z = \frac{m_W}{\cos\theta_W}.$$
The $W_1$ and $W_2$ bosons, in turn, combine to produce the charged massive bosons W±:
$$W^{\pm} = \frac{1}{\sqrt{2}}\left(W_1 \mp i W_2\right).$$
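As a concrete check of these tree-level relations, the short sketch below rotates the $(B, W_3)$ gauge eigenstates by the weak mixing angle and reproduces $m_Z = m_W / \cos\theta_W$. The numerical inputs ($m_W \approx 80.4$ GeV, $\sin^2\theta_W \approx 0.231$) are assumed illustrative values, not taken from the text above, and the relation holds only at tree level; the exact outcome also depends on which definition of the mixing angle is used.

```python
# Minimal numerical sketch of electroweak mixing (tree level).
# Assumed illustrative inputs: m_W ~ 80.4 GeV, sin^2(theta_W) ~ 0.231.
import numpy as np

sin2_theta_w = 0.231                 # sin^2 of the weak mixing angle
theta_w = np.arcsin(np.sqrt(sin2_theta_w))
m_w = 80.4                           # W boson mass in GeV

# Rotation taking the gauge eigenstates (B, W3) to the mass eigenstates (photon, Z0)
rotation = np.array([[np.cos(theta_w),  np.sin(theta_w)],
                     [-np.sin(theta_w), np.cos(theta_w)]])
b_w3 = np.array([1.0, 0.0])          # a pure-B field configuration, say
photon, z0 = rotation @ b_w3         # its photon and Z0 components

# Tree-level mass relation m_Z = m_W / cos(theta_W)
m_z = m_w / np.cos(theta_w)
print(f"photon component: {photon:.3f}, Z0 component: {z0:.3f}")
print(f"tree-level m_Z ~ {m_z:.1f} GeV")   # ~91.7 GeV; measured ~91.19 GeV
```

The small gap between this tree-level estimate and the measured Z mass reflects radiative corrections and the choice of mixing-angle definition, both outside the scope of this sketch.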
== Lagrangian ==

=== Before electroweak symmetry breaking ===

The Lagrangian for the electroweak interactions is divided into four parts before electroweak symmetry breaking becomes manifest:
$$\mathcal{L}_{EW} = \mathcal{L}_g + \mathcal{L}_f + \mathcal{L}_h + \mathcal{L}_y.$$
The $\mathcal{L}_g$ term describes the interaction between the three W vector bosons and the B vector boson,
$$\mathcal{L}_g = -\tfrac{1}{4} W_a^{\mu\nu} W_{\mu\nu}^a - \tfrac{1}{4} B^{\mu\nu} B_{\mu\nu},$$
where $W_a^{\mu\nu}$ ($a = 1, 2, 3$) and $B^{\mu\nu}$ are the field strength tensors for the weak isospin and weak hypercharge gauge fields.

$\mathcal{L}_f$ is the kinetic term for the Standard Model fermions. The gauge bosons interact with the fermions through the gauge covariant derivative:
$$\mathcal{L}_f = \overline{Q}_j i D\!\!\!\!/\; Q_j + \overline{u}_j i D\!\!\!\!/\; u_j + \overline{d}_j i D\!\!\!\!/\; d_j + \overline{L}_j i D\!\!\!\!/\; L_j + \overline{e}_j i D\!\!\!\!/\; e_j,$$
where the subscript $j$ sums over the three generations of fermions; $Q$, $u$, and $d$ are the left-handed doublet, right-handed singlet up, and right-handed singlet down quark fields; and $L$ and $e$ are the left-handed doublet and right-handed singlet electron fields. The Feynman slash $D\!\!\!\!/$ means the contraction of the 4-gradient with the Dirac matrices, defined as
$$D\!\!\!\!/ \equiv \gamma^\mu D_\mu,$$
and the covariant derivative (excluding the gluon gauge field for the strong interaction) is defined as
$$D_\mu \equiv \partial_\mu - i\frac{g'}{2} Y B_\mu - i\frac{g}{2} T_j W_\mu^j.$$
Here $Y$ is the weak hypercharge and the $T_j$ are the components of the weak isospin.

The $\mathcal{L}_h$ term describes the Higgs field $h$ and its interactions with itself and the gauge bosons,
$$\mathcal{L}_h = |D_\mu h|^2 - \lambda\left(|h|^2 - \frac{v^2}{2}\right)^2,$$
where $v$ is the vacuum expectation value.

The $\mathcal{L}_y$ term describes the Yukawa interaction with the fermions,
$$\mathcal{L}_y = -y_u^{ij}\,\epsilon^{ab}\, h_b^\dagger\, \overline{Q}_{ia}\, u_j^c - y_d^{ij}\, h\, \overline{Q}_i\, d_j^c - y_e^{ij}\, h\, \overline{L}_i\, e_j^c + \mathrm{h.c.},$$
and generates the fermion masses, which become manifest when the Higgs field acquires a nonzero vacuum expectation value, discussed next. The $y_k^{ij}$, for $k \in \{u, d, e\}$, are matrices of Yukawa couplings.

=== After electroweak symmetry breaking ===

The Lagrangian reorganizes itself as the Higgs field acquires a non-vanishing vacuum expectation value dictated by the potential of the previous section. As a result of this rewriting, the symmetry breaking becomes manifest. In the history of the universe, this is believed to have happened shortly after the hot Big Bang, when the universe was at a temperature of 159.5±1.5 GeV (assuming the Standard Model of particle physics).
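Before turning to the broken-phase Lagrangian, here is a small numerical illustration of the potential in $\mathcal{L}_h$: minimizing $V(\phi) = \lambda(\phi^2 - v^2/2)^2$ for a real field amplitude $\phi$ recovers the vacuum expectation value $|h| = v/\sqrt{2}$. The sketch uses the tree-level relation $m_H^2 = 2\lambda v^2$ to fix $\lambda$; the inputs $v \approx 246$ GeV and $m_H \approx 125$ GeV are assumed illustrative values.

```python
# Minimal sketch: locate the minimum of the Higgs potential
# V(phi) = lambda * (phi^2 - v^2/2)^2 for a real field amplitude phi,
# assuming v ~ 246 GeV and m_H ~ 125 GeV (illustrative inputs).
from scipy.optimize import minimize_scalar

v = 246.0                       # vacuum expectation value in GeV
m_h = 125.0                     # Higgs boson mass in GeV (assumed input)
lam = m_h**2 / (2 * v**2)       # tree-level quartic coupling from m_H^2 = 2*lambda*v^2

def potential(phi):
    """Higgs potential evaluated at a real field amplitude phi = |h|."""
    return lam * (phi**2 - v**2 / 2)**2

result = minimize_scalar(potential, bounds=(1.0, 400.0), method='bounded')
print(f"lambda ~ {lam:.3f}")                    # ~0.129
print(f"minimum at |h| ~ {result.x:.1f} GeV")   # ~ v/sqrt(2) ~ 173.9 GeV
```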
Due to its complexity, this Lagrangian is best described by breaking it up into several parts as follows:
$$\mathcal{L}_{EW} = \mathcal{L}_K + \mathcal{L}_N + \mathcal{L}_C + \mathcal{L}_H + \mathcal{L}_{HV} + \mathcal{L}_{WWV} + \mathcal{L}_{WWVV} + \mathcal{L}_Y.$$
The kinetic term $\mathcal{L}_K$ contains all the quadratic terms of the Lagrangian, which include the dynamic terms (the partial derivatives) and the mass terms (conspicuously absent from the Lagrangian before symmetry breaking):
$$\begin{aligned}\mathcal{L}_K = \sum_f \overline{f}\left(i\partial\!\!\!/ - m_f\right)f - \frac{1}{4} A_{\mu\nu} A^{\mu\nu} - \frac{1}{2} W^+_{\mu\nu} W^{-\mu\nu} + m_W^2\, W^+_\mu W^{-\mu} \\ - \frac{1}{4} Z_{\mu\nu} Z^{\mu\nu} + \frac{1}{2} m_Z^2\, Z_\mu Z^\mu + \frac{1}{2}(\partial^\mu H)(\partial_\mu H) - \frac{1}{2} m_H^2 H^2,\end{aligned}$$
where the sum runs over all the fermions of the theory (quarks and leptons), and the fields $A_{\mu\nu}$, $Z_{\mu\nu}$, $W^-_{\mu\nu}$, and $W^+_{\mu\nu} \equiv (W^-_{\mu\nu})^\dagger$ are given as
$$X^a_{\mu\nu} = \partial_\mu X^a_\nu - \partial_\nu X^a_\mu + g f^{abc} X^b_\mu X^c_\nu,$$
with $X$ to be replaced by the relevant field ($A$, $Z$, $W^\pm$) and $f^{abc}$ by the structure constants of the appropriate gauge group.

The neutral current $\mathcal{L}_N$ and charged current $\mathcal{L}_C$ components of the Lagrangian contain the interactions between the fermions and gauge bosons:
$$\mathcal{L}_N = e\, J^{em}_\mu A^\mu + \frac{g}{\cos\theta_W}\left(J^3_\mu - \sin^2\theta_W\, J^{em}_\mu\right) Z^\mu,$$
where
$$e = g\sin\theta_W = g'\cos\theta_W.$$
The electromagnetic current $J^{em}_\mu$ is
$$J^{em}_\mu = \sum_f q_f\, \overline{f}\gamma_\mu f,$$
where $q_f$ is the fermions' electric charge. The neutral weak current $J^3_\mu$ is
$$J^3_\mu = \sum_f T^3_f\, \overline{f}\gamma_\mu \frac{1 - \gamma^5}{2} f,$$
where $T^3_f$ is the fermions' weak isospin.
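The relation $e = g\sin\theta_W = g'\cos\theta_W$ ties the electromagnetic coupling to the two gauge couplings, as the short sketch below checks numerically. The input values ($\alpha \approx 1/137.036$ for the fine-structure constant and $\sin^2\theta_W \approx 0.231$) are assumed illustrative numbers; the couplings run with energy scale, so any such check is only indicative.

```python
# Minimal sketch relating the electroweak couplings: e = g*sin(theta_W) = g'*cos(theta_W).
# Assumed illustrative inputs: alpha ~ 1/137.036 and sin^2(theta_W) ~ 0.231
# (both couplings actually run with energy scale).
import math

alpha = 1 / 137.036
sin2_theta_w = 0.231

e = math.sqrt(4 * math.pi * alpha)   # electromagnetic coupling, from e^2 = 4*pi*alpha
sin_w = math.sqrt(sin2_theta_w)
cos_w = math.sqrt(1 - sin2_theta_w)

g = e / sin_w        # SU(2) weak isospin coupling
g_prime = e / cos_w  # U(1) weak hypercharge coupling

print(f"e ~ {e:.4f}, g ~ {g:.4f}, g' ~ {g_prime:.4f}")
# Consistency check of e = g*sin(theta_W) = g'*cos(theta_W):
assert abs(g * sin_w - e) < 1e-12 and abs(g_prime * cos_w - e) < 1e-12
```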
The charged current part of the Lagrangian is given by
$$\mathcal{L}_C = -\frac{g}{\sqrt{2}}\left[\overline{u}_i\, \gamma^\mu\, \frac{1 - \gamma^5}{2}\, M^{CKM}_{ij}\, d_j + \overline{\nu}_i\, \gamma^\mu\, \frac{1 - \gamma^5}{2}\, e_i\right] W^+_\mu + \mathrm{h.c.},$$
where $\nu$ is the right-handed singlet neutrino field, and the CKM matrix $M^{CKM}_{ij}$ determines the mixing between mass and weak eigenstates of the quarks.

$\mathcal{L}_H$ contains the Higgs three-point and four-point self-interaction terms,
$$\mathcal{L}_H = -\frac{g\, m_H^2}{4 m_W} H^3 - \frac{g^2 m_H^2}{32\, m_W^2} H^4.$$
$\mathcal{L}_{HV}$ contains the Higgs interactions with the gauge vector bosons,
$$\mathcal{L}_{HV} = \left(g\, m_W H + \frac{g^2}{4} H^2\right)\left(W^+_\mu W^{-\mu} + \frac{1}{2\cos^2\theta_W} Z_\mu Z^\mu\right).$$
$\mathcal{L}_{WWV}$ contains the gauge three-point self-interactions,
$$\mathcal{L}_{WWV} = -ig\left[\left(W^+_{\mu\nu} W^{-\mu} - W^{+\mu} W^-_{\mu\nu}\right)\left(A^\nu \sin\theta_W - Z^\nu \cos\theta_W\right) + W^-_\nu W^+_\mu \left(A^{\mu\nu}\sin\theta_W - Z^{\mu\nu}\cos\theta_W\right)\right].$$
$\mathcal{L}_{WWVV}$ contains the gauge four-point self-interactions,
$$\begin{aligned}\mathcal{L}_{WWVV} = -\frac{g^2}{4}\Big\{ &\left[2\, W^+_\mu W^{-\mu} + (A_\mu \sin\theta_W - Z_\mu \cos\theta_W)^2\right]^2 \\ &- \left[W^+_\mu W^-_\nu + W^+_\nu W^-_\mu + (A_\mu \sin\theta_W - Z_\mu \cos\theta_W)(A_\nu \sin\theta_W - Z_\nu \cos\theta_W)\right]^2\Big\}.\end{aligned}$$
$\mathcal{L}_Y$ contains the Yukawa interactions between the fermions and the Higgs field,
$$\mathcal{L}_Y = -\sum_f \frac{g\, m_f}{2 m_W}\, \overline{f} f H.$$

== See also ==

Electroweak star
Fundamental forces
History of quantum field theory
Standard Model (mathematical formulation)
Unitarity gauge
Weinberg angle
Yang–Mills theory

== Further reading ==

=== General readers ===

B. A. Schumm (2004). Deep Down Things: The Breathtaking Beauty of Particle Physics. Johns Hopkins University Press. ISBN 0-8018-7971-X. Conveys much of the Standard Model with no formal mathematics. Very thorough on the weak interaction.
=== Texts ===

D. J. Griffiths (1987). Introduction to Elementary Particles. John Wiley & Sons. ISBN 0-471-60386-4.
W. Greiner; B. Müller (2000). Gauge Theory of Weak Interactions. Springer. ISBN 3-540-67672-4.
E. A. Paschos (2023). Electroweak Theory. Cambridge University Press. ISBN 9781009402378.

=== Articles ===

E. S. Abers; B. W. Lee (1973). "Gauge theories". Physics Reports. 9 (1): 1–141. Bibcode:1973PhR.....9....1A. doi:10.1016/0370-1573(73)90027-6.
Y. Hayato; et al. (1999). "Search for Proton Decay through p → νK+ in a Large Water Cherenkov Detector". Physical Review Letters. 83 (8): 1529–1533. arXiv:hep-ex/9904020. Bibcode:1999PhRvL..83.1529H. doi:10.1103/PhysRevLett.83.1529.
J. Hucks (1991). "Global structure of the standard model, anomalies, and charge quantization". Physical Review D. 43 (8): 2709–2717. Bibcode:1991PhRvD..43.2709H. doi:10.1103/PhysRevD.43.2709. PMID 10013661.
S. F. Novaes (2000). "Standard Model: An Introduction". arXiv:hep-ph/0001283.
D. P. Roy (1999). "Basic Constituents of Matter and their Interactions – A Progress Report". arXiv:hep-ph/9912523.
Wikipedia/Electroweak_theory
A molecule is a group of two or more atoms that are held together by attractive forces known as chemical bonds; depending on context, the term may or may not include ions that satisfy this criterion. In quantum physics, organic chemistry, and biochemistry, the distinction from ions is dropped and molecule is often used when referring to polyatomic ions.

A molecule may be homonuclear, that is, it consists of atoms of one chemical element, e.g. two atoms in the oxygen molecule (O2); or it may be heteronuclear, a chemical compound composed of more than one element, e.g. water (two hydrogen atoms and one oxygen atom; H2O). In the kinetic theory of gases, the term molecule is often used for any gaseous particle regardless of its composition. This relaxes the requirement that a molecule contains two or more atoms, since the noble gases are individual atoms. Atoms and complexes connected by non-covalent interactions, such as hydrogen bonds or ionic bonds, are typically not considered single molecules.

Concepts similar to molecules have been discussed since ancient times, but modern investigation into the nature of molecules and their bonds began in the 17th century. Refined over time by scientists such as Robert Boyle, Amedeo Avogadro, Jean Perrin, and Linus Pauling, the study of molecules is today known as molecular physics or molecular chemistry.

== Etymology ==

According to Merriam-Webster and the Online Etymology Dictionary, the word "molecule" derives from the Latin moles, a small unit of mass. The word comes from French molécule (1678), from Neo-Latin molecula, diminutive of Latin moles "mass, barrier". The word, which until the late 18th century was used only in Latin form, became popular after being used in works of philosophy by Descartes.

== History ==

The definition of the molecule has evolved as knowledge of the structure of molecules has increased. Earlier definitions were less precise, defining molecules as the smallest particles of pure chemical substances that still retain their composition and chemical properties. This definition often breaks down, since many substances in ordinary experience, such as rocks, salts, and metals, are composed of large crystalline networks of chemically bonded atoms or ions but are not made of discrete molecules.

The modern concept of molecules can be traced back to pre-scientific and Greek philosophers such as Leucippus and Democritus, who argued that all the universe is composed of atoms and voids. Circa 450 BC, Empedocles imagined fundamental elements (fire, earth, air, and water) and "forces" of attraction and repulsion allowing the elements to interact. A fifth element, the incorruptible quintessence aether, was considered to be the fundamental building block of the heavenly bodies. The viewpoint of Leucippus and Empedocles, along with the aether, was accepted by Aristotle and passed to medieval and renaissance Europe.

In a more concrete manner, however, the concept of aggregates or units of bonded atoms, i.e. "molecules", traces its origins to Robert Boyle's 1661 hypothesis, in his famous treatise The Sceptical Chymist, that matter is composed of clusters of particles and that chemical change results from the rearrangement of the clusters. Boyle argued that matter's basic elements consisted of various sorts and sizes of particles, called "corpuscles", which were capable of arranging themselves into groups.
In 1789, William Higgins published views on what he called combinations of "ultimate" particles, which foreshadowed the concept of valency bonds. If, for example, according to Higgins, the force between the ultimate particle of oxygen and the ultimate particle of nitrogen were 6, then the strength of the force would be divided accordingly, and similarly for the other combinations of ultimate particles.

Amedeo Avogadro created the word "molecule". In his 1811 paper "Essay on Determining the Relative Masses of the Elementary Molecules of Bodies", he essentially states, according to Partington's A Short History of Chemistry, that:

The smallest particles of gases are not necessarily simple atoms, but are made up of a certain number of these atoms united by attraction to form a single molecule.

In coordination with these concepts, in 1833 the French chemist Marc Antoine Auguste Gaudin presented a clear account of Avogadro's hypothesis, regarding atomic weights, by making use of "volume diagrams", which clearly show both semi-correct molecular geometries, such as a linear water molecule, and correct molecular formulas, such as H2O.

In 1917, an unknown American undergraduate chemical engineer named Linus Pauling was learning the Dalton hook-and-eye bonding method, which was the mainstream description of bonds between atoms at the time. Pauling, however, was not satisfied with this method and looked to the newly emerging field of quantum physics for a new method. In 1926, the French physicist Jean Perrin received the Nobel Prize in Physics for proving, conclusively, the existence of molecules. He did this by calculating the Avogadro constant using three different methods, all involving liquid phase systems: first, he used a gamboge soap-like emulsion; second, he did experimental work on Brownian motion; and third, he confirmed Einstein's theory of particle rotation in the liquid phase.

In 1927, the physicists Fritz London and Walter Heitler applied the new quantum mechanics to deal with the saturable, nondynamic forces of attraction and repulsion, i.e., exchange forces, of the hydrogen molecule. Their valence bond treatment of this problem, in their joint paper, was a landmark in that it brought chemistry under quantum mechanics. Their work was an influence on Pauling, who had just received his doctorate and visited Heitler and London in Zürich on a Guggenheim Fellowship.

Subsequently, in 1931, building on the work of Heitler and London and on theories found in Lewis' famous article, Pauling published his ground-breaking article "The Nature of the Chemical Bond", in which he used quantum mechanics to calculate properties and structures of molecules, such as angles between bonds and rotation about bonds. On these concepts, Pauling developed hybridization theory to account for bonds in molecules such as CH4, in which four sp³ hybridised orbitals are overlapped by hydrogen's 1s orbital, yielding four sigma (σ) bonds. The four bonds are of the same length and strength, giving methane its characteristic tetrahedral molecular structure.

== Molecular science ==

The science of molecules is called molecular chemistry or molecular physics, depending on whether the focus is on chemistry or physics. Molecular chemistry deals with the laws governing the interaction between molecules that results in the formation and breakage of chemical bonds, while molecular physics deals with the laws governing their structure and properties. In practice, however, this distinction is vague.
In molecular sciences, a molecule consists of a stable system (bound state) composed of two or more atoms. Polyatomic ions may sometimes be usefully thought of as electrically charged molecules. The term unstable molecule is used for very reactive species, i.e., short-lived assemblies (resonances) of electrons and nuclei, such as radicals, molecular ions, Rydberg molecules, transition states, van der Waals complexes, or systems of colliding atoms as in a Bose–Einstein condensate.

== Prevalence ==

Molecules as components of matter are common. They also make up most of the oceans and atmosphere. Most organic substances are molecules. The substances of life are molecules, e.g. proteins, the amino acids of which they are composed, the nucleic acids (DNA and RNA), sugars, carbohydrates, fats, and vitamins. The nutrient minerals are generally ionic compounds, thus they are not molecules, e.g. iron sulfate.

However, the majority of familiar solid substances on Earth are made partly or completely of crystals or ionic compounds, which are not made of molecules. These include all of the minerals that make up the substance of the Earth: sand, clay, pebbles, rocks, boulders, bedrock, the molten interior, and the core of the Earth. All of these contain many chemical bonds but are not made of identifiable molecules. No typical molecule can be defined for salts nor for covalent crystals, although these are often composed of repeating unit cells that extend either in a plane, e.g. graphene, or three-dimensionally, e.g. diamond, quartz, sodium chloride. The theme of repeated unit-cellular structure also holds for most metals, which are condensed phases with metallic bonding. Thus solid metals are not made of molecules. In glasses, which are solids that exist in a vitreous disordered state, the atoms are held together by chemical bonds, with no presence of any definable molecule, nor any of the regularity of repeating unit-cellular structure that characterizes salts, covalent crystals, and metals.

== Bonding ==

Molecules are generally held together by covalent bonding. Several non-metallic elements exist only as molecules in the environment, either in compounds or as homonuclear molecules, not as free atoms: for example, hydrogen. While some people say a metallic crystal can be considered a single giant molecule held together by metallic bonding, others point out that metals behave very differently than molecules.

=== Covalent ===

A covalent bond is a chemical bond that involves the sharing of electron pairs between atoms. These electron pairs are termed shared pairs or bonding pairs, and the stable balance of attractive and repulsive forces between atoms, when they share electrons, is termed covalent bonding.

=== Ionic ===

Ionic bonding is a type of chemical bond that involves the electrostatic attraction between oppositely charged ions, and is the primary interaction occurring in ionic compounds. The ions are atoms that have lost one or more electrons (termed cations) and atoms that have gained one or more electrons (termed anions). This transfer of electrons is termed electrovalence, in contrast to covalence. In the simplest case, the cation is a metal atom and the anion is a nonmetal atom, but these ions can be of a more complicated nature, e.g. molecular ions like NH4+ or SO4^2−.
At normal temperatures and pressures, ionic bonding mostly creates solids (or occasionally liquids) without separate identifiable molecules, but the vaporization/sublimation of such materials does produce separate molecules where electrons are still transferred fully enough for the bonds to be considered ionic rather than covalent.

== Molecular size ==

Most molecules are far too small to be seen with the naked eye, although molecules of many polymers can reach macroscopic sizes, including biopolymers such as DNA. Molecules commonly used as building blocks for organic synthesis have a dimension of a few angstroms (Å) to several dozen Å, or around one billionth of a meter. Single molecules cannot usually be observed by light (as noted above), but small molecules and even the outlines of individual atoms may be traced in some circumstances by use of an atomic force microscope. Some of the largest molecules are macromolecules or supermolecules. The smallest molecule is diatomic hydrogen (H2), with a bond length of 0.74 Å.

Effective molecular radius is the size a molecule displays in solution. The table of permselectivity for different substances contains examples.

== Molecular formulas ==

=== Chemical formula types ===

The chemical formula for a molecule uses one line of chemical element symbols, numbers, and sometimes also other symbols, such as parentheses, dashes, brackets, and plus (+) and minus (−) signs. These are limited to one typographic line of symbols, which may include subscripts and superscripts.

A compound's empirical formula is a very simple type of chemical formula. It is the simplest integer ratio of the chemical elements that constitute the compound. For example, water is always composed of a 2:1 ratio of hydrogen to oxygen atoms, and ethanol (ethyl alcohol) is always composed of carbon, hydrogen, and oxygen in a 2:6:1 ratio. However, this does not determine the kind of molecule uniquely – dimethyl ether has the same ratios as ethanol, for instance. Molecules with the same atoms in different arrangements are called isomers. Also, carbohydrates, for example, have the same ratio (carbon:hydrogen:oxygen = 1:2:1), and thus the same empirical formula, but different total numbers of atoms in the molecule.

The molecular formula reflects the exact number of atoms that compose the molecule and so characterizes different molecules. However, different isomers can have the same atomic composition while being different molecules. The empirical formula is often the same as the molecular formula, but not always. For example, the molecule acetylene has molecular formula C2H2, but the simplest integer ratio of elements is CH.

The molecular mass can be calculated from the chemical formula and is expressed in conventional atomic mass units equal to 1/12 of the mass of a neutral carbon-12 (12C isotope) atom. For network solids, the term formula unit is used in stoichiometric calculations.

=== Structural formula ===

For molecules with a complicated 3-dimensional structure, especially involving atoms bonded to four different substituents, a simple molecular formula or even a semi-structural chemical formula may not be enough to completely specify the molecule. In this case, a graphical type of formula called a structural formula may be needed. Structural formulas may in turn be represented with a one-dimensional chemical name, but such chemical nomenclature requires many words and terms which are not part of chemical formulas.
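As a small illustration of the relation between molecular and empirical formulas described above, the sketch below reduces a molecular formula's atom counts by their greatest common divisor. The formula dictionaries are hypothetical inputs chosen to mirror the ethanol and acetylene examples.

```python
# Minimal sketch: derive an empirical formula from a molecular formula by
# dividing all atom counts by their greatest common divisor.
from math import gcd
from functools import reduce

def empirical_formula(molecular):
    """Reduce a molecular formula, given as {element: count}, to its empirical formula."""
    divisor = reduce(gcd, molecular.values())
    return {element: count // divisor for element, count in molecular.items()}

ethanol = {"C": 2, "H": 6, "O": 1}      # C2H6O; gcd is 1, so already empirical
acetylene = {"C": 2, "H": 2}            # C2H2 reduces to CH

print(empirical_formula(ethanol))       # {'C': 2, 'H': 6, 'O': 1}
print(empirical_formula(acetylene))     # {'C': 1, 'H': 1}
```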
== Molecular geometry ==

Molecules have fixed equilibrium geometries – bond lengths and angles – about which they continuously oscillate through vibrational and rotational motions. A pure substance is composed of molecules with the same average geometrical structure. The chemical formula and the structure of a molecule are the two important factors that determine its properties, particularly its reactivity. Isomers share a chemical formula but normally have very different properties because of their different structures. Stereoisomers, a particular type of isomer, may have very similar physico-chemical properties and at the same time different biochemical activities.

== Molecular spectroscopy ==

Molecular spectroscopy deals with the response (spectrum) of molecules interacting with probing signals of known energy (or frequency, according to the Planck relation). Molecules have quantized energy levels that can be analyzed by detecting the molecule's energy exchange through absorbance or emission. Spectroscopy does not generally refer to diffraction studies, where particles such as neutrons, electrons, or high-energy X-rays interact with a regular arrangement of molecules (as in a crystal).

Microwave spectroscopy commonly measures changes in the rotation of molecules and can be used to identify molecules in outer space. Infrared spectroscopy measures the vibration of molecules, including stretching, bending, or twisting motions. It is commonly used to identify the kinds of bonds or functional groups in molecules. Changes in the arrangements of electrons yield absorption or emission lines in ultraviolet, visible, or near-infrared light, and result in colour. Nuclear magnetic resonance spectroscopy measures the environment of particular nuclei in the molecule and can be used to characterise the numbers of atoms in different positions in a molecule.
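To make the Planck relation concrete, the sketch below converts typical probe wavelengths to photon energies with E = hc/λ. The example wavelengths are assumed round numbers, chosen only to show why microwave, infrared, and ultraviolet light probe rotations, vibrations, and electronic transitions respectively.

```python
# Minimal sketch: photon energy E = h*c/lambda for typical spectroscopic probes.
# The wavelengths are illustrative round numbers, not measured values.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

probes = {
    "microwave (rotations)": 1e-2,     # 1 cm
    "infrared (vibrations)": 1e-5,     # 10 micrometres
    "ultraviolet (electronic)": 2e-7,  # 200 nm
}
for name, wavelength in probes.items():
    energy_ev = h * c / wavelength / eV
    print(f"{name}: {energy_ev:.2e} eV")

# Rotational quanta come out around 1e-4 eV, vibrational around 0.1 eV, and
# electronic around 6 eV, matching the ordering of techniques in the text.
```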
== Theoretical aspects ==

The study of molecules by molecular physics and theoretical chemistry is largely based on quantum mechanics and is essential for the understanding of the chemical bond. The simplest of molecules is the hydrogen molecule ion, H2+, and the simplest of all chemical bonds is the one-electron bond. H2+ is composed of two positively charged protons and one negatively charged electron, which means that the Schrödinger equation for the system can be solved more easily due to the lack of electron–electron repulsion. With the development of fast digital computers, approximate solutions for more complicated molecules became possible and are one of the main aspects of computational chemistry.

When trying to define rigorously whether an arrangement of atoms is sufficiently stable to be considered a molecule, IUPAC suggests that it "must correspond to a depression on the potential energy surface that is deep enough to confine at least one vibrational state". This definition does not depend on the nature of the interaction between the atoms, but only on the strength of the interaction. In fact, it includes weakly bound species that would not traditionally be considered molecules, such as the helium dimer, He2, which has one vibrational bound state and is so loosely bound that it is only likely to be observed at very low temperatures. Whether or not an arrangement of atoms is sufficiently stable to be considered a molecule is inherently an operational definition. Philosophically, therefore, a molecule is not a fundamental entity (in contrast, for instance, to an elementary particle); rather, the concept of a molecule is the chemist's way of making a useful statement about the strengths of atomic-scale interactions in the world that we observe.

== External links ==

Molecule of the Month – School of Chemistry, University of Bristol
Wikipedia/Molecules
In quantum geometry or noncommutative geometry, a quantum differential calculus or noncommutative differential structure on an algebra $A$ over a field $k$ means the specification of a space of differential forms over the algebra. The algebra $A$ here is regarded as a coordinate ring, but it is important that it may be noncommutative and hence not an actual algebra of coordinate functions on any actual space, so this represents a point of view replacing the specification of a differentiable structure for an actual space. In ordinary differential geometry one can multiply differential 1-forms by functions from the left and from the right, and there exists an exterior derivative. Correspondingly, a first order quantum differential calculus means at least the following:

1. An $A$-$A$-bimodule $\Omega^1$ over $A$, i.e. one can multiply elements of $\Omega^1$ by elements of $A$ in an associative way: $a(\omega b) = (a\omega)b$ for all $a, b \in A$, $\omega \in \Omega^1$.
2. A linear map $d : A \to \Omega^1$ obeying the Leibniz rule $d(ab) = a(db) + (da)b$ for all $a, b \in A$.
3. $\Omega^1 = \{a(db) \mid a, b \in A\}$.
4. (Optional connectedness condition) $\ker d = k1$.

The last condition is not always imposed but holds in ordinary geometry when the manifold is connected. It says that the only functions killed by $d$ are constant functions.

An exterior algebra or differential graded algebra structure over $A$ means a compatible extension of $\Omega^1$ to include analogues of higher order differential forms,
$$\Omega = \oplus_n \Omega^n, \quad d : \Omega^n \to \Omega^{n+1},$$
obeying a graded Leibniz rule with respect to an associative product on $\Omega$ and obeying $d^2 = 0$. Here $\Omega^0 = A$ and it is usually required that $\Omega$ is generated by $A, \Omega^1$. The product of differential forms is called the exterior or wedge product and is often denoted $\wedge$. The noncommutative or quantum de Rham cohomology is defined as the cohomology of this complex.

A higher order differential calculus can mean an exterior algebra, or it can mean the partial specification of one, up to some highest degree, and with products that would result in a degree beyond the highest being unspecified.

The above definition lies at the crossroads of two approaches to noncommutative geometry. In the Connes approach a more fundamental object is a replacement for the Dirac operator in the form of a spectral triple, and an exterior algebra can be constructed from this data. In the quantum groups approach to noncommutative geometry one starts with the algebra and a choice of first order calculus, but constrained by covariance under a quantum group symmetry.

== Note ==

The above definition is minimal and gives something more general than classical differential calculus even when the algebra $A$ is commutative or consists of functions on an actual space.
This is because we do not demand $a(db) = (db)a$ for all $a, b \in A$, since this would imply that $d(ab - ba) = 0$ for all $a, b \in A$, which would violate axiom 4 when the algebra is noncommutative. As a byproduct, this enlarged definition includes finite difference calculi and quantum differential calculi on finite sets and finite groups (finite group Lie algebra theory).

== Examples ==

For $A = \mathbb{C}[x]$, the algebra of polynomials in one variable, the translation-covariant quantum differential calculi are parametrized by $\lambda \in \mathbb{C}$ and take the form
$$\Omega^1 = \mathbb{C}[x]\,.\,dx, \quad (dx)f(x) = f(x + \lambda)(dx), \quad df = \frac{f(x + \lambda) - f(x)}{\lambda}\,dx.$$
This shows how finite differences arise naturally in quantum geometry. Only the limit $\lambda \to 0$ has functions commuting with 1-forms, which is the special case of high school differential calculus.

For $A = \mathbb{C}[t, t^{-1}]$, the algebra of functions on an algebraic circle, the translation (i.e. circle-rotation) covariant differential calculi are parametrized by $q \neq 0 \in \mathbb{C}$ and take the form
$$\Omega^1 = \mathbb{C}[t, t^{-1}]\,.\,dt, \quad (dt)f(t) = f(qt)(dt), \quad df = \frac{f(qt) - f(t)}{(q - 1)t}\,dt.$$
This shows how $q$-differentials arise naturally in quantum geometry.

For any algebra $A$ one has a universal differential calculus defined by
$$\Omega^1 = \ker(m : A \otimes A \to A), \quad da = 1 \otimes a - a \otimes 1, \quad \forall a \in A,$$
where $m$ is the algebra product. By axiom 3, any first order calculus is a quotient of this.
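The polynomial example above can be checked directly in computer algebra. The sketch below is a minimal illustration assuming sympy is available; the helper names d and right_mult are ours, not from any library. It implements $df = \frac{f(x+\lambda) - f(x)}{\lambda}\,dx$ together with the bimodule relation $(dx)f(x) = f(x+\lambda)(dx)$, and verifies the Leibniz rule and the classical $\lambda \to 0$ limit.

```python
# Minimal sketch (assuming sympy) of the translation-covariant quantum differential
# calculus on C[x]: Omega^1 = C[x].dx with (dx) f(x) = f(x + lam) (dx), and
# d f = (f(x + lam) - f(x)) / lam * dx. A 1-form is represented by its dx-coefficient.
import sympy as sp

x, lam = sp.symbols('x lam')

def d(f):
    """Return the dx-coefficient of d(f), i.e. the lam-difference quotient."""
    return sp.expand((f.subs(x, x + lam) - f) / lam)

def right_mult(coeff, f):
    """Multiply the 1-form coeff*dx on the right by f, using (dx) f = f(x + lam) dx."""
    return sp.expand(coeff * f.subs(x, x + lam))

f = x**2
g = x**3 + 1

# Leibniz rule d(fg) = f d(g) + (d f) g; the second term needs the bimodule
# relation to move g past dx.
lhs = d(f * g)
rhs = sp.expand(f * d(g) + right_mult(d(f), g))
assert sp.simplify(lhs - rhs) == 0

# The classical limit lam -> 0 recovers the ordinary derivative.
assert sp.limit(d(f), lam, 0) == sp.diff(f, x)
```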
== See also ==

Quantum geometry
Noncommutative geometry
Quantum calculus
Quantum group
Quantum spacetime

== Further reading ==

Connes, A. (1994), Noncommutative geometry, Academic Press, ISBN 0-12-185860-X
Majid, S. (2002), A quantum groups primer, London Mathematical Society Lecture Note Series, vol. 292, Cambridge University Press, doi:10.1017/CBO9780511549892, ISBN 978-0-521-01041-2, MR 1904789

Wikipedia/Quantum_differential_calculus
In various interpretations of quantum mechanics, wave function collapse, also called reduction of the state vector, occurs when a wave function – initially in a superposition of several eigenstates – reduces to a single eigenstate due to interaction with the external world. This interaction is called an observation and is the essence of a measurement in quantum mechanics, which connects the wave function with classical observables such as position and momentum. Collapse is one of the two processes by which quantum systems evolve in time; the other is the continuous evolution governed by the Schrödinger equation.

In the Copenhagen interpretation, wave function collapse connects quantum to classical models, with a special role for the observer. By contrast, objective-collapse theories propose an origin in physical processes. In the many-worlds interpretation, collapse does not exist; all wave function outcomes occur, while quantum decoherence accounts for the appearance of collapse. Historically, Werner Heisenberg was the first to use the idea of wave function reduction to explain quantum measurement.

== Mathematical description ==

In quantum mechanics each measurable physical quantity of a quantum system is called an observable, which, for example, could be the position $r$ and the momentum $p$, but also the energy $E$, the $z$ components of spin ($s_z$), and so on. The observable acts as a linear function on the states of the system; its eigenvectors correspond to the quantum states (i.e. eigenstates) and the eigenvalues to the possible values of the observable. The collection of eigenstate/eigenvalue pairs represents all possible values of the observable. Writing $\phi_i$ for an eigenstate and $c_i$ for the corresponding observed value, any arbitrary state of the quantum system can be expressed as a vector using bra–ket notation:
$$|\psi\rangle = \sum_i c_i |\phi_i\rangle.$$
The kets $\{|\phi_i\rangle\}$ specify the different available quantum "alternatives", i.e., particular quantum states. The wave function is a specific representation of a quantum state. Wave functions can therefore always be expressed as eigenstates of an observable, though the converse is not necessarily true.

=== Collapse ===

To account for the experimental result that repeated measurements of a quantum system give the same results, the theory postulates a "collapse" or "reduction of the state vector" upon observation, abruptly converting an arbitrary state into a single component eigenstate of the observable:
$$|\psi\rangle = \sum_i c_i |\phi_i\rangle \rightarrow |\psi'\rangle = |\phi_i\rangle,$$
where the arrow represents a measurement of the observable corresponding to the $\phi$ basis. For any single event, only one eigenvalue is measured, chosen randomly from among the possible values.

=== Meaning of the expansion coefficients ===

The complex coefficients $\{c_i\}$ in the expansion of a quantum state in terms of eigenstates $\{|\phi_i\rangle\}$,
$$|\psi\rangle = \sum_i c_i |\phi_i\rangle,$$
can be written as a (complex) overlap of the corresponding eigenstate and the quantum state:
$$c_i = \langle\phi_i|\psi\rangle.$$
They are called the probability amplitudes. The square modulus $|c_i|^2$ is the probability that a measurement of the observable yields the eigenstate $|\phi_i\rangle$. The sum of the probabilities over all possible outcomes must be one:
$$\langle\psi|\psi\rangle = \sum_i |c_i|^2 = 1.$$
As examples, individual counts in a double-slit experiment with electrons appear at random locations on the detector; after many counts are summed, the distribution shows a wave interference pattern. In a Stern–Gerlach experiment with silver atoms, each particle appears in one of two areas unpredictably, but the accumulated totals show equal numbers of events in each area. This statistical aspect of quantum measurements differs fundamentally from classical mechanics. In quantum mechanics the only information we have about a system is its wave function, and measurements of its wave function can only give statistical information.
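The Born rule just described is easy to simulate. The following sketch (an illustration with made-up amplitudes, not any standard library's API) normalizes a set of complex coefficients $c_i$, computes the outcome probabilities $|c_i|^2$, and models collapse by sampling one eigenstate and replacing the state vector with it.

```python
# Minimal sketch of Born-rule statistics and collapse for a finite-dimensional state.
# The amplitudes below are made-up illustrative numbers.
import numpy as np

rng = np.random.default_rng(seed=0)

c = np.array([1 + 1j, 2, 0.5j])   # expansion coefficients c_i (unnormalized)
c = c / np.linalg.norm(c)         # enforce <psi|psi> = sum |c_i|^2 = 1
probabilities = np.abs(c) ** 2
assert np.isclose(probabilities.sum(), 1.0)

def measure(amplitudes):
    """Sample an outcome i with probability |c_i|^2 and return the collapsed state."""
    i = rng.choice(len(amplitudes), p=np.abs(amplitudes) ** 2)
    collapsed = np.zeros_like(amplitudes)
    collapsed[i] = 1.0            # the state is now the single eigenstate |phi_i>
    return i, collapsed

# Repeated measurements on identically prepared states reproduce the Born probabilities.
counts = np.bincount([measure(c)[0] for _ in range(10_000)], minlength=len(c))
print(probabilities, counts / counts.sum())
```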
== Terminology ==

The two terms "reduction of the state vector" (or "state reduction" for short) and "wave function collapse" are used to describe the same concept. A quantum state is a mathematical description of a quantum system; a quantum state vector uses Hilbert space vectors for the description. Reduction of the state vector replaces the full state vector with a single eigenstate of the observable. The term "wave function" is typically used for a different mathematical representation of the quantum state, one that uses spatial coordinates, also called the "position representation". When the wave function representation is used, the "reduction" is called "wave function collapse".

== The measurement problem ==

The Schrödinger equation describes quantum systems but does not describe their measurement. Solutions to the equation include all possible observable values for measurements, but measurements only result in one definite outcome. This difference is called the measurement problem of quantum mechanics. To predict measurement outcomes from quantum solutions, the orthodox interpretation of quantum theory postulates wave function collapse and uses the Born rule to compute the probable outcomes. Despite the widespread quantitative success of these postulates, scientists remain dissatisfied and have sought more detailed physical models. Rather than suspending the Schrödinger equation during the process of measurement, the measurement apparatus should be included and governed by the laws of quantum mechanics.

== Physical approaches to collapse ==

Quantum theory offers no dynamical description of the "collapse" of the wave function. Viewed as a statistical theory, no description is expected. As Fuchs and Peres put it, "collapse is something that happens in our description of the system, not to the system itself". Various interpretations of quantum mechanics attempt to provide a physical model for collapse. Three treatments of collapse can be found among the common interpretations. The first group includes hidden-variable theories like de Broglie–Bohm theory; here random outcomes only result from unknown values of hidden variables. Results from tests of Bell's theorem show that these variables would need to be non-local. The second group models measurement as quantum entanglement between the quantum state and the measurement apparatus. This results in a simulation of classical statistics called quantum decoherence. This group includes the many-worlds interpretation and consistent histories models. The third group postulates an additional, but as yet undetected, physical basis for the randomness; this group includes, for example, the objective-collapse interpretations. While models in all groups have contributed to better understanding of quantum theory, no alternative explanation for individual events has emerged as more useful than collapse followed by statistical prediction with the Born rule.

The significance ascribed to the wave function varies from interpretation to interpretation, and even within an interpretation (such as the Copenhagen interpretation). If the wave function merely encodes an observer's knowledge of the universe, then the wave function collapse corresponds to the receipt of new information. This is somewhat analogous to the situation in classical physics, except that the classical "wave function" does not necessarily obey a wave equation. If the wave function is physically real, in some sense and to some extent, then the collapse of the wave function is also seen as a real process, to the same extent.

=== Quantum decoherence ===

Quantum decoherence explains why a system interacting with an environment transitions from being a pure state, exhibiting superpositions, to a mixed state, an incoherent combination of classical alternatives. This transition is fundamentally reversible, as the combined state of system and environment is still pure, but for all practical purposes irreversible in the same sense as in the second law of thermodynamics: the environment is a very large and complex quantum system, and it is not feasible to reverse their interaction. Decoherence is thus very important for explaining the classical limit of quantum mechanics, but cannot explain wave function collapse, as all classical alternatives are still present in the mixed state, and wave function collapse selects only one of them.

The form of decoherence known as environment-induced superselection proposes that when a quantum system interacts with the environment, the superpositions apparently reduce to mixtures of classical alternatives. The combined wave function of the system and environment continues to obey the Schrödinger equation throughout this apparent collapse. More importantly, this is not enough to explain actual wave function collapse, as decoherence does not reduce the state to a single eigenstate.

== History ==

The concept of wavefunction collapse was introduced by Werner Heisenberg in his 1927 paper on the uncertainty principle, "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik", and incorporated into the mathematical formulation of quantum mechanics by John von Neumann in his 1932 treatise Mathematische Grundlagen der Quantenmechanik. Heisenberg did not try to specify exactly what the collapse of the wavefunction meant. However, he emphasized that it should not be understood as a physical process. Niels Bohr never mentions wave function collapse in his published work, but he repeatedly cautioned that we must give up a "pictorial representation". Despite the differences between Bohr and Heisenberg, their views are often grouped together as the "Copenhagen interpretation", of which wave function collapse is regarded as a key feature.
John von Neumann's influential 1932 work Mathematical Foundations of Quantum Mechanics took a more formal approach, developing an "ideal" measurement scheme that postulated that there were two processes of wave function change:

1. The probabilistic, non-unitary, non-local, discontinuous change brought about by observation and measurement (state reduction or collapse).
2. The deterministic, unitary, continuous time evolution of an isolated system that obeys the Schrödinger equation.

In 1957, Hugh Everett III proposed a model of quantum mechanics that dropped von Neumann's first postulate. Everett observed that the measurement apparatus was also a quantum system and that its quantum interaction with the system under observation should determine the results. He proposed that the discontinuous change is instead a splitting of a wave function representing the universe. While Everett's approach rekindled interest in foundational quantum mechanics, it left core issues unresolved. Two key issues relate to the origin of the observed classical results: what causes quantum systems to appear classical, and what resolves measurements with the observed probabilities of the Born rule.

Beginning in 1970, H. Dieter Zeh sought a detailed quantum decoherence model for the discontinuous change without postulating collapse. Further work by Wojciech H. Zurek in 1980 led eventually to a large number of papers on many aspects of the concept. Decoherence assumes that every quantum system interacts quantum mechanically with its environment and that such interaction is not separable from the system, a concept called an "open system". Decoherence has been shown to work very quickly and within a minimal environment, but as yet it has not succeeded in providing a detailed model replacing the collapse postulate of orthodox quantum mechanics.

By explicitly dealing with the interaction of object and measuring instrument, von Neumann described a quantum mechanical measurement scheme consistent with wave function collapse. However, he did not prove the necessity of such a collapse. Von Neumann's projection postulate was conceived based on experimental evidence available during the 1930s, in particular Compton scattering. Later work refined the notion of measurements into the more easily discussed first kind, which will give the same value when immediately repeated, and the second kind, which gives different values when repeated.

== External links ==

Quotations related to Wave function collapse at Wikiquote
Wikipedia/Wavefunction_collapse
The de Broglie–Bohm theory is an interpretation of quantum mechanics which postulates that, in addition to the wavefunction, an actual configuration of particles exists, even when unobserved. The evolution over time of the configuration of all particles is defined by a guiding equation. The evolution of the wave function over time is given by the Schrödinger equation. The theory is named after Louis de Broglie (1892–1987) and David Bohm (1917–1992).

The theory is deterministic and explicitly nonlocal: the velocity of any one particle depends on the value of the guiding equation, which depends on the configuration of all the particles under consideration. Measurements are a particular case of quantum processes described by the theory, for which it yields the same quantum predictions as other interpretations of quantum mechanics. The theory does not have a "measurement problem", due to the fact that the particles have a definite configuration at all times. The Born rule in de Broglie–Bohm theory is not a postulate. Rather, in this theory, the link between the probability density and the wave function has the status of a theorem, a result of a separate postulate, the "quantum equilibrium hypothesis", which is additional to the basic principles governing the wave function. There are several equivalent mathematical formulations of the theory.

== Overview ==

De Broglie–Bohm theory is based on the following postulates:

1. There is a configuration $q$ of the universe, described by coordinates $q^k$, which is an element of the configuration space $Q$. The configuration space is different for different versions of pilot-wave theory. For example, this may be the space of positions $\mathbf{Q}_k$ of $N$ particles, or, in the case of field theory, the space of field configurations $\phi(x)$. The configuration evolves (for spin 0) according to the guiding equation
$$m_k \frac{dq^k}{dt}(t) = \hbar \nabla_k \operatorname{Im} \ln \psi(q, t) = \hbar \operatorname{Im}\left(\frac{\nabla_k \psi}{\psi}\right)(q, t) = \frac{m_k \mathbf{j}_k}{\psi^* \psi} = \operatorname{Re}\left(\frac{\hat{\mathbf{P}}_k \Psi}{\Psi}\right),$$
where $\mathbf{j}$ is the probability current or probability flux, and $\hat{\mathbf{P}}$ is the momentum operator. Here, $\psi(q, t)$ is the standard complex-valued wavefunction from quantum theory, which evolves according to the Schrödinger equation
$$i\hbar \frac{\partial}{\partial t}\psi(q, t) = -\sum_{i=1}^{N} \frac{\hbar^2}{2m_i} \nabla_i^2 \psi(q, t) + V(q)\psi(q, t).$$
This completes the specification of the theory for any quantum theory with a Hamilton operator of the type $H = \sum \frac{1}{2m_i}\hat{p}_i^2 + V(\hat{q})$.
2. The configuration is distributed according to $|\psi(q, t)|^2$ at some moment of time $t$, and this consequently holds for all times. Such a state is named quantum equilibrium. With quantum equilibrium, this theory agrees with the results of standard quantum mechanics.
Even though this latter relation is frequently presented as an axiom of the theory, Bohm presented it as derivable from statistical-mechanical arguments in the original papers of 1952. This argument was further supported by the work of Bohm in 1953 and was substantiated by Vigier and Bohm's paper of 1954, in which they introduced stochastic fluid fluctuations that drive a process of asymptotic relaxation from quantum non-equilibrium to quantum equilibrium (ρ → |ψ|2). === Double-slit experiment === The double-slit experiment is an illustration of wave–particle duality. In it, a beam of particles (such as electrons) travels through a barrier that has two slits. If a detector screen is on the side beyond the barrier, the pattern of detected particles shows interference fringes characteristic of waves arriving at the screen from two sources (the two slits); however, the interference pattern is made up of individual dots corresponding to particles that had arrived on the screen. The system seems to exhibit the behaviour of both waves (interference patterns) and particles (dots on the screen). If this experiment is modified so that one slit is closed, no interference pattern is observed. Thus, the state of both slits affects the final results. It can also be arranged to have a minimally invasive detector at one of the slits to detect which slit the particle went through. When that is done, the interference pattern disappears. In de Broglie–Bohm theory, the wavefunction is defined at both slits, but each particle has a well-defined trajectory that passes through exactly one of the slits. The final position of the particle on the detector screen and the slit through which the particle passes is determined by the initial position of the particle. Such initial position is not knowable or controllable by the experimenter, so there is an appearance of randomness in the pattern of detection. In Bohm's 1952 papers he used the wavefunction to construct a quantum potential that, when included in Newton's equations, gave the trajectories of the particles streaming through the two slits. In effect the wavefunction interferes with itself and guides the particles by the quantum potential in such a way that the particles avoid the regions in which the interference is destructive and are attracted to the regions in which the interference is constructive, resulting in the interference pattern on the detector screen. To explain the behavior when the particle is detected to go through one slit, one needs to appreciate the role of the conditional wavefunction and how it results in the collapse of the wavefunction; this is explained below. The basic idea is that the environment registering the detection effectively separates the two wave packets in configuration space. == Theory == === Pilot wave === The de Broglie–Bohm theory describes a pilot wave ψ ( q , t ) ∈ C {\displaystyle \psi (q,t)\in \mathbb {C} } in a configuration space Q {\displaystyle Q} and trajectories q ( t ) ∈ Q {\displaystyle q(t)\in Q} of particles as in classical mechanics but defined by non-Newtonian mechanics. At every moment of time there exists not only a wavefunction, but also a well-defined configuration of the whole universe (i.e., the system as defined by the boundary conditions used in solving the Schrödinger equation). The de Broglie–Bohm theory works on particle positions and trajectories like classical mechanics but the dynamics are different. 
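The two-slit behaviour just described can be reproduced with a short computation. The sketch below (Python/NumPy; natural units and all slit parameters are illustrative assumptions) superposes two freely spreading Gaussian packets, one per slit, and integrates the guiding equation for an ensemble of initial positions; each deterministic trajectory passes through exactly one slit, yet the arrival positions bunch into fringes because the velocity field is computed from the interfering sum.

```python
import numpy as np

hbar, m = 1.0, 1.0                      # natural units (assumption)
a, sigma = 2.0, 0.5                     # "slit" positions ±a and widths (illustrative)

def packet(x, t, centre):
    """A freely spreading Gaussian packet and its x-derivative (closed form)."""
    st = 1.0 + 1j * hbar * t / (2 * m * sigma**2)
    psi = st**-0.5 * np.exp(-(x - centre)**2 / (4 * sigma**2 * st))
    return psi, psi * (-(x - centre) / (2 * sigma**2 * st))

def velocity(x, t):
    """Guiding equation v = (hbar/m) Im(d_x psi / psi) for the two-slit superposition."""
    p1, d1 = packet(x, t, +a)
    p2, d2 = packet(x, t, -a)
    return (hbar / m) * np.imag((d1 + d2) / (p1 + p2))

# Initial positions spread across the two slits; by symmetry the velocity vanishes
# on the axis x = 0, so each trajectory stays on one side and passes one slit only.
x = np.concatenate([np.linspace(-a - sigma, -a + sigma, 12),
                    np.linspace(+a - sigma, +a + sigma, 12)])
t, dt = 0.0, 2.5e-4
for _ in range(8000):                   # Euler steps to t = 2; near wavefunction nodes
    x += velocity(x, t) * dt            # a smaller or adaptive step may be needed
    t += dt

print(np.round(np.sort(x), 2))          # final positions cluster into fringe bands
```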
In classical mechanics, the accelerations of the particles are imparted directly by forces, which exist in physical three-dimensional space. In de Broglie–Bohm theory, the quantum field exerts "a new kind of 'quantum-mechanical' force". Bohm hypothesized that each particle has a "complex and subtle inner structure" that provides the capacity to react to the information provided by the wavefunction via the quantum potential. Also, unlike in classical mechanics, physical properties (e.g., mass, charge) are spread out over the wavefunction in de Broglie–Bohm theory, not localized at the position of the particle. The wavefunction itself, and not the particles, determines the dynamical evolution of the system: the particles do not act back onto the wave function. As Bohm and Hiley worded it, "the Schrödinger equation for the quantum field does not have sources, nor does it have any other way by which the field could be directly affected by the condition of the particles [...] the quantum theory can be understood completely in terms of the assumption that the quantum field has no sources or other forms of dependence on the particles". P. Holland considers this lack of reciprocal action of particles and wave function to be one "[a]mong the many nonclassical properties exhibited by this theory". Holland later called this a merely apparent lack of back reaction, due to the incompleteness of the description. In what follows, the setup for one particle moving in R 3 {\displaystyle \mathbb {R} ^{3}} is given, followed by the setup for N particles moving in 3 dimensions. In the first instance, configuration space and real space are the same, while in the second, real space is still R 3 {\displaystyle \mathbb {R} ^{3}} , but configuration space becomes R 3 N {\displaystyle \mathbb {R} ^{3N}} . While the particle positions themselves are in real space, the velocity field and wavefunction are on configuration space, which is how particles are entangled with each other in this theory. Extensions to this theory include spin and more complicated configuration spaces. We use variations of Q {\displaystyle \mathbf {Q} } for particle positions, while ψ {\displaystyle \psi } represents the complex-valued wavefunction on configuration space. === Guiding equation === For a spinless single particle moving in R 3 {\displaystyle \mathbb {R} ^{3}} , the particle's velocity is d Q d t ( t ) = ℏ m Im ⁡ ( ∇ ψ ψ ) ( Q , t ) . {\displaystyle {\frac {d\mathbf {Q} }{dt}}(t)={\frac {\hbar }{m}}\operatorname {Im} \left({\frac {\nabla \psi }{\psi }}\right)(\mathbf {Q} ,t).} For many particles labeled Q k {\displaystyle \mathbf {Q} _{k}} for the k {\displaystyle k} -th particle their velocities are d Q k d t ( t ) = ℏ m k Im ⁡ ( ∇ k ψ ψ ) ( Q 1 , Q 2 , … , Q N , t ) . {\displaystyle {\frac {d\mathbf {Q} _{k}}{dt}}(t)={\frac {\hbar }{m_{k}}}\operatorname {Im} \left({\frac {\nabla _{k}\psi }{\psi }}\right)(\mathbf {Q} _{1},\mathbf {Q} _{2},\ldots ,\mathbf {Q} _{N},t).} The main fact to notice is that this velocity field depends on the actual positions of all of the N {\displaystyle N} particles in the universe. As explained below, in most experimental situations, the influence of all of those particles can be encapsulated into an effective wavefunction for a subsystem of the universe. === Schrödinger equation === The one-particle Schrödinger equation governs the time evolution of a complex-valued wavefunction on R 3 {\displaystyle \mathbb {R} ^{3}} .
The equation represents a quantized version of the total energy of a classical system evolving under a real-valued potential function V {\displaystyle V} on R 3 {\displaystyle \mathbb {R} ^{3}} : i ℏ ∂ ∂ t ψ = − ℏ 2 2 m ∇ 2 ψ + V ψ . {\displaystyle i\hbar {\frac {\partial }{\partial t}}\psi =-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\psi +V\psi .} For many particles, the equation is the same except that ψ {\displaystyle \psi } and V {\displaystyle V} are now on configuration space, R 3 N {\displaystyle \mathbb {R} ^{3N}} : i ℏ ∂ ∂ t ψ = − ∑ k = 1 N ℏ 2 2 m k ∇ k 2 ψ + V ψ . {\displaystyle i\hbar {\frac {\partial }{\partial t}}\psi =-\sum _{k=1}^{N}{\frac {\hbar ^{2}}{2m_{k}}}\nabla _{k}^{2}\psi +V\psi .} This is the same wavefunction as in conventional quantum mechanics. === Relation to the Born rule === In Bohm's original papers, he discusses how de Broglie–Bohm theory results in the usual measurement results of quantum mechanics. The main idea is that this is true if the positions of the particles satisfy the statistical distribution given by | ψ | 2 {\displaystyle |\psi |^{2}} . And that distribution is guaranteed to be true for all time by the guiding equation if the initial distribution of the particles satisfies | ψ | 2 {\displaystyle |\psi |^{2}} . For a given experiment, one can postulate this as being true and verify it experimentally. But, as argued by Dürr et al., one needs to argue that this distribution for subsystems is typical. The authors argue that | ψ | 2 {\displaystyle |\psi |^{2}} , by virtue of its equivariance under the dynamical evolution of the system, is the appropriate measure of typicality for initial conditions of the positions of the particles. The authors then prove that the vast majority of possible initial configurations will give rise to statistics obeying the Born rule (i.e., | ψ | 2 {\displaystyle |\psi |^{2}} ) for measurement outcomes. In summary, in a universe governed by the de Broglie–Bohm dynamics, Born rule behavior is typical. The situation is thus analogous to the situation in classical statistical physics. A low-entropy initial condition will, with overwhelmingly high probability, evolve into a higher-entropy state: behavior consistent with the second law of thermodynamics is typical. There are anomalous initial conditions that would give rise to violations of the second law; however, in the absence of some very detailed evidence supporting the realization of one of those conditions, it would be quite unreasonable to expect anything but the actually observed uniform increase of entropy. Similarly, in the de Broglie–Bohm theory, there are anomalous initial conditions that would produce measurement statistics in violation of the Born rule (conflicting with the predictions of standard quantum theory), but the typicality theorem shows that, absent some specific reason to believe one of those special initial conditions was in fact realized, Born rule behavior is what one should expect. It is in this qualified sense that the Born rule is, for the de Broglie–Bohm theory, a theorem rather than (as in ordinary quantum theory) an additional postulate. It can also be shown that a distribution of particles which is not distributed according to the Born rule (that is, a distribution "out of quantum equilibrium") and evolving under the de Broglie–Bohm dynamics is overwhelmingly likely to evolve dynamically into a state distributed as | ψ | 2 {\displaystyle |\psi |^{2}} .
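Equivariance can be illustrated numerically. In the sketch below (Python/NumPy; natural units and the free Gaussian packet are illustrative assumptions), an ensemble is drawn from |ψ(x,0)|², each point is transported by the guiding equation, and the spread of the transported ensemble is compared with the spread of |ψ(x,t)|²; the two agree, so an ensemble that starts in quantum equilibrium stays in it.

```python
import numpy as np

rng = np.random.default_rng(0)
hbar, m, sigma = 1.0, 1.0, 1.0          # natural units and packet width (assumptions)

def velocity(x, t):
    # For the freely spreading packet psi ∝ st**-0.5 * exp(-x^2 / (4 sigma^2 st)),
    # st = 1 + i hbar t / (2 m sigma^2), the guiding equation reduces to:
    st = 1.0 + 1j * hbar * t / (2 * m * sigma**2)
    return (hbar / m) * np.imag(-x / (2 * sigma**2 * st))

# Quantum equilibrium at t = 0: positions sampled from |psi(x, 0)|^2 = N(0, sigma^2)
x = rng.normal(0.0, sigma, size=100_000)

t, dt = 0.0, 1e-3
for _ in range(2000):                   # transport the ensemble to t = 2
    x += velocity(x, t) * dt
    t += dt

# Equivariance: the ensemble should still be |psi(x, t)|^2-distributed, a Gaussian
# of standard deviation sigma * sqrt(1 + (hbar t / (2 m sigma^2))^2):
print(x.std(), sigma * np.sqrt(1 + (hbar * t / (2 * m * sigma**2))**2))
```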
=== The conditional wavefunction of a subsystem === In the formulation of the de Broglie–Bohm theory, there is only a wavefunction for the entire universe (which always evolves by the Schrödinger equation). Here, the "universe" is simply the system limited by the same boundary conditions used to solve the Schrödinger equation. However, once the theory is formulated, it is convenient to introduce a notion of wavefunction also for subsystems of the universe. Let us write the wavefunction of the universe as ψ ( t , q I , q II ) {\displaystyle \psi (t,q^{\text{I}},q^{\text{II}})} , where q I {\displaystyle q^{\text{I}}} denotes the configuration variables associated to some subsystem (I) of the universe, and q II {\displaystyle q^{\text{II}}} denotes the remaining configuration variables. Denote respectively by Q I ( t ) {\displaystyle Q^{\text{I}}(t)} and Q II ( t ) {\displaystyle Q^{\text{II}}(t)} the actual configuration of subsystem (I) and of the rest of the universe. For simplicity, we consider here only the spinless case. The conditional wavefunction of subsystem (I) is defined by ψ I ( t , q I ) = ψ ( t , q I , Q II ( t ) ) . {\displaystyle \psi ^{\text{I}}(t,q^{\text{I}})=\psi (t,q^{\text{I}},Q^{\text{II}}(t)).} It follows immediately from the fact that Q ( t ) = ( Q I ( t ) , Q II ( t ) ) {\displaystyle Q(t)=(Q^{\text{I}}(t),Q^{\text{II}}(t))} satisfies the guiding equation that also the configuration Q I ( t ) {\displaystyle Q^{\text{I}}(t)} satisfies a guiding equation identical to the one presented in the formulation of the theory, with the universal wavefunction ψ {\displaystyle \psi } replaced with the conditional wavefunction ψ I {\displaystyle \psi ^{\text{I}}} . Also, the fact that Q ( t ) {\displaystyle Q(t)} is random with probability density given by the square modulus of ψ ( t , ⋅ ) {\displaystyle \psi (t,\cdot )} implies that the conditional probability density of Q I ( t ) {\displaystyle Q^{\text{I}}(t)} given Q II ( t ) {\displaystyle Q^{\text{II}}(t)} is given by the square modulus of the (normalized) conditional wavefunction ψ I ( t , ⋅ ) {\displaystyle \psi ^{\text{I}}(t,\cdot )} (in the terminology of Dürr et al. this fact is called the fundamental conditional probability formula). Unlike the universal wavefunction, the conditional wavefunction of a subsystem does not always evolve by the Schrödinger equation, but in many situations it does. For instance, if the universal wavefunction factors as ψ ( t , q I , q II ) = ψ I ( t , q I ) ψ II ( t , q II ) , {\displaystyle \psi (t,q^{\text{I}},q^{\text{II}})=\psi ^{\text{I}}(t,q^{\text{I}})\psi ^{\text{II}}(t,q^{\text{II}}),} then the conditional wavefunction of subsystem (I) is (up to an irrelevant scalar factor) equal to ψ I {\displaystyle \psi ^{\text{I}}} (this is what standard quantum theory would regard as the wavefunction of subsystem (I)). If, in addition, the Hamiltonian does not contain an interaction term between subsystems (I) and (II), then ψ I {\displaystyle \psi ^{\text{I}}} does satisfy a Schrödinger equation. 
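The effect of conditioning is easy to exhibit in a toy model (the two-branch structure used here is treated more generally in the next paragraph). In the sketch below (Python/NumPy; the unnormalized wavefunction and all numbers are illustrative assumptions), the universal wavefunction is a sum of two well-separated product branches; evaluating it at an actual environment configuration Q^II near one branch leaves a conditional wavefunction that is, up to normalization, just that branch.

```python
import numpy as np

def gauss(q, centre, s=0.5):
    """Unnormalized Gaussian packet of width s (illustrative)."""
    return np.exp(-(q - centre)**2 / (4 * s**2))

def psi(q1, q2):
    """Toy universal wavefunction: two well-separated product branches."""
    return gauss(q1, -3) * gauss(q2, -3) + gauss(q1, +3) * gauss(q2, +3)

q1 = np.linspace(-10, 10, 401)          # grid for the subsystem (I) coordinate

# Conditional wavefunction: evaluate psi at the actual configuration of (II).
Q_II = 3.1                              # suppose the environment sits near the +3 branch
psi_cond = psi(q1, Q_II)

# The -3 branch is suppressed by gauss(Q_II, -3) ~ exp(-37), so the conditional
# wavefunction equals the +3 branch up to a negligible remainder:
branch = gauss(q1, +3) * gauss(Q_II, +3)
print(np.max(np.abs(psi_cond - branch)))    # ~1e-16: one branch survives conditioning
```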
More generally, assume that the universal wave function ψ {\displaystyle \psi } can be written in the form ψ ( t , q I , q II ) = ψ I ( t , q I ) ψ II ( t , q II ) + ϕ ( t , q I , q II ) , {\displaystyle \psi (t,q^{\text{I}},q^{\text{II}})=\psi ^{\text{I}}(t,q^{\text{I}})\psi ^{\text{II}}(t,q^{\text{II}})+\phi (t,q^{\text{I}},q^{\text{II}}),} where ϕ {\displaystyle \phi } solves the Schrödinger equation and ϕ ( t , q I , Q II ( t ) ) = 0 {\displaystyle \phi (t,q^{\text{I}},Q^{\text{II}}(t))=0} for all t {\displaystyle t} and q I {\displaystyle q^{\text{I}}} . Then, again, the conditional wavefunction of subsystem (I) is (up to an irrelevant scalar factor) equal to ψ I {\displaystyle \psi ^{\text{I}}} , and if the Hamiltonian does not contain an interaction term between subsystems (I) and (II), then ψ I {\displaystyle \psi ^{\text{I}}} satisfies a Schrödinger equation. The fact that the conditional wavefunction of a subsystem does not always evolve by the Schrödinger equation is related to the fact that the usual collapse rule of standard quantum theory emerges from the Bohmian formalism when one considers conditional wavefunctions of subsystems. == Extensions == === Relativity === Pilot-wave theory is explicitly nonlocal, which is in ostensible conflict with special relativity. Various extensions of "Bohm-like" mechanics exist that attempt to resolve this problem. Bohm himself in 1953 presented an extension of the theory satisfying the Dirac equation for a single particle. However, this was not extensible to the many-particle case because it used an absolute time. A renewed interest in constructing Lorentz-invariant extensions of Bohmian theory arose in the 1990s; see Bohm and Hiley: The Undivided Universe and references therein. Another approach is given by Dürr et al., who use Bohm–Dirac models and a Lorentz-invariant foliation of space-time. Thus, Dürr et al. (1999) showed that it is possible to formally restore Lorentz invariance for the Bohm–Dirac theory by introducing additional structure. This approach still requires a foliation of space-time. While this is in conflict with the standard interpretation of relativity, the preferred foliation, if unobservable, does not lead to any empirical conflicts with relativity. In 2013, Dürr et al. suggested that the required foliation could be covariantly determined by the wavefunction. The relation between nonlocality and preferred foliation can be better understood as follows. In de Broglie–Bohm theory, nonlocality manifests as the fact that the velocity and acceleration of one particle depend on the instantaneous positions of all other particles. On the other hand, in the theory of relativity the concept of instantaneousness does not have an invariant meaning. Thus, to define particle trajectories, one needs an additional rule that defines which space-time points should be considered instantaneous. The simplest way to achieve this is to introduce a preferred foliation of space-time by hand, such that each hypersurface of the foliation defines a hypersurface of equal time. Initially, it had been considered impossible to set out a description of photon trajectories in the de Broglie–Bohm theory in view of the difficulties of describing bosons relativistically. In 1996, Partha Ghose presented a relativistic quantum-mechanical description of spin-0 and spin-1 bosons starting from the Duffin–Kemmer–Petiau equation, setting out Bohmian trajectories for massive bosons and for massless bosons (and therefore photons).
In 2001, Jean-Pierre Vigier emphasized the importance of deriving a well-defined description of light in terms of particle trajectories in the framework of either Bohmian mechanics or Nelson's stochastic mechanics. The same year, Ghose worked out Bohmian photon trajectories for specific cases. Subsequent weak-measurement experiments yielded trajectories that coincide with the predicted trajectories. The significance of these experimental findings is controversial. Chris Dewdney and G. Horton have proposed a relativistically covariant, wave-functional formulation of Bohm's quantum field theory and have extended it to a form that allows the inclusion of gravity. Nikolić has proposed a Lorentz-covariant formulation of the Bohmian interpretation of many-particle wavefunctions. He has developed a generalized relativistic-invariant probabilistic interpretation of quantum theory, in which | ψ | 2 {\displaystyle |\psi |^{2}} is no longer a probability density in space, but a probability density in space-time. He uses this generalized probabilistic interpretation to formulate a relativistic-covariant version of de Broglie–Bohm theory without introducing a preferred foliation of space-time. His work also covers the extension of the Bohmian interpretation to a quantization of fields and strings. Roderick I. Sutherland at the University of Sydney has a Lagrangian formalism for the pilot wave and its beables. It draws on Yakir Aharonov's retrocausal weak measurements to explain many-particle entanglement in a special relativistic way without the need for configuration space. The basic idea was already published by Olivier Costa de Beauregard in the 1950s and is also used by John Cramer in his transactional interpretation, except that the beables exist between the von Neumann strong projection operator measurements. Sutherland's Lagrangian includes two-way action-reaction between pilot wave and beables. Therefore, it is a post-quantum non-statistical theory with final boundary conditions that violate the no-signal theorems of quantum theory. Just as special relativity is a limiting case of general relativity when the spacetime curvature vanishes, so, too, is statistical no-entanglement-signaling quantum theory with the Born rule a limiting case of the post-quantum action-reaction Lagrangian when the reaction is set to zero and the final boundary condition is integrated out. === Spin === To incorporate spin, the wavefunction becomes complex-vector-valued. The value space is called spin space; for a spin-1/2 particle, spin space can be taken to be C 2 {\displaystyle \mathbb {C} ^{2}} . The guiding equation is modified by taking inner products in spin space to reduce the complex vectors to complex numbers.
The Schrödinger equation is modified by adding a Pauli spin term: d Q k d t ( t ) = ℏ m k Im ⁡ ( ( ψ , D k ψ ) ( ψ , ψ ) ) ( Q 1 , … , Q N , t ) , i ℏ ∂ ∂ t ψ = ( − ∑ k = 1 N ℏ 2 2 m k D k 2 + V − ∑ k = 1 N μ k S k ℏ s k ⋅ B ( q k ) ) ψ , {\displaystyle {\begin{aligned}{\frac {d\mathbf {Q} _{k}}{dt}}(t)&={\frac {\hbar }{m_{k}}}\operatorname {Im} \left({\frac {(\psi ,D_{k}\psi )}{(\psi ,\psi )}}\right)(\mathbf {Q} _{1},\ldots ,\mathbf {Q} _{N},t),\\i\hbar {\frac {\partial }{\partial t}}\psi &=\left(-\sum _{k=1}^{N}{\frac {\hbar ^{2}}{2m_{k}}}D_{k}^{2}+V-\sum _{k=1}^{N}\mu _{k}{\frac {\mathbf {S} _{k}}{\hbar s_{k}}}\cdot \mathbf {B} (\mathbf {q} _{k})\right)\psi ,\end{aligned}}} where m k , e k , μ k {\displaystyle m_{k},e_{k},\mu _{k}} — the mass, charge and magnetic moment of the k {\displaystyle k} –th particle S k {\displaystyle \mathbf {S} _{k}} — the appropriate spin operator acting in the k {\displaystyle k} –th particle's spin space s k {\displaystyle s_{k}} — spin quantum number of the k {\displaystyle k} –th particle ( s k = 1 / 2 {\displaystyle s_{k}=1/2} for electron) A {\displaystyle \mathbf {A} } is vector potential in R 3 {\displaystyle \mathbb {R} ^{3}} B = ∇ × A {\displaystyle \mathbf {B} =\nabla \times \mathbf {A} } is the magnetic field in R 3 {\displaystyle \mathbb {R} ^{3}} D k = ∇ k − i e k ℏ A ( q k ) {\textstyle D_{k}=\nabla _{k}-{\frac {ie_{k}}{\hbar }}\mathbf {A} (\mathbf {q} _{k})} is the covariant derivative, involving the vector potential, ascribed to the coordinates of k {\displaystyle k} –th particle (in SI units) ψ {\displaystyle \psi } — the wavefunction defined on the multidimensional configuration space; e.g. a system consisting of two spin-1/2 particles and one spin-1 particle has a wavefunction of the form ψ : R 9 × R → C 2 ⊗ C 2 ⊗ C 3 , {\displaystyle \psi :\mathbb {R} ^{9}\times \mathbb {R} \to \mathbb {C} ^{2}\otimes \mathbb {C} ^{2}\otimes \mathbb {C} ^{3},} where ⊗ {\displaystyle \otimes } is a tensor product, so this spin space is 12-dimensional ( ⋅ , ⋅ ) {\displaystyle (\cdot ,\cdot )} is the inner product in spin space C d {\displaystyle \mathbb {C} ^{d}} : ( ϕ , ψ ) = ∑ s = 1 d ϕ s ∗ ψ s . {\displaystyle (\phi ,\psi )=\sum _{s=1}^{d}\phi _{s}^{*}\psi _{s}.} === Stochastic electrodynamics === Stochastic electrodynamics (SED) is an extension of the de Broglie–Bohm interpretation of quantum mechanics, with the electromagnetic zero-point field (ZPF) playing a central role as the guiding pilot-wave. Modern approaches to SED, like those proposed by the group around late Gerhard Grössing, among others, consider wave and particle-like quantum effects as well-coordinated emergent systems. These emergent systems are the result of speculated and calculated sub-quantum interactions with the zero-point field. === Quantum field theory === In Dürr et al., the authors describe an extension of de Broglie–Bohm theory for handling creation and annihilation operators, which they refer to as "Bell-type quantum field theories". The basic idea is that configuration space becomes the (disjoint) space of all possible configurations of any number of particles. For part of the time, the system evolves deterministically under the guiding equation with a fixed number of particles. But under a stochastic process, particles may be created and annihilated. The distribution of creation events is dictated by the wavefunction. The wavefunction itself is evolving at all times over the full multi-particle configuration space. 
Hrvoje Nikolić introduces a purely deterministic de Broglie–Bohm theory of particle creation and destruction, according to which particle trajectories are continuous, but particle detectors behave as if particles have been created or destroyed even when a true creation or destruction of particles does not take place. === Curved space === To extend de Broglie–Bohm theory to curved space (Riemannian manifolds in mathematical parlance), one simply notes that all of the elements of these equations make sense, such as gradients and Laplacians. Thus, we use equations that have the same form as above. Topological and boundary conditions may apply in supplementing the evolution of the Schrödinger equation. For a de Broglie–Bohm theory on curved space with spin, the spin space becomes a vector bundle over configuration space, and the potential in the Schrödinger equation becomes a local self-adjoint operator acting on that space. The field equations for the de Broglie–Bohm theory in the relativistic case with spin can also be given for curved space-times with torsion. In a general spacetime with curvature and torsion, the guiding equation for the four-velocity u i {\displaystyle u^{i}} of an elementary fermion particle is u i = e μ i ψ ¯ γ μ ψ ψ ¯ ψ , {\displaystyle u^{i}={\frac {e_{\mu }^{i}{\bar {\psi }}\gamma ^{\mu }\psi }{{\bar {\psi }}\psi }},} where the wave function ψ {\displaystyle \psi } is a spinor, ψ ¯ {\displaystyle {\bar {\psi }}} is the corresponding adjoint, γ μ {\displaystyle \gamma ^{\mu }} are the Dirac matrices, and e μ i {\displaystyle e_{\mu }^{i}} is a tetrad. If the wave function propagates according to the curved Dirac equation, then the particle moves according to the Mathisson–Papapetrou equations of motion, which are an extension of the geodesic equation. This relativistic wave-particle duality follows from the conservation laws for the spin tensor and energy-momentum tensor, and also from the covariant Heisenberg picture equation of motion. === Exploiting nonlocality === De Broglie and Bohm's causal interpretation of quantum mechanics was later extended by Bohm, Vigier, Hiley, Valentini and others to include stochastic properties. Bohm and other physicists, including Valentini, view the Born rule linking R {\displaystyle R} to the probability density function ρ = R 2 {\displaystyle \rho =R^{2}} as representing not a basic law, but a result of a system having reached quantum equilibrium during the course of the time development under the Schrödinger equation. It can be shown that, once an equilibrium has been reached, the system remains in such equilibrium over the course of its further evolution: this follows from the continuity equation associated with the Schrödinger evolution of ψ {\displaystyle \psi } . It is less straightforward to demonstrate whether and how such an equilibrium is reached in the first place. Antony Valentini has extended de Broglie–Bohm theory to include signal nonlocality that would allow entanglement to be used as a stand-alone communication channel without a secondary classical "key" signal to "unlock" the message encoded in the entanglement. This violates orthodox quantum theory but has the virtue of making the parallel universes of the chaotic inflation theory observable in principle. Unlike in de Broglie–Bohm theory, in Valentini's theory the wavefunction evolution also depends on the ontological variables. This introduces an instability, a feedback loop that pushes the hidden variables out of "sub-quantal heat death".
The resulting theory becomes nonlinear and non-unitary. Valentini argues that the laws of quantum mechanics are emergent and form a "quantum equilibrium" that is analogous to thermal equilibrium in classical dynamics, such that other "quantum non-equilibrium" distributions may in principle be observed and exploited, for which the statistical predictions of quantum theory are violated. It is controversially argued that quantum theory is merely a special case of a much wider nonlinear physics, a physics in which non-local (superluminal) signalling is possible, and in which the uncertainty principle can be violated. == Results == Below are some highlights of the results that arise out of an analysis of de Broglie–Bohm theory. Experimental results agree with all of quantum mechanics' standard predictions insofar as it has them. But while standard quantum mechanics is limited to discussing the results of "measurements", de Broglie–Bohm theory governs the dynamics of a system without the intervention of outside observers (p. 117 in Bell). The basis for agreement with standard quantum mechanics is that the particles are distributed according to | ψ | 2 {\displaystyle |\psi |^{2}} . This is a statement of observer ignorance: the initial positions are represented by a statistical distribution, so deterministic trajectories will result in a statistical distribution. === Measuring spin and polarization === According to ordinary quantum theory, it is not possible to measure the spin or polarization of a particle directly; instead, the component in one direction is measured; the outcome from a single particle may be 1, meaning that the particle is aligned with the measuring apparatus, or −1, meaning that it is aligned the opposite way. An ensemble of particles prepared by a polarizer to be in state 1 will all measure polarized in state 1 in a subsequent apparatus. A polarized ensemble sent through a second polarizer set at an angle to the first will result in some values of 1 and some of −1, with a probability that depends on the relative alignment. For a full explanation of this, see the Stern–Gerlach experiment. In de Broglie–Bohm theory, the results of a spin experiment cannot be analyzed without some knowledge of the experimental setup. It is possible to modify the setup so that the trajectory of the particle is unaffected, but the particle with one setup registers as spin-up, while in the other setup it registers as spin-down. Thus, for the de Broglie–Bohm theory, the particle's spin is not an intrinsic property of the particle; instead spin is, so to speak, in the wavefunction of the particle in relation to the particular device being used to measure the spin. This is an illustration of what is sometimes referred to as contextuality and is related to naive realism about operators. Interpretationally, measurement results are a deterministic property of the system and its environment, which includes information about the experimental setup including the context of co-measured observables; in no sense does the system itself possess the property being measured, as would have been the case in classical physics. === Measurements, the quantum formalism, and observer independence === De Broglie–Bohm theory gives almost the same results as (non-relativistic) quantum mechanics. It treats the wavefunction as a fundamental object in the theory, as the wavefunction describes how the particles move. This means that no experiment can distinguish between the two theories.
This section outlines the ideas as to how the standard quantum formalism arises out of de Broglie–Bohm theory. ==== Collapse of the wavefunction ==== De Broglie–Bohm theory is a theory that applies primarily to the whole universe. That is, there is a single wavefunction governing the motion of all of the particles in the universe according to the guiding equation. Theoretically, the motion of one particle depends on the positions of all of the other particles in the universe. In some situations, such as in experimental systems, we can represent the system itself in terms of a de Broglie–Bohm theory in which the wavefunction of the system is obtained by conditioning on the environment of the system. Thus, the system can be analyzed with the Schrödinger equation and the guiding equation, with an initial | ψ | 2 {\displaystyle |\psi |^{2}} distribution for the particles in the system (see the section on the conditional wavefunction of a subsystem for details). It requires a special setup for the conditional wavefunction of a system to obey a quantum evolution. When a system interacts with its environment, such as through a measurement, the conditional wavefunction of the system evolves in a different way. The evolution of the universal wavefunction can become such that the wavefunction of the system appears to be in a superposition of distinct states. But if the environment has recorded the results of the experiment, then using the actual Bohmian configuration of the environment to condition on, the conditional wavefunction collapses to just one alternative, the one corresponding to the measurement results. Collapse of the universal wavefunction never occurs in de Broglie–Bohm theory. Its entire evolution is governed by the Schrödinger equation, and the particles' evolutions are governed by the guiding equation. Collapse only occurs in a phenomenological way for systems that seem to follow their own Schrödinger equation. As this is an effective description of the system, it is a matter of choice as to what to include in the experimental system, and this will affect when "collapse" occurs. ==== Operators as observables ==== In the standard quantum formalism, measuring observables is generally thought of as measuring operators on the Hilbert space. For example, measuring position is considered to be a measurement of the position operator. This relationship between physical measurements and Hilbert space operators is, for standard quantum mechanics, an additional axiom of the theory. The de Broglie–Bohm theory, by contrast, requires no such measurement axioms (and measurement as such is not a dynamically distinct or special sub-category of physical processes in the theory). In particular, the usual operators-as-observables formalism is, for de Broglie–Bohm theory, a theorem. A major point of the analysis is that many of the measurements of the observables do not correspond to properties of the particles; they are (as in the case of spin discussed above) measurements of the wavefunction. In the history of de Broglie–Bohm theory, the proponents have often had to deal with claims that this theory is impossible. Such arguments are generally based on inappropriate analysis of operators as observables. If one believes that spin measurements are indeed measuring the spin of a particle that existed prior to the measurement, then one does reach contradictions. De Broglie–Bohm theory deals with this by noting that spin is not a feature of the particle, but rather that of the wavefunction.
As such, it only has a definite outcome once the experimental apparatus is chosen. Once that is taken into account, the impossibility theorems become irrelevant. There are also objections to this theory based on what it says about particular situations usually involving eigenstates of an operator. For example, the ground state of hydrogen is a real wavefunction. According to the guiding equation, this means that the electron is at rest when in this state. Nevertheless, it is distributed according to | ψ | 2 {\displaystyle |\psi |^{2}} , and no contradiction with experimental results can be detected. Treating operators as observables leads many to believe that many operators are equivalent. De Broglie–Bohm theory, from this perspective, chooses the position observable as a favored observable rather than, say, the momentum observable. Again, the link to the position observable is a consequence of the dynamics. The motivation for de Broglie–Bohm theory is to describe a system of particles. This implies that the goal of the theory is to describe the positions of those particles at all times. Other observables do not have this compelling ontological status. Having definite positions explains having definite results such as flashes on a detector screen. Other observables would not lead to that conclusion, but there need not be any problem in defining a mathematical theory for other observables; see Hyman et al. for an exploration of the fact that a probability density and probability current can be defined for any set of commuting operators. ==== Hidden variables ==== De Broglie–Bohm theory is often referred to as a "hidden-variable" theory. Bohm used this description in his original papers on the subject, writing: "From the point of view of the usual interpretation, these additional elements or parameters [permitting a detailed causal and continuous description of all processes] could be called 'hidden' variables." Bohm and Hiley later stated that they found Bohm's choice of the term "hidden variables" to be too restrictive. In particular, they argued that a particle is not actually hidden but rather "is what is most directly manifested in an observation [though] its properties cannot be observed with arbitrary precision (within the limits set by uncertainty principle)". Others nevertheless treat the term "hidden variable" as a suitable description. Generalized particle trajectories can be extrapolated from numerous weak measurements on an ensemble of equally prepared systems, and such trajectories coincide with the de Broglie–Bohm trajectories. In particular, an experiment with two entangled photons, in which a set of Bohmian trajectories for one of the photons was determined using weak measurements and postselection, can be understood in terms of a nonlocal connection between that photon's trajectory and the other photon's polarization. However, not only the de Broglie–Bohm interpretation, but also many other interpretations of quantum mechanics that do not include such trajectories, are consistent with such experimental evidence. === Different predictions === A specialized version of the double-slit experiment has been devised to test characteristics of the trajectory predictions. Results from one such experiment agreed with the predictions of standard quantum mechanics and disagreed with the Bohm predictions when they conflicted. These conclusions have been the subject of debate.
=== Heisenberg's uncertainty principle === Heisenberg's uncertainty principle states that when two complementary measurements are made, there is a limit to the product of their accuracy. As an example, if one measures the position with an accuracy of Δ x {\displaystyle \Delta x} and the momentum with an accuracy of Δ p {\displaystyle \Delta p} , then Δ x Δ p ≳ h . {\displaystyle \Delta x\Delta p\gtrsim h.} In de Broglie–Bohm theory, there is always a matter of fact about the position and momentum of a particle. Each particle has a well-defined trajectory, as well as a wavefunction. Observers have limited knowledge as to what this trajectory is (and thus of the position and momentum). It is the lack of knowledge of the particle's trajectory that accounts for the uncertainty relation. What one can know about a particle at any given time is described by the wavefunction. Since the uncertainty relation can be derived from the wavefunction in other interpretations of quantum mechanics, it can likewise be derived (in the epistemic sense mentioned above) in the de Broglie–Bohm theory. To put the statement differently, the particles' positions are only known statistically. As in classical mechanics, successive observations of the particles' positions refine the experimenter's knowledge of the particles' initial conditions. Thus, with succeeding observations, the initial conditions become more and more restricted. This formalism is consistent with the normal use of the Schrödinger equation. For the derivation of the uncertainty relation, see Heisenberg uncertainty principle, noting that that article describes the principle from the viewpoint of the Copenhagen interpretation. === Quantum entanglement, Einstein–Podolsky–Rosen paradox, Bell's theorem, and nonlocality === De Broglie–Bohm theory highlighted the issue of nonlocality: it inspired John Stewart Bell to prove his now-famous theorem, which in turn led to the Bell test experiments. In the Einstein–Podolsky–Rosen paradox, the authors describe a thought experiment that one could perform on a pair of particles that have interacted, the results of which they interpreted as indicating that quantum mechanics is an incomplete theory. Decades later John Bell proved Bell's theorem (see p. 14 in Bell), in which he showed that, if they are to agree with the empirical predictions of quantum mechanics, all such "hidden-variable" completions of quantum mechanics must either be nonlocal (as the Bohm interpretation is) or give up the assumption that experiments produce unique results (see counterfactual definiteness and many-worlds interpretation). In particular, Bell proved that any local theory with unique results must make empirical predictions satisfying a statistical constraint called "Bell's inequality". Alain Aspect performed a series of Bell test experiments that test Bell's inequality using an EPR-type setup. Aspect's results show experimentally that Bell's inequality is in fact violated, meaning that the relevant quantum-mechanical predictions are correct. In these Bell test experiments, entangled pairs of particles are created; the particles are separated, traveling to remote measuring apparatus. The orientation of the measuring apparatus can be changed while the particles are in flight, demonstrating the apparent nonlocality of the effect. The de Broglie–Bohm theory makes the same (empirically correct) predictions for the Bell test experiments as ordinary quantum mechanics. It is able to do this because it is manifestly nonlocal.
It is often criticized or rejected based on this; Bell's attitude was: "It is a merit of the de Broglie–Bohm version to bring this [nonlocality] out so explicitly that it cannot be ignored." The de Broglie–Bohm theory describes the physics in the Bell test experiments as follows: to understand the evolution of the particles, we need to set up a wave equation for both particles; the orientation of the apparatus affects the wavefunction. The particles in the experiment follow the guidance of the wavefunction. It is the wavefunction that carries the faster-than-light effect of changing the orientation of the apparatus. Maudlin provides an analysis of exactly what kind of nonlocality is present and how it is compatible with relativity. Bell has shown that the nonlocality does not allow superluminal communication. Maudlin has shown this in greater detail. === Classical limit === Bohm's formulation of de Broglie–Bohm theory in a classical-looking version has the merit that the emergence of classical behavior seems to follow immediately for any situation in which the quantum potential is negligible, as noted by Bohm in 1952. Modern methods of decoherence are relevant to an analysis of this limit. See Allori et al. for steps towards a rigorous analysis. === Quantum trajectory method === Work by Robert E. Wyatt in the early 2000s attempted to use the Bohm "particles" as an adaptive mesh that follows the actual trajectory of a quantum state in time and space. In the "quantum trajectory" method, one samples the quantum wavefunction with a mesh of quadrature points. One then evolves the quadrature points in time according to the Bohm equations of motion. At each time step, one then re-synthesizes the wavefunction from the points, recomputes the quantum forces, and continues the calculation. (QuickTime movies of this for H + H2 reactive scattering can be found on the Wyatt group web-site at UT Austin.) This approach has been adapted, extended, and used by a number of researchers in the chemical physics community as a way to compute semi-classical and quasi-classical molecular dynamics. A 2007 issue of The Journal of Physical Chemistry A was dedicated to Prof. Wyatt and his work on "computational Bohmian dynamics". Eric R. Bittner's group at the University of Houston has advanced a statistical variant of this approach that uses a Bayesian sampling technique to sample the quantum density and compute the quantum potential on a structureless mesh of points. This technique was recently used to estimate quantum effects in the heat capacity of small clusters Nen for n ≈ 100. There remain difficulties using the Bohmian approach, mostly associated with the formation of singularities in the quantum potential due to nodes in the quantum wavefunction. In general, nodes forming due to interference effects lead to the case where R − 1 ∇ 2 R → ∞ . {\displaystyle R^{-1}\nabla ^{2}R\to \infty .} This results in an infinite force on the sample particles, forcing them to move away from the node and often to cross the paths of other sample points (which violates single-valuedness). Various schemes have been developed to overcome this; however, no general solution has yet emerged. These methods, like Bohm's Hamilton–Jacobi formulation, do not apply to situations in which the full dynamics of spin need to be taken into account.
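The node problem is easy to exhibit numerically. The sketch below (Python/NumPy; natural units and the unnormalized oscillator superposition are illustrative assumptions) evaluates the quantum potential Q = −(ħ²/2m)∇²R/R for a superposition of the two lowest harmonic-oscillator states at relative phase π; because ψ″ is nonzero at the node, Q diverges like the inverse distance to the node, which is exactly the stiff behaviour that destabilizes trajectory-based solvers.

```python
import numpy as np

hbar, m = 1.0, 1.0                      # natural units (assumption)

# Superposition of the two lowest harmonic-oscillator states at relative phase pi
# (unnormalized, with m*omega/hbar = 1); it has a node at x0 = 1/sqrt(2), where
# psi'' does not vanish (unlike for an energy eigenstate):
x = np.linspace(-4.0, 4.0, 4001)
psi = (1.0 - np.sqrt(2.0) * x) * np.exp(-x**2 / 2)

# Quantum potential Q = -(hbar^2 / 2m) R'' / R with R = |psi|, by finite differences:
R = np.abs(psi)
d2R = np.gradient(np.gradient(R, x), x)
with np.errstate(divide="ignore", invalid="ignore"):
    Q = -(hbar**2 / (2 * m)) * d2R / R

x0 = 1.0 / np.sqrt(2.0)                 # node location
for d in (0.5, 0.1, 0.02, 0.004):       # approach the node from the left
    i = np.argmin(np.abs(x - (x0 - d)))
    print(f"distance to node {d:6.3f}   Q = {Q[i]:10.2f}")   # |Q| grows ~ 1/d
```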
The properties of trajectories in the de Broglie–Bohm theory differ significantly from the Moyal quantum trajectories as well as the quantum trajectories from the unraveling of an open quantum system. == Similarities with the many-worlds interpretation == Kim Joris Boström has proposed a non-relativistic quantum mechanical theory that combines elements of de Broglie-Bohm mechanics and Everett's many-worlds. In particular, the unreal many-worlds interpretation of Hawking and Weinberg is similar to the Bohmian concept of unreal empty branch worlds: The second issue with Bohmian mechanics may, at first sight, appear rather harmless, but which on a closer look develops considerable destructive power: the issue of empty branches. These are the components of the post-measurement state that do not guide any particles because they do not have the actual configuration q in their support. At first sight, the empty branches do not appear problematic but on the contrary very helpful as they enable the theory to explain unique outcomes of measurements. Also, they seem to explain why there is an effective "collapse of the wavefunction", as in ordinary quantum mechanics. On a closer view, though, one must admit that these empty branches do not actually disappear. As the wavefunction is taken to describe a really existing field, all their branches really exist and will evolve forever by the Schrödinger dynamics, no matter how many of them will become empty in the course of the evolution. Every branch of the global wavefunction potentially describes a complete world which is, according to Bohm's ontology, only a possible world that would be the actual world if only it were filled with particles, and which is in every respect identical to a corresponding world in Everett's theory. Only one branch at a time is occupied by particles, thereby representing the actual world, while all other branches, though really existing as part of a really existing wavefunction, are empty and thus contain some sort of "zombie worlds" with planets, oceans, trees, cities, cars and people who talk like us and behave like us, but who do not actually exist. Now, if the Everettian theory may be accused of ontological extravagance, then Bohmian mechanics could be accused of ontological wastefulness. On top of the ontology of empty branches comes the additional ontology of particle positions that are, on account of the quantum equilibrium hypothesis, forever unknown to the observer. Yet, the actual configuration is never needed for the calculation of the statistical predictions in experimental reality, for these can be obtained by mere wavefunction algebra. From this perspective, Bohmian mechanics may appear as a wasteful and redundant theory. I think it is considerations like these that are the biggest obstacle in the way of a general acceptance of Bohmian mechanics. Many authors have expressed critical views of de Broglie–Bohm theory by comparing it to Everett's many-worlds approach. Many (but not all) proponents of de Broglie–Bohm theory (such as Bohm and Bell) interpret the universal wavefunction as physically real. According to some supporters of Everett's theory, if the (never collapsing) wavefunction is taken to be physically real, then it is natural to interpret the theory as having the same many worlds as Everett's theory. 
In the Everettian view the role of the Bohmian particle is to act as a "pointer", tagging, or selecting, just one branch of the universal wavefunction (the assumption that this branch indicates which wave packet determines the observed result of a given experiment is called the "result assumption"); the other branches are designated "empty" and implicitly assumed by Bohm to be devoid of conscious observers. H. Dieter Zeh comments on these "empty" branches: It is usually overlooked that Bohm's theory contains the same "many worlds" of dynamically separate branches as the Everett interpretation (now regarded as "empty" wave components), since it is based on precisely the same ... global wave function ... David Deutsch has expressed the same point more "acerbically": Pilot-wave theories are parallel-universe theories in a state of chronic denial. This conclusion has been challenged by Detlef Dürr and Justin Lazarovici: The Bohmian, of course, cannot accept this argument. For her, it is decidedly the particle configuration in three-dimensional space and not the wave function on the abstract configuration space that constitutes a world (or rather, the world). Instead, she will accuse the Everettian of not having local beables (in Bell's sense) in her theory, that is, the ontological variables that refer to localized entities in three-dimensional space or four-dimensional spacetime. The many worlds of her theory thus merely appear as a grotesque consequence of this omission. == Occam's-razor criticism == Both Hugh Everett III and Bohm treated the wavefunction as a physically real field. Everett's many-worlds interpretation is an attempt to demonstrate that the wavefunction alone is sufficient to account for all our observations. When we see the particle detectors flash or hear the click of a Geiger counter, Everett's theory interprets this as our wavefunction responding to changes in the detector's wavefunction, which is responding in turn to the passage of another wavefunction (which we think of as a "particle", but is actually just another wave packet). No particle (in the Bohm sense of having a defined position and velocity) exists according to that theory. For this reason Everett sometimes referred to his own many-worlds approach as the "pure wave theory". Of Bohm's 1952 approach, Everett said: Our main criticism of this view is on the grounds of simplicity – if one desires to hold the view that ψ {\displaystyle \psi } is a real field, then the associated particle is superfluous, since, as we have endeavored to illustrate, the pure wave theory is itself satisfactory. In the Everettian view, then, the Bohm particles are superfluous entities, similar to, and equally as unnecessary as, for example, the luminiferous ether, which was found to be unnecessary in special relativity. This argument is sometimes called the "redundancy argument", since the superfluous particles are redundant in the sense of Occam's razor. According to Brown & Wallace, the de Broglie–Bohm particles play no role in the solution of the measurement problem. For these authors, the "result assumption" (see above) is inconsistent with the view that there is no measurement problem in the predictable outcome (i.e. single-outcome) case. They also say that a standard tacit assumption of de Broglie–Bohm theory (that an observer becomes aware of configurations of particles of ordinary objects by means of correlations between such configurations and the configuration of the particles in the observer's brain) is unreasonable. 
This conclusion has been challenged by Valentini, who argues that the entirety of such objections arises from a failure to interpret de Broglie–Bohm theory on its own terms. According to Peter R. Holland, in a wider Hamiltonian framework, theories can be formulated in which particles do act back on the wave function. == Derivations == De Broglie–Bohm theory has been derived many times and in many ways. Below are six derivations, all of which are very different and lead to different ways of understanding and extending this theory. The Schrödinger equation can be derived by using Einstein's light quanta hypothesis: E = ℏ ω {\displaystyle E=\hbar \omega } and de Broglie's hypothesis: p = ℏ k {\displaystyle \mathbf {p} =\hbar \mathbf {k} } . The guiding equation can be derived in a similar fashion. We assume a plane wave: ψ ( x , t ) = A e i ( k ⋅ x − ω t ) {\displaystyle \psi (\mathbf {x} ,t)=Ae^{i(\mathbf {k} \cdot \mathbf {x} -\omega t)}} . Notice that i k = ∇ ψ / ψ {\displaystyle i\mathbf {k} =\nabla \psi /\psi } . Assuming that p = m v {\displaystyle \mathbf {p} =m\mathbf {v} } for the particle's actual velocity, we have that v = ℏ m Im ⁡ ( ∇ ψ ψ ) {\displaystyle \mathbf {v} ={\frac {\hbar }{m}}\operatorname {Im} \left({\frac {\nabla \psi }{\psi }}\right)} . Thus, we have the guiding equation. Notice that this derivation does not use the Schrödinger equation. Preserving the density under the time evolution is another method of derivation. This is the method that Bell cites. It is this method that generalizes to many possible alternative theories. The starting point is the continuity equation − ∂ ρ ∂ t = ∇ ⋅ ( ρ v ψ ) {\displaystyle -{\frac {\partial \rho }{\partial t}}=\nabla \cdot (\rho v^{\psi })} for the density ρ = | ψ | 2 {\displaystyle \rho =|\psi |^{2}} . This equation describes a probability flow along a current. We take the velocity field associated with this current as the velocity field whose integral curves yield the motion of the particle. A method applicable for particles without spin is to do a polar decomposition of the wavefunction and transform the Schrödinger equation into two coupled equations: the continuity equation from above and the Hamilton–Jacobi equation. This is the method used by Bohm in 1952. The decomposition and equations are as follows: Decomposition: ψ ( x , t ) = R ( x , t ) e i S ( x , t ) / ℏ . {\displaystyle \psi (\mathbf {x} ,t)=R(\mathbf {x} ,t)e^{iS(\mathbf {x} ,t)/\hbar }.} Note that R 2 ( x , t ) {\displaystyle R^{2}(\mathbf {x} ,t)} corresponds to the probability density ρ ( x , t ) = | ψ ( x , t ) | 2 {\displaystyle \rho (\mathbf {x} ,t)=|\psi (\mathbf {x} ,t)|^{2}} . Continuity equation: − ∂ ρ ( x , t ) ∂ t = ∇ ⋅ ( ρ ( x , t ) ∇ S ( x , t ) m ) {\displaystyle -{\frac {\partial \rho (\mathbf {x} ,t)}{\partial t}}=\nabla \cdot \left(\rho (\mathbf {x} ,t){\frac {\nabla S(\mathbf {x} ,t)}{m}}\right)} . Hamilton–Jacobi equation: ∂ S ( x , t ) ∂ t = − [ 1 2 m ( ∇ S ( x , t ) ) 2 + V − ℏ 2 2 m ∇ 2 R ( x , t ) R ( x , t ) ] . {\displaystyle {\frac {\partial S(\mathbf {x} ,t)}{\partial t}}=-\left[{\frac {1}{2m}}(\nabla S(\mathbf {x} ,t))^{2}+V-{\frac {\hbar ^{2}}{2m}}{\frac {\nabla ^{2}R(\mathbf {x} ,t)}{R(\mathbf {x} ,t)}}\right].} The Hamilton–Jacobi equation is the equation derived from a Newtonian system with potential V − ℏ 2 2 m ∇ 2 R R {\displaystyle V-{\frac {\hbar ^{2}}{2m}}{\frac {\nabla ^{2}R}{R}}} and velocity field ∇ S m . 
{\displaystyle {\frac {\nabla S}{m}}.} The potential V {\displaystyle V} is the classical potential that appears in the Schrödinger equation, and the other term involving R {\displaystyle R} is the quantum potential, terminology introduced by Bohm. This leads to viewing the quantum theory as particles moving under the classical force modified by a quantum force. However, unlike standard Newtonian mechanics, the initial velocity field is already specified by ∇ S m {\displaystyle {\frac {\nabla S}{m}}} , which is a symptom of this being a first-order theory, not a second-order theory. A fourth derivation was given by Dürr et al. In their derivation, they derive the velocity field by demanding the appropriate transformation properties given by the various symmetries that the Schrödinger equation satisfies, once the wavefunction is suitably transformed. The guiding equation is what emerges from that analysis. A fifth derivation, given by Dürr et al., is appropriate for generalization to quantum field theory and the Dirac equation. The idea is that a velocity field can also be understood as a first-order differential operator acting on functions. Thus, if we know how it acts on functions, we know what it is. Then given the Hamiltonian operator H {\displaystyle H} , the equation to satisfy for all functions f {\displaystyle f} (with associated multiplication operator f ^ {\displaystyle {\hat {f}}} ) is ( v ( f ) ) ( q ) = Re ⁡ ( ψ , i ℏ [ H , f ^ ] ψ ) ( ψ , ψ ) ( q ) {\displaystyle (v(f))(q)=\operatorname {Re} {\frac {\left(\psi ,{\frac {i}{\hbar }}[H,{\hat {f}}]\psi \right)}{(\psi ,\psi )}}(q)} , where ( v , w ) {\displaystyle (v,w)} is the local Hermitian inner product on the value space of the wavefunction. This formulation allows for stochastic theories such as the creation and annihilation of particles. A further derivation has been given by Peter R. Holland, on which he bases his quantum-physics textbook The Quantum Theory of Motion. It is based on three basic postulates and an additional fourth postulate that links the wavefunction to measurement probabilities: A physical system consists of a spatiotemporally propagating wave and a point particle guided by it. The wave is described mathematically by a solution ψ {\displaystyle \psi } to the Schrödinger wave equation. The particle motion is described by a solution to x ˙ ( t ) = [ ∇ S ( x ( t ) , t ) ] / m {\displaystyle \mathbf {\dot {x}} (t)=[\nabla S(\mathbf {x} (t),t)]/m} in dependence on the initial condition x ( t = 0 ) {\displaystyle \mathbf {x} (t=0)} , with S {\displaystyle S} the phase of ψ {\displaystyle \psi } . The fourth postulate is subsidiary yet consistent with the first three: The probability ρ ( x ( t ) ) {\displaystyle \rho (\mathbf {x} (t))} to find the particle in the differential volume d 3 x {\displaystyle d^{3}x} at time t equals | ψ ( x ( t ) ) | 2 {\displaystyle |\psi (\mathbf {x} (t))|^{2}} . == History == The theory was historically developed in the 1920s by de Broglie, who, in 1927, was persuaded to abandon it in favour of the then-mainstream Copenhagen interpretation. David Bohm, dissatisfied with the prevailing orthodoxy, rediscovered de Broglie's pilot-wave theory in 1952. Bohm's suggestions were not then widely received, partly due to reasons unrelated to their content, such as Bohm's youthful communist affiliations. The de Broglie–Bohm theory was widely deemed unacceptable by mainstream theorists, mostly because of its explicit non-locality.
On the theory, John Stewart Bell, author of the 1964 Bell's theorem, wrote in 1982: Bohm showed explicitly how parameters could indeed be introduced, into nonrelativistic wave mechanics, with the help of which the indeterministic description could be transformed into a deterministic one. More importantly, in my opinion, the subjectivity of the orthodox version, the necessary reference to the "observer", could be eliminated. ...But why then had Born not told me of this "pilot wave"? If only to point out what was wrong with it? Why did von Neumann not consider it? More extraordinarily, why did people go on producing "impossibility" proofs, after 1952, and as recently as 1978?... Why is the pilot wave picture ignored in text books? Should it not be taught, not as the only way, but as an antidote to the prevailing complacency? To show us that vagueness, subjectivity, and indeterminism, are not forced on us by experimental facts, but by deliberate theoretical choice? Since the 1990s, there has been renewed interest in formulating extensions to de Broglie–Bohm theory, attempting to reconcile it with special relativity and quantum field theory, besides other features such as spin or curved spatial geometries. De Broglie–Bohm theory has a history of different formulations and names. In this section, each stage is given a name and a main reference. === Pilot-wave theory === Louis de Broglie presented his pilot wave theory at the 1927 Solvay Conference, after close collaboration with Schrödinger, who developed his wave equation for de Broglie's theory. At the end of the presentation, Wolfgang Pauli pointed out that it was not compatible with a semi-classical technique Fermi had previously adopted in the case of inelastic scattering. Contrary to a popular legend, de Broglie actually gave the correct rebuttal that the particular technique could not be generalized for Pauli's purpose, although the audience might have been lost in the technical details and de Broglie's mild manner left the impression that Pauli's objection was valid. He was eventually persuaded to abandon this theory nonetheless because he was "discouraged by criticisms which [it] roused". De Broglie's theory already applies to multiple spin-less particles, but lacks an adequate theory of measurement as no one understood quantum decoherence at the time. An analysis of de Broglie's presentation is given in Bacciagaluppi et al. Also, in 1932 John von Neumann published a no hidden variables proof in his book Mathematical Foundations of Quantum Mechanics, which was widely believed to prove that all hidden-variable theories are impossible. This sealed the fate of de Broglie's theory for the next two decades. In 1926, Erwin Madelung had developed a hydrodynamic version of the Schrödinger equation, which is incorrectly considered as a basis for the density current derivation of the de Broglie–Bohm theory. The Madelung equations, being the quantum analog of the Euler equations of fluid dynamics, differ philosophically from de Broglie–Bohm mechanics and are the basis of the stochastic interpretation of quantum mechanics. Peter R. Holland has pointed out that, earlier in 1927, Einstein had actually submitted a preprint with a similar proposal but, not convinced, had withdrawn it before publication.
According to Holland, failure to appreciate key points of the de Broglie–Bohm theory has led to confusion, the key point being "that the trajectories of a many-body quantum system are correlated not because the particles exert a direct force on one another (à la Coulomb) but because all are acted upon by an entity – mathematically described by the wavefunction or functions of it – that lies beyond them". This entity is the quantum potential. After publishing his popular textbook Quantum Theory that adhered entirely to the Copenhagen orthodoxy, Bohm was persuaded by Einstein to take a critical look at von Neumann's no hidden variables proof. The result was 'A Suggested Interpretation of the Quantum Theory in Terms of "Hidden Variables" I and II' [Bohm 1952]. It was an independent origination of the pilot wave theory, and extended it to incorporate a consistent theory of measurement, and to address a criticism of Pauli that de Broglie did not properly respond to; it is taken to be deterministic (though Bohm hinted in the original papers that there should be disturbances to this, in the way Brownian motion disturbs Newtonian mechanics). This stage is known as the de Broglie–Bohm Theory in Bell's work [Bell 1987] and is the basis for 'The Quantum Theory of Motion' [Holland 1993]. This stage applies to multiple particles, and is deterministic. The de Broglie–Bohm theory is an example of a hidden-variables theory. Bohm originally hoped that hidden variables could provide a local, causal, objective description that would resolve or eliminate many of the paradoxes of quantum mechanics, such as Schrödinger's cat, the measurement problem and the collapse of the wavefunction. However, Bell's theorem complicates this hope, as it demonstrates that there can be no local hidden-variable theory that is compatible with the predictions of quantum mechanics. The Bohmian interpretation is causal but not local. Bohm's paper was largely ignored or panned by other physicists. Albert Einstein, who had suggested that Bohm search for a realist alternative to the prevailing Copenhagen approach, did not consider Bohm's interpretation to be a satisfactory answer to the quantum nonlocality question, calling it "too cheap", while Werner Heisenberg considered it a "superfluous 'ideological superstructure' ". Wolfgang Pauli, who had been unconvinced by de Broglie in 1927, conceded to Bohm as follows: I just received your long letter of 20th November, and I also have studied more thoroughly the details of your paper. I do not see any longer the possibility of any logical contradiction as long as your results agree completely with those of the usual wave mechanics and as long as no means is given to measure the values of your hidden parameters both in the measuring apparatus and in the observe [sic] system. As far as the whole matter stands now, your 'extra wave-mechanical predictions' are still a check, which cannot be cashed. He subsequently described Bohm's theory as "artificial metaphysics". According to physicist Max Dresden, when Bohm's theory was presented at the Institute for Advanced Study in Princeton, many of the objections were ad hominem, focusing on Bohm's sympathy with communists as exemplified by his refusal to give testimony to the House Un-American Activities Committee. In 1979, Chris Philippidis, Chris Dewdney and Basil Hiley were the first to perform numeric computations on the basis of the quantum potential to deduce ensembles of particle trajectories. 
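Such trajectory ensembles are straightforward to reproduce on a modern computer. The following is a minimal sketch, not the method of the 1979 paper: it assumes units with ħ = m = 1 and a "double-slit-like" superposition of two freely spreading Gaussian packets, and integrates the guiding equation v = (ħ/m) Im(∂ₓψ/ψ) for an ensemble of initial positions with a simple Euler step.

```python
import numpy as np

hbar = m = 1.0
sigma0, x0 = 1.0, 4.0          # packet width and center offset (assumed values)

def packet(x, t, center):
    """Analytic free-particle evolution of a Gaussian packet (hbar = m = 1)."""
    s = sigma0 * (1 + 1j * hbar * t / (2 * m * sigma0**2))
    return np.exp(-(x - center) ** 2 / (4 * sigma0 * s)) / np.sqrt(s)

def psi(x, t):
    # "double-slit-like" superposition of packets centered at +x0 and -x0
    return packet(x, t, +x0) + packet(x, t, -x0)

def velocity(x, t, eps=1e-5):
    # guiding equation v = (hbar/m) Im( dpsi/dx / psi ), derivative by finite difference
    dpsi = (psi(x + eps, t) - psi(x - eps, t)) / (2 * eps)
    return hbar / m * np.imag(dpsi / psi(x, t))

# Euler-integrate an ensemble of initial positions sampled around the packets
xs = np.linspace(-8.0, 8.0, 17)
dt, steps = 0.01, 1000
for n in range(steps):
    xs = xs + dt * velocity(xs, n * dt)

print(np.round(xs, 2))   # final positions cluster where |psi|^2 is large
```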
Their work renewed the interest of physicists in the Bohm interpretation of quantum physics. Eventually John Bell began to defend the theory. In "Speakable and Unspeakable in Quantum Mechanics" [Bell 1987], several of the papers refer to hidden-variables theories (which include Bohm's). The trajectories of the Bohm model that would result for particular experimental arrangements were termed "surreal" by some. As recently as 2016, mathematical physicist Sheldon Goldstein said of Bohm's theory: "There was a time when you couldn't even talk about it because it was heretical. It probably still is the kiss of death for a physics career to be actually working on Bohm, but maybe that's changing." === Bohmian mechanics === Bohmian mechanics is the same theory, but with an emphasis on the notion of current flow, which is determined on the basis of the quantum equilibrium hypothesis that the probability follows the Born rule. The term "Bohmian mechanics" is also often used to include most of the further extensions past the spin-less version of Bohm. While de Broglie–Bohm theory has Lagrangians and Hamilton–Jacobi equations as a primary focus and backdrop, with the icon of the quantum potential, Bohmian mechanics considers the continuity equation as primary and has the guiding equation as its icon. They are mathematically equivalent insofar as the Hamilton–Jacobi formulation applies, i.e., to spin-less particles. All of non-relativistic quantum mechanics can be fully accounted for in this theory. Recent studies have used this formalism to compute the evolution of many-body quantum systems, with a considerable increase in speed as compared to other quantum-based methods. === Causal interpretation and ontological interpretation === Bohm developed his original ideas, calling them the Causal Interpretation. Later he felt that causal sounded too much like deterministic and preferred to call his theory the Ontological Interpretation. The main reference is "The Undivided Universe" (Bohm, Hiley 1993). This stage covers work by Bohm and in collaboration with Jean-Pierre Vigier and Basil Hiley. Bohm is clear that this theory is non-deterministic (the work with Hiley includes a stochastic theory). As such, this theory is not, strictly speaking, a formulation of de Broglie–Bohm theory, but it deserves mention here because the term "Bohm Interpretation" is ambiguous between this theory and de Broglie–Bohm theory. In 1996, philosopher of science Arthur Fine gave an in-depth analysis of possible interpretations of Bohm's model of 1952. William Simpson has suggested a hylomorphic interpretation of Bohmian mechanics, in which the cosmos is an Aristotelian substance composed of material particles and a substantial form. The wave function is assigned a dispositional role in choreographing the trajectories of the particles. === Hydrodynamic quantum analogs === Experiments on hydrodynamical analogs of quantum mechanics beginning with the work of Couder and Fort (2006) have purported to show that macroscopic classical pilot-waves can exhibit characteristics previously thought to be restricted to the quantum realm. Hydrodynamic pilot-wave analogs have been claimed to duplicate the double slit experiment, tunneling, quantized orbits, and numerous other quantum phenomena, claims which have led to a resurgence of interest in pilot wave theories. The analogs have been compared to the Faraday wave. These results have been disputed: experiments have failed to reproduce aspects of the double-slit experiment.
High precision measurements in the tunneling case point to a different origin of the unpredictable crossing: rather than initial position uncertainty or environmental noise, interactions at the barrier seem to be involved. Another classical analog has been reported in surface gravity waves. == Surrealistic trajectories == In 1992, Englert, Scully, Süssmann, and Walther proposed experiments that would show particles taking paths that differ from the Bohm trajectories. They described the Bohm trajectories as "surrealistic"; their proposal was later referred to as ESSW after the last names of the authors. In 2016, Mahler et al. verified the ESSW predictions. However, they propose that the surrealistic effect is a consequence of the nonlocality inherent in Bohm's theory. == See also ==
Madelung equations
Local hidden-variable theory
Superfluid vacuum theory
Fluid analogs in quantum mechanics
Probability current
Wikipedia/De_Broglie–Bohm_theory
The mathematical formulations of quantum mechanics are those mathematical formalisms that permit a rigorous description of quantum mechanics. This mathematical formalism uses mainly a part of functional analysis, especially Hilbert spaces, which are a kind of linear space. Such are distinguished from mathematical formalisms for physics theories developed prior to the early 1900s by the use of abstract mathematical structures, such as infinite-dimensional Hilbert spaces (L2 space mainly), and operators on these spaces. In brief, values of physical observables such as energy and momentum were no longer considered as values of functions on phase space, but as eigenvalues; more precisely as spectral values of linear operators in Hilbert space. These formulations of quantum mechanics continue to be used today. At the heart of the description are ideas of quantum state and quantum observables, which are radically different from those used in previous models of physical reality. While the mathematics permits calculation of many quantities that can be measured experimentally, there is a definite theoretical limit to values that can be simultaneously measured. This limitation was first elucidated by Heisenberg through a thought experiment, and is represented mathematically in the new formalism by the non-commutativity of operators representing quantum observables. Prior to the development of quantum mechanics as a separate theory, the mathematics used in physics consisted mainly of formal mathematical analysis, beginning with calculus, and increasing in complexity up to differential geometry and partial differential equations. Probability theory was used in statistical mechanics. Geometric intuition played a strong role in the first two and, accordingly, theories of relativity were formulated entirely in terms of differential geometric concepts. The phenomenology of quantum physics arose roughly between 1895 and 1915, and for the 10 to 15 years before the development of quantum mechanics (around 1925) physicists continued to think of quantum theory within the confines of what is now called classical physics, and in particular within the same mathematical structures. The most sophisticated example of this is the Sommerfeld–Wilson–Ishiwara quantization rule, which was formulated entirely on the classical phase space. == History of the formalism == === The "old quantum theory" and the need for new mathematics === In the 1890s, Planck was able to derive the blackbody spectrum, which was later used to avoid the classical ultraviolet catastrophe by making the unorthodox assumption that, in the interaction of electromagnetic radiation with matter, energy could only be exchanged in discrete units which he called quanta. Planck postulated a direct proportionality between the frequency of radiation and the quantum of energy at that frequency. The proportionality constant, h, is now called the Planck constant in his honor. In 1905, Einstein explained certain features of the photoelectric effect by assuming that Planck's energy quanta were actual particles, which were later dubbed photons. All of these developments were phenomenological and challenged the theoretical physics of the time. Bohr and Sommerfeld went on to modify classical mechanics in an attempt to deduce the Bohr model from first principles. They proposed that, of all closed classical orbits traced by a mechanical system in its phase space, only the ones that enclosed an area which was a multiple of the Planck constant were actually allowed. 
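The rule is easy to check numerically for the harmonic oscillator, where the enclosed phase-space area can be computed by quadrature. A minimal sketch, assuming an illustrative mass and frequency, confirms that orbits with energy E = nhν enclose an area of exactly n Planck constants:

```python
import numpy as np

h = 6.62607015e-34                       # Planck constant (J s)
m = 9.109e-31                            # illustrative mass (kg, assumed)
omega = 2 * np.pi * 1.0e14               # illustrative angular frequency (assumed)
nu = omega / (2 * np.pi)

def loop_action(E, npts=100_000):
    """oint p dq over one closed orbit of H = p^2/2m + m omega^2 q^2 / 2."""
    qmax = np.sqrt(2 * E / (m * omega**2))
    q = np.linspace(-qmax, qmax, npts)
    p = np.sqrt(np.maximum(2 * m * E - (m * omega * q) ** 2, 0.0))
    # trapezoid rule for the upper branch, doubled for the full loop
    return np.sum((p[1:] + p[:-1]) * np.diff(q))

for n in (1, 2, 3):
    E = n * h * nu                       # energy the rule predicts: E_n = n h nu
    print(n, loop_action(E) / h)         # enclosed area in units of h: ~1, 2, 3
```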
The most sophisticated version of this formalism was the so-called Sommerfeld–Wilson–Ishiwara quantization. Although the Bohr model of the hydrogen atom could be explained in this way, the spectrum of the helium atom (classically an unsolvable 3-body problem) could not be predicted. The mathematical status of quantum theory remained uncertain for some time. In 1923, de Broglie proposed that wave–particle duality applied not only to photons but to electrons and every other physical system. The situation changed rapidly in the years 1925–1930, when working mathematical foundations were found through the groundbreaking work of Erwin Schrödinger, Werner Heisenberg, Max Born, Pascual Jordan, and the foundational work of John von Neumann, Hermann Weyl and Paul Dirac, and it became possible to unify several different approaches in terms of a fresh set of ideas. The physical interpretation of the theory was also clarified in these years after Werner Heisenberg discovered the uncertainty relations and Niels Bohr introduced the idea of complementarity. === The "new quantum theory" === Werner Heisenberg's matrix mechanics was the first successful attempt at replicating the observed quantization of atomic spectra. Later in the same year, Schrödinger created his wave mechanics. Schrödinger's formalism was considered easier to understand, visualize and calculate as it led to differential equations, which physicists were already familiar with solving. Within a year, it was shown that the two theories were equivalent. Schrödinger himself initially did not understand the fundamental probabilistic nature of quantum mechanics, as he thought that the absolute square of the wave function of an electron should be interpreted as the charge density of an object smeared out over an extended, possibly infinite, volume of space. It was Max Born who introduced the interpretation of the absolute square of the wave function as the probability distribution of the position of a pointlike object. Born's idea was soon taken over by Niels Bohr in Copenhagen who then became the "father" of the Copenhagen interpretation of quantum mechanics. Schrödinger's wave function can be seen to be closely related to the classical Hamilton–Jacobi equation. The correspondence to classical mechanics was even more explicit, although somewhat more formal, in Heisenberg's matrix mechanics. In his PhD thesis project, Paul Dirac discovered that the equation for the operators in the Heisenberg representation, as it is now called, closely translates to classical equations for the dynamics of certain quantities in the Hamiltonian formalism of classical mechanics, when one expresses them through Poisson brackets, a procedure now known as canonical quantization. Already before Schrödinger, the young postdoctoral fellow Werner Heisenberg invented his matrix mechanics, which was the first correct quantum mechanics – the essential breakthrough. Heisenberg's matrix mechanics formulation was based on algebras of infinite matrices, a very radical formulation in light of the mathematics of classical physics, although he started from the index-terminology of the experimentalists of that time, not even aware that his "index-schemes" were matrices, as Born soon pointed out to him. In fact, in these early years, linear algebra was not generally popular with physicists in its present form. 
Although Schrödinger himself after a year proved the equivalence of his wave-mechanics and Heisenberg's matrix mechanics, the reconciliation of the two approaches and their modern abstraction as motions in Hilbert space is generally attributed to Paul Dirac, who wrote a lucid account in his 1930 classic The Principles of Quantum Mechanics. He is the third, and possibly most important, pillar of that field (he soon was the only one to have discovered a relativistic generalization of the theory). In his above-mentioned account, he introduced the bra–ket notation, together with an abstract formulation in terms of the Hilbert space used in functional analysis; he showed that Schrödinger's and Heisenberg's approaches were two different representations of the same theory, and found a third, most general one, which represented the dynamics of the system. His work was particularly fruitful in many types of generalizations of the field. The first complete mathematical formulation of this approach, known as the Dirac–von Neumann axioms, is generally credited to John von Neumann's 1932 book Mathematical Foundations of Quantum Mechanics, although Hermann Weyl had already referred to Hilbert spaces (which he called unitary spaces) in his 1927 classic paper and 1928 book. It was developed in parallel with a new approach to the mathematical spectral theory based on linear operators rather than the quadratic forms that were David Hilbert's approach a generation earlier. Though theories of quantum mechanics continue to evolve to this day, there is a basic framework for the mathematical formulation of quantum mechanics which underlies most approaches and can be traced back to the mathematical work of John von Neumann. In other words, discussions about interpretation of the theory, and extensions to it, are now mostly conducted on the basis of shared assumptions about the mathematical foundations. === Later developments === The application of the new quantum theory to electromagnetism resulted in quantum field theory, which was developed starting around 1930. Quantum field theory has driven the development of more sophisticated formulations of quantum mechanics, of which the ones presented here are simple special cases:
Path integral formulation
Phase-space formulation of quantum mechanics & geometric quantization
Quantum field theory in curved spacetime
Axiomatic, algebraic and constructive quantum field theory
C*-algebra formalism
Generalized statistical model of quantum mechanics
A related topic is the relationship to classical mechanics. Any new physical theory is supposed to reduce to successful old theories in some approximation. For quantum mechanics, this translates into the need to study the so-called classical limit of quantum mechanics. Also, as Bohr emphasized, human cognitive abilities and language are inextricably linked to the classical realm, and so classical descriptions are intuitively more accessible than quantum ones. In particular, quantization, namely the construction of a quantum theory whose classical limit is a given and known classical theory, becomes an important area of quantum physics in itself. Finally, some of the originators of quantum theory (notably Einstein and Schrödinger) were unhappy with what they thought were the philosophical implications of quantum mechanics. In particular, Einstein took the position that quantum mechanics must be incomplete, which motivated research into so-called hidden-variable theories.
The issue of hidden variables has become in part an experimental issue with the help of quantum optics. == Postulates of quantum mechanics == A physical system is generally described by three basic ingredients: states; observables; and dynamics (or law of time evolution) or, more generally, a group of physical symmetries. A classical description can be given in a fairly direct way by a phase space model of mechanics: states are points in a phase space formulated by symplectic manifold, observables are real-valued functions on it, time evolution is given by a one-parameter group of symplectic transformations of the phase space, and physical symmetries are realized by symplectic transformations. A quantum description normally consists of a Hilbert space of states; observables are self-adjoint operators on the space of states, time evolution is given by a one-parameter group of unitary transformations on the Hilbert space of states, and physical symmetries are realized by unitary transformations. (It is possible to map this Hilbert-space picture to a phase space formulation, invertibly. See below.) The following summary of the mathematical framework of quantum mechanics can be partly traced back to the Dirac–von Neumann axioms. === Description of the state of a system === Each isolated physical system is associated with a (topologically) separable complex Hilbert space H with inner product ⟨φ|ψ⟩. Separability is a mathematically convenient hypothesis, with the physical interpretation that the state is uniquely determined by countably many observations. Quantum states can be identified with equivalence classes in H, where two vectors (of length 1) represent the same state if they differ only by a phase factor: | ψ k ⟩ ∼ | ψ l ⟩ ⇔ | ψ k ⟩ = e i α | ψ l ⟩ , α ∈ R . {\displaystyle |\psi _{k}\rangle \sim |\psi _{l}\rangle \;\;\Leftrightarrow \;\;|\psi _{k}\rangle =e^{i\alpha }|\psi _{l}\rangle ,\quad \ \alpha \in \mathbb {R} .} As such, a quantum state is an element of a projective Hilbert space, conventionally termed a "ray". Accompanying Postulate I is the composite system postulate: In the presence of quantum entanglement, the quantum state of the composite system cannot be factored as a tensor product of states of its local constituents; instead, it is expressed as a sum, or superposition, of tensor products of states of component subsystems. A subsystem in an entangled composite system generally cannot be described by a state vector (or a ray), but instead is described by a density operator; such a quantum state is known as a mixed state. The density operator of a mixed state is a trace-class, nonnegative (positive semi-definite) self-adjoint operator ρ normalized to be of trace 1. In turn, any density operator of a mixed state can be represented as a subsystem of a larger composite system in a pure state (see purification theorem). In the absence of quantum entanglement, the quantum state of the composite system is called a separable state. The density matrix of a bipartite system in a separable state can be expressed as ρ = ∑ k p k ρ 1 k ⊗ ρ 2 k {\displaystyle \rho =\sum _{k}p_{k}\rho _{1}^{k}\otimes \rho _{2}^{k}} , where ∑ k p k = 1 {\displaystyle \;\sum _{k}p_{k}=1} . If there is only a single non-zero p k {\displaystyle p_{k}} , then the state can be expressed just as ρ = ρ 1 ⊗ ρ 2 , {\textstyle \rho =\rho _{1}\otimes \rho _{2},} and is called simply a separable or product state.
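As a concrete, minimal illustration of the distinction just drawn (a hypothetical two-qubit example, not part of the formal postulates), one can compare a product state with a Bell state and compute the reduced density operator of one subsystem:

```python
import numpy as np

def dm(vec):
    """Density matrix |v><v| of a normalized state vector."""
    v = vec / np.linalg.norm(vec)
    return np.outer(v, v.conj())

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

rho_product = np.kron(dm(ket0), dm(ket1))        # separable: rho1 (x) rho2
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho_bell = dm(bell)                              # entangled pure state

def partial_trace_B(rho):
    """Trace out the second qubit of a 4x4 two-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2)                  # indices: (a, b, a', b')
    return np.einsum('abcb->ac', r)

for name, rho in [("product", rho_product), ("bell", rho_bell)]:
    red = partial_trace_B(rho)
    purity = np.trace(red @ red).real
    print(name, "reduced state:\n", red.round(3), "purity:", round(purity, 3))
# The Bell state's subsystem comes out maximally mixed (purity 1/2), so it
# cannot be represented by a single state vector, as the postulate says.
```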
=== Measurement on a system === ==== Description of physical quantities ==== Physical observables are represented by Hermitian matrices on H. Since these operators are Hermitian, their eigenvalues are always real, and represent the possible outcomes/results from measuring the corresponding observable. If the spectrum of the observable is discrete, then the possible results are quantized. ==== Results of measurement ==== By spectral theory, we can associate a probability measure to the values of A in any state ψ. We can also show that the possible values of the observable A in any state must belong to the spectrum of A. The expectation value (in the sense of probability theory) of the observable A for the system in state represented by the unit vector ψ ∈ H is ⟨ ψ | A | ψ ⟩ {\displaystyle \langle \psi |A|\psi \rangle } . If we represent the state ψ in the basis formed by the eigenvectors of A, then the square of the modulus of the component attached to a given eigenvector is the probability of observing its corresponding eigenvalue. For a mixed state ρ, the expected value of A in the state ρ is tr ⁡ ( A ρ ) {\displaystyle \operatorname {tr} (A\rho )} , and the probability of obtaining an eigenvalue a n {\displaystyle a_{n}} in a discrete, nondegenerate spectrum of the corresponding observable A {\displaystyle A} is given by P ( a n ) = tr ⁡ ( | a n ⟩ ⟨ a n | ρ ) = ⟨ a n | ρ | a n ⟩ {\displaystyle \mathbb {P} (a_{n})=\operatorname {tr} (|a_{n}\rangle \langle a_{n}|\rho )=\langle a_{n}|\rho |a_{n}\rangle } . If the eigenvalue a n {\displaystyle a_{n}} has degenerate, orthonormal eigenvectors { | a n 1 ⟩ , | a n 2 ⟩ , … , | a n m ⟩ } {\displaystyle \{|a_{n1}\rangle ,|a_{n2}\rangle ,\dots ,|a_{nm}\rangle \}} , then the projection operator onto the eigensubspace can be defined as the identity operator in the eigensubspace: P n = | a n 1 ⟩ ⟨ a n 1 | + | a n 2 ⟩ ⟨ a n 2 | + ⋯ + | a n m ⟩ ⟨ a n m | , {\displaystyle P_{n}=|a_{n1}\rangle \langle a_{n1}|+|a_{n2}\rangle \langle a_{n2}|+\dots +|a_{nm}\rangle \langle a_{nm}|,} and then P ( a n ) = tr ⁡ ( P n ρ ) {\displaystyle \mathbb {P} (a_{n})=\operatorname {tr} (P_{n}\rho )} . Postulates II.a and II.b are collectively known as the Born rule of quantum mechanics. ==== Effect of measurement on the state ==== When a measurement is performed, only one result is obtained (according to some interpretations of quantum mechanics). This is modeled mathematically as the processing of additional information from the measurement, confining the probabilities of an immediate second measurement of the same observable. In the case of a discrete, non-degenerate spectrum, two sequential measurements of the same observable will always give the same value assuming the second immediately follows the first. Therefore, the state vector must change as a result of measurement, and collapse onto the eigensubspace associated with the eigenvalue measured. For a mixed state ρ, after obtaining an eigenvalue a n {\displaystyle a_{n}} in a discrete, nondegenerate spectrum of the corresponding observable A {\displaystyle A} , the updated state is given by ρ ′ = P n ρ P n † tr ⁡ ( P n ρ P n † ) {\textstyle \rho '={\frac {P_{n}\rho P_{n}^{\dagger }}{\operatorname {tr} (P_{n}\rho P_{n}^{\dagger })}}} . 
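For the discrete, nondegenerate case just described, the Born rule and the state-update rule are short computations; the degenerate case follows below. The sketch here uses a randomly generated 3-level observable and density matrix purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = (M + M.conj().T) / 2                 # Hermitian observable
G = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = G @ G.conj().T
rho /= np.trace(rho).real                # positive, trace-one density matrix

eigvals, eigvecs = np.linalg.eigh(A)     # spectral decomposition of A
probs, posts = [], []
for n in range(3):
    P_n = np.outer(eigvecs[:, n], eigvecs[:, n].conj())  # rank-1 projector
    p = np.trace(P_n @ rho).real                         # Born rule: tr(P_n rho)
    probs.append(p)
    posts.append(P_n @ rho @ P_n / p)                    # state-update rule

print("probabilities:", np.round(probs, 4), "sum:", round(sum(probs), 6))
print("<A> two ways:", round(np.trace(A @ rho).real, 4),
      round(float(np.dot(probs, eigvals)), 4))           # tr(A rho) = sum a_n P(a_n)
print("updated state has trace", round(np.trace(posts[0]).real, 6))
```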
If the eigenvalue a n {\displaystyle a_{n}} has degenerate, orthonormal eigenvectors { | a n 1 ⟩ , | a n 2 ⟩ , … , | a n m ⟩ } {\displaystyle \{|a_{n1}\rangle ,|a_{n2}\rangle ,\dots ,|a_{nm}\rangle \}} , then the projection operator onto the eigensubspace is P n = | a n 1 ⟩ ⟨ a n 1 | + | a n 2 ⟩ ⟨ a n 2 | + ⋯ + | a n m ⟩ ⟨ a n m | {\displaystyle P_{n}=|a_{n1}\rangle \langle a_{n1}|+|a_{n2}\rangle \langle a_{n2}|+\dots +|a_{nm}\rangle \langle a_{nm}|} . Postulate II.c is sometimes called the "state update rule" or "collapse rule"; together with the Born rule (Postulates II.a and II.b), they form a complete representation of measurements, and are sometimes collectively called the measurement postulate(s). Note that the projection-valued measures (PVM) described in the measurement postulate(s) can be generalized to positive operator-valued measures (POVM), which are the most general kind of measurement in quantum mechanics. A POVM can be understood as the effect on a component subsystem when a PVM is performed on a larger, composite system (see Naimark's dilation theorem). === Time evolution of a system === The Schrödinger equation describes how a state vector evolves in time. Depending on the text, it may be derived from some other assumptions, motivated on heuristic grounds, or asserted as a postulate. Derivations include using the de Broglie relation between wavelength and momentum, or path integrals. Equivalently, the time evolution postulate can be stated as: For a closed system in a mixed state ρ, the time evolution is ρ ( t ) = U ( t ; t 0 ) ρ ( t 0 ) U † ( t ; t 0 ) {\displaystyle \rho (t)=U(t;t_{0})\rho (t_{0})U^{\dagger }(t;t_{0})} . The evolution of an open quantum system can be described by quantum operations (in an operator sum formalism) and quantum instruments, and generally does not have to be unitary. === Other implications of the postulates === Physical symmetries act on the Hilbert space of quantum states unitarily or antiunitarily due to Wigner's theorem (supersymmetry is another matter entirely). Density operators are those that are in the closure of the convex hull of the one-dimensional orthogonal projectors. Conversely, one-dimensional orthogonal projectors are extreme points of the set of density operators. Physicists also call one-dimensional orthogonal projectors pure states and other density operators mixed states. One can in this formalism state Heisenberg's uncertainty principle and prove it as a theorem, although the exact historical sequence of events, concerning who derived what and under which framework, is the subject of historical investigations outside the scope of this article. Furthermore, to the postulates of quantum mechanics one should also add basic statements on the properties of spin and Pauli's exclusion principle; see below. === Spin === In addition to their other properties, all particles possess a quantity called spin, an intrinsic angular momentum. Despite the name, particles do not literally spin around an axis, and quantum mechanical spin has no correspondence in classical physics. In the position representation, a spinless wavefunction has position r and time t as continuous variables, ψ = ψ(r, t). For spin wavefunctions the spin is an additional discrete variable: ψ = ψ(r, t, σ), where σ takes the values: σ = − S ℏ , − ( S − 1 ) ℏ , … , 0 , … , + ( S − 1 ) ℏ , + S ℏ .
{\displaystyle \sigma =-S\hbar ,-(S-1)\hbar ,\dots ,0,\dots ,+(S-1)\hbar ,+S\hbar \,.} That is, the state of a single particle with spin S is represented by a (2S + 1)-component spinor of complex-valued wave functions. Two classes of particles with very different behaviour are bosons, which have integer spin (S = 0, 1, 2, ...), and fermions, which possess half-integer spin (S = 1⁄2, 3⁄2, 5⁄2, ...). === Symmetrization postulate === In quantum mechanics, two particles can be distinguished from one another using two methods. By performing a measurement of intrinsic properties of each particle, particles of different types can be distinguished. Otherwise, if the particles are identical, their trajectories can be tracked, which distinguishes the particles based on the locality of each particle. While the second method is permitted in classical mechanics (i.e., all classical particles are treated as distinguishable), the same cannot be said for quantum mechanical particles, since the process is infeasible due to the fundamental uncertainty principles that govern small scales. Hence the requirement of indistinguishability of quantum particles is presented by the symmetrization postulate. The postulate is applicable to a system of bosons or fermions, for example, in predicting the spectra of the helium atom. The postulate, explained in the following sections, can be stated as follows: the states of a system of identical particles are either all symmetric (bosons) or all antisymmetric (fermions) under the exchange of any two particles. Exceptions can occur when the particles are constrained to two spatial dimensions, where the existence of particles known as anyons is possible; these are said to have a continuum of statistical properties spanning the range between fermions and bosons. The connection between the behaviour of identical particles and their spin is given by the spin–statistics theorem. It can be shown that two particles localized in different regions of space can still be represented using a symmetrized/antisymmetrized wavefunction and that independent treatment of these wavefunctions gives the same result. Hence the symmetrization postulate is applicable in the general case of a system of identical particles. ==== Exchange Degeneracy ==== In a system of identical particles, let P be the exchange operator that acts on the wavefunction as: P ( ⋯ | ψ ⟩ | ϕ ⟩ ⋯ ) ≡ ⋯ | ϕ ⟩ | ψ ⟩ ⋯ {\displaystyle P{\bigg (}\cdots |\psi \rangle |\phi \rangle \cdots {\bigg )}\equiv \cdots |\phi \rangle |\psi \rangle \cdots } If a physical system of identical particles is given, the wavefunctions of all particles can be well known from observation, but they cannot be assigned to individual particles. Thus, the above exchanged wavefunction represents the same physical state as the original state, which implies that the wavefunction is not unique. This is known as exchange degeneracy. More generally, consider a linear combination of such states, | Ψ ⟩ {\displaystyle |\Psi \rangle } . For the best representation of the physical system, we expect this to be an eigenvector of P, since the exchange operator is not expected to give completely different vectors in projective Hilbert space. Since P 2 = 1 {\displaystyle P^{2}=1} , the possible eigenvalues of P are +1 and −1.
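For two particles with a two-dimensional internal space (an assumption made for concreteness), P is just the 4 × 4 SWAP matrix, and the properties just stated can be verified directly in a small sketch:

```python
import numpy as np

d = 2
P = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        P[j * d + i, i * d + j] = 1.0    # |i>|j>  ->  |j>|i>

assert np.allclose(P @ P, np.eye(d * d))         # P^2 = identity
vals, vecs = np.linalg.eigh(P)
print("eigenvalues:", np.round(vals, 6))         # three +1 (symmetric), one -1

# Symmetrizer and antisymmetrizer built from P:
S, A = (np.eye(d * d) + P) / 2, (np.eye(d * d) - P) / 2
v = np.kron([1.0, 0.0], [0.0, 1.0])              # |0>|1>, not an eigenvector of P
print("symmetric part:    ", S @ v)              # (|01> + |10>)/2
print("antisymmetric part:", A @ v)              # (|01> - |10>)/2
```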
The | Ψ ⟩ {\displaystyle |\Psi \rangle } states for an identical-particle system are represented as symmetric for the +1 eigenvalue or antisymmetric for the −1 eigenvalue as follows: P | ⋯ n i , n j ⋯ ; S ⟩ = + | ⋯ n i , n j ⋯ ; S ⟩ {\displaystyle P|\cdots n_{i},n_{j}\cdots ;S\rangle =+|\cdots n_{i},n_{j}\cdots ;S\rangle } P | ⋯ n i , n j ⋯ ; A ⟩ = − | ⋯ n i , n j ⋯ ; A ⟩ {\displaystyle P|\cdots n_{i},n_{j}\cdots ;A\rangle =-|\cdots n_{i},n_{j}\cdots ;A\rangle } The explicit symmetric/antisymmetric form of | Ψ ⟩ {\displaystyle |\Psi \rangle } is constructed using a symmetrizer or antisymmetrizer operator. Particles that form symmetric states are called bosons and those that form antisymmetric states are called fermions. The relation of spin with this classification is given by the spin–statistics theorem, which shows that integer-spin particles are bosons and half-integer-spin particles are fermions. ==== Pauli exclusion principle ==== The property of spin relates to another basic property concerning systems of N identical particles: the Pauli exclusion principle, which is a consequence of the following permutation behaviour of an N-particle wave function; again in the position representation one must postulate that for the transposition of any two of the N particles one always should have ψ ( … , x i , … , x j , … ) = ( − 1 ) 2 S ψ ( … , x j , … , x i , … ) {\displaystyle \psi (\dots ,\mathbf {x} _{i},\dots ,\mathbf {x} _{j},\dots )=(-1)^{2S}\,\psi (\dots ,\mathbf {x} _{j},\dots ,\mathbf {x} _{i},\dots )} , i.e., on transposition of the arguments of any two particles the wavefunction should reproduce, apart from a prefactor (−1)2S, which is +1 for bosons but (−1) for fermions. Electrons are fermions with S = 1/2; quanta of light are bosons with S = 1. Due to the form of the anti-symmetrized wavefunction: Ψ n 1 ⋯ n N ( A ) ( x 1 , … , x N ) = 1 N ! | ψ n 1 ( x 1 ) ψ n 1 ( x 2 ) ⋯ ψ n 1 ( x N ) ψ n 2 ( x 1 ) ψ n 2 ( x 2 ) ⋯ ψ n 2 ( x N ) ⋮ ⋮ ⋱ ⋮ ψ n N ( x 1 ) ψ n N ( x 2 ) ⋯ ψ n N ( x N ) | {\displaystyle \Psi _{n_{1}\cdots n_{N}}^{(A)}(x_{1},\ldots ,x_{N})={\frac {1}{\sqrt {N!}}}\left|{\begin{matrix}\psi _{n_{1}}(x_{1})&\psi _{n_{1}}(x_{2})&\cdots &\psi _{n_{1}}(x_{N})\\\psi _{n_{2}}(x_{1})&\psi _{n_{2}}(x_{2})&\cdots &\psi _{n_{2}}(x_{N})\\\vdots &\vdots &\ddots &\vdots \\\psi _{n_{N}}(x_{1})&\psi _{n_{N}}(x_{2})&\cdots &\psi _{n_{N}}(x_{N})\\\end{matrix}}\right|} if the wavefunction of each particle is completely determined by a set of quantum numbers, then two fermions cannot share the same set of quantum numbers, since the resulting function cannot be anti-symmetrized (i.e. the above formula gives zero). The same cannot be said of bosons, since their wavefunction is: | x 1 x 2 ⋯ x N ; S ⟩ = ∏ j n j ! N ! ∑ p | x p ( 1 ) ⟩ | x p ( 2 ) ⟩ ⋯ | x p ( N ) ⟩ {\displaystyle |x_{1}x_{2}\cdots x_{N};S\rangle ={\frac {\prod _{j}n_{j}!}{N!}}\sum _{p}\left|x_{p(1)}\right\rangle \left|x_{p(2)}\right\rangle \cdots \left|x_{p(N)}\right\rangle } where n j {\displaystyle n_{j}} is the number of particles with the same wavefunction. ==== Exceptions for symmetrization postulate ==== In nonrelativistic quantum mechanics all particles are either bosons or fermions; in relativistic quantum theories also "supersymmetric" theories exist, where a particle is a linear combination of a bosonic and a fermionic part. Only in dimension d = 2 can one construct entities where (−1)2S is replaced by an arbitrary complex number with magnitude 1, called anyons. In relativistic quantum mechanics, the spin–statistics theorem shows that, under a certain set of assumptions, integer-spin particles are classified as bosons and half-integer-spin particles as fermions. Anyons, which form neither symmetric nor antisymmetric states, are said to have fractional spin.
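A numerical illustration of the Slater-determinant argument above, using assumed particle-in-a-box orbitals as the single-particle wavefunctions:

```python
import numpy as np
from math import factorial

def orbital(n, x):
    """Toy 1D orbitals: particle-in-a-box eigenfunctions on [0, 1] (assumed)."""
    return np.sqrt(2.0) * np.sin(np.pi * n * x)

def slater(ns, xs):
    """Psi^(A)(x_1..x_N) = det[ psi_{n_k}(x_l) ] / sqrt(N!)."""
    mat = np.array([[orbital(n, x) for x in xs] for n in ns])
    return np.linalg.det(mat) / np.sqrt(factorial(len(ns)))

xs = [0.21, 0.53, 0.78]                 # three particle positions
print(slater([1, 2, 3], xs))            # distinct quantum numbers: nonzero
print(slater([1, 2, 2], xs))            # repeated quantum number: ~0 (Pauli)
```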
Although spin and the Pauli principle can only be derived from relativistic generalizations of quantum mechanics, the properties mentioned in the last two paragraphs belong to the basic postulates already in the non-relativistic limit. Especially, many important properties in natural science, e.g. the periodic system of chemistry, are consequences of the two properties. == Mathematical structure of quantum mechanics == === Pictures of dynamics === In the Schrödinger picture, the states evolve in time while the observables are constant; in the Heisenberg picture, the observables evolve while the states are constant; and in the interaction (Dirac) picture, both states and observables evolve, with the time dependence split between the free and interacting parts of the Hamiltonian. The pictures are related by unitary transformations and yield the same expectation values. === Representations === The original form of the Schrödinger equation depends on choosing a particular representation of Heisenberg's canonical commutation relations. The Stone–von Neumann theorem dictates that all irreducible representations of the finite-dimensional Heisenberg commutation relations are unitarily equivalent. A systematic understanding of its consequences has led to the phase space formulation of quantum mechanics, which works in full phase space instead of Hilbert space, and thus with a more intuitive link to the classical limit. This picture also simplifies considerations of quantization, the deformation extension from classical to quantum mechanics. The quantum harmonic oscillator is an exactly solvable system where the different representations are easily compared. There, apart from the Heisenberg, or Schrödinger (position or momentum), or phase-space representations, one also encounters the Fock (number) representation and the Segal–Bargmann (Fock-space or coherent state) representation (named after Irving Segal and Valentine Bargmann). All four are unitarily equivalent. === Time as an operator === The framework presented so far singles out time as the parameter that everything depends on. It is possible to formulate mechanics in such a way that time becomes itself an observable associated with a self-adjoint operator. At the classical level, it is possible to arbitrarily parameterize the trajectories of particles in terms of an unphysical parameter s, and in that case the time t becomes an additional generalized coordinate of the physical system. At the quantum level, translations in s would be generated by a "Hamiltonian" H − E, where E is the energy operator and H is the "ordinary" Hamiltonian. However, since s is an unphysical parameter, physical states must be left invariant by "s-evolution", and so the physical state space is the kernel of H − E (this requires the use of a rigged Hilbert space and a renormalization of the norm). This is related to the quantization of constrained systems and quantization of gauge theories. It is also possible to formulate a quantum theory of "events" where time becomes an observable. == Problem of measurement == The picture given in the preceding paragraphs is sufficient for description of a completely isolated system. However, it fails to account for one of the main differences between quantum mechanics and classical mechanics, that is, the effects of measurement.
The von Neumann description of quantum measurement of an observable A, when the system is prepared in a pure state ψ, is the following (note, however, that von Neumann's description dates back to the 1930s and is based on experiments as performed during that time – more specifically the Compton–Simon experiment; it is not applicable to most present-day measurements within the quantum domain): Let A have spectral resolution A = ∫ λ d E A ⁡ ( λ ) , {\displaystyle A=\int \lambda \,d\operatorname {E} _{A}(\lambda ),} where EA is the resolution of the identity (also called projection-valued measure) associated with A. Then the probability of the measurement outcome lying in an interval B of R is |EA(B) ψ|2. In other words, the probability is obtained by integrating the characteristic function of B against the countably additive measure ⟨ ψ ∣ E A ⁡ ψ ⟩ . {\displaystyle \langle \psi \mid \operatorname {E} _{A}\psi \rangle .} If the measured value is contained in B, then immediately after the measurement, the system will be in the (generally non-normalized) state EA(B)ψ. If the measured value does not lie in B, replace B by its complement for the above state. For example, suppose the state space is the n-dimensional complex Hilbert space Cn and A is a Hermitian matrix with eigenvalues λi, with corresponding eigenvectors ψi. The projection-valued measure associated with A, EA, is then E A ⁡ ( B ) = | ψ i ⟩ ⟨ ψ i | , {\displaystyle \operatorname {E} _{A}(B)=|\psi _{i}\rangle \langle \psi _{i}|,} where B is a Borel set containing only the single eigenvalue λi. If the system is prepared in state | ψ ⟩ {\displaystyle |\psi \rangle } , then the probability of a measurement returning the value λi can be calculated by integrating the spectral measure ⟨ ψ ∣ E A ⁡ ψ ⟩ {\displaystyle \langle \psi \mid \operatorname {E} _{A}\psi \rangle } over Bi. This gives trivially ⟨ ψ | ψ i ⟩ ⟨ ψ i ∣ ψ ⟩ = | ⟨ ψ ∣ ψ i ⟩ | 2 . {\displaystyle \langle \psi |\psi _{i}\rangle \langle \psi _{i}\mid \psi \rangle =|\langle \psi \mid \psi _{i}\rangle |^{2}.} The characteristic property of the von Neumann measurement scheme is that repeating the same measurement will give the same results. This is also called the projection postulate. A more general formulation replaces the projection-valued measure with a positive-operator valued measure (POVM). To illustrate, take again the finite-dimensional case. Here we would replace the rank-1 projections | ψ i ⟩ ⟨ ψ i | {\displaystyle |\psi _{i}\rangle \langle \psi _{i}|} by a finite set of positive operators F i F i ∗ {\displaystyle F_{i}F_{i}^{*}} whose sum is still the identity operator as before (the resolution of identity). Just as a set of possible outcomes {λ1 ... λn} is associated to a projection-valued measure, the same can be said for a POVM. Suppose the measurement outcome is λi. Instead of collapsing to the (unnormalized) state | ψ i ⟩ ⟨ ψ i | ψ ⟩ {\displaystyle |\psi _{i}\rangle \langle \psi _{i}|\psi \rangle } after the measurement, the system now will be in the state F i | ψ ⟩ . {\displaystyle F_{i}|\psi \rangle .} Since the Fi Fi* operators need not be mutually orthogonal projections, the projection postulate of von Neumann no longer holds. The same formulation applies to general mixed states. In von Neumann's approach, the state transformation due to measurement is distinct from that due to time evolution in several ways. For example, time evolution is deterministic and unitary whereas measurement is non-deterministic and non-unitary.
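A standard concrete example of such a POVM (not drawn from von Neumann's treatment) is the qubit "trine" measurement; the sketch below checks the resolution of identity and the post-measurement states F_i|ψ⟩ described above:

```python
import numpy as np

def trine_state(k):
    theta = 2 * np.pi * k / 3
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

# Three non-orthogonal operators F_i with sum_i F_i F_i* = I: a valid POVM
# even though no projective measurement has three outcomes in 2 dimensions.
F = [np.sqrt(2 / 3) * np.outer(trine_state(k), trine_state(k)) for k in range(3)]

total = sum(f @ f.T for f in F)                   # the F_i F_i* sum to identity
assert np.allclose(total, np.eye(2))

psi = np.array([1.0, 0.0])                        # system prepared in |0>
for k, f in enumerate(F):
    p = np.linalg.norm(f @ psi) ** 2              # outcome probability
    post = f @ psi / np.sqrt(p)                   # post-measurement state F_i|psi>
    print(f"outcome {k}: p = {p:.4f}, post-state = {np.round(post, 4)}")
# The probabilities sum to 1, yet the F_i are not mutually orthogonal
# projections, so von Neumann's projection postulate fails here.
```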
Since both types of state transformation take one quantum state to another, this difference was viewed by many as unsatisfactory. The POVM formalism views measurement as one among many other quantum operations, which are described by completely positive maps which do not increase the trace. == List of mathematical tools == Part of the folklore of the subject concerns the mathematical physics textbook Methods of Mathematical Physics put together by Richard Courant from David Hilbert's Göttingen University courses. The story is told (by mathematicians) that physicists had dismissed the material as not interesting in the current research areas, until the advent of Schrödinger's equation. At that point it was realised that the mathematics of the new quantum mechanics was already laid out in it. It is also said that Heisenberg had consulted Hilbert about his matrix mechanics, and Hilbert observed that his own experience with infinite-dimensional matrices had derived from differential equations, advice which Heisenberg ignored, missing the opportunity to unify the theory as Weyl and Dirac did a few years later. Whatever the basis of the anecdotes, the mathematics of the theory was conventional at the time, whereas the physics was radically new. The main tools include:
linear algebra: complex numbers, eigenvectors, eigenvalues
functional analysis: Hilbert spaces, linear operators, spectral theory
differential equations: partial differential equations, separation of variables, ordinary differential equations, Sturm–Liouville theory, eigenfunctions
harmonic analysis: Fourier transforms
== See also ==
List of mathematical topics in quantum theory
Quantum foundations
Symmetry in quantum mechanics
== References == Bäuerle, Gerard G. A.; de Kerf, Eddy A. (1990). Lie Algebras, Part 1: Finite and Infinite Dimensional Lie Algebras and Applications in Physics. Studies in Mathematical Physics. Amsterdam: North Holland. ISBN 0-444-88776-8. Byron, Frederick W.; Fuller, Robert W. (1992). Mathematics of Classical and Quantum Physics. New York: Courier Corporation. ISBN 978-0-486-67164-2. Cohen-Tannoudji, Claude; Diu, Bernard; Laloë, Franck (2020). Quantum mechanics. Volume 2: Angular momentum, spin, and approximation methods. Weinheim: Wiley-VCH Verlag GmbH & Co. KGaA. ISBN 978-3-527-82272-0. Dirac, P. A. M. (1925). "The Fundamental Equations of Quantum Mechanics". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 109 (752): 642–653. Bibcode:1925RSPSA.109..642D. doi:10.1098/rspa.1925.0150. Edwards, David A. (1979). "The mathematical foundations of quantum mechanics". Synthese. 42 (1). Springer Science and Business Media LLC: 1–70. doi:10.1007/bf00413704. ISSN 0039-7857. S2CID 46969028. Greenstein, George; Zajonc, Arthur (2006). The Quantum Challenge. Sudbury, Mass.: Jones & Bartlett Learning. ISBN 978-0-7637-2470-2. Jauch, J. M.; Wigner, E. P.; Yanase, M. M. (1997). "Some Comments Concerning Measurements in Quantum Mechanics". Part I: Particles and Fields. Part II: Foundations of Quantum Mechanics. Berlin, Heidelberg: Springer Berlin Heidelberg. pp. 475–482. doi:10.1007/978-3-662-09203-3_52. ISBN 978-3-642-08179-8. Solem, J. C.; Biedenharn, L. C. (1993). "Understanding geometrical phases in quantum mechanics: An elementary example". Foundations of Physics. 23 (2): 185–195. Bibcode:1993FoPh...23..185S. doi:10.1007/BF01883623. S2CID 121930907. Streater, Raymond Frederick; Wightman, Arthur Strong (2000). PCT, Spin and Statistics, and All that.
Princeton, NJ: Princeton University Press. ISBN 978-0-691-07062-9. Sakurai, Jun John; Napolitano, Jim (2021). Modern quantum mechanics (3rd ed.). Cambridge: Cambridge University Press. ISBN 978-1-108-47322-4. Weyl, Hermann (1950) [1931]. The Theory of Groups and Quantum Mechanics. Translated by Robertson, H. P. Dover. == Further reading == Auyang, Sunny Y. (1995). How is Quantum Field Theory Possible?. New York, NY: Oxford University Press on Demand. ISBN 978-0-19-509344-5. Emch, Gérard G. (1972). Algebraic Methods in Statistical Mechanics and Quantum Field Theory. New York: John Wiley & Sons. ISBN 0-471-23900-3. Giachetta, Giovanni; Mangiarotti, Luigi; Sardanashvily, Gennadi (2005). Geometric and Algebraic Topological Methods in Quantum Mechanics. WORLD SCIENTIFIC. arXiv:math-ph/0410040. doi:10.1142/5731. ISBN 978-981-256-129-9. Gleason, Andrew M. (1957). "Measures on the Closed Subspaces of a Hilbert Space". Journal of Mathematics and Mechanics. 6 (6). Indiana University Mathematics Department: 885–893. JSTOR 24900629. Hall, Brian C. (2013). Quantum Theory for Mathematicians. Graduate Texts in Mathematics. Vol. 267. New York, NY: Springer New York. Bibcode:2013qtm..book.....H. doi:10.1007/978-1-4614-7116-5. ISBN 978-1-4614-7115-8. ISSN 0072-5285. S2CID 117837329. Jauch, Josef Maria (1968). Foundations of Quantum Mechanics. Reading, Mass.: Addison-Wesley. ISBN 0-201-03298-8. Jost, R. (1965). The General Theory of Quantized Fields. Lectures in applied mathematics. American Mathematical Society. Kuhn, Thomas S. (1987). Black-Body Theory and the Quantum Discontinuity, 1894-1912. Chicago: University of Chicago Press. ISBN 978-0-226-45800-7. Landsman, Klaas (2017). Foundations of Quantum Theory. Fundamental Theories of Physics. Vol. 188. Cham: Springer International Publishing. doi:10.1007/978-3-319-51777-3. ISBN 978-3-319-51776-6. ISSN 0168-1222. Mackey, George W. (2004). Mathematical Foundations of Quantum Mechanics. Mineola, N.Y: Courier Corporation. ISBN 978-0-486-43517-6. McMahon, David (2013). Quantum Mechanics Demystified, 2nd Edition (PDF). New York, NY: McGraw-Hill Prof Med/Tech. ISBN 978-0-07-176563-3. Moretti, Valter (2017). Spectral Theory and Quantum Mechanics. Unitext. Vol. 110. Cham: Springer International Publishing. doi:10.1007/978-3-319-70706-8. ISBN 978-3-319-70705-1. ISSN 2038-5714. S2CID 125121522. Moretti, Valter (2019). Fundamental Mathematical Structures of Quantum Theory. Cham: Springer International Publishing. doi:10.1007/978-3-030-18346-2. ISBN 978-3-030-18345-5. S2CID 197485828. Prugovecki, Eduard (2006). Quantum Mechanics in Hilbert Space. Mineola, NY: Courier Dover Publications. ISBN 978-0-486-45327-9. Reed, Michael; Simon, Barry (1972). Methods of Modern Mathematical Physics. New York: Academic Press. ISBN 978-0-12-585001-8. Shankar, R. (2013). Principles of Quantum Mechanics (PDF). Springer. ISBN 978-1-4615-7675-4. Teschl, Gerald (2009). Mathematical Methods in Quantum Mechanics (PDF). Providence, R.I: American Mathematical Soc. ISBN 978-0-8218-4660-5. von Neumann, John (2018). Mathematical Foundations of Quantum Mechanics. Princeton Oxford: Princeton University Press. ISBN 978-0-691-17856-1. Weaver, Nik (2001). Mathematical Quantization. Chapman and Hall/CRC. doi:10.1201/9781420036237. ISBN 978-0-429-07514-8.
Wikipedia/Mathematical_formulations_of_quantum_mechanics
A physical system is a collection of physical objects under study. The collection differs from a set: all the objects must coexist and have some physical relationship. In other words, it is a portion of the physical universe chosen for analysis. Everything outside the system is known as the environment, which is ignored except for its effects on the system. The split between system and environment is the analyst's choice, generally made to simplify the analysis. For example, the water in a lake, the water in half of a lake, or an individual molecule of water in the lake can each be considered a physical system. An isolated system is one that has negligible interaction with its environment. Often a system in this sense is chosen to correspond to the more usual meaning of system, such as a particular machine. In the study of quantum coherence, the "system" may refer to the macroscopic properties of an object (e.g. the position of a pendulum bob), while the relevant "environment" may be the internal degrees of freedom, described classically by the pendulum's thermal vibrations. Because no quantum system is completely isolated from its surroundings, it is important to develop a theoretical framework for treating these interactions in order to obtain an accurate understanding of quantum systems. In control theory, a physical system being controlled (a "controlled system") is called a "plant". == See also ==
Conceptual systems
Phase space
Physical phenomenon
Physical ontology
Signal-flow graph
Systems engineering
Systems science
Thermodynamic system
Open quantum system
== Further reading == Bunge, Mario (13 March 2013). Foundations of Physics. Springer Science & Business Media. ISBN 978-3-642-49287-7. Retrieved 21 June 2023. Bunge, Mario; Mahner, Martin (2004). Über die Natur der Dinge: Materialismus und Wissenschaft (in German). S. Hirzel. ISBN 978-3-7776-1321-5. Halloun, Ibrahim A. (25 January 2007). Modeling Theory in Science Education. Springer Science & Business Media. ISBN 978-1-4020-2140-4. Retrieved 21 June 2023. Schmutzer, Ernst (29 August 2005). Grundlagen der Theoretischen Physik (in German). John Wiley & Sons. ISBN 978-3-527-40555-8. Retrieved 21 June 2023.
Wikipedia/Physical_systems
An operator is a function over a space of physical states onto another space of states. The simplest example of the utility of operators is the study of symmetry (which makes the concept of a group useful in this context). Because of this, they are useful tools in classical mechanics. Operators are even more important in quantum mechanics, where they form an intrinsic part of the formulation of the theory. They play a central role in describing observables (measurable quantities like energy, momentum, etc.). == Operators in classical mechanics == In classical mechanics, the movement of a particle (or system of particles) is completely determined by the Lagrangian L ( q , q ˙ , t ) {\displaystyle L(q,{\dot {q}},t)} or equivalently the Hamiltonian H ( q , p , t ) {\displaystyle H(q,p,t)} , a function of the generalized coordinates q, generalized velocities q ˙ = d q / d t {\displaystyle {\dot {q}}=\mathrm {d} q/\mathrm {d} t} and its conjugate momenta: p = ∂ L ∂ q ˙ {\displaystyle p={\frac {\partial L}{\partial {\dot {q}}}}} If either L or H is independent of a generalized coordinate q, meaning the L and H do not change when q is changed, which in turn means the dynamics of the particle are still the same even when q changes, the corresponding momenta conjugate to those coordinates will be conserved (this is part of Noether's theorem, and the invariance of motion with respect to the coordinate q is a symmetry). Operators in classical mechanics are related to these symmetries. More technically, when H is invariant under the action of a certain group of transformations G: S ∈ G , H ( S ( q , p ) ) = H ( q , p ) {\displaystyle S\in G,H(S(q,p))=H(q,p)} . The elements of G are physical operators, which map physical states among themselves. === Table of classical mechanics operators === where R ( n ^ , θ ) {\displaystyle R({\hat {\boldsymbol {n}}},\theta )} is the rotation matrix about an axis defined by the unit vector n ^ {\displaystyle {\hat {\boldsymbol {n}}}} and angle θ. == Generators == If the transformation is infinitesimal, the operator action should be of the form I + ϵ A , {\displaystyle I+\epsilon A,} where I {\displaystyle I} is the identity operator, ϵ {\displaystyle \epsilon } is a parameter with a small value, and A {\displaystyle A} will depend on the transformation at hand, and is called a generator of the group. Again, as a simple example, we will derive the generator of the space translations on 1D functions. As it was stated, T a f ( x ) = f ( x − a ) {\displaystyle T_{a}f(x)=f(x-a)} . If a = ϵ {\displaystyle a=\epsilon } is infinitesimal, then we may write T ϵ f ( x ) = f ( x − ϵ ) ≈ f ( x ) − ϵ f ′ ( x ) . {\displaystyle T_{\epsilon }f(x)=f(x-\epsilon )\approx f(x)-\epsilon f'(x).} This formula may be rewritten as T ϵ f ( x ) = ( I − ϵ D ) f ( x ) {\displaystyle T_{\epsilon }f(x)=(I-\epsilon D)f(x)} where D {\displaystyle D} is the generator of the translation group, which in this case happens to be the derivative operator. Thus, it is said that the generator of translations is the derivative. == The exponential map == The whole group may be recovered, under normal circumstances, from the generators, via the exponential map. In the case of the translations the idea works like this. 
The translation for a finite value of a {\displaystyle a} may be obtained by repeated application of the infinitesimal translation: T a f ( x ) = lim N → ∞ T a / N ⋯ T a / N f ( x ) {\displaystyle T_{a}f(x)=\lim _{N\to \infty }T_{a/N}\cdots T_{a/N}f(x)} with the ⋯ {\displaystyle \cdots } standing for the application N {\displaystyle N} times. If N {\displaystyle N} is large, each of the factors may be considered to be infinitesimal: T a f ( x ) = lim N → ∞ ( I − a N D ) N f ( x ) . {\displaystyle T_{a}f(x)=\lim _{N\to \infty }\left(I-{\frac {a}{N}}D\right)^{N}f(x).} But this limit may be rewritten as an exponential: T a f ( x ) = exp ⁡ ( − a D ) f ( x ) . {\displaystyle T_{a}f(x)=\exp(-aD)f(x).} To be convinced of the validity of this formal expression, we may expand the exponential in a power series: T a f ( x ) = ( I − a D + a 2 D 2 2 ! − a 3 D 3 3 ! + ⋯ ) f ( x ) . {\displaystyle T_{a}f(x)=\left(I-aD+{a^{2}D^{2} \over 2!}-{a^{3}D^{3} \over 3!}+\cdots \right)f(x).} The right-hand side may be rewritten as f ( x ) − a f ′ ( x ) + a 2 2 ! f ″ ( x ) − a 3 3 ! f ( 3 ) ( x ) + ⋯ {\displaystyle f(x)-af'(x)+{\frac {a^{2}}{2!}}f''(x)-{\frac {a^{3}}{3!}}f^{(3)}(x)+\cdots } which is just the Taylor expansion of f ( x − a ) {\displaystyle f(x-a)} , which was our original value for T a f ( x ) {\displaystyle T_{a}f(x)} . The mathematical properties of physical operators are a topic of great importance in itself. For further information, see C*-algebra and Gelfand–Naimark theorem. == Operators in quantum mechanics == The mathematical formulation of quantum mechanics (QM) is built upon the concept of an operator. Physical pure states in quantum mechanics are represented as unit-norm vectors (probabilities are normalized to one) in a special complex Hilbert space. Time evolution in this vector space is given by the application of the evolution operator. Any observable, i.e., any quantity which can be measured in a physical experiment, should be associated with a self-adjoint linear operator. The operators must yield real eigenvalues, since they are values which may come up as the result of the experiment. Mathematically this means the operators must be Hermitian. The probability of each eigenvalue is related to the projection of the physical state on the subspace related to that eigenvalue. See below for mathematical details about Hermitian operators. In the wave mechanics formulation of QM, the wavefunction varies with space and time, or equivalently momentum and time (see position and momentum space for details), so observables are differential operators. In the matrix mechanics formulation, the norm of the physical state should stay fixed, so the evolution operator should be unitary, and the operators can be represented as matrices. Any other symmetry, mapping a physical state into another, should keep this restriction. 
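As a concrete check of the exponential-map construction above, T a = exp ⁡ ( − a D ) {\displaystyle T_{a}=\exp(-aD)} can be applied to a polynomial, where the Taylor series terminates and the truncated sum is exact. The following sketch assumes the sympy library; the test function and its degree are illustrative choices, not fixed by the theory.

```python
# Check T_a f(x) = exp(-a D) f(x) for a cubic polynomial, where the
# exponential series of the derivative operator D terminates after four terms.
import sympy as sp

x, a = sp.symbols('x a')
f = x**3 + 2*x  # illustrative test function; any polynomial works

# exp(-a D) f = sum_k (-a)^k / k! * f^(k), truncated at the polynomial degree
shifted = sum((-a)**k / sp.factorial(k) * sp.diff(f, x, k) for k in range(4))

# The series reproduces the translated function f(x - a) exactly
assert sp.simplify(shifted - f.subs(x, x - a)) == 0
```

Applied to a non-polynomial function, the same truncation would only approximate f(x − a), with the error controlled by the neglected higher derivatives.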
=== Wavefunction === The wavefunction must be square-integrable (see Lp spaces), meaning: ∭ R 3 | ψ ( r ) | 2 d 3 r = ∭ R 3 ψ ( r ) ∗ ψ ( r ) d 3 r < ∞ {\displaystyle \iiint _{\mathbb {R} ^{3}}|\psi (\mathbf {r} )|^{2}\,d^{3}\mathbf {r} =\iiint _{\mathbb {R} ^{3}}\psi (\mathbf {r} )^{*}\psi (\mathbf {r} )\,d^{3}\mathbf {r} <\infty } and normalizable, so that: ∭ R 3 | ψ ( r ) | 2 d 3 r = 1 {\displaystyle \iiint _{\mathbb {R} ^{3}}|\psi (\mathbf {r} )|^{2}\,d^{3}\mathbf {r} =1} Two cases of eigenstates (and eigenvalues) are: for discrete eigenstates | ψ i ⟩ {\displaystyle |\psi _{i}\rangle } forming a discrete basis, so any state is a sum | ψ ⟩ = ∑ i c i | ϕ i ⟩ {\displaystyle |\psi \rangle =\sum _{i}c_{i}|\phi _{i}\rangle } where ci are complex numbers such that |ci|2 = ci*ci is the probability of measuring the state | ϕ i ⟩ {\displaystyle |\phi _{i}\rangle } , and the corresponding set of eigenvalues ai is also discrete - either finite or countably infinite. In this case, the inner product of two eigenstates is given by ⟨ ϕ i | ϕ j ⟩ = δ i j {\displaystyle \langle \phi _{i}\vert \phi _{j}\rangle =\delta _{ij}} , where δ m n {\displaystyle \delta _{mn}} denotes the Kronecker Delta. However, for a continuum of eigenstates forming a continuous basis, any state is an integral | ψ ⟩ = ∫ c ( ϕ ) d ϕ | ϕ ⟩ {\displaystyle |\psi \rangle =\int c(\phi )\,d\phi |\phi \rangle } where c(φ) is a complex function such that |c(φ)|2 = c(φ)*c(φ) is the probability of measuring the state | ϕ ⟩ {\displaystyle |\phi \rangle } , and there is an uncountably infinite set of eigenvalues a. In this case, the inner product of two eigenstates is defined as ⟨ ϕ ′ | ϕ ⟩ = δ ( ϕ − ϕ ′ ) {\displaystyle \langle \phi '\vert \phi \rangle =\delta (\phi -\phi ')} , where here δ ( x − y ) {\displaystyle \delta (x-y)} denotes the Dirac Delta. === Linear operators in wave mechanics === Let ψ be the wavefunction for a quantum system, and A ^ {\displaystyle {\hat {A}}} be any linear operator for some observable A (such as position, momentum, energy, angular momentum etc.). If ψ is an eigenfunction of the operator A ^ {\displaystyle {\hat {A}}} , then A ^ ψ = a ψ , {\displaystyle {\hat {A}}\psi =a\psi ,} where a is the eigenvalue of the operator, corresponding to the measured value of the observable, i.e. observable A has a measured value a. If ψ is an eigenfunction of a given operator A ^ {\displaystyle {\hat {A}}} , then a definite quantity (the eigenvalue a) will be observed if a measurement of the observable A is made on the state ψ. Conversely, if ψ is not an eigenfunction of A ^ {\displaystyle {\hat {A}}} , then it has no eigenvalue for A ^ {\displaystyle {\hat {A}}} , and the observable does not have a single definite value in that case. Instead, measurements of the observable A will yield each eigenvalue with a certain probability (related to the decomposition of ψ relative to the orthonormal eigenbasis of A ^ {\displaystyle {\hat {A}}} ). 
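As a small symbolic check of the eigenvalue equation just described, the sketch below verifies that a plane wave is an eigenfunction of the usual position-representation momentum operator −iℏ∂/∂x (tabulated later in this article), with eigenvalue ℏk. It assumes sympy; note that the plane wave is a continuum eigenstate in the sense of the previous subsection, so it is not square-integrable on the whole real line.

```python
# Plane wave psi(x) = exp(i k x) as an eigenfunction of -i*hbar*d/dx
import sympy as sp

x = sp.symbols('x', real=True)
k, hbar = sp.symbols('k hbar', real=True, positive=True)

psi = sp.exp(sp.I * k * x)              # continuum (non-normalizable) eigenstate
p_psi = -sp.I * hbar * sp.diff(psi, x)  # apply the momentum operator

assert sp.simplify(p_psi / psi) == hbar * k  # eigenvalue hbar*k
```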
In bra–ket notation the above can be written; A ^ ψ = A ^ ψ ( r ) = A ^ ⟨ r ∣ ψ ⟩ = ⟨ r | A ^ | ψ ⟩ a ψ = a ψ ( r ) = a ⟨ r ∣ ψ ⟩ = ⟨ r ∣ a ∣ ψ ⟩ {\displaystyle {\begin{aligned}{\hat {A}}\psi &={\hat {A}}\psi (\mathbf {r} )={\hat {A}}\left\langle \mathbf {r} \mid \psi \right\rangle =\left\langle \mathbf {r} \left\vert {\hat {A}}\right\vert \psi \right\rangle \\a\psi &=a\psi (\mathbf {r} )=a\left\langle \mathbf {r} \mid \psi \right\rangle =\left\langle \mathbf {r} \mid a\mid \psi \right\rangle \\\end{aligned}}} that are equal if | ψ ⟩ {\displaystyle \left|\psi \right\rangle } is an eigenvector, or eigenket of the observable A. Due to linearity, vectors can be defined in any number of dimensions, as each component of the vector acts on the function separately. One mathematical example is the del operator, which is itself a vector (useful in momentum-related quantum operators, in the table below). An operator in n-dimensional space can be written: A ^ = ∑ j = 1 n e j A ^ j {\displaystyle \mathbf {\hat {A}} =\sum _{j=1}^{n}\mathbf {e} _{j}{\hat {A}}_{j}} where ej are basis vectors corresponding to each component operator Aj. Each component will yield a corresponding eigenvalue a j {\displaystyle a_{j}} . Acting this on the wave function ψ: A ^ ψ = ( ∑ j = 1 n e j A ^ j ) ψ = ∑ j = 1 n ( e j A ^ j ψ ) = ∑ j = 1 n ( e j a j ψ ) {\displaystyle \mathbf {\hat {A}} \psi =\left(\sum _{j=1}^{n}\mathbf {e} _{j}{\hat {A}}_{j}\right)\psi =\sum _{j=1}^{n}\left(\mathbf {e} _{j}{\hat {A}}_{j}\psi \right)=\sum _{j=1}^{n}\left(\mathbf {e} _{j}a_{j}\psi \right)} in which we have used A ^ j ψ = a j ψ . {\displaystyle {\hat {A}}_{j}\psi =a_{j}\psi .} In bra–ket notation: A ^ ψ = A ^ ψ ( r ) = A ^ ⟨ r ∣ ψ ⟩ = ⟨ r | A ^ | ψ ⟩ ( ∑ j = 1 n e j A ^ j ) ψ = ( ∑ j = 1 n e j A ^ j ) ψ ( r ) = ( ∑ j = 1 n e j A ^ j ) ⟨ r ∣ ψ ⟩ = ⟨ r | ∑ j = 1 n e j A ^ j | ψ ⟩ {\displaystyle {\begin{aligned}\mathbf {\hat {A}} \psi =\mathbf {\hat {A}} \psi (\mathbf {r} )=\mathbf {\hat {A}} \left\langle \mathbf {r} \mid \psi \right\rangle &=\left\langle \mathbf {r} \left\vert \mathbf {\hat {A}} \right\vert \psi \right\rangle \\\left(\sum _{j=1}^{n}\mathbf {e} _{j}{\hat {A}}_{j}\right)\psi =\left(\sum _{j=1}^{n}\mathbf {e} _{j}{\hat {A}}_{j}\right)\psi (\mathbf {r} )=\left(\sum _{j=1}^{n}\mathbf {e} _{j}{\hat {A}}_{j}\right)\left\langle \mathbf {r} \mid \psi \right\rangle &=\left\langle \mathbf {r} \left\vert \sum _{j=1}^{n}\mathbf {e} _{j}{\hat {A}}_{j}\right\vert \psi \right\rangle \end{aligned}}} === Commutation of operators on Ψ === If two observables A and B have linear operators A ^ {\displaystyle {\hat {A}}} and B ^ {\displaystyle {\hat {B}}} , the commutator is defined by, [ A ^ , B ^ ] = A ^ B ^ − B ^ A ^ {\displaystyle \left[{\hat {A}},{\hat {B}}\right]={\hat {A}}{\hat {B}}-{\hat {B}}{\hat {A}}} The commutator is itself a (composite) operator. Acting the commutator on ψ gives: [ A ^ , B ^ ] ψ = A ^ B ^ ψ − B ^ A ^ ψ . {\displaystyle \left[{\hat {A}},{\hat {B}}\right]\psi ={\hat {A}}{\hat {B}}\psi -{\hat {B}}{\hat {A}}\psi .} If ψ is an eigenfunction with eigenvalues a and b for observables A and B respectively, and if the operators commute: [ A ^ , B ^ ] ψ = 0 , {\displaystyle \left[{\hat {A}},{\hat {B}}\right]\psi =0,} then the observables A and B can be measured simultaneously with infinite precision, i.e., uncertainties Δ A = 0 {\displaystyle \Delta A=0} , Δ B = 0 {\displaystyle \Delta B=0} simultaneously. ψ is then said to be the simultaneous eigenfunction of A and B. 
To illustrate this: [ A ^ , B ^ ] ψ = A ^ B ^ ψ − B ^ A ^ ψ = a ( b ψ ) − b ( a ψ ) = 0. {\displaystyle {\begin{aligned}\left[{\hat {A}},{\hat {B}}\right]\psi &={\hat {A}}{\hat {B}}\psi -{\hat {B}}{\hat {A}}\psi \\&=a(b\psi )-b(a\psi )\\&=0.\\\end{aligned}}} It shows that measurement of A and B does not cause any shift of state, i.e., the initial and final states are the same (no disturbance due to measurement). Suppose we measure A to get value a. We then measure B to get the value b. We measure A again. We still get the same value a. Clearly the state (ψ) of the system is not destroyed, and so we are able to measure A and B simultaneously with infinite precision. If the operators do not commute: [ A ^ , B ^ ] ψ ≠ 0 , {\displaystyle \left[{\hat {A}},{\hat {B}}\right]\psi \neq 0,} they cannot be prepared simultaneously to arbitrary precision, and there is an uncertainty relation between the observables Δ A Δ B ≥ | 1 2 ⟨ [ A , B ] ⟩ | {\displaystyle \Delta A\Delta B\geq \left|{\frac {1}{2}}\langle [A,B]\rangle \right|} This relation holds even if ψ is an eigenfunction of A or B. Notable pairs are the position–momentum and energy–time uncertainty relations, and the angular momenta (spin, orbital and total) about any two orthogonal axes (such as Lx and Ly, or sy and sz, etc.). === Expectation values of operators on Ψ === The expectation value (equivalently the average or mean value) is the average measurement of an observable, for a particle in region R. The expectation value ⟨ A ^ ⟩ {\displaystyle \left\langle {\hat {A}}\right\rangle } of the operator A ^ {\displaystyle {\hat {A}}} is calculated from: ⟨ A ^ ⟩ = ∫ R ψ ∗ ( r ) A ^ ψ ( r ) d 3 r = ⟨ ψ | A ^ | ψ ⟩ . {\displaystyle \left\langle {\hat {A}}\right\rangle =\int _{R}\psi ^{*}\left(\mathbf {r} \right){\hat {A}}\psi \left(\mathbf {r} \right)\mathrm {d} ^{3}\mathbf {r} =\left\langle \psi \left|{\hat {A}}\right|\psi \right\rangle .} This can be generalized to any function F of an operator: ⟨ F ( A ^ ) ⟩ = ∫ R ψ ( r ) ∗ [ F ( A ^ ) ψ ( r ) ] d 3 r = ⟨ ψ | F ( A ^ ) | ψ ⟩ , {\displaystyle \left\langle F\left({\hat {A}}\right)\right\rangle =\int _{R}\psi (\mathbf {r} )^{*}\left[F\left({\hat {A}}\right)\psi (\mathbf {r} )\right]\mathrm {d} ^{3}\mathbf {r} =\left\langle \psi \left|F\left({\hat {A}}\right)\right|\psi \right\rangle ,} An example of F is the 2-fold action of A on ψ, i.e. squaring an operator, or applying it twice: F ( A ^ ) = A ^ 2 ⇒ ⟨ A ^ 2 ⟩ = ∫ R ψ ∗ ( r ) A ^ 2 ψ ( r ) d 3 r = ⟨ ψ | A ^ 2 | ψ ⟩ {\displaystyle {\begin{aligned}F\left({\hat {A}}\right)&={\hat {A}}^{2}\\\Rightarrow \left\langle {\hat {A}}^{2}\right\rangle &=\int _{R}\psi ^{*}\left(\mathbf {r} \right){\hat {A}}^{2}\psi \left(\mathbf {r} \right)\mathrm {d} ^{3}\mathbf {r} =\left\langle \psi \left\vert {\hat {A}}^{2}\right\vert \psi \right\rangle \\\end{aligned}}\,\!} === Hermitian operators === The definition of a Hermitian operator is: A ^ = A ^ † {\displaystyle {\hat {A}}={\hat {A}}^{\dagger }} Following from this, in bra–ket notation: ⟨ ϕ i | A ^ | ϕ j ⟩ = ⟨ ϕ j | A ^ | ϕ i ⟩ ∗ . {\displaystyle \left\langle \phi _{i}\left|{\hat {A}}\right|\phi _{j}\right\rangle =\left\langle \phi _{j}\left|{\hat {A}}\right|\phi _{i}\right\rangle ^{*}.} Important properties of Hermitian operators include: the eigenvalues are real; eigenvectors with different eigenvalues are orthogonal; and the eigenvectors can be chosen to form a complete orthonormal basis. === Operators in matrix mechanics === An operator can be written in matrix form to map one basis vector to another, as the numerical sketch below illustrates.
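To make the matrix picture concrete, the following sketch discretizes position space and builds a Hermitian matrix for an illustrative observable (a harmonic-oscillator-style Hamiltonian with ℏ = m = ω = 1), then verifies the Hermitian-operator properties listed above. The grid parameters and the choice of observable are assumptions made for illustration; numpy is assumed to be available.

```python
# Operators as matrices: discretize x, build a Hermitian Hamiltonian,
# and verify real eigenvalues and an orthonormal eigenbasis numerically.
import numpy as np

n = 200
x = np.linspace(-10.0, 10.0, n)
dx = x[1] - x[0]

# Symmetric second-difference Laplacian (Hermitian by construction)
lap = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n)
       + np.diag(np.ones(n - 1), 1)) / dx**2
H = -0.5 * lap + 0.5 * np.diag(x**2)  # H = p^2/2 + x^2/2 on the grid

assert np.allclose(H, H.conj().T)  # H equals its Hermitian conjugate

evals, evecs = np.linalg.eigh(H)   # eigh is specialized to Hermitian matrices
print(evals[:4])                   # approximately 0.5, 1.5, 2.5, 3.5

# Eigenvectors belonging to different eigenvalues are orthogonal;
# eigh returns an orthonormal set, so V^T V is the identity matrix.
assert np.allclose(evecs.T @ evecs, np.eye(n))
```

The real spectrum and the orthonormal eigenvectors seen here are exactly the Hermitian-operator properties of the previous subsection, realized at the level of a finite matrix.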
Since the operators are linear, the matrix is a linear transformation (also known as a transition matrix) between bases. Each basis element ϕ j {\displaystyle \phi _{j}} can be connected to another by the expression: A i j = ⟨ ϕ i | A ^ | ϕ j ⟩ , {\displaystyle A_{ij}=\left\langle \phi _{i}\left|{\hat {A}}\right|\phi _{j}\right\rangle ,} which is a matrix element: A ^ = ( A 11 A 12 ⋯ A 1 n A 21 A 22 ⋯ A 2 n ⋮ ⋮ ⋱ ⋮ A n 1 A n 2 ⋯ A n n ) {\displaystyle {\hat {A}}={\begin{pmatrix}A_{11}&A_{12}&\cdots &A_{1n}\\A_{21}&A_{22}&\cdots &A_{2n}\\\vdots &\vdots &\ddots &\vdots \\A_{n1}&A_{n2}&\cdots &A_{nn}\\\end{pmatrix}}} A further property of a Hermitian operator is that eigenfunctions corresponding to different eigenvalues are orthogonal. In matrix form, operators allow real eigenvalues to be found, corresponding to measurements. Orthogonality allows a suitable basis set of vectors to represent the state of the quantum system. The eigenvalues of the operator are evaluated in the same way as for a square matrix, by solving the characteristic polynomial: det ( A ^ − a I ^ ) = 0 , {\displaystyle \det \left({\hat {A}}-a{\hat {I}}\right)=0,} where I is the n × n identity matrix; as an operator it corresponds to the identity operator. For a discrete basis: I ^ = ∑ i | ϕ i ⟩ ⟨ ϕ i | {\displaystyle {\hat {I}}=\sum _{i}|\phi _{i}\rangle \langle \phi _{i}|} while for a continuous basis: I ^ = ∫ | ϕ ⟩ ⟨ ϕ | d ϕ {\displaystyle {\hat {I}}=\int |\phi \rangle \langle \phi |\mathrm {d} \phi } === Inverse of an operator === A non-singular operator A ^ {\displaystyle {\hat {A}}} has an inverse A ^ − 1 {\displaystyle {\hat {A}}^{-1}} defined by: A ^ A ^ − 1 = A ^ − 1 A ^ = I ^ {\displaystyle {\hat {A}}{\hat {A}}^{-1}={\hat {A}}^{-1}{\hat {A}}={\hat {I}}} If an operator has no inverse, it is a singular operator. In a finite-dimensional space, an operator is non-singular if and only if its determinant is nonzero: det ( A ^ ) ≠ 0 {\displaystyle \det \left({\hat {A}}\right)\neq 0} and hence the determinant is zero for a singular operator. === Table of Quantum Mechanics operators === The operators used in quantum mechanics are collected in the table below. The bold-face vectors with circumflexes are not unit vectors; they are 3-vector operators, all three spatial components taken together. === Examples of applying quantum operators === The procedure for extracting information from a wave function is as follows. Consider the momentum p of a particle as an example. The momentum operator in position basis in one dimension is: p ^ = − i ℏ ∂ ∂ x {\displaystyle {\hat {p}}=-i\hbar {\frac {\partial }{\partial x}}} Letting this act on ψ we obtain: p ^ ψ = − i ℏ ∂ ∂ x ψ , {\displaystyle {\hat {p}}\psi =-i\hbar {\frac {\partial }{\partial x}}\psi ,} if ψ is an eigenfunction of p ^ {\displaystyle {\hat {p}}} , then the momentum eigenvalue p is the value of the particle's momentum, found by: − i ℏ ∂ ∂ x ψ = p ψ . {\displaystyle -i\hbar {\frac {\partial }{\partial x}}\psi =p\psi .} For three dimensions the momentum operator uses the nabla operator to become: p ^ = − i ℏ ∇ .
{\displaystyle \mathbf {\hat {p}} =-i\hbar \nabla .} In Cartesian coordinates (using the standard Cartesian basis vectors ex, ey, ez) this can be written; e x p ^ x + e y p ^ y + e z p ^ z = − i ℏ ( e x ∂ ∂ x + e y ∂ ∂ y + e z ∂ ∂ z ) , {\displaystyle \mathbf {e} _{\mathrm {x} }{\hat {p}}_{x}+\mathbf {e} _{\mathrm {y} }{\hat {p}}_{y}+\mathbf {e} _{\mathrm {z} }{\hat {p}}_{z}=-i\hbar \left(\mathbf {e} _{\mathrm {x} }{\frac {\partial }{\partial x}}+\mathbf {e} _{\mathrm {y} }{\frac {\partial }{\partial y}}+\mathbf {e} _{\mathrm {z} }{\frac {\partial }{\partial z}}\right),} that is: p ^ x = − i ℏ ∂ ∂ x , p ^ y = − i ℏ ∂ ∂ y , p ^ z = − i ℏ ∂ ∂ z {\displaystyle {\hat {p}}_{x}=-i\hbar {\frac {\partial }{\partial x}},\quad {\hat {p}}_{y}=-i\hbar {\frac {\partial }{\partial y}},\quad {\hat {p}}_{z}=-i\hbar {\frac {\partial }{\partial z}}\,\!} The process of finding eigenvalues is the same. Since this is a vector and operator equation, if ψ is an eigenfunction, then each component of the momentum operator will have an eigenvalue corresponding to that component of momentum. Acting p ^ {\displaystyle \mathbf {\hat {p}} } on ψ obtains: p ^ x ψ = − i ℏ ∂ ∂ x ψ = p x ψ p ^ y ψ = − i ℏ ∂ ∂ y ψ = p y ψ p ^ z ψ = − i ℏ ∂ ∂ z ψ = p z ψ {\displaystyle {\begin{aligned}{\hat {p}}_{x}\psi &=-i\hbar {\frac {\partial }{\partial x}}\psi =p_{x}\psi \\{\hat {p}}_{y}\psi &=-i\hbar {\frac {\partial }{\partial y}}\psi =p_{y}\psi \\{\hat {p}}_{z}\psi &=-i\hbar {\frac {\partial }{\partial z}}\psi =p_{z}\psi \\\end{aligned}}\,\!} == See also == == References ==
Wikipedia/Operator_(physics)
Introduction to Quantum Mechanics, often called Griffiths, is an introductory textbook on quantum mechanics by David J. Griffiths. The book is considered a standard undergraduate textbook in the subject. Originally published by Pearson Education in 1995 with a second edition in 2005, Cambridge University Press (CUP) reprinted the second edition in 2017. In 2018, CUP released a third edition of the book with Darrell F. Schroeter as co-author; this edition is known as Griffiths and Schroeter. == Content (3rd edition) == Part I: Theory Chapter 1: The Wave Function Chapter 2: Time-independent Schrödinger Equation Chapter 3: Formalism Chapter 4: Quantum Mechanics in Three Dimensions Chapter 5: Identical Particles Chapter 6: Symmetries and Conservation Laws Part II: Applications Chapter 7: Time-independent Perturbation Theory Chapter 8: The Variational Principle Chapter 9: The WKB Approximation Chapter 10: Scattering Chapter 11: Quantum Dynamics Chapter 12: Afterword Appendix: Linear Algebra Index == Reception == The book was reviewed by John R. Taylor, among others. It has also been recommended in other, more advanced, textbooks on the subject. According to physicists Yoni Kahn of Princeton University and Adam Anderson of the Fermi National Accelerator Laboratory, Griffiths' Introduction to Quantum Mechanics covers all materials needed for questions on quantum mechanics and atomic physics in the Physics Graduate Record Examinations (Physics GRE). == Publication history == Griffiths, David J. (1995). Introduction to quantum mechanics (1st ed.). New Jersey: Prentice Hall. ISBN 978-0-13-124405-4. OCLC 984428006. Griffiths, David J. (2005). Introduction to quantum mechanics (2nd ed.). Upper Saddle River, NJ: Pearson Prentice Hall. ISBN 0-13-111892-7. OCLC 53926857. Griffiths, David J. (2017). Introduction to quantum mechanics (2nd ed.). Cambridge: Cambridge University Press. ISBN 978-1-107-17986-8. OCLC 952389109. Griffiths, David J.; Schroeter, Darrell F. (2018). Introduction to quantum mechanics (3rd ed.). Cambridge: Cambridge University Press. ISBN 978-1-107-18963-8. OCLC 1030447903. == See also == Introduction to Electrodynamics by the same author List of textbooks in electromagnetism List of textbooks on classical mechanics and quantum mechanics == References ==
Wikipedia/Introduction_to_Quantum_Mechanics_(book)
In physics, a spin network is a type of diagram which can be used to represent states and interactions between particles and fields in quantum mechanics. From a mathematical perspective, the diagrams are a concise way to represent multilinear functions and functions between representations of matrix groups. The diagrammatic notation can thus greatly simplify calculations. Roger Penrose described spin networks in 1971. Spin networks have since been applied to the theory of quantum gravity by Carlo Rovelli, Lee Smolin, Jorge Pullin, Rodolfo Gambini and others. Spin networks can also be used to construct a particular functional on the space of connections which is invariant under local gauge transformations. == Definition == === Penrose's definition === A spin network, as described in Penrose (1971), is a kind of diagram in which each line segment represents the world line of a "unit" (either an elementary particle or a compound system of particles). Three line segments join at each vertex. A vertex may be interpreted as an event in which either a single unit splits into two or two units collide and join into a single unit. Diagrams whose line segments are all joined at vertices are called closed spin networks. Time may be viewed as going in one direction, such as from the bottom to the top of the diagram, but for closed spin networks the direction of time is irrelevant to calculations. Each line segment is labelled with an integer called a spin number. A unit with spin number n is called an n-unit and has angular momentum nħ/2, where ħ is the reduced Planck constant. For bosons, such as photons and gluons, n is an even number. For fermions, such as electrons and quarks, n is odd. Given any closed spin network, a non-negative integer can be calculated which is called the norm of the spin network. Norms can be used to calculate the probabilities of various spin values. A network whose norm is zero has zero probability of occurrence. The rules for calculating norms and probabilities are beyond the scope of this article. However, they imply that for a spin network to have nonzero norm, two requirements must be met at each vertex. Suppose a vertex joins three units with spin numbers a, b, and c. Then, these requirements are stated as: Triangle inequality: a ≤ b + c and b ≤ a + c and c ≤ a + b. Fermion conservation: a + b + c must be an even number. For example, a = 3, b = 4, c = 6 is impossible since 3 + 4 + 6 = 13 is odd, and a = 3, b = 4, c = 9 is impossible since 9 > 3 + 4. However, a = 3, b = 4, c = 5 is possible since 3 + 4 + 5 = 12 is even, and the triangle inequality is satisfied. Some conventions use labellings by half-integers, with the condition that the sum a + b + c must be a whole number. === Formal approach to definition === Formally, a spin network may be defined as a (directed) graph whose edges are associated with irreducible representations of a compact Lie group and whose vertices are associated with intertwiners of the edge representations adjacent to it. === Properties === A spin network, immersed into a manifold, can be used to define a functional on the space of connections on this manifold. One computes holonomies of the connection along every link (closed path) of the graph, determines representation matrices corresponding to every link, multiplies all matrices and intertwiners together, and contracts indices in a prescribed way. A remarkable feature of the resulting functional is that it is invariant under local gauge transformations. 
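The two vertex conditions above translate directly into code. The following sketch (the function name is an illustrative choice) reproduces the article's own examples:

```python
# Admissibility of a trivalent vertex joining spin numbers a, b, c:
# the triangle inequality must hold, and a + b + c must be even.
def vertex_allowed(a: int, b: int, c: int) -> bool:
    triangle = a <= b + c and b <= a + c and c <= a + b
    even_sum = (a + b + c) % 2 == 0
    return triangle and even_sum

assert not vertex_allowed(3, 4, 6)  # 3 + 4 + 6 = 13 is odd
assert not vertex_allowed(3, 4, 9)  # fails the triangle inequality: 9 > 3 + 4
assert vertex_allowed(3, 4, 5)      # even sum, triangle inequality satisfied
```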
== Usage in physics == === In the context of loop quantum gravity === In loop quantum gravity (LQG), a spin network represents a "quantum state" of the gravitational field on a 3-dimensional hypersurface. The set of all possible spin networks (or, more accurately, "s-knots" – that is, equivalence classes of spin networks under diffeomorphisms) is countable; it constitutes a basis of LQG Hilbert space. One of the key results of loop quantum gravity is quantization of areas: the operator of the area A of a two-dimensional surface Σ should have a discrete spectrum. Every spin network is an eigenstate of each such operator, and the area eigenvalue equals A Σ = 8 π ℓ PL 2 γ ∑ i j i ( j i + 1 ) {\displaystyle A_{\Sigma }=8\pi \ell _{\text{PL}}^{2}\gamma \sum _{i}{\sqrt {j_{i}(j_{i}+1)}}} where the sum goes over all intersections i of Σ with the spin network. In this formula, ℓPL is the Planck length, γ {\displaystyle \gamma } is the Immirzi parameter and ji = 0, 1/2, 1, 3/2, ... is the spin associated with the link i of the spin network. The two-dimensional area is therefore "concentrated" in the intersections with the spin network. According to this formula, the lowest possible non-zero eigenvalue of the area operator corresponds to a link that carries spin 1/2 representation. Assuming an Immirzi parameter on the order of 1, this gives the smallest possible measurable area of ~10−66 cm2. The formula for area eigenvalues becomes somewhat more complicated if the surface is allowed to pass through the vertices, as with anomalous diffusion models. Also, the eigenvalues of the area operator A are constrained by ladder symmetry. Similar quantization applies to the volume operator. The volume of a 3D submanifold that contains part of a spin network is given by a sum of contributions from each node inside it. One can think that every node in a spin network is an elementary "quantum of volume" and every link is a "quantum of area" surrounding this volume. === More general gauge theories === Similar constructions can be made for general gauge theories with a compact Lie group G and a connection form. This is actually an exact duality over a lattice. Over a manifold however, assumptions like diffeomorphism invariance are needed to make the duality exact (smearing Wilson loops is tricky). Later, it was generalized by Robert Oeckl to representations of quantum groups in 2 and 3 dimensions using the Tannaka–Krein duality. Michael A. Levin and Xiao-Gang Wen have also defined string-nets using tensor categories that are objects very similar to spin networks. However the exact connection with spin networks is not clear yet. String-net condensation produces topologically ordered states in condensed matter. == Usage in mathematics == In mathematics, spin networks have been used to study skein modules and character varieties, which correspond to spaces of connections. == See also == Spin connection Spin structure Character variety Penrose graphical notation Spin foam String-net Trace diagram Tensor network == References == == Further reading == === Early papers === I. B. Levinson, "Sum of Wigner coefficients and their graphical representation," Proceed. Phys-Tech Inst. Acad Sci. Lithuanian SSR 2, 17-30 (1956) Kogut, John; Susskind, Leonard (1975). "Hamiltonian formulation of Wilson's lattice gauge theories". Physical Review D. 11 (2): 395–408. Bibcode:1975PhRvD..11..395K. doi:10.1103/PhysRevD.11.395. Kogut, John B. (1983). "The lattice gauge theory approach to quantum chromodynamics". Reviews of Modern Physics. 
55 (3): 775–836. Bibcode:1983RvMP...55..775K. doi:10.1103/RevModPhys.55.775. (see the Euclidean high temperature (strong coupling) section) Savit, Robert (1980). "Duality in field theory and statistical systems". Reviews of Modern Physics. 52 (2): 453–487. Bibcode:1980RvMP...52..453S. doi:10.1103/RevModPhys.52.453. (see the sections on Abelian gauge theories) === Modern papers === Rovelli, Carlo; Smolin, Lee (1995). "Spin networks and quantum gravity". Phys. Rev. D. 52 (10): 5743–5759. arXiv:gr-qc/9505006. Bibcode:1995PhRvD..52.5743R. doi:10.1103/PhysRevD.52.5743. PMID 10019107. S2CID 16116269. Pfeiffer, Hendryk; Oeckl, Robert (2002). "The dual of non-Abelian Lattice Gauge Theory". Nuclear Physics B - Proceedings Supplements. 106–107: 1010–1012. arXiv:hep-lat/0110034. Bibcode:2002NuPhS.106.1010P. doi:10.1016/S0920-5632(01)01913-2. S2CID 14925121. Pfeiffer, Hendryk (2003). "Exact duality transformations for sigma models and gauge theories". Journal of Mathematical Physics. 44 (7): 2891–2938. arXiv:hep-lat/0205013. Bibcode:2003JMP....44.2891P. doi:10.1063/1.1580071. S2CID 15580641. Oeckl, Robert (2003). "Generalized lattice gauge theory, spin foams and state sum invariants". Journal of Geometry and Physics. 46 (3–4): 308–354. arXiv:hep-th/0110259. Bibcode:2003JGP....46..308O. doi:10.1016/S0393-0440(02)00148-1. S2CID 13226932. Baez, John C. (1996). "Spin Networks in Gauge Theory". Advances in Mathematics. 117 (2): 253–272. arXiv:gr-qc/9411007. doi:10.1006/aima.1996.0012. S2CID 17050932. Xiao-Gang Wen, "Quantum Field Theory of Many-body Systems – from the Origin of Sound to an Origin of Light and Fermions," [1]. (Dubbed string-nets here.) Major, Seth A. (1999). "A spin network primer". American Journal of Physics. 67 (11): 972–980. arXiv:gr-qc/9905020. Bibcode:1999AmJPh..67..972M. doi:10.1119/1.19175. S2CID 9188101. === Books === G. E. Stedman, Diagram Techniques in Group Theory, Cambridge University Press, 1990. Predrag Cvitanović, Group Theory: Birdtracks, Lie's, and Exceptional Groups, Princeton University Press, 2008.
Wikipedia/Spin_network
In mathematics, a category (sometimes called an abstract category to distinguish it from a concrete category) is a collection of "objects" that are linked by "arrows". A category has two basic properties: the ability to compose the arrows associatively and the existence of an identity arrow for each object. A simple example is the category of sets, whose objects are sets and whose arrows are functions. Category theory is a branch of mathematics that seeks to generalize all of mathematics in terms of categories, independent of what their objects and arrows represent. Virtually every branch of modern mathematics can be described in terms of categories, and doing so often reveals deep insights and similarities between seemingly different areas of mathematics. As such, category theory provides an alternative foundation for mathematics to set theory and other proposed axiomatic foundations. In general, the objects and arrows may be abstract entities of any kind, and the notion of category provides a fundamental and abstract way to describe mathematical entities and their relationships. In addition to formalizing mathematics, category theory is also used to formalize many other systems in computer science, such as the semantics of programming languages. Two categories are the same if they have the same collection of objects, the same collection of arrows, and the same associative method of composing any pair of arrows. Two different categories may also be considered "equivalent" for purposes of category theory, even if they do not have precisely the same structure. Well-known categories are denoted by a short capitalized word or abbreviation in bold or italics: examples include Set, the category of sets and set functions; Ring, the category of rings and ring homomorphisms; and Top, the category of topological spaces and continuous maps. All of the preceding categories have the identity map as identity arrows and composition as the associative operation on arrows. The classic and still much-used text on category theory is Categories for the Working Mathematician by Saunders Mac Lane. Other references are given in the References below. The basic definitions in this article are contained within the first few chapters of any of these books. Any monoid can be understood as a special sort of category (with a single object whose self-morphisms are represented by the elements of the monoid), and so can any preorder. == Definition == There are many equivalent definitions of a category. One commonly used definition is as follows. A category C consists of a class ob(C) of objects, a class mor(C) of morphisms or arrows, a domain or source class function dom: mor(C) → ob(C), a codomain or target class function cod: mor(C) → ob(C), and, for every three objects a, b and c, a binary operation hom(a, b) × hom(b, c) → hom(a, c) called composition of morphisms. Here hom(a, b) denotes the subclass of morphisms f in mor(C) such that dom(f) = a and cod(f) = b. Morphisms in this subclass are written f : a → b, and the composite of f : a → b and g : b → c is often written as g ∘ f or gf. These data are required to satisfy the following axioms: the associative law: if f : a → b, g : b → c and h : c → d then h ∘ (g ∘ f) = (h ∘ g) ∘ f; and the left and right unit laws: for every object x, there exists a morphism 1x : x → x (some authors write idx) called the identity morphism for x, such that every morphism f : a → x satisfies 1x ∘ f = f, and every morphism g : x → b satisfies g ∘ 1x = g.
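The definition can be made concrete with a very small example. The sketch below (all names are illustrative; this is not a standard library API) encodes a two-object category with a single non-identity arrow and checks the unit laws on every composable pair:

```python
# A finite category: objects, morphisms as (name, dom, cod) triples,
# and a composition rule satisfying the unit laws.
from itertools import product

objects = {"a", "b"}
morphisms = {("id_a", "a", "a"), ("id_b", "b", "b"), ("f", "a", "b")}

def compose(g, f):
    """Composite g . f, defined only when cod(f) == dom(g)."""
    assert f[2] == g[1], "not composable"
    if f[0].startswith("id_"):
        return g       # right unit law: g . id = g
    if g[0].startswith("id_"):
        return f       # left unit law: id . f = f
    raise NotImplementedError("no non-identity composites in this example")

# Every composable pair here involves an identity, so composition is total
for f, g in product(morphisms, repeat=2):
    if f[2] == g[1]:
        assert compose(g, f) in morphisms
```

Associativity is immediate in this example, since every composable triple contains at least two identity morphisms; a larger example would need an explicit composition table.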
We write f: a → b, and we say "f is a morphism from a to b". We write hom(a, b) (or homC(a, b) when there may be confusion about to which category hom(a, b) refers) to denote the hom-class of all morphisms from a to b. Some authors write the composite of morphisms in "diagrammatic order", writing f;g or fg instead of g ∘ f. From these axioms, one can prove that there is exactly one identity morphism for every object. Often the map assigning each object its identity morphism is treated as an extra part of the structure of a category, namely a class function i: ob(C) → mor(C). Some authors use a slight variant of the definition in which each object is identified with the corresponding identity morphism. This stems from the idea that the fundamental data of categories are morphisms and not objects. In fact, categories can be defined without reference to objects at all using a partial binary operation with additional properties. == Small and large categories == A category C is called small if both ob(C) and mor(C) are actually sets and not proper classes, and large otherwise. A locally small category is a category such that for all objects a and b, the hom-class hom(a, b) is a set, called a homset. Many important categories in mathematics (such as the category of sets), although not small, are at least locally small. Since, in small categories, the objects form a set, a small category can be viewed as an algebraic structure similar to a monoid but without requiring closure properties. Large categories on the other hand can be used to create "structures" of algebraic structures. == Examples == The class of all sets (as objects) together with all functions between them (as morphisms), where the composition of morphisms is the usual function composition, forms a large category, Set. It is the most basic and the most commonly used category in mathematics. The category Rel consists of all sets (as objects) with binary relations between them (as morphisms). Abstracting from relations instead of functions yields allegories, a special class of categories. Any class can be viewed as a category whose only morphisms are the identity morphisms. Such categories are called discrete. For any given set I, the discrete category on I is the small category that has the elements of I as objects and only the identity morphisms as morphisms. Discrete categories are the simplest kind of category. Any preordered set (P, ≤) forms a small category, where the objects are the members of P, the morphisms are arrows pointing from x to y when x ≤ y. Furthermore, if ≤ is antisymmetric, there can be at most one morphism between any two objects. The existence of identity morphisms and the composability of the morphisms are guaranteed by the reflexivity and the transitivity of the preorder. By the same argument, any partially ordered set and any equivalence relation can be seen as a small category. Any ordinal number can be seen as a category when viewed as an ordered set. Any monoid (any algebraic structure with a single associative binary operation and an identity element) forms a small category with a single object x. (Here, x is any fixed set.) The morphisms from x to x are precisely the elements of the monoid, the identity morphism of x is the identity of the monoid, and the categorical composition of morphisms is given by the monoid operation. Several definitions and theorems about monoids may be generalized for categories. 
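Continuing the monoid example just given, the one-object category can be spelled out explicitly. In the sketch below the monoid is taken to be strings under concatenation with the empty string as identity, an illustrative stand-in for an arbitrary monoid:

```python
# A monoid as a one-object category: the object is a dummy token, morphisms
# are the monoid elements, and composition is the monoid operation.
OBJ = "*"  # the single object

def dom(f: str) -> str:
    return OBJ  # every morphism starts at the unique object

def cod(f: str) -> str:
    return OBJ  # ... and ends there too

identity = ""  # the empty string: the identity morphism of OBJ

def compose(g: str, f: str) -> str:
    return f + g  # categorical g . f means "first f, then g"

# Unit and associative laws follow from those of string concatenation
f, g, h = "ab", "c", "dd"
assert compose(f, identity) == compose(identity, f) == f
assert compose(h, compose(g, f)) == compose(compose(h, g), f) == "abcdd"
```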
Similarly any group can be seen as a category with a single object in which every morphism is invertible, that is, for every morphism f there is a morphism g that is both left and right inverse to f under composition. A morphism that is invertible in this sense is called an isomorphism. A groupoid is a category in which every morphism is an isomorphism. Groupoids are generalizations of groups, group actions and equivalence relations. In the categorical view, the only difference between a groupoid and a group is that a groupoid may have more than one object, whereas a group must have exactly one. Consider a topological space X and fix a base point x 0 {\displaystyle x_{0}} of X. Then π 1 ( X , x 0 ) {\displaystyle \pi _{1}(X,x_{0})} is the fundamental group of the topological space X at the base point x 0 {\displaystyle x_{0}} , and as a set it has the structure of a group. If the base point x 0 {\displaystyle x_{0}} is allowed to run over all points of X, and the union of all π 1 ( X , x 0 ) {\displaystyle \pi _{1}(X,x_{0})} is taken, then the resulting set has only the structure of a groupoid, called the fundamental groupoid of X: two loops (under the equivalence relation of homotopy) need not have the same base point, so they cannot be multiplied with each other. In the language of categories, this means that two morphisms need not have the same source object (or, equivalently, target object, since for any such morphism the source and the target coincide: the base point), and so they cannot be composed with each other. Any directed graph generates a small category: the objects are the vertices of the graph, and the morphisms are the paths in the graph (augmented with loops as needed), where composition of morphisms is concatenation of paths. Such a category is called the free category generated by the graph. The class of all preordered sets with order-preserving functions (i.e., monotone-increasing functions) as morphisms forms a category, Ord. It is a concrete category, i.e. a category obtained by adding some type of structure onto Set, and requiring that morphisms are functions that respect this added structure. The class of all groups with group homomorphisms as morphisms and function composition as the composition operation forms a large category, Grp. Like Ord, Grp is a concrete category. The category Ab, consisting of all abelian groups and their group homomorphisms, is a full subcategory of Grp, and the prototype of an abelian category. The class of all graphs forms another concrete category, where morphisms are graph homomorphisms (i.e., mappings between graphs which send vertices to vertices and edges to edges in a way that preserves all adjacency and incidence relations). Other examples of concrete categories are given by the following table. Fiber bundles with bundle maps between them form a concrete category. The category Cat consists of all small categories, with functors between them as morphisms. == Construction of new categories == === Dual category === Any category C can itself be considered as a new category in a different way: the objects are the same as those in the original category but the arrows are those of the original category reversed. This is called the dual or opposite category and is denoted Cop. === Product categories === If C and D are categories, one can form the product category C × D: the objects are pairs consisting of one object from C and one from D, and the morphisms are also pairs, consisting of one morphism in C and one in D.
Such pairs can be composed componentwise. == Types of morphisms == A morphism f : a → b is called a monomorphism (or monic) if it is left-cancellable, i.e. fg1 = fg2 implies g1 = g2 for all morphisms g1, g2 : x → a. an epimorphism (or epic) if it is right-cancellable, i.e. g1f = g2f implies g1 = g2 for all morphisms g1, g2 : b → x. a bimorphism if it is both a monomorphism and an epimorphism. a retraction if it has a right inverse, i.e. if there exists a morphism g : b → a with fg = 1b. a section if it has a left inverse, i.e. if there exists a morphism g : b → a with gf = 1a. an isomorphism if it has an inverse, i.e. if there exists a morphism g : b → a with fg = 1b and gf = 1a. an endomorphism if a = b. The class of endomorphisms of a is denoted end(a). For locally small categories, end(a) is a set and forms a monoid under morphism composition. an automorphism if f is both an endomorphism and an isomorphism. The class of automorphisms of a is denoted aut(a). For locally small categories, it forms a group under morphism composition called the automorphism group of a. Every retraction is an epimorphism. Every section is a monomorphism. The following three statements are equivalent: f is a monomorphism and a retraction; f is an epimorphism and a section; f is an isomorphism. Relations among morphisms (such as fg = h) can most conveniently be represented with commutative diagrams, where the objects are represented as points and the morphisms as arrows. == Types of categories == In many categories, e.g. Ab or VectK, the hom-sets hom(a, b) are not just sets but actually abelian groups, and the composition of morphisms is compatible with these group structures; i.e. is bilinear. Such a category is called preadditive. If, furthermore, the category has all finite products and coproducts, it is called an additive category. If all morphisms have a kernel and a cokernel, and all epimorphisms are cokernels and all monomorphisms are kernels, then we speak of an abelian category. A typical example of an abelian category is the category of abelian groups. A category is called complete if all small limits exist in it. The categories of sets, abelian groups and topological spaces are complete. A category is called cartesian closed if it has finite direct products and a morphism defined on a finite product can always be represented by a morphism defined on just one of the factors. Examples include Set and CPO, the category of complete partial orders with Scott-continuous functions. A topos is a certain type of cartesian closed category in which all of mathematics can be formulated (just like classically all of mathematics is formulated in the category of sets). A topos can also be used to represent a logical theory. == See also == Enriched category Higher category theory Quantaloid Table of mathematical symbols Space (mathematics) Structure (mathematics) == Notes == == References ==
Wikipedia/Category_(category_theory)
In mathematics, and more specifically in homological algebra, a resolution (or left resolution; dually a coresolution or right resolution) is an exact sequence of modules (or, more generally, of objects of an abelian category) that is used to define invariants characterizing the structure of a specific module or object of this category. When, as usual, arrows are oriented to the right, the sequence is supposed to be infinite to the left for (left) resolutions, and to the right for right resolutions. A finite resolution is one in which only finitely many of the objects in the sequence are non-zero; it is usually represented by a finite exact sequence in which the leftmost object (for resolutions) or the rightmost object (for coresolutions) is the zero object. Generally, the objects in the sequence are restricted to have some property P (for example, to be free). Thus one speaks of a P resolution. In particular, every module has free resolutions, projective resolutions and flat resolutions, which are left resolutions consisting, respectively, of free modules, projective modules, or flat modules. Similarly, every module has injective resolutions, which are right resolutions consisting of injective modules. == Resolutions of modules == === Definitions === Given a module M over a ring R, a left resolution (or simply resolution) of M is an exact sequence (possibly infinite) of R-modules ⋯ ⟶ d n + 1 E n ⟶ d n ⋯ ⟶ d 3 E 2 ⟶ d 2 E 1 ⟶ d 1 E 0 ⟶ ε M ⟶ 0. {\displaystyle \cdots {\overset {d_{n+1}}{\longrightarrow }}E_{n}{\overset {d_{n}}{\longrightarrow }}\cdots {\overset {d_{3}}{\longrightarrow }}E_{2}{\overset {d_{2}}{\longrightarrow }}E_{1}{\overset {d_{1}}{\longrightarrow }}E_{0}{\overset {\varepsilon }{\longrightarrow }}M\longrightarrow 0.} The homomorphisms di are called boundary maps. The map ε is called an augmentation map. For succinctness, the resolution above can be written as E ∙ ⟶ ε M ⟶ 0. {\displaystyle E_{\bullet }{\overset {\varepsilon }{\longrightarrow }}M\longrightarrow 0.} The dual notion is that of a right resolution (or coresolution, or simply resolution). Specifically, given a module M over a ring R, a right resolution is a possibly infinite exact sequence of R-modules 0 ⟶ M ⟶ ε C 0 ⟶ d 0 C 1 ⟶ d 1 C 2 ⟶ d 2 ⋯ ⟶ d n − 1 C n ⟶ d n ⋯ , {\displaystyle 0\longrightarrow M{\overset {\varepsilon }{\longrightarrow }}C^{0}{\overset {d^{0}}{\longrightarrow }}C^{1}{\overset {d^{1}}{\longrightarrow }}C^{2}{\overset {d^{2}}{\longrightarrow }}\cdots {\overset {d^{n-1}}{\longrightarrow }}C^{n}{\overset {d^{n}}{\longrightarrow }}\cdots ,} where each Ci is an R-module (it is common to use superscripts on the objects in the resolution and the maps between them to indicate the dual nature of such a resolution). For succinctness, the resolution above can be written as 0 ⟶ M ⟶ ε C ∙ . {\displaystyle 0\longrightarrow M{\overset {\varepsilon }{\longrightarrow }}C^{\bullet }.} A (co)resolution is said to be finite if only finitely many of the modules involved are non-zero. The length of a finite resolution is the maximum index n labeling a nonzero module in the finite resolution. === Free, projective, injective, and flat resolutions === In many circumstances conditions are imposed on the modules Ei resolving the given module M. For example, a free resolution of a module M is a left resolution in which all the modules Ei are free R-modules. Likewise, projective and flat resolutions are left resolutions such that all the Ei are projective and flat R-modules, respectively.
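For concreteness, a standard short example (stated here for illustration, before turning to injective resolutions): the Z-module Z/nZ has a finite free resolution of length one, 0 ⟶ Z ⟶ Z ⟶ Z/nZ ⟶ 0 {\displaystyle 0\longrightarrow \mathbb {Z} {\overset {\cdot n}{\longrightarrow }}\mathbb {Z} {\overset {\varepsilon }{\longrightarrow }}\mathbb {Z} /n\mathbb {Z} \longrightarrow 0,} where the first map is multiplication by n and ε is the quotient map. Multiplication by n is injective with image nZ, which is exactly the kernel of ε, so the sequence is exact.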
Injective resolutions are right resolutions whose Ci are all injective modules. Every R-module possesses a free left resolution. A fortiori, every module also admits projective and flat resolutions. The proof idea is to define E0 to be the free R-module generated by the elements of M, and then E1 to be the free R-module generated by the elements of the kernel of the natural map E0 → M, and so on. Dually, every R-module possesses an injective resolution. Projective resolutions (and, more generally, flat resolutions) can be used to compute Tor functors. A projective resolution of a module M is unique up to chain homotopy, i.e., given two projective resolutions P0 → M and P1 → M of M, there exists a chain homotopy between them. Resolutions are used to define homological dimensions. The minimal length of a finite projective resolution of a module M is called its projective dimension and denoted pd(M). For example, a module has projective dimension zero if and only if it is a projective module. If M does not admit a finite projective resolution, then the projective dimension is infinite. For example, for a commutative local ring R, the projective dimension is finite if and only if R is regular, and in this case it coincides with the Krull dimension of R. Analogously, the injective dimension id(M) and flat dimension fd(M) are also defined for modules. The injective and projective dimensions are used on the category of right R-modules to define a homological dimension for R called the right global dimension of R. Similarly, flat dimension is used to define the weak global dimension. The behavior of these dimensions reflects characteristics of the ring. For example, a ring has right global dimension 0 if and only if it is a semisimple ring, and a ring has weak global dimension 0 if and only if it is a von Neumann regular ring. === Graded modules and algebras === Let M be a graded module over a graded algebra, which is generated over a field by its elements of positive degree. Then M has a free resolution in which the free modules Ei may be graded in such a way that the di and ε are graded linear maps. Among these graded free resolutions, the minimal free resolutions are those for which the number of basis elements of each Ei is minimal. The number of basis elements of each Ei and their degrees are the same for all the minimal free resolutions of a graded module. If I is a homogeneous ideal in a polynomial ring over a field, the Castelnuovo–Mumford regularity of the projective algebraic set defined by I is the minimal integer r such that the degrees of the basis elements of the Ei in a minimal free resolution of I are all lower than r − i. === Examples === A classic example of a free resolution is given by the Koszul complex of a regular sequence in a local ring or of a homogeneous regular sequence in a graded algebra finitely generated over a field. Let X be an aspherical space, i.e., its universal cover E is contractible. Then every singular (or simplicial) chain complex of E is a free resolution of the module Z not only over the ring Z but also over the group ring Z[π1(X)]. == Resolutions in abelian categories == The definition of resolutions of an object M in an abelian category A is the same as above, but the Ei and Ci are objects in A, and all maps involved are morphisms in A. The analogous notions of projective and injective modules are projective and injective objects, and, accordingly, projective and injective resolutions. However, such resolutions need not exist in a general abelian category A.
If every object of A has a projective (resp. injective) resolution, then A is said to have enough projectives (resp. enough injectives). Even if they do exist, such resolutions are often difficult to work with. For example, as pointed out above, every R-module has an injective resolution, but this resolution is not functorial, i.e., given a homomorphism M → M' , together with injective resolutions 0 → M → I ∗ , 0 → M ′ → I ∗ ′ , {\displaystyle 0\rightarrow M\rightarrow I_{*},\ \ 0\rightarrow M'\rightarrow I'_{*},} there is in general no functorial way of obtaining a map between I ∗ {\displaystyle I_{*}} and I ∗ ′ {\displaystyle I'_{*}} . === Abelian categories without projective resolutions in general === One class of examples of abelian categories without projective resolutions is given by the categories Coh ( X ) {\displaystyle {\text{Coh}}(X)} of coherent sheaves on a scheme X {\displaystyle X} . For example, if X = P S n {\displaystyle X=\mathbb {P} _{S}^{n}} is projective space, any coherent sheaf M {\displaystyle {\mathcal {M}}} on X {\displaystyle X} has a presentation given by an exact sequence ⨁ i , j = 0 O X ( s i , j ) → ⨁ i = 0 O X ( s i ) → M → 0. {\displaystyle \bigoplus _{i,j=0}{\mathcal {O}}_{X}(s_{i,j})\to \bigoplus _{i=0}{\mathcal {O}}_{X}(s_{i})\to {\mathcal {M}}\to 0.} The first two terms are not in general projective since H n ( P S n , O X ( s ) ) ≠ 0 {\displaystyle H^{n}(\mathbb {P} _{S}^{n},{\mathcal {O}}_{X}(s))\neq 0} for s > 0 {\displaystyle s>0} . But both terms are locally free and locally flat, and both classes of sheaves can be used in place of projective resolutions for computing some derived functors. == Acyclic resolution == In many cases one is not really interested in the objects appearing in a resolution, but in the behavior of the resolution with respect to a given functor. Therefore, in many situations, the notion of acyclic resolutions is used: given a left exact functor F: A → B between two abelian categories, a resolution 0 → M → E 0 → E 1 → E 2 → ⋯ {\displaystyle 0\rightarrow M\rightarrow E_{0}\rightarrow E_{1}\rightarrow E_{2}\rightarrow \cdots } of an object M of A is called F-acyclic if the derived functors RiF(En) vanish for all i > 0 and n ≥ 0. Dually, a left resolution is acyclic with respect to a right exact functor if its derived functors vanish on the objects of the resolution. For example, given an R-module M, the tensor product ⊗ R M {\displaystyle \otimes _{R}M} is a right exact functor Mod(R) → Mod(R). Every flat resolution is acyclic with respect to this functor; indeed, a flat resolution is acyclic for the tensor product with every module M. Similarly, resolutions that are acyclic for all the functors Hom( ⋅ , M) are the projective resolutions and those that are acyclic for the functors Hom(M, ⋅ ) are the injective resolutions. Any injective (projective) resolution is F-acyclic for any left exact (right exact, respectively) functor. The importance of acyclic resolutions lies in the fact that the derived functors RiF (of a left exact functor, and likewise LiF of a right exact functor) can be obtained as the homology of F-acyclic resolutions: given an acyclic resolution E ∗ {\displaystyle E_{*}} of an object M, we have R i F ( M ) = H i F ( E ∗ ) , {\displaystyle R_{i}F(M)=H_{i}F(E_{*}),} where the right-hand side is the i-th homology object of the complex F ( E ∗ ) . {\displaystyle F(E_{*}).} This applies in many situations.
For example, the constant sheaf R on a differentiable manifold M can be resolved by the sheaves C ∗ ( M ) {\displaystyle {\mathcal {C}}^{*}(M)} of smooth differential forms: 0 → R ⊂ C 0 ( M ) → d C 1 ( M ) → d ⋯ → d C dim ⁡ M ( M ) → 0. {\displaystyle 0\rightarrow R\subset {\mathcal {C}}^{0}(M){\stackrel {d}{\rightarrow }}{\mathcal {C}}^{1}(M){\stackrel {d}{\rightarrow }}\cdots {\stackrel {d}{\rightarrow }}{\mathcal {C}}^{\dim M}(M)\rightarrow 0.} The sheaves C ∗ ( M ) {\displaystyle {\mathcal {C}}^{*}(M)} are fine sheaves, which are known to be acyclic with respect to the global section functor Γ : F ↦ F ( M ) {\displaystyle \Gamma :{\mathcal {F}}\mapsto {\mathcal {F}}(M)} . Therefore, the sheaf cohomology, which is the derived functor of the global section functor Γ, is computed as H i ( M , R ) = H i ( C ∗ ( M ) ) . {\displaystyle \mathrm {H} ^{i}(M,\mathbf {R} )=\mathrm {H} ^{i}({\mathcal {C}}^{*}(M)).} Similarly, Godement resolutions are acyclic with respect to the global sections functor. == See also == Standard resolution Hilbert–Burch theorem Hilbert's syzygy theorem Free presentation Matrix factorizations (algebra) == Notes == == References == Iain T. Adamson (1972), Elementary rings and modules, University Mathematical Texts, Oliver and Boyd, ISBN 0-05-002192-3 Eisenbud, David (1995), Commutative algebra. With a view toward algebraic geometry, Graduate Texts in Mathematics, vol. 150, Berlin, New York: Springer-Verlag, ISBN 3-540-94268-8, MR 1322960, Zbl 0819.13001 Jacobson, Nathan (2009) [1985], Basic algebra II (Second ed.), Dover Publications, ISBN 978-0-486-47187-7 Lang, Serge (1993), Algebra (Third ed.), Reading, Mass.: Addison-Wesley, ISBN 978-0-201-55540-0, Zbl 0848.13001 Weibel, Charles A. (1994). An introduction to homological algebra. Cambridge Studies in Advanced Mathematics. Vol. 38. Cambridge University Press. ISBN 978-0-521-55987-4. MR 1269324. OCLC 36131259.
Wikipedia/Resolution_of_a_module
In mathematics, and more specifically in homological algebra, a resolution (or left resolution; dually a coresolution or right resolution) is an exact sequence of modules (or, more generally, of objects of an abelian category) that is used to define invariants characterizing the structure of a specific module or object of this category. When, as usually, arrows are oriented to the right, the sequence is supposed to be infinite to the left for (left) resolutions, and to the right for right resolutions. However, a finite resolution is one where only finitely many of the objects in the sequence are non-zero; it is usually represented by a finite exact sequence in which the leftmost object (for resolutions) or the rightmost object (for coresolutions) is the zero-object. Generally, the objects in the sequence are restricted to have some property P (for example to be free). Thus one speaks of a P resolution. In particular, every module has free resolutions, projective resolutions and flat resolutions, which are left resolutions consisting, respectively of free modules, projective modules or flat modules. Similarly every module has injective resolutions, which are right resolutions consisting of injective modules. == Resolutions of modules == === Definitions === Given a module M over a ring R, a left resolution (or simply resolution) of M is an exact sequence (possibly infinite) of R-modules ⋯ ⟶ d n + 1 E n ⟶ d n ⋯ ⟶ d 3 E 2 ⟶ d 2 E 1 ⟶ d 1 E 0 ⟶ ε M ⟶ 0. {\displaystyle \cdots {\overset {d_{n+1}}{\longrightarrow }}E_{n}{\overset {d_{n}}{\longrightarrow }}\cdots {\overset {d_{3}}{\longrightarrow }}E_{2}{\overset {d_{2}}{\longrightarrow }}E_{1}{\overset {d_{1}}{\longrightarrow }}E_{0}{\overset {\varepsilon }{\longrightarrow }}M\longrightarrow 0.} The homomorphisms di are called boundary maps. The map ε is called an augmentation map. For succinctness, the resolution above can be written as E ∙ ⟶ ε M ⟶ 0. {\displaystyle E_{\bullet }{\overset {\varepsilon }{\longrightarrow }}M\longrightarrow 0.} The dual notion is that of a right resolution (or coresolution, or simply resolution). Specifically, given a module M over a ring R, a right resolution is a possibly infinite exact sequence of R-modules 0 ⟶ M ⟶ ε C 0 ⟶ d 0 C 1 ⟶ d 1 C 2 ⟶ d 2 ⋯ ⟶ d n − 1 C n ⟶ d n ⋯ , {\displaystyle 0\longrightarrow M{\overset {\varepsilon }{\longrightarrow }}C^{0}{\overset {d^{0}}{\longrightarrow }}C^{1}{\overset {d^{1}}{\longrightarrow }}C^{2}{\overset {d^{2}}{\longrightarrow }}\cdots {\overset {d^{n-1}}{\longrightarrow }}C^{n}{\overset {d^{n}}{\longrightarrow }}\cdots ,} where each Ci is an R-module (it is common to use superscripts on the objects in the resolution and the maps between them to indicate the dual nature of such a resolution). For succinctness, the resolution above can be written as 0 ⟶ M ⟶ ε C ∙ . {\displaystyle 0\longrightarrow M{\overset {\varepsilon }{\longrightarrow }}C^{\bullet }.} A (co)resolution is said to be finite if only finitely many of the modules involved are non-zero. The length of a finite resolution is the maximum index n labeling a nonzero module in the finite resolution. === Free, projective, injective, and flat resolutions === In many circumstances conditions are imposed on the modules Ei resolving the given module M. For example, a free resolution of a module M is a left resolution in which all the modules Ei are free R-modules. Likewise, projective and flat resolutions are left resolutions such that all the Ei are projective and flat R-modules, respectively. 
Injective resolutions are right resolutions whose C^i are all injective modules.

Every R-module possesses a free left resolution, and a fortiori every module also admits projective and flat resolutions. The proof idea is to define E_0 to be the free R-module generated by the elements of M, and then E_1 to be the free R-module generated by the elements of the kernel of the natural map E_0 → M, etc. Dually, every R-module possesses an injective resolution. Projective resolutions (and, more generally, flat resolutions) can be used to compute Tor functors.

A projective resolution of a module M is unique up to chain homotopy: given two projective resolutions P_• → M and P′_• → M of M, there exists a chain homotopy equivalence between them.

Resolutions are used to define homological dimensions. The minimal length of a finite projective resolution of a module M is called its projective dimension and denoted pd(M). For example, a module has projective dimension zero if and only if it is a projective module. If M does not admit a finite projective resolution, then its projective dimension is infinite. For example, for a commutative Noetherian local ring R, the projective dimension of the residue field is finite if and only if R is regular, and in that case it coincides with the Krull dimension of R. Analogously, the injective dimension id(M) and the flat dimension fd(M) are defined for modules.

The injective and projective dimensions are used on the category of right R-modules to define a homological dimension for R called the right global dimension of R. Similarly, the flat dimension is used to define the weak global dimension. The behavior of these dimensions reflects characteristics of the ring. For example, a ring has right global dimension 0 if and only if it is a semisimple ring, and a ring has weak global dimension 0 if and only if it is a von Neumann regular ring.

=== Graded modules and algebras ===

Let M be a graded module over a graded algebra which is generated over a field by its elements of positive degree. Then M has a free resolution in which the free modules E_i may be graded in such a way that the d_i and ε are graded linear maps. Among these graded free resolutions, the minimal free resolutions are those for which the number of basis elements of each E_i is minimal. The number of basis elements of each E_i and their degrees are the same for all the minimal free resolutions of a graded module. If I is a homogeneous ideal in a polynomial ring over a field, the Castelnuovo–Mumford regularity of the projective algebraic set defined by I is the minimal integer r such that the degrees of the basis elements of the E_i in a minimal free resolution of I are all at most r + i.

=== Examples ===

A classic example of a free resolution is given by the Koszul complex of a regular sequence in a local ring, or of a homogeneous regular sequence in a graded algebra finitely generated over a field. Let X be an aspherical space, i.e., one whose universal cover E is contractible. Then every singular (or simplicial) chain complex of E is a free resolution of the module Z, not only over the ring Z but also over the group ring Z[π1(X)].

== Resolutions in abelian categories ==

The definition of resolutions of an object M in an abelian category A is the same as above, but the E_i and C^i are objects in A, and all maps involved are morphisms in A. The analogous notions to projective and injective modules are projective and injective objects, and, accordingly, projective and injective resolutions. However, such resolutions need not exist in a general abelian category A.
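For a concrete instance of this failure, consider the category of finite abelian groups: it is abelian, but it can be checked to have no nonzero projective objects, so no nonzero object admits a projective resolution there. For example, Z/p is not projective in this category: every homomorphism Z/p → Z/p² lands in the p-torsion subgroup pZ/p², which the epimorphism Z/p² → Z/p kills, so the identity of Z/p admits no lift along that epimorphism. A dual argument shows the category has no nonzero injective objects either.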
If every object of A has a projective (resp. injective) resolution, then A is said to have enough projectives (resp. enough injectives). Even when they exist, such resolutions are often difficult to work with. For example, as pointed out above, every R-module has an injective resolution, but this resolution is not functorial: given a homomorphism M → M′ together with injective resolutions

{\displaystyle 0\rightarrow M\rightarrow I_{*},\ \ 0\rightarrow M'\rightarrow I'_{*},}

there is in general no functorial way of obtaining a map between I_* and I′_*.

=== Abelian categories without projective resolutions in general ===

One class of examples of abelian categories without projective resolutions is given by the categories Coh(X) of coherent sheaves on a scheme X. For example, if X = P^n_S is projective space, any coherent sheaf M on X has a presentation given by an exact sequence

{\displaystyle \bigoplus _{i,j=0}{\mathcal {O}}_{X}(s_{i,j})\to \bigoplus _{i=0}{\mathcal {O}}_{X}(s_{i})\to {\mathcal {M}}\to 0.}

The first two terms are not, in general, projective, since the twisting sheaves admit nonvanishing higher Ext groups (over a field, Ext^n(O_X(s), O_X(t)) ≅ H^n(P^n, O_X(t − s)), which is nonzero whenever t − s ≤ −n − 1). But both terms are locally free, hence flat, and both classes of sheaves can be used in place of projective resolutions for computing some derived functors.

== Acyclic resolution ==

In many cases one is not really interested in the objects appearing in a resolution, but in the behavior of the resolution with respect to a given functor. Therefore, in many situations, the notion of acyclic resolutions is used: given a left exact functor F: A → B between two abelian categories, a resolution

{\displaystyle 0\rightarrow M\rightarrow E_{0}\rightarrow E_{1}\rightarrow E_{2}\rightarrow \cdots }

of an object M of A is called F-acyclic if the derived functors R^iF(E_n) vanish for all i > 0 and n ≥ 0. Dually, a left resolution is acyclic with respect to a right exact functor if its derived functors vanish on the objects of the resolution.

For example, given an R-module M, the tensor product – ⊗_R M is a right exact functor Mod(R) → Mod(R), and every flat resolution is acyclic with respect to this functor, for every M. Similarly, the resolutions that are acyclic for all the functors Hom(⋅, M) are the projective resolutions, and those that are acyclic for the functors Hom(M, ⋅) are the injective resolutions. Any injective (projective) resolution is F-acyclic for any left exact (right exact, respectively) functor.

The importance of acyclic resolutions lies in the fact that the derived functors R^iF (of a left exact functor, and likewise L_iF of a right exact functor) can be obtained as the homology of F-acyclic resolutions: given an acyclic resolution E_* of an object M, we have

{\displaystyle R_{i}F(M)=H_{i}F(E_{*}),}

where the right hand side is the i-th homology object of the complex F(E_*). This applies in many situations.
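As a concrete algebraic illustration of this principle (a standard computation), take the right exact functor F = – ⊗_Z Z/m and the free resolution of Z/n displayed earlier, which is flat and hence F-acyclic. Applying F and dropping the Z/n term leaves the two-term complex

{\displaystyle \mathbb {Z} /m{\overset {n}{\longrightarrow }}\mathbb {Z} /m,}

whose cokernel and kernel are both cyclic of order gcd(m, n). Hence

{\displaystyle \operatorname {Tor} _{0}^{\mathbb {Z} }(\mathbb {Z} /n,\mathbb {Z} /m)\cong \mathbb {Z} /\gcd(m,n)\cong \operatorname {Tor} _{1}^{\mathbb {Z} }(\mathbb {Z} /n,\mathbb {Z} /m),}

recovering in particular the familiar isomorphism Z/n ⊗_Z Z/m ≅ Z/gcd(m, n).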
For another example, the constant sheaf R on a differentiable manifold M can be resolved by the sheaves C^*(M) of smooth differential forms:

{\displaystyle 0\rightarrow R\subset {\mathcal {C}}^{0}(M){\stackrel {d}{\rightarrow }}{\mathcal {C}}^{1}(M){\stackrel {d}{\rightarrow }}\cdots {\stackrel {d}{\rightarrow }}{\mathcal {C}}^{\dim M}(M)\rightarrow 0.}

The sheaves C^*(M) are fine sheaves, which are known to be acyclic with respect to the global section functor Γ: F ↦ F(M). Therefore the sheaf cohomology, which is the derived functor of the global section functor Γ, is computed as

{\displaystyle \mathrm {H} ^{i}(M,\mathbf {R} )=\mathrm {H} ^{i}({\mathcal {C}}^{*}(M)).}

Similarly, Godement resolutions are acyclic with respect to the global sections functor.

== See also ==

Standard resolution
Hilbert–Burch theorem
Hilbert's syzygy theorem
Free presentation
Matrix factorizations (algebra)
Wikipedia/Projective_resolution
The snake lemma is a tool used in mathematics, particularly homological algebra, to construct long exact sequences. The snake lemma is valid in every abelian category and is a crucial tool in homological algebra and its applications, for instance in algebraic topology. Homomorphisms constructed with its help are generally called connecting homomorphisms.

== Statement ==

In an abelian category (such as the category of abelian groups or the category of vector spaces over a given field), consider a commutative diagram with exact rows,

{\displaystyle {\begin{matrix}&&A&{\overset {f}{\longrightarrow }}&B&{\overset {g}{\longrightarrow }}&C&\longrightarrow &0\\&&{\scriptstyle a}\downarrow &&{\scriptstyle b}\downarrow &&{\scriptstyle c}\downarrow &&\\0&\longrightarrow &A'&{\overset {f'}{\longrightarrow }}&B'&{\overset {g'}{\longrightarrow }}&C'&&\end{matrix}}}

where 0 is the zero object. Then there is an exact sequence relating the kernels and cokernels of a, b, and c:

{\displaystyle \ker a~{\color {Gray}\longrightarrow }~\ker b~{\color {Gray}\longrightarrow }~\ker c~{\overset {d}{\longrightarrow }}~\operatorname {coker} a~{\color {Gray}\longrightarrow }~\operatorname {coker} b~{\color {Gray}\longrightarrow }~\operatorname {coker} c}

where d is a homomorphism, known as the connecting homomorphism. Furthermore, if the morphism f is a monomorphism, then so is the morphism ker a → ker b, and if g′ is an epimorphism, then so is coker b → coker c. The cokernels here are coker a = A′/im a, coker b = B′/im b, and coker c = C′/im c.

== Explanation of the name ==

To see where the snake lemma gets its name, expand the diagram above by adjoining the kernels along the top and the cokernels along the bottom; the exact sequence that is the conclusion of the lemma can then be drawn on this expanded diagram in the reversed "S" shape of a slithering snake.

== Construction of the maps ==

The maps between the kernels and the maps between the cokernels are induced in a natural manner by the given (horizontal) maps because of the diagram's commutativity. The exactness of the two induced sequences follows in a straightforward way from the exactness of the rows of the original diagram. The important statement of the lemma is that a connecting homomorphism d exists which completes the exact sequence.

In the case of abelian groups or modules over some ring, the map d can be constructed as follows: Pick an element x in ker c and view it as an element of C. Since g is surjective, there exists y in B with g(y) = x. By commutativity of the diagram, we have g′(b(y)) = c(g(y)) = c(x) = 0 (since x is in the kernel of c), and therefore b(y) is in the kernel of g′. Since the bottom row is exact, we find an element z in A′ with f′(z) = b(y). By injectivity of f′, z is unique. We then define d(x) = z + im(a).
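Before checking the properties of d, it may help to run this construction in a concrete case (a standard exercise). Take both rows to be 0 → Z → Z → Z/n → 0, with the first map multiplication by n, and let each of the vertical maps a, b, c be multiplication by m, for positive integers m and n with g = gcd(m, n). Then ker a = ker b = 0, ker c ≅ Z/g (generated by the class of n/g), coker a = coker b = Z/m, and coker c ≅ Z/g, so the lemma promises an exact sequence

{\displaystyle 0\longrightarrow 0\longrightarrow \mathbb {Z} /g~{\overset {d}{\longrightarrow }}~\mathbb {Z} /m\longrightarrow \mathbb {Z} /m\longrightarrow \mathbb {Z} /g\longrightarrow 0.}

Following the recipe above for x = n/g mod n: lift x to y = n/g in Z; then b(y) = mn/g = n·(m/g), so z = m/g and d(x) = m/g mod m. The image of d is thus the subgroup of order g in Z/m, which is exactly the kernel of the next map coker a → coker b (multiplication by n on Z/m), as exactness requires.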
One then has to check that d is well-defined (i.e., d(x) only depends on x and not on the choice of y), that it is a homomorphism, and that the resulting long sequence is indeed exact; the exactness may be verified by a routine diagram chase. Once that is done, the theorem is proven for abelian groups or modules over a ring. For the general case, the argument may be rephrased in terms of properties of arrows and cancellation instead of elements. Alternatively, one may invoke Mitchell's embedding theorem.

== Naturality ==

In the applications, one often needs to show that long exact sequences are "natural" (in the sense of natural transformations). This follows from the naturality of the sequence produced by the snake lemma: given a morphism from one commutative diagram with exact rows to another, the snake lemma can be applied twice, to the "front" and to the "back", yielding two long exact sequences; these are related by a commutative ladder diagram.

== Example ==

Let k be a field and V a k-vector space. Fixing a k-linear transformation t: V → V makes V a k[t]-module, so we can tensor V with k over k[t]:

{\displaystyle V\otimes _{k[t]}k=V\otimes _{k[t]}(k[t]/(t))=V/tV=\operatorname {coker} (t).}

Given a short exact sequence of k-vector spaces 0 → M → N → P → 0 (compatible with the k[t]-module structures), we can induce an exact sequence

{\displaystyle M\otimes _{k[t]}k\to N\otimes _{k[t]}k\to P\otimes _{k[t]}k\to 0}

by right exactness of the tensor product. But the sequence

{\displaystyle 0\to M\otimes _{k[t]}k\to N\otimes _{k[t]}k\to P\otimes _{k[t]}k\to 0}

is not exact in general. Hence, a natural question arises: why is this sequence not exact? Applying the snake lemma to the diagram formed by t acting on the short exact sequence, we can induce an exact sequence

{\displaystyle \ker(t_{M})\to \ker(t_{N})\to \ker(t_{P})\to M\otimes _{k[t]}k\to N\otimes _{k[t]}k\to P\otimes _{k[t]}k\to 0.}

Thus, the snake lemma reflects the tensor product's failure to be exact.

== In the category of groups ==

Whether the snake lemma holds in the category of groups depends on the definition of cokernel. If f: A → B is a homomorphism of groups, the universal property of the cokernel is satisfied by the natural map B → B/N(im f), where N(im f) is the normal closure of the image of f. The snake lemma fails with this definition of cokernel: the connecting homomorphism can still be defined, and one can write down a sequence as in the statement of the snake lemma. This will always be a chain complex, but it may fail to be exact. If one simply replaces the cokernels in the statement of the snake lemma with the (right) cosets A′/im a, B′/im b, C′/im c, the lemma is still valid.
The quotients, however, are not groups but pointed sets (a short sequence (X, x) → (Y, y) → (Z, z) of pointed sets with maps f: X → Y and g: Y → Z is called exact if f(X) = g^{−1}(z)).

=== Counterexample to snake lemma with categorical cokernel ===

Consider the alternating group A_5: it contains a subgroup isomorphic to the symmetric group S_3, which in turn can be written as a semidirect product of cyclic groups: S_3 ≅ C_3 ⋊ C_2. This gives rise to the following diagram with exact rows:

{\displaystyle {\begin{matrix}&1&\to &C_{3}&\to &C_{3}&\to 1\\&\downarrow &&\downarrow &&\downarrow \\1\to &1&\to &S_{3}&\to &A_{5}\end{matrix}}}

Note that the middle column is not exact: C_2 is not a normal subgroup in the semidirect product. Since A_5 is simple, the right vertical arrow has trivial cokernel, while the quotient group S_3/C_3 is isomorphic to C_2. The sequence in the statement of the snake lemma is therefore

{\displaystyle 1\longrightarrow 1\longrightarrow 1\longrightarrow 1\longrightarrow C_{2}\longrightarrow 1,}

which indeed fails to be exact.

== In popular culture ==

The proof of the snake lemma is taught by Jill Clayburgh's character at the very beginning of the 1980 film It's My Turn.

== See also ==

Zig-zag lemma

== External links ==

Weisstein, Eric W. "Snake Lemma". MathWorld.
Snake Lemma at PlanetMath
Proof of the Snake Lemma in the film It's My Turn
Wikipedia/Snake_lemma
In mathematics, specifically in homology theory and algebraic topology, cohomology is a general term for a sequence of abelian groups, usually one associated with a topological space, often defined from a cochain complex. Cohomology can be viewed as a method of assigning richer algebraic invariants to a space than homology. Some versions of cohomology arise by dualizing the construction of homology. In other words, cochains are functions on the group of chains in homology theory.

From its start in topology, this idea became a dominant method in the mathematics of the second half of the twentieth century. From the initial idea of homology as a method of constructing algebraic invariants of topological spaces, the range of applications of homology and cohomology theories has spread throughout geometry and algebra. The terminology tends to hide the fact that cohomology, a contravariant theory, is more natural than homology in many applications. At a basic level, this has to do with functions and pullbacks in geometric situations: given spaces X and Y, and some function F on Y, for any mapping f: X → Y, composition with f gives rise to a function F ∘ f on X. The most important cohomology theories have a product, the cup product, which gives them a ring structure. Because of this feature, cohomology is usually a stronger invariant than homology.

== Singular cohomology ==

Singular cohomology is a powerful invariant in topology, associating a graded-commutative ring with any topological space. Every continuous map f: X → Y determines a homomorphism from the cohomology ring of Y to that of X; this puts strong restrictions on the possible maps from X to Y. Unlike more subtle invariants such as homotopy groups, the cohomology ring tends to be computable in practice for spaces of interest.

For a topological space X, the definition of singular cohomology starts with the singular chain complex:

{\displaystyle \cdots \to C_{i+1}{\stackrel {\partial _{i+1}}{\to }}C_{i}{\stackrel {\partial _{i}}{\to }}\ C_{i-1}\to \cdots }

By definition, the singular homology of X is the homology of this chain complex (the kernel of one homomorphism modulo the image of the previous one). In more detail, C_i is the free abelian group on the set of continuous maps from the standard i-simplex to X (called "singular i-simplices in X"), and ∂_i is the i-th boundary homomorphism. The groups C_i are zero for i negative.

Now fix an abelian group A, and replace each group C_i by its dual group {\displaystyle C_{i}^{*}=\mathrm {Hom} (C_{i},A),} and ∂_i by its dual homomorphism

{\displaystyle d_{i-1}:C_{i-1}^{*}\to C_{i}^{*}.}
This has the effect of "reversing all the arrows" of the original complex, leaving a cochain complex

{\displaystyle \cdots \leftarrow C_{i+1}^{*}{\stackrel {d_{i}}{\leftarrow }}\ C_{i}^{*}{\stackrel {d_{i-1}}{\leftarrow }}C_{i-1}^{*}\leftarrow \cdots }

For an integer i, the i-th cohomology group of X with coefficients in A is defined to be ker(d_i)/im(d_{i−1}) and denoted by H^i(X, A). The group H^i(X, A) is zero for i negative. The elements of C_i^* are called singular i-cochains with coefficients in A. (Equivalently, an i-cochain on X can be identified with a function from the set of singular i-simplices in X to A.) Elements of ker(d) and im(d) are called cocycles and coboundaries, respectively, while elements of ker(d_i)/im(d_{i−1}) = H^i(X, A) are called cohomology classes (because they are equivalence classes of cocycles).

In what follows, the coefficient group A is sometimes not written. It is common to take A to be a commutative ring R; then the cohomology groups are R-modules. A standard choice is the ring Z of integers.

Some of the formal properties of cohomology are only minor variants of the properties of homology:

A continuous map f: X → Y determines a pushforward homomorphism f_*: H_i(X) → H_i(Y) on homology and a pullback homomorphism f^*: H^i(Y) → H^i(X) on cohomology. This makes cohomology into a contravariant functor from topological spaces to abelian groups (or R-modules).

Two homotopic maps from X to Y induce the same homomorphism on cohomology (just as on homology).

The Mayer–Vietoris sequence is an important computational tool in cohomology, as in homology. Note that the boundary homomorphism increases (rather than decreases) degree in cohomology. That is, if a space X is the union of open subsets U and V, then there is a long exact sequence:

{\displaystyle \cdots \to H^{i}(X)\to H^{i}(U)\oplus H^{i}(V)\to H^{i}(U\cap V)\to H^{i+1}(X)\to \cdots }

There are relative cohomology groups H^i(X, Y; A) for any subspace Y of a space X. They are related to the usual cohomology groups by a long exact sequence:

{\displaystyle \cdots \to H^{i}(X,Y)\to H^{i}(X)\to H^{i}(Y)\to H^{i+1}(X,Y)\to \cdots }

The universal coefficient theorem describes cohomology in terms of homology, using Ext groups.
Namely, there is a short exact sequence

{\displaystyle 0\to \operatorname {Ext} _{\mathbb {Z} }^{1}(\operatorname {H} _{i-1}(X,\mathbb {Z} ),A)\to H^{i}(X,A)\to \operatorname {Hom} _{\mathbb {Z} }(H_{i}(X,\mathbb {Z} ),A)\to 0.}

A related statement is that for a field F, H^i(X, F) is precisely the dual space of the vector space H_i(X, F).

If X is a topological manifold or a CW complex, then the cohomology groups H^i(X, A) are zero for i greater than the dimension of X. If X is a compact manifold (possibly with boundary), or a CW complex with finitely many cells in each dimension, and R is a commutative Noetherian ring, then the R-module H^i(X, R) is finitely generated for each i.

On the other hand, cohomology has a crucial structure that homology does not: for any topological space X and commutative ring R, there is a bilinear map, called the cup product:

{\displaystyle H^{i}(X,R)\times H^{j}(X,R)\to H^{i+j}(X,R),}

defined by an explicit formula on singular cochains. The product of cohomology classes u and v is written as u ∪ v or simply as uv. This product makes the direct sum

{\displaystyle H^{*}(X,R)=\bigoplus _{i}H^{i}(X,R)}

into a graded ring, called the cohomology ring of X. It is graded-commutative in the sense that:

{\displaystyle uv=(-1)^{ij}vu,\qquad u\in H^{i}(X,R),v\in H^{j}(X,R).}

For any continuous map f: X → Y, the pullback f^*: H^*(Y, R) → H^*(X, R) is a homomorphism of graded R-algebras. It follows that if two spaces are homotopy equivalent, then their cohomology rings are isomorphic.

Here are some of the geometric interpretations of the cup product. In what follows, manifolds are understood to be without boundary, unless stated otherwise. A closed manifold means a compact manifold (without boundary), whereas a closed submanifold N of a manifold M means a submanifold that is a closed subset of M, not necessarily compact (although N is automatically compact if M is).

Let X be a closed oriented manifold of dimension n. Then Poincaré duality gives an isomorphism H^iX ≅ H_{n−i}X. As a result, a closed oriented submanifold S of codimension i in X determines a cohomology class in H^iX, called [S]. In these terms, the cup product describes the intersection of submanifolds: if S and T are submanifolds of codimension i and j that intersect transversely, then

{\displaystyle [S][T]=[S\cap T]\in H^{i+j}(X),}

where the intersection S ∩ T is a submanifold of codimension i + j, with an orientation determined by the orientations of S, T, and X. In the case of smooth manifolds, if S and T do not intersect transversely, this formula can still be used to compute the cup product [S][T], by perturbing S or T to make the intersection transverse.
More generally, without assuming that X has an orientation, a closed submanifold of X with an orientation on its normal bundle determines a cohomology class on X. If X is a noncompact manifold, then a closed submanifold (not necessarily compact) determines a cohomology class on X. In both cases, the cup product can again be described in terms of intersections of submanifolds.

Note that Thom constructed an integral cohomology class of degree 7 on a smooth 14-manifold that is not the class of any smooth submanifold. On the other hand, he showed that every integral cohomology class of positive degree on a smooth manifold has a positive multiple that is the class of a smooth submanifold. Also, every integral cohomology class on a manifold can be represented by a "pseudomanifold", that is, a simplicial complex that is a manifold outside a closed subset of codimension at least 2.

For a smooth manifold X, de Rham's theorem says that the singular cohomology of X with real coefficients is isomorphic to the de Rham cohomology of X, defined using differential forms. The cup product corresponds to the product of differential forms. This interpretation has the advantage that the product on differential forms is graded-commutative, whereas the product on singular cochains is only graded-commutative up to chain homotopy. In fact, it is impossible to modify the definition of singular cochains with coefficients in the integers Z or in Z/p for a prime number p to make the product graded-commutative on the nose. The failure of graded-commutativity at the cochain level leads to the Steenrod operations on mod p cohomology.

Very informally, for any topological space X, elements of H^i(X) can be thought of as represented by codimension-i subspaces of X that can move freely on X. For example, one way to define an element of H^i(X) is to give a continuous map f from X to a manifold M and a closed codimension-i submanifold N of M with an orientation on the normal bundle. Informally, one thinks of the resulting class f^*([N]) ∈ H^i(X) as lying on the subspace f^{−1}(N) of X; this is justified in that the class f^*([N]) restricts to zero in the cohomology of the open subset X − f^{−1}(N). The cohomology class f^*([N]) can move freely on X in the sense that N could be replaced by any continuous deformation of N inside M.

== Examples ==

In what follows, cohomology is taken with coefficients in the integers Z, unless stated otherwise.

The cohomology ring of a point is the ring Z in degree 0. By homotopy invariance, this is also the cohomology ring of any contractible space, such as Euclidean space R^n.

For a positive integer n, the cohomology ring of the sphere S^n is Z[x]/(x^2) (the quotient ring of a polynomial ring by the given ideal), with x in degree n. In terms of Poincaré duality as above, x is the class of a point on the sphere.

The cohomology ring of the torus (S^1)^n is the exterior algebra over Z on n generators in degree 1. For example, let P denote a point in the circle S^1, and Q the point (P, P) in the 2-dimensional torus (S^1)^2.
Then the cohomology of (S^1)^2 has a basis as a free Z-module of the form: the element 1 in degree 0, x := [P × S^1] and y := [S^1 × P] in degree 1, and xy = [Q] in degree 2. (Implicitly, orientations of the torus and of the two circles have been fixed here.) Note that yx = −xy = −[Q], by graded-commutativity.

More generally, let R be a commutative ring, and let X and Y be any topological spaces such that H^*(X, R) is a finitely generated free R-module in each degree. (No assumption is needed on Y.) Then the Künneth formula gives that the cohomology ring of the product space X × Y is a tensor product of R-algebras:

{\displaystyle H^{*}(X\times Y,R)\cong H^{*}(X,R)\otimes _{R}H^{*}(Y,R).}

The cohomology ring of real projective space RP^n with Z/2 coefficients is Z/2[x]/(x^{n+1}), with x in degree 1. Here x is the class of a hyperplane RP^{n−1} in RP^n; this makes sense even though RP^j is not orientable for j even and positive, because Poincaré duality with Z/2 coefficients works for arbitrary manifolds.

With integer coefficients, the answer is a bit more complicated. The Z-cohomology of RP^{2a} has an element y of degree 2 such that the whole cohomology is the direct sum of a copy of Z spanned by the element 1 in degree 0 together with copies of Z/2 spanned by the elements y^i for i = 1, ..., a. The Z-cohomology of RP^{2a+1} is the same together with an extra copy of Z in degree 2a+1.

The cohomology ring of complex projective space CP^n is Z[x]/(x^{n+1}), with x in degree 2. Here x is the class of a hyperplane CP^{n−1} in CP^n. More generally, x^j is the class of a linear subspace CP^{n−j} in CP^n.

The cohomology ring of the closed oriented surface X of genus g ≥ 0 has a basis as a free Z-module of the form: the element 1 in degree 0, A_1, ..., A_g and B_1, ..., B_g in degree 1, and the class P of a point in degree 2. The product is given by: A_iA_j = B_iB_j = 0 for all i and j, A_iB_j = 0 if i ≠ j, and A_iB_i = P for all i. By graded-commutativity, it follows that B_iA_i = −P.

On any topological space, graded-commutativity of the cohomology ring implies that 2x^2 = 0 for all odd-degree cohomology classes x. It follows that for a ring R containing 1/2, all odd-degree elements of H^*(X, R) have square zero. On the other hand, odd-degree elements need not have square zero if R is Z/2 or Z, as one sees in the example of RP^2 (with Z/2 coefficients) or RP^4 × RP^2 (with Z coefficients).

== The diagonal ==

The cup product on cohomology can be viewed as coming from the diagonal map Δ: X → X × X, x ↦ (x, x). Namely, for any spaces X and Y with cohomology classes u ∈ H^i(X, R) and v ∈ H^j(Y, R), there is an external product (or cross product) cohomology class u × v ∈ H^{i+j}(X × Y, R). The cup product of classes u ∈ H^i(X, R) and v ∈ H^j(X, R) can be defined as the pullback of the external product by the diagonal:

{\displaystyle uv=\Delta ^{*}(u\times v)\in H^{i+j}(X,R).}

Alternatively, the external product can be defined in terms of the cup product. For spaces X and Y, write f: X × Y → X and g: X × Y → Y for the two projections.
Then the external product of classes u ∈ H^i(X, R) and v ∈ H^j(Y, R) is:

{\displaystyle u\times v=(f^{*}(u))(g^{*}(v))\in H^{i+j}(X\times Y,R).}

== Poincaré duality ==

Another interpretation of Poincaré duality is that the cohomology ring of a closed oriented manifold is self-dual in a strong sense. Namely, let X be a closed connected oriented manifold of dimension n, and let F be a field. Then H^n(X, F) is isomorphic to F, and the product

{\displaystyle H^{i}(X,F)\times H^{n-i}(X,F)\to H^{n}(X,F)\cong F}

is a perfect pairing for each integer i. In particular, the vector spaces H^i(X, F) and H^{n−i}(X, F) have the same (finite) dimension. Likewise, the product on integral cohomology modulo torsion with values in H^n(X, Z) ≅ Z is a perfect pairing over Z.

== Characteristic classes ==

An oriented real vector bundle E of rank r over a topological space X determines a cohomology class on X, the Euler class χ(E) ∈ H^r(X, Z). Informally, the Euler class is the class of the zero set of a general section of E. That interpretation can be made more explicit when E is a smooth vector bundle over a smooth manifold X, since then a general smooth section of E vanishes on a codimension-r submanifold of X.

There are several other types of characteristic classes for vector bundles that take values in cohomology, including Chern classes, Stiefel–Whitney classes, and Pontryagin classes.

== Eilenberg–MacLane spaces ==

For each abelian group A and natural number j, there is a space K(A, j) whose j-th homotopy group is isomorphic to A and whose other homotopy groups are zero. Such a space is called an Eilenberg–MacLane space. This space has the remarkable property that it is a classifying space for cohomology: there is a natural element u of H^j(K(A, j), A), and every cohomology class of degree j on every space X is the pullback of u by some continuous map X → K(A, j). More precisely, pulling back the class u gives a bijection

{\displaystyle [X,K(A,j)]{\stackrel {\cong }{\to }}H^{j}(X,A)}

for every space X with the homotopy type of a CW complex. Here [X, Y] denotes the set of homotopy classes of continuous maps from X to Y.

For example, the space K(Z, 1) (defined up to homotopy equivalence) can be taken to be the circle S^1. So the description above says that every element of H^1(X, Z) is pulled back from the class u of a point on S^1 by some map X → S^1.

There is a related description of the first cohomology with coefficients in any abelian group A, say for a CW complex X. Namely, H^1(X, A) is in one-to-one correspondence with the set of isomorphism classes of Galois covering spaces of X with group A, also called principal A-bundles over X.
For X connected, it follows that H^1(X, A) is isomorphic to Hom(π_1(X), A), where π_1(X) is the fundamental group of X. For example, H^1(X, Z/2) classifies the double covering spaces of X, with the element 0 ∈ H^1(X, Z/2) corresponding to the trivial double covering, the disjoint union of two copies of X.

== Cap product ==

For any topological space X, the cap product is a bilinear map

{\displaystyle \cap :H^{i}(X,R)\times H_{j}(X,R)\to H_{j-i}(X,R)}

for any integers i and j and any commutative ring R. The resulting map

{\displaystyle H^{*}(X,R)\times H_{*}(X,R)\to H_{*}(X,R)}

makes the singular homology of X into a module over the singular cohomology ring of X. For i = j, the cap product gives the natural homomorphism

{\displaystyle H^{i}(X,R)\to \operatorname {Hom} _{R}(H_{i}(X,R),R),}

which is an isomorphism for R a field.

For example, let X be an oriented manifold, not necessarily compact. Then a closed oriented codimension-i submanifold Y of X (not necessarily compact) determines an element of H^i(X, R), and a compact oriented j-dimensional submanifold Z of X determines an element of H_j(X, R). The cap product [Y] ∩ [Z] ∈ H_{j−i}(X, R) can be computed by perturbing Y and Z to make them intersect transversely and then taking the class of their intersection, which is a compact oriented submanifold of dimension j − i.

A closed oriented manifold X of dimension n has a fundamental class [X] in H_n(X, R). The Poincaré duality isomorphism

{\displaystyle H^{i}(X,R){\overset {\cong }{\to }}H_{n-i}(X,R)}

is defined by cap product with the fundamental class of X.

== Brief history of singular cohomology ==

Although cohomology is fundamental to modern algebraic topology, its importance was not seen for some 40 years after the development of homology. The concept of dual cell structure, which Henri Poincaré used in his proof of his Poincaré duality theorem, contained the beginning of the idea of cohomology, but this was not seen until later.

There were various precursors to cohomology. In the mid-1920s, J. W. Alexander and Solomon Lefschetz founded the intersection theory of cycles on manifolds. On a closed oriented n-dimensional manifold M, an i-cycle and a j-cycle with nonempty intersection will, if in general position, have as their intersection an (i + j − n)-cycle. This leads to a multiplication of homology classes

{\displaystyle H_{i}(M)\times H_{j}(M)\to H_{i+j-n}(M),}

which (in retrospect) can be identified with the cup product on the cohomology of M.

Alexander had by 1930 defined a first notion of a cochain, by thinking of an i-cochain on a space X as a function on small neighborhoods of the diagonal in X^{i+1}. In 1931, Georges de Rham related homology and differential forms, proving de Rham's theorem. This result can be stated more simply in terms of cohomology. In 1934, Lev Pontryagin proved the Pontryagin duality theorem, a result on topological groups. This (in rather special cases) provided an interpretation of Poincaré duality and Alexander duality in terms of group characters.
At a 1935 conference in Moscow, Andrey Kolmogorov and Alexander both introduced cohomology and tried to construct a cohomology product structure. In 1936, Norman Steenrod constructed Čech cohomology by dualizing Čech homology. From 1936 to 1938, Hassler Whitney and Eduard Čech developed the cup product (making cohomology into a graded ring) and the cap product, and realized that Poincaré duality can be stated in terms of the cap product. Their theory was still limited to finite cell complexes. In 1944, Samuel Eilenberg overcame the technical limitations, and gave the modern definition of singular homology and cohomology. In 1945, Eilenberg and Steenrod stated the axioms defining a homology or cohomology theory, discussed below. In their 1952 book, Foundations of Algebraic Topology, they proved that the existing homology and cohomology theories did indeed satisfy their axioms. In 1946, Jean Leray defined sheaf cohomology. In 1948, Edwin Spanier, building on work of Alexander and Kolmogorov, developed Alexander–Spanier cohomology.

== Sheaf cohomology ==

Sheaf cohomology is a rich generalization of singular cohomology, allowing more general "coefficients" than simply an abelian group. For every sheaf of abelian groups E on a topological space X, one has cohomology groups H^i(X, E) for integers i. In particular, in the case of the constant sheaf on X associated with an abelian group A, the resulting groups H^i(X, A) coincide with singular cohomology for X a manifold or CW complex (though not for arbitrary spaces X). Starting in the 1950s, sheaf cohomology has become a central part of algebraic geometry and complex analysis, partly because of the importance of the sheaf of regular functions or the sheaf of holomorphic functions.

Grothendieck elegantly defined and characterized sheaf cohomology in the language of homological algebra. The essential point is to fix the space X and think of sheaf cohomology as a functor from the abelian category of sheaves on X to abelian groups. Start with the functor taking a sheaf E on X to its abelian group of global sections over X, E(X). This functor is left exact, but not necessarily right exact. Grothendieck defined sheaf cohomology groups to be the right derived functors of the left exact functor E ↦ E(X).

That definition suggests various generalizations. For example, one can define the cohomology of a topological space X with coefficients in any complex of sheaves, earlier called hypercohomology (but usually now just "cohomology"). From that point of view, sheaf cohomology becomes a sequence of functors from the derived category of sheaves on X to abelian groups.

In a broad sense of the word, "cohomology" is often used for the right derived functors of a left exact functor on an abelian category, while "homology" is used for the left derived functors of a right exact functor. For example, for a ring R, the Tor groups Tor_i^R(M, N) form a "homology theory" in each variable, the left derived functors of the tensor product M ⊗_R N of R-modules. Likewise, the Ext groups Ext^i_R(M, N) can be viewed as a "cohomology theory" in each variable, the right derived functors of the Hom functor Hom_R(M, N).

Sheaf cohomology can be identified with a type of Ext group: for a sheaf E on a topological space X, H^i(X, E) is isomorphic to Ext^i(Z_X, E), where Z_X denotes the constant sheaf associated with the integers Z, and Ext is taken in the abelian category of sheaves on X.
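A small worked case of the derived-functor formalism (a standard computation): over R = Z, the projective resolution 0 → Z → Z → Z/n → 0, with the first map multiplication by n, computes Ext against Z. Applying Hom_Z(−, Z) and dropping the Z/n term gives the two-term complex Z → Z, again multiplication by n, whence

{\displaystyle \operatorname {Ext} _{\mathbb {Z} }^{0}(\mathbb {Z} /n,\mathbb {Z} )=0,\qquad \operatorname {Ext} _{\mathbb {Z} }^{1}(\mathbb {Z} /n,\mathbb {Z} )\cong \mathbb {Z} /n.}

This Z/n is exactly the kind of torsion correction term appearing in the universal coefficient theorem quoted earlier.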
== Cohomology of varieties == There are numerous machines built for computing the cohomology of algebraic varieties. The simplest case is the determination of cohomology for smooth projective varieties over a field of characteristic 0 {\displaystyle 0} . Tools from Hodge theory, called Hodge structures, help give computations of the cohomology of these types of varieties (with the addition of more refined information). In the simplest case the cohomology of a smooth hypersurface in P n {\displaystyle \mathbb {P} ^{n}} can be determined from the degree of the polynomial alone. When considering varieties over a finite field, or a field of characteristic p {\displaystyle p} , more powerful tools are required because the classical definitions of homology/cohomology break down: a variety over a finite field has only finitely many points. Grothendieck came up with the idea for a Grothendieck topology and used sheaf cohomology over the étale topology to define the cohomology theory for varieties over a finite field. Using the étale topology for a variety over a field of characteristic p {\displaystyle p} one can construct ℓ {\displaystyle \ell } -adic cohomology for ℓ ≠ p {\displaystyle \ell \neq p} . This is defined as the projective limit H k ( X ; Q ℓ ) := lim ← n ∈ N ⁡ H e t k ( X ; Z / ( ℓ n ) ) ⊗ Z ℓ Q ℓ . {\displaystyle H^{k}(X;\mathbb {Q} _{\ell }):=\varprojlim _{n\in \mathbb {N} }H_{et}^{k}(X;\mathbb {Z} /(\ell ^{n}))\otimes _{\mathbb {Z} _{\ell }}\mathbb {Q} _{\ell }.} If we have a scheme of finite type X = Proj ⁡ ( Z [ x 0 , … , x n ] ( f 1 , … , f k ) ) {\displaystyle X=\operatorname {Proj} \left({\frac {\mathbb {Z} \left[x_{0},\ldots ,x_{n}\right]}{\left(f_{1},\ldots ,f_{k}\right)}}\right)} then there is an equality of dimensions for the Betti cohomology of X ( C ) {\displaystyle X(\mathbb {C} )} and the ℓ {\displaystyle \ell } -adic cohomology of X ( F q ) {\displaystyle X(\mathbb {F} _{q})} whenever the variety is smooth over both fields. In addition to these cohomology theories there are other cohomology theories, called Weil cohomology theories, which behave similarly to singular cohomology. There is a conjectured theory of motives which underlies all of the Weil cohomology theories. Another useful computational tool is the blowup sequence. Given a codimension ≥ 2 {\displaystyle \geq 2} subscheme Z ⊂ X {\displaystyle Z\subset X} there is a Cartesian square E ⟶ B l Z ( X ) ↓ ↓ Z ⟶ X {\displaystyle {\begin{matrix}E&\longrightarrow &Bl_{Z}(X)\\\downarrow &&\downarrow \\Z&\longrightarrow &X\end{matrix}}} From this there is an associated long exact sequence ⋯ → H n ( X ) → H n ( Z ) ⊕ H n ( B l Z ( X ) ) → H n ( E ) → H n + 1 ( X ) → ⋯ {\displaystyle \cdots \to H^{n}(X)\to H^{n}(Z)\oplus H^{n}(Bl_{Z}(X))\to H^{n}(E)\to H^{n+1}(X)\to \cdots } If the subvariety Z {\displaystyle Z} is smooth, then the connecting morphisms are all trivial, hence H n ( B l Z ( X ) ) ⊕ H n ( Z ) ≅ H n ( X ) ⊕ H n ( E ) . {\displaystyle H^{n}(Bl_{Z}(X))\oplus H^{n}(Z)\cong H^{n}(X)\oplus H^{n}(E)} == Axioms and generalized cohomology theories == There are various ways to define cohomology for topological spaces (such as singular cohomology, Čech cohomology, Alexander–Spanier cohomology or sheaf cohomology). (Here sheaf cohomology is considered only with coefficients in a constant sheaf.) These theories give different answers for some spaces, but there is a large class of spaces on which they all agree.
This is most easily understood axiomatically: there is a list of properties known as the Eilenberg–Steenrod axioms, and any two constructions that share those properties will agree at least on all CW complexes. There are versions of the axioms for a homology theory as well as for a cohomology theory. Some theories can be viewed as tools for computing singular cohomology for special topological spaces, such as simplicial cohomology for simplicial complexes, cellular cohomology for CW complexes, and de Rham cohomology for smooth manifolds. One of the Eilenberg–Steenrod axioms for a cohomology theory is the dimension axiom: if P is a single point, then Hi(P) = 0 for all i ≠ 0. Around 1960, George W. Whitehead observed that it is fruitful to omit the dimension axiom completely: this gives the notion of a generalized homology theory or a generalized cohomology theory, defined below. There are generalized cohomology theories such as K-theory or complex cobordism that give rich information about a topological space, not directly accessible from singular cohomology. (In this context, singular cohomology is often called "ordinary cohomology".) By definition, a generalized homology theory is a sequence of functors hi (for integers i) from the category of CW-pairs (X, A) (so X is a CW complex and A is a subcomplex) to the category of abelian groups, together with a natural transformation ∂i: hi(X, A) → hi−1(A) called the boundary homomorphism (here hi−1(A) is a shorthand for hi−1(A,∅)). The axioms are: Homotopy: If f : ( X , A ) → ( Y , B ) {\displaystyle f:(X,A)\to (Y,B)} is homotopic to g : ( X , A ) → ( Y , B ) {\displaystyle g:(X,A)\to (Y,B)} , then the induced homomorphisms on homology are the same. Exactness: Each pair (X,A) induces a long exact sequence in homology, via the inclusions f: A → X and g: (X,∅) → (X,A): ⋯ → h i ( A ) → f ∗ h i ( X ) → g ∗ h i ( X , A ) → ∂ h i − 1 ( A ) → ⋯ . {\displaystyle \cdots \to h_{i}(A){\overset {f_{*}}{\to }}h_{i}(X){\overset {g_{*}}{\to }}h_{i}(X,A){\overset {\partial }{\to }}h_{i-1}(A)\to \cdots .} Excision: If X is the union of subcomplexes A and B, then the inclusion f: (A,A∩B) → (X,B) induces an isomorphism h i ( A , A ∩ B ) → f ∗ h i ( X , B ) {\displaystyle h_{i}(A,A\cap B){\overset {f_{*}}{\to }}h_{i}(X,B)} for every i. Additivity: If (X,A) is the disjoint union of a set of pairs (Xα,Aα), then the inclusions (Xα,Aα) → (X,A) induce an isomorphism from the direct sum: ⨁ α h i ( X α , A α ) → h i ( X , A ) {\displaystyle \bigoplus _{\alpha }h_{i}(X_{\alpha },A_{\alpha })\to h_{i}(X,A)} for every i. The axioms for a generalized cohomology theory are obtained by reversing the arrows, roughly speaking. In more detail, a generalized cohomology theory is a sequence of contravariant functors hi (for integers i) from the category of CW-pairs to the category of abelian groups, together with a natural transformation d: hi(A) → hi+1(X,A) called the boundary homomorphism (writing hi(A) for hi(A,∅)). The axioms are: Homotopy: Homotopic maps induce the same homomorphism on cohomology. Exactness: Each pair (X,A) induces a long exact sequence in cohomology, via the inclusions f: A → X and g: (X,∅) → (X,A): ⋯ → h i ( X , A ) → g ∗ h i ( X ) → f ∗ h i ( A ) → d h i + 1 ( X , A ) → ⋯ . 
{\displaystyle \cdots \to h^{i}(X,A){\overset {g_{*}}{\to }}h^{i}(X){\overset {f_{*}}{\to }}h^{i}(A){\overset {d}{\to }}h^{i+1}(X,A)\to \cdots .} Excision: If X is the union of subcomplexes A and B, then the inclusion f: (A,A∩B) → (X,B) induces an isomorphism h i ( X , B ) → f ∗ h i ( A , A ∩ B ) {\displaystyle h^{i}(X,B){\overset {f_{*}}{\to }}h^{i}(A,A\cap B)} for every i. Additivity: If (X,A) is the disjoint union of a set of pairs (Xα,Aα), then the inclusions (Xα,Aα) → (X,A) induce an isomorphism to the product group: h i ( X , A ) → ∏ α h i ( X α , A α ) {\displaystyle h^{i}(X,A)\to \prod _{\alpha }h^{i}(X_{\alpha },A_{\alpha })} for every i. A spectrum determines both a generalized homology theory and a generalized cohomology theory. A fundamental result by Brown, Whitehead, and Adams says that every generalized homology theory comes from a spectrum, and likewise every generalized cohomology theory comes from a spectrum. This generalizes the representability of ordinary cohomology by Eilenberg–MacLane spaces. A subtle point is that the functor from the stable homotopy category (the homotopy category of spectra) to generalized homology theories on CW-pairs is not an equivalence, although it gives a bijection on isomorphism classes; there are nonzero maps in the stable homotopy category (called phantom maps) that induce the zero map between homology theories on CW-pairs. Likewise, the functor from the stable homotopy category to generalized cohomology theories on CW-pairs is not an equivalence. It is the stable homotopy category, not these other categories, that has good properties such as being triangulated. If one prefers homology or cohomology theories to be defined on all topological spaces rather than on CW complexes, one standard approach is to include the axiom that every weak homotopy equivalence induces an isomorphism on homology or cohomology. (That is true for singular homology or singular cohomology, but not for sheaf cohomology, for example.) Since every space admits a weak homotopy equivalence from a CW complex, this axiom reduces homology or cohomology theories on all spaces to the corresponding theory on CW complexes. Some examples of generalized cohomology theories are: Stable cohomotopy groups π S ∗ ( X ) . {\displaystyle \pi _{S}^{*}(X).} The corresponding homology theory is used more often: stable homotopy groups π ∗ S ( X ) . {\displaystyle \pi _{*}^{S}(X).} Various different flavors of cobordism groups, based on studying a space by considering all maps from it to manifolds: unoriented cobordism M O ∗ ( X ) {\displaystyle MO^{*}(X)} oriented cobordism M S O ∗ ( X ) , {\displaystyle MSO^{*}(X),} complex cobordism M U ∗ ( X ) , {\displaystyle MU^{*}(X),} and so on. Complex cobordism has turned out to be especially powerful in homotopy theory. It is closely related to formal groups, via a theorem of Daniel Quillen. Various different flavors of topological K-theory, based on studying a space by considering all vector bundles over it: K O ∗ ( X ) {\displaystyle KO^{*}(X)} (real periodic K-theory), k o ∗ ( X ) {\displaystyle ko^{*}(X)} (real connective K-theory), K ∗ ( X ) {\displaystyle K^{*}(X)} (complex periodic K-theory), k u ∗ ( X ) {\displaystyle ku^{*}(X)} (complex connective K-theory), and so on. Brown–Peterson cohomology, Morava K-theory, Morava E-theory, and other theories built from complex cobordism. Various flavors of elliptic cohomology. Many of these theories carry richer information than ordinary cohomology, but are harder to compute. 
A cohomology theory E is said to be multiplicative if E ∗ ( X ) {\displaystyle E^{*}(X)} has the structure of a graded ring for each space X. In the language of spectra, there are several more precise notions of a ring spectrum, such as an E∞ ring spectrum, where the product is commutative and associative in a strong sense. == Other cohomology theories == Cohomology theories in a broader sense (invariants of other algebraic or geometric structures, rather than of topological spaces) include, for example, group cohomology, Lie algebra cohomology, Hochschild cohomology, Galois cohomology, étale cohomology of schemes, and crystalline cohomology. == See also == complex-oriented cohomology theory == Citations == == References == Dieudonné, Jean (1989), History of Algebraic and Differential Topology, Birkhäuser, ISBN 0-8176-3388-X, MR 0995842 Dold, Albrecht (1972), Lectures on Algebraic Topology, Springer-Verlag, ISBN 978-3-540-58660-9, MR 0415602 Eilenberg, Samuel; Steenrod, Norman (1952), Foundations of Algebraic Topology, Princeton University Press, ISBN 9780691627236, MR 0050886 Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics, vol. 52, New York, Heidelberg: Springer-Verlag, ISBN 0-387-90244-9, MR 0463157 Hatcher, Allen (2001), Algebraic Topology, Cambridge University Press, ISBN 0-521-79540-0, MR 1867354 "Cohomology", Encyclopedia of Mathematics, EMS Press, 2001 [1994]. May, J. Peter (1999), A Concise Course in Algebraic Topology (PDF), University of Chicago Press, ISBN 0-226-51182-0, MR 1702278 Switzer, Robert (1975), Algebraic Topology — Homology and Homotopy, Springer-Verlag, ISBN 3-540-42750-3, MR 0385836 Thom, René (1954), "Quelques propriétés globales des variétés différentiables", Commentarii Mathematici Helvetici, 28: 17–86, doi:10.1007/BF02566923, MR 0061823, S2CID 120243638
Wikipedia/Cohomology_theory
In mathematics, homotopical algebra is a collection of concepts comprising the nonabelian aspects of homological algebra, and possibly the abelian aspects as special cases. The homotopical nomenclature stems from the fact that a common approach to such generalizations is via abstract homotopy theory, as in nonabelian algebraic topology, and in particular the theory of closed model categories. This subject has received much attention in recent years due to new foundational work of Vladimir Voevodsky, Eric Friedlander, Andrei Suslin, and others resulting in the A1 homotopy theory for quasiprojective varieties over a field. Voevodsky has used this new algebraic homotopy theory to prove the Milnor conjecture (for which he was awarded the Fields Medal) and later, in collaboration with Markus Rost, the full Bloch–Kato conjecture. == See also == Derived algebraic geometry Derivator Cotangent complex - one of the first objects discovered using homotopical algebra L∞ Algebra A∞ Algebra Categorical algebra Nonabelian homological algebra == References == Goerss, P. G.; Jardine, J. F. (1999), Simplicial Homotopy Theory, Progress in Mathematics, vol. 174, Basel, Boston, Berlin: Birkhäuser, ISBN 978-3-7643-6064-1 Hovey, Mark (1999), Model categories, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-1359-1 Quillen, Daniel (1967), Homotopical Algebra, Berlin, New York: Springer-Verlag, ISBN 978-0-387-03914-5 == External links == An abstract for a talk on the proof of the full Bloch–Kato conjecture
Wikipedia/Homotopical_algebra
In mathematics, particularly homological algebra, the zig-zag lemma asserts the existence of a particular long exact sequence in the homology groups of certain chain complexes. The result is valid in every abelian category. == Statement == In an abelian category (such as the category of abelian groups or the category of vector spaces over a given field), let ( A , ∂ ∙ ) , ( B , ∂ ∙ ′ ) {\displaystyle ({\mathcal {A}},\partial _{\bullet }),({\mathcal {B}},\partial _{\bullet }')} and ( C , ∂ ∙ ″ ) {\displaystyle ({\mathcal {C}},\partial _{\bullet }'')} be chain complexes that fit into the following short exact sequence: 0 ⟶ A ⟶ α B ⟶ β C ⟶ 0 {\displaystyle 0\longrightarrow {\mathcal {A}}\mathrel {\stackrel {\alpha }{\longrightarrow }} {\mathcal {B}}\mathrel {\stackrel {\beta }{\longrightarrow }} {\mathcal {C}}\longrightarrow 0} Such a sequence is shorthand for a commutative diagram in which each row 0 ⟶ A_n ⟶ B_n ⟶ C_n ⟶ 0 is an exact sequence and each column is a chain complex. The zig-zag lemma asserts that there is a collection of boundary maps δ n : H n ( C ) ⟶ H n − 1 ( A ) , {\displaystyle \delta _{n}:H_{n}({\mathcal {C}})\longrightarrow H_{n-1}({\mathcal {A}}),} that make the following sequence exact: ⋯ ⟶ H n ( A ) ⟶ α ∗ H n ( B ) ⟶ β ∗ H n ( C ) ⟶ δ n H n − 1 ( A ) ⟶ α ∗ H n − 1 ( B ) ⟶ ⋯ {\displaystyle \cdots \longrightarrow H_{n}({\mathcal {A}})\mathrel {\stackrel {\alpha _{*}}{\longrightarrow }} H_{n}({\mathcal {B}})\mathrel {\stackrel {\beta _{*}}{\longrightarrow }} H_{n}({\mathcal {C}})\mathrel {\stackrel {\delta _{n}}{\longrightarrow }} H_{n-1}({\mathcal {A}})\mathrel {\stackrel {\alpha _{*}}{\longrightarrow }} H_{n-1}({\mathcal {B}})\longrightarrow \cdots } The maps α ∗ {\displaystyle \alpha _{*}^{}} and β ∗ {\displaystyle \beta _{*}^{}} are the usual maps induced on homology. The boundary maps δ n {\displaystyle \delta _{n}^{}} are explained below. The name of the lemma arises from the "zig-zag" behavior of the maps in the sequence. A variant version of the zig-zag lemma is commonly known as the "snake lemma" (it extracts the essence of the proof of the zig-zag lemma given below). == Construction of the boundary maps == The maps δ n {\displaystyle \delta _{n}^{}} are defined using a standard diagram chasing argument. Let c ∈ C n {\displaystyle c\in C_{n}} represent a class in H n ( C ) {\displaystyle H_{n}({\mathcal {C}})} , so ∂ n ″ ( c ) = 0 {\displaystyle \partial _{n}''(c)=0} . Exactness of the row implies that β n {\displaystyle \beta _{n}^{}} is surjective, so there must be some b ∈ B n {\displaystyle b\in B_{n}} with β n ( b ) = c {\displaystyle \beta _{n}^{}(b)=c} . By commutativity of the diagram, β n − 1 ∂ n ′ ( b ) = ∂ n ″ β n ( b ) = ∂ n ″ ( c ) = 0. {\displaystyle \beta _{n-1}\partial _{n}'(b)=\partial _{n}''\beta _{n}(b)=\partial _{n}''(c)=0.} By exactness, ∂ n ′ ( b ) ∈ ker ⁡ β n − 1 = i m α n − 1 . {\displaystyle \partial _{n}'(b)\in \ker \beta _{n-1}=\mathrm {im} \;\alpha _{n-1}.} Thus, since α n − 1 {\displaystyle \alpha _{n-1}^{}} is injective, there is a unique element a ∈ A n − 1 {\displaystyle a\in A_{n-1}} such that α n − 1 ( a ) = ∂ n ′ ( b ) {\displaystyle \alpha _{n-1}(a)=\partial _{n}'(b)} . This element a is a cycle: α n − 2 ∂ n − 1 ( a ) = ∂ n − 1 ′ α n − 1 ( a ) = ∂ n − 1 ′ ∂ n ′ ( b ) = 0 , {\displaystyle \alpha _{n-2}\partial _{n-1}(a)=\partial _{n-1}'\alpha _{n-1}(a)=\partial _{n-1}'\partial _{n}'(b)=0,} since ∂ 2 = 0 {\displaystyle \partial ^{2}=0} , so ∂ n − 1 ( a ) ∈ ker ⁡ α n − 2 = { 0 } {\displaystyle \partial _{n-1}(a)\in \ker \alpha _{n-2}=\{0\}} , that is, ∂ n − 1 ( a ) = 0. Hence a {\displaystyle a} represents a class in H n − 1 ( A ) {\displaystyle H_{n-1}({\mathcal {A}})} . We can now define δ n [ c ] = [ a ] . {\displaystyle \delta _{n}^{}[c]=[a].} With the boundary maps defined, one can show that they are well-defined (that is, independent of the choices of c and b). The proof uses diagram chasing arguments similar to that above.
Such arguments are also used to show that the sequence in homology is exact at each group. == See also == Mayer–Vietoris sequence == References == Hatcher, Allen (2002). Algebraic Topology. Cambridge University Press. ISBN 0-521-79540-0. Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, vol. 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556 Munkres, James R. (1993). Elements of Algebraic Topology. New York: Westview Press. ISBN 0-201-62728-0.
Wikipedia/Zig-zag_lemma
In mathematics, especially homological algebra and other applications of abelian category theory, the five lemma is an important and widely used lemma about commutative diagrams. The five lemma is not only valid for abelian categories but also works in the category of groups, for example. The five lemma can be thought of as a combination of two other theorems, the four lemmas, which are dual to each other. == Statements == Consider the following commutative diagram in any abelian category (such as the category of abelian groups or the category of vector spaces over a given field) or in the category of groups: two exact rows A → B → C → D → E (with maps f, g, h, j) and A′ → B′ → C′ → D′ → E′ (with maps r, s, t, u), connected by vertical maps l: A → A′, m: B → B′, n: C → C′, p: D → D′ and q: E → E′. The five lemma states that, if the rows are exact, m and p are isomorphisms, l is an epimorphism, and q is a monomorphism, then n is also an isomorphism. The two four-lemmas state: (1) if the rows are exact, m and p are epimorphisms, and q is a monomorphism, then n is an epimorphism; (2) if the rows are exact, m and p are monomorphisms, and l is an epimorphism, then n is a monomorphism. == Proof == The method of proof we shall use is commonly referred to as diagram chasing. We shall prove the five lemma by individually proving each of the two four lemmas. To perform diagram chasing, we assume that we are in a category of modules over some ring, so that we may speak of elements of the objects in the diagram and think of the morphisms of the diagram as functions (in fact, homomorphisms) acting on those elements. Then a morphism is a monomorphism if and only if it is injective, and it is an epimorphism if and only if it is surjective. Similarly, to deal with exactness, we can think of kernels and images in a function-theoretic sense. The proof will still apply to any (small) abelian category because of Mitchell's embedding theorem, which states that any small abelian category can be represented as a category of modules over some ring. For the category of groups, just turn all additive notation below into multiplicative notation, and note that commutativity of the abelian groups is never used. So, to prove (1), assume that m and p are surjective and q is injective. Let c′ be an element of C′. Since p is surjective, there exists an element d in D with p(d) = t(c′). By commutativity of the diagram, u(p(d)) = q(j(d)). Since im t = ker u by exactness, 0 = u(t(c′)) = u(p(d)) = q(j(d)). Since q is injective, j(d) = 0, so d is in ker j = im h. Therefore, there exists c in C with h(c) = d. Then t(n(c)) = p(h(c)) = t(c′). Since t is a homomorphism, it follows that t(c′ − n(c)) = 0. By exactness, c′ − n(c) is in the image of s, so there exists b′ in B′ with s(b′) = c′ − n(c). Since m is surjective, we can find b in B such that b′ = m(b). By commutativity, n(g(b)) = s(m(b)) = c′ − n(c). Since n is a homomorphism, n(g(b) + c) = n(g(b)) + n(c) = c′ − n(c) + n(c) = c′. Therefore, n is surjective. Then, to prove (2), assume that m and p are injective and l is surjective. Let c in C be such that n(c) = 0. t(n(c)) is then 0. By commutativity, p(h(c)) = 0. Since p is injective, h(c) = 0. By exactness, there is an element b of B such that g(b) = c. By commutativity, s(m(b)) = n(g(b)) = n(c) = 0. By exactness, there is then an element a′ of A′ such that r(a′) = m(b). Since l is surjective, there is a in A such that l(a) = a′. By commutativity, m(f(a)) = r(l(a)) = m(b). Since m is injective, f(a) = b. So c = g(f(a)). Since the composition of g and f is trivial, c = 0. Therefore, n is injective. Combining the two four lemmas now proves the entire five lemma.
== Applications == The five lemma is often applied to long exact sequences: when computing homology or cohomology of a given object, one typically employs a simpler subobject whose homology/cohomology is known, and arrives at a long exact sequence which involves the unknown homology groups of the original object. This alone is often not sufficient to determine the unknown homology groups, but if one can compare the original object and subobject to well-understood ones via morphisms, then a morphism between the respective long exact sequences is induced, and the five lemma can then be used to determine the unknown homology groups. == See also == Short five lemma, a special case of the five lemma for short exact sequences Snake lemma, another lemma proved by diagram chasing Nine lemma == Notes == == References == Scott, W.R. (1987) [1964]. Group Theory. Dover. ISBN 978-0-486-65377-8. Massey, William S. (1991), A basic course in algebraic topology, Graduate texts in mathematics, vol. 127 (3rd ed.), Springer, ISBN 978-0-387-97430-9
Wikipedia/Five_lemma
In mathematics, hyperbolic functions are analogues of the ordinary trigonometric functions, but defined using the hyperbola rather than the circle. Just as the points (cos t, sin t) form a circle with a unit radius, the points (cosh t, sinh t) form the right half of the unit hyperbola. Also, similarly to how the derivatives of sin(t) and cos(t) are cos(t) and –sin(t) respectively, the derivatives of sinh(t) and cosh(t) are cosh(t) and sinh(t) respectively. Hyperbolic functions are used to express the angle of parallelism in hyperbolic geometry. They are used to express Lorentz boosts as hyperbolic rotations in special relativity. They also occur in the solutions of many linear differential equations (such as the equation defining a catenary), cubic equations, and Laplace's equation in Cartesian coordinates. Laplace's equations are important in many areas of physics, including electromagnetic theory, heat transfer, and fluid dynamics. The basic hyperbolic functions are: hyperbolic sine "sinh", hyperbolic cosine "cosh", from which are derived: hyperbolic tangent "tanh", hyperbolic cotangent "coth", hyperbolic secant "sech", hyperbolic cosecant "csch" or "cosech", corresponding to the derived trigonometric functions. The inverse hyperbolic functions are: inverse hyperbolic sine "arsinh" (also denoted "sinh−1", "asinh" or sometimes "arcsinh"), inverse hyperbolic cosine "arcosh" (also denoted "cosh−1", "acosh" or sometimes "arccosh"), inverse hyperbolic tangent "artanh" (also denoted "tanh−1", "atanh" or sometimes "arctanh"), inverse hyperbolic cotangent "arcoth" (also denoted "coth−1", "acoth" or sometimes "arccoth"), inverse hyperbolic secant "arsech" (also denoted "sech−1", "asech" or sometimes "arcsech"), and inverse hyperbolic cosecant "arcsch" (also denoted "arcosech", "csch−1", "cosech−1", "acsch", "acosech", or sometimes "arccsch" or "arccosech"). The hyperbolic functions take a real argument called a hyperbolic angle. The magnitude of a hyperbolic angle is the area of its hyperbolic sector of the hyperbola xy = 1. The hyperbolic functions may be defined in terms of the legs of a right triangle covering this sector. In complex analysis, the hyperbolic functions arise when applying the ordinary sine and cosine functions to an imaginary angle. The hyperbolic sine and the hyperbolic cosine are entire functions. As a result, the other hyperbolic functions are meromorphic in the whole complex plane. By the Lindemann–Weierstrass theorem, the hyperbolic functions have a transcendental value for every non-zero algebraic value of the argument. == History == The first known calculation of a hyperbolic trigonometry problem is attributed to Gerardus Mercator when issuing the Mercator map projection circa 1566. It requires tabulating solutions to a transcendental equation involving hyperbolic functions. The first to suggest a similarity between the sector of the circle and that of the hyperbola was Isaac Newton in his 1687 Principia Mathematica. Roger Cotes suggested modifying the trigonometric functions using the imaginary unit i = − 1 {\displaystyle i={\sqrt {-1}}} to obtain an oblate spheroid from a prolate one. Hyperbolic functions were formally introduced in 1757 by Vincenzo Riccati. Riccati used Sc. and Cc. (sinus/cosinus circulare) to refer to circular functions and Sh. and Ch. (sinus/cosinus hyperbolico) to refer to hyperbolic functions.
As early as 1759, Daviet de Foncenex showed the interchangeability of the trigonometric and hyperbolic functions using the imaginary unit and extended de Moivre's formula to hyperbolic functions. During the 1760s, Johann Heinrich Lambert systematized the use of these functions and provided exponential expressions in various publications. Lambert credited Riccati for the terminology and names of the functions, but altered the abbreviations to those used today. == Notation == == Definitions == There are various equivalent ways to define the hyperbolic functions. === Exponential definitions === In terms of the exponential function: Hyperbolic sine: the odd part of the exponential function, that is, sinh ⁡ x = e x − e − x 2 = e 2 x − 1 2 e x = 1 − e − 2 x 2 e − x . {\displaystyle \sinh x={\frac {e^{x}-e^{-x}}{2}}={\frac {e^{2x}-1}{2e^{x}}}={\frac {1-e^{-2x}}{2e^{-x}}}.} Hyperbolic cosine: the even part of the exponential function, that is, cosh ⁡ x = e x + e − x 2 = e 2 x + 1 2 e x = 1 + e − 2 x 2 e − x . {\displaystyle \cosh x={\frac {e^{x}+e^{-x}}{2}}={\frac {e^{2x}+1}{2e^{x}}}={\frac {1+e^{-2x}}{2e^{-x}}}.} Hyperbolic tangent: tanh ⁡ x = sinh ⁡ x cosh ⁡ x = e x − e − x e x + e − x = e 2 x − 1 e 2 x + 1 . {\displaystyle \tanh x={\frac {\sinh x}{\cosh x}}={\frac {e^{x}-e^{-x}}{e^{x}+e^{-x}}}={\frac {e^{2x}-1}{e^{2x}+1}}.} Hyperbolic cotangent: for x ≠ 0, coth ⁡ x = cosh ⁡ x sinh ⁡ x = e x + e − x e x − e − x = e 2 x + 1 e 2 x − 1 . {\displaystyle \coth x={\frac {\cosh x}{\sinh x}}={\frac {e^{x}+e^{-x}}{e^{x}-e^{-x}}}={\frac {e^{2x}+1}{e^{2x}-1}}.} Hyperbolic secant: sech ⁡ x = 1 cosh ⁡ x = 2 e x + e − x = 2 e x e 2 x + 1 . {\displaystyle \operatorname {sech} x={\frac {1}{\cosh x}}={\frac {2}{e^{x}+e^{-x}}}={\frac {2e^{x}}{e^{2x}+1}}.} Hyperbolic cosecant: for x ≠ 0, csch ⁡ x = 1 sinh ⁡ x = 2 e x − e − x = 2 e x e 2 x − 1 . {\displaystyle \operatorname {csch} x={\frac {1}{\sinh x}}={\frac {2}{e^{x}-e^{-x}}}={\frac {2e^{x}}{e^{2x}-1}}.} === Differential equation definitions === The hyperbolic functions may be defined as solutions of differential equations: The hyperbolic sine and cosine are the solution (s, c) of the system c ′ ( x ) = s ( x ) , s ′ ( x ) = c ( x ) , {\displaystyle {\begin{aligned}c'(x)&=s(x),\\s'(x)&=c(x),\\\end{aligned}}} with the initial conditions s ( 0 ) = 0 , c ( 0 ) = 1. {\displaystyle s(0)=0,c(0)=1.} The initial conditions make the solution unique; without them any pair of functions ( a e x + b e − x , a e x − b e − x ) {\displaystyle (ae^{x}+be^{-x},ae^{x}-be^{-x})} would be a solution. sinh(x) and cosh(x) are also the unique solution of the equation f ″(x) = f (x), such that f (0) = 1, f ′(0) = 0 for the hyperbolic cosine, and f (0) = 0, f ′(0) = 1 for the hyperbolic sine. === Complex trigonometric definitions === Hyperbolic functions may also be deduced from trigonometric functions with complex arguments: Hyperbolic sine: sinh ⁡ x = − i sin ⁡ ( i x ) . {\displaystyle \sinh x=-i\sin(ix).} Hyperbolic cosine: cosh ⁡ x = cos ⁡ ( i x ) . {\displaystyle \cosh x=\cos(ix).} Hyperbolic tangent: tanh ⁡ x = − i tan ⁡ ( i x ) . {\displaystyle \tanh x=-i\tan(ix).} Hyperbolic cotangent: coth ⁡ x = i cot ⁡ ( i x ) . {\displaystyle \coth x=i\cot(ix).} Hyperbolic secant: sech ⁡ x = sec ⁡ ( i x ) . {\displaystyle \operatorname {sech} x=\sec(ix).} Hyperbolic cosecant: csch ⁡ x = i csc ⁡ ( i x ) . {\displaystyle \operatorname {csch} x=i\csc(ix).} where i is the imaginary unit with i2 = −1.
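As a quick illustration of the exponential definitions above, here is a minimal Python sketch (the function names are mine, not standard library notation) that builds sinh, cosh and tanh directly from exp and checks them against the math module:

import math

def sinh(x):
    # odd part of the exponential function: (e^x - e^-x) / 2
    return (math.exp(x) - math.exp(-x)) / 2

def cosh(x):
    # even part of the exponential function: (e^x + e^-x) / 2
    return (math.exp(x) + math.exp(-x)) / 2

def tanh(x):
    return sinh(x) / cosh(x)

for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert math.isclose(sinh(x), math.sinh(x), rel_tol=1e-12, abs_tol=1e-15)
    assert math.isclose(cosh(x), math.cosh(x), rel_tol=1e-12)
    assert math.isclose(tanh(x), math.tanh(x), rel_tol=1e-12, abs_tol=1e-15)
print("exponential definitions agree with math.sinh/cosh/tanh")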
The above definitions are related to the exponential definitions via Euler's formula (See § Hyperbolic functions for complex numbers below). == Characterizing properties == === Hyperbolic cosine === It can be shown that the area under the curve of the hyperbolic cosine (over a finite interval) is always equal to the arc length corresponding to that interval: area = ∫ a b cosh ⁡ x d x = ∫ a b 1 + ( d d x cosh ⁡ x ) 2 d x = arc length. {\displaystyle {\text{area}}=\int _{a}^{b}\cosh x\,dx=\int _{a}^{b}{\sqrt {1+\left({\frac {d}{dx}}\cosh x\right)^{2}}}\,dx={\text{arc length.}}} === Hyperbolic tangent === The hyperbolic tangent is the (unique) solution to the differential equation f ′ = 1 − f 2, with f (0) = 0. == Useful relations == The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. In fact, Osborn's rule states that one can convert any trigonometric identity (up to but not including sinhs or implied sinhs of 4th degree) for θ {\displaystyle \theta } , 2 θ {\displaystyle 2\theta } , 3 θ {\displaystyle 3\theta } or θ {\displaystyle \theta } and φ {\displaystyle \varphi } into a hyperbolic identity, by: expanding it completely in terms of integral powers of sines and cosines, changing sine to sinh and cosine to cosh, and switching the sign of every term containing a product of two sinhs. Odd and even functions: sinh ⁡ ( − x ) = − sinh ⁡ x cosh ⁡ ( − x ) = cosh ⁡ x {\displaystyle {\begin{aligned}\sinh(-x)&=-\sinh x\\\cosh(-x)&=\cosh x\end{aligned}}} Hence: tanh ⁡ ( − x ) = − tanh ⁡ x coth ⁡ ( − x ) = − coth ⁡ x sech ⁡ ( − x ) = sech ⁡ x csch ⁡ ( − x ) = − csch ⁡ x {\displaystyle {\begin{aligned}\tanh(-x)&=-\tanh x\\\coth(-x)&=-\coth x\\\operatorname {sech} (-x)&=\operatorname {sech} x\\\operatorname {csch} (-x)&=-\operatorname {csch} x\end{aligned}}} Thus, cosh x and sech x are even functions; the others are odd functions. arsech ⁡ x = arcosh ⁡ ( 1 x ) arcsch ⁡ x = arsinh ⁡ ( 1 x ) arcoth ⁡ x = artanh ⁡ ( 1 x ) {\displaystyle {\begin{aligned}\operatorname {arsech} x&=\operatorname {arcosh} \left({\frac {1}{x}}\right)\\\operatorname {arcsch} x&=\operatorname {arsinh} \left({\frac {1}{x}}\right)\\\operatorname {arcoth} x&=\operatorname {artanh} \left({\frac {1}{x}}\right)\end{aligned}}} Hyperbolic sine and cosine satisfy: cosh ⁡ x + sinh ⁡ x = e x cosh ⁡ x − sinh ⁡ x = e − x {\displaystyle {\begin{aligned}\cosh x+\sinh x&=e^{x}\\\cosh x-\sinh x&=e^{-x}\end{aligned}}} which are analogous to Euler's formula, and cosh 2 ⁡ x − sinh 2 ⁡ x = 1 {\displaystyle \cosh ^{2}x-\sinh ^{2}x=1} which is analogous to the Pythagorean trigonometric identity. One also has sech 2 ⁡ x = 1 − tanh 2 ⁡ x csch 2 ⁡ x = coth 2 ⁡ x − 1 {\displaystyle {\begin{aligned}\operatorname {sech} ^{2}x&=1-\tanh ^{2}x\\\operatorname {csch} ^{2}x&=\coth ^{2}x-1\end{aligned}}} for the other functions. 
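These relations are easy to spot-check numerically. The following sketch (an illustration of mine, using Python's math and cmath modules) verifies the analogue of the Pythagorean identity, the decomposition into e^x, and the complex-argument relation cosh(ix) = cos x:

import cmath
import math
import random

random.seed(1)
for _ in range(5):
    x = random.uniform(-3, 3)
    # analogue of the Pythagorean identity: cosh^2 x - sinh^2 x = 1
    assert math.isclose(math.cosh(x) ** 2 - math.sinh(x) ** 2, 1.0, rel_tol=1e-9)
    # cosh x + sinh x = e^x
    assert math.isclose(math.cosh(x) + math.sinh(x), math.exp(x), rel_tol=1e-12)
    # relation to the circular functions: cosh(ix) = cos x, sinh(ix) = i sin x
    assert cmath.isclose(cmath.cosh(1j * x), math.cos(x), abs_tol=1e-12)
    assert cmath.isclose(cmath.sinh(1j * x), 1j * math.sin(x), abs_tol=1e-12)
print("identities verified numerically")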
=== Sums of arguments === sinh ⁡ ( x + y ) = sinh ⁡ x cosh ⁡ y + cosh ⁡ x sinh ⁡ y cosh ⁡ ( x + y ) = cosh ⁡ x cosh ⁡ y + sinh ⁡ x sinh ⁡ y tanh ⁡ ( x + y ) = tanh ⁡ x + tanh ⁡ y 1 + tanh ⁡ x tanh ⁡ y {\displaystyle {\begin{aligned}\sinh(x+y)&=\sinh x\cosh y+\cosh x\sinh y\\\cosh(x+y)&=\cosh x\cosh y+\sinh x\sinh y\\\tanh(x+y)&={\frac {\tanh x+\tanh y}{1+\tanh x\tanh y}}\\\end{aligned}}} particularly cosh ⁡ ( 2 x ) = sinh 2 ⁡ x + cosh 2 ⁡ x = 2 sinh 2 ⁡ x + 1 = 2 cosh 2 ⁡ x − 1 sinh ⁡ ( 2 x ) = 2 sinh ⁡ x cosh ⁡ x tanh ⁡ ( 2 x ) = 2 tanh ⁡ x 1 + tanh 2 ⁡ x {\displaystyle {\begin{aligned}\cosh(2x)&=\sinh ^{2}{x}+\cosh ^{2}{x}=2\sinh ^{2}x+1=2\cosh ^{2}x-1\\\sinh(2x)&=2\sinh x\cosh x\\\tanh(2x)&={\frac {2\tanh x}{1+\tanh ^{2}x}}\\\end{aligned}}} Also: sinh ⁡ x + sinh ⁡ y = 2 sinh ⁡ ( x + y 2 ) cosh ⁡ ( x − y 2 ) cosh ⁡ x + cosh ⁡ y = 2 cosh ⁡ ( x + y 2 ) cosh ⁡ ( x − y 2 ) {\displaystyle {\begin{aligned}\sinh x+\sinh y&=2\sinh \left({\frac {x+y}{2}}\right)\cosh \left({\frac {x-y}{2}}\right)\\\cosh x+\cosh y&=2\cosh \left({\frac {x+y}{2}}\right)\cosh \left({\frac {x-y}{2}}\right)\\\end{aligned}}} === Subtraction formulas === sinh ⁡ ( x − y ) = sinh ⁡ x cosh ⁡ y − cosh ⁡ x sinh ⁡ y cosh ⁡ ( x − y ) = cosh ⁡ x cosh ⁡ y − sinh ⁡ x sinh ⁡ y tanh ⁡ ( x − y ) = tanh ⁡ x − tanh ⁡ y 1 − tanh ⁡ x tanh ⁡ y {\displaystyle {\begin{aligned}\sinh(x-y)&=\sinh x\cosh y-\cosh x\sinh y\\\cosh(x-y)&=\cosh x\cosh y-\sinh x\sinh y\\\tanh(x-y)&={\frac {\tanh x-\tanh y}{1-\tanh x\tanh y}}\\\end{aligned}}} Also: sinh ⁡ x − sinh ⁡ y = 2 cosh ⁡ ( x + y 2 ) sinh ⁡ ( x − y 2 ) cosh ⁡ x − cosh ⁡ y = 2 sinh ⁡ ( x + y 2 ) sinh ⁡ ( x − y 2 ) {\displaystyle {\begin{aligned}\sinh x-\sinh y&=2\cosh \left({\frac {x+y}{2}}\right)\sinh \left({\frac {x-y}{2}}\right)\\\cosh x-\cosh y&=2\sinh \left({\frac {x+y}{2}}\right)\sinh \left({\frac {x-y}{2}}\right)\\\end{aligned}}} === Half argument formulas === sinh ⁡ ( x 2 ) = sinh ⁡ x 2 ( cosh ⁡ x + 1 ) = sgn ⁡ x cosh ⁡ x − 1 2 cosh ⁡ ( x 2 ) = cosh ⁡ x + 1 2 tanh ⁡ ( x 2 ) = sinh ⁡ x cosh ⁡ x + 1 = sgn ⁡ x cosh ⁡ x − 1 cosh ⁡ x + 1 = e x − 1 e x + 1 {\displaystyle {\begin{aligned}\sinh \left({\frac {x}{2}}\right)&={\frac {\sinh x}{\sqrt {2(\cosh x+1)}}}&&=\operatorname {sgn} x\,{\sqrt {\frac {\cosh x-1}{2}}}\\[6px]\cosh \left({\frac {x}{2}}\right)&={\sqrt {\frac {\cosh x+1}{2}}}\\[6px]\tanh \left({\frac {x}{2}}\right)&={\frac {\sinh x}{\cosh x+1}}&&=\operatorname {sgn} x\,{\sqrt {\frac {\cosh x-1}{\cosh x+1}}}={\frac {e^{x}-1}{e^{x}+1}}\end{aligned}}} where sgn is the sign function. If x ≠ 0, then tanh ⁡ ( x 2 ) = cosh ⁡ x − 1 sinh ⁡ x = coth ⁡ x − csch ⁡ x {\displaystyle \tanh \left({\frac {x}{2}}\right)={\frac {\cosh x-1}{\sinh x}}=\coth x-\operatorname {csch} x} === Square formulas === sinh 2 ⁡ x = 1 2 ( cosh ⁡ 2 x − 1 ) cosh 2 ⁡ x = 1 2 ( cosh ⁡ 2 x + 1 ) {\displaystyle {\begin{aligned}\sinh ^{2}x&={\tfrac {1}{2}}(\cosh 2x-1)\\\cosh ^{2}x&={\tfrac {1}{2}}(\cosh 2x+1)\end{aligned}}} === Inequalities === The following inequality is useful in statistics: cosh ⁡ ( t ) ≤ e t 2 / 2 . {\displaystyle \operatorname {cosh} (t)\leq e^{t^{2}/2}.} It can be proved by comparing the Taylor series of the two functions term by term. 
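The addition, half-argument, and inequality formulas above can likewise be checked at random sample points; a small Python sketch of mine (the tolerances are arbitrary choices):

import math
import random

random.seed(42)
for _ in range(1000):
    x = random.uniform(-5.0, 5.0)
    y = random.uniform(-5.0, 5.0)
    # addition formula: sinh(x + y) = sinh x cosh y + cosh x sinh y
    assert math.isclose(math.sinh(x + y),
                        math.sinh(x) * math.cosh(y) + math.cosh(x) * math.sinh(y),
                        rel_tol=1e-9, abs_tol=1e-9)
    # half-argument formula: tanh(x/2) = (e^x - 1) / (e^x + 1)
    assert math.isclose(math.tanh(x / 2),
                        (math.exp(x) - 1) / (math.exp(x) + 1),
                        rel_tol=1e-9, abs_tol=1e-12)
    # inequality used in statistics: cosh(t) <= exp(t^2 / 2)
    # (the tiny fudge factor absorbs floating-point rounding near t = 0)
    assert math.cosh(x) <= math.exp(x * x / 2) * (1 + 1e-12)
print("addition, half-argument and inequality checks passed")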
== Inverse functions as logarithms == arsinh ⁡ ( x ) = ln ⁡ ( x + x 2 + 1 ) arcosh ⁡ ( x ) = ln ⁡ ( x + x 2 − 1 ) x ≥ 1 artanh ⁡ ( x ) = 1 2 ln ⁡ ( 1 + x 1 − x ) | x | < 1 arcoth ⁡ ( x ) = 1 2 ln ⁡ ( x + 1 x − 1 ) | x | > 1 arsech ⁡ ( x ) = ln ⁡ ( 1 x + 1 x 2 − 1 ) = ln ⁡ ( 1 + 1 − x 2 x ) 0 < x ≤ 1 arcsch ⁡ ( x ) = ln ⁡ ( 1 x + 1 x 2 + 1 ) x ≠ 0 {\displaystyle {\begin{aligned}\operatorname {arsinh} (x)&=\ln \left(x+{\sqrt {x^{2}+1}}\right)\\\operatorname {arcosh} (x)&=\ln \left(x+{\sqrt {x^{2}-1}}\right)&&x\geq 1\\\operatorname {artanh} (x)&={\frac {1}{2}}\ln \left({\frac {1+x}{1-x}}\right)&&|x|<1\\\operatorname {arcoth} (x)&={\frac {1}{2}}\ln \left({\frac {x+1}{x-1}}\right)&&|x|>1\\\operatorname {arsech} (x)&=\ln \left({\frac {1}{x}}+{\sqrt {{\frac {1}{x^{2}}}-1}}\right)=\ln \left({\frac {1+{\sqrt {1-x^{2}}}}{x}}\right)&&0<x\leq 1\\\operatorname {arcsch} (x)&=\ln \left({\frac {1}{x}}+{\sqrt {{\frac {1}{x^{2}}}+1}}\right)&&x\neq 0\end{aligned}}} == Derivatives == d d x sinh ⁡ x = cosh ⁡ x d d x cosh ⁡ x = sinh ⁡ x d d x tanh ⁡ x = 1 − tanh 2 ⁡ x = sech 2 ⁡ x = 1 cosh 2 ⁡ x d d x coth ⁡ x = 1 − coth 2 ⁡ x = − csch 2 ⁡ x = − 1 sinh 2 ⁡ x x ≠ 0 d d x sech ⁡ x = − tanh ⁡ x sech ⁡ x d d x csch ⁡ x = − coth ⁡ x csch ⁡ x x ≠ 0 {\displaystyle {\begin{aligned}{\frac {d}{dx}}\sinh x&=\cosh x\\{\frac {d}{dx}}\cosh x&=\sinh x\\{\frac {d}{dx}}\tanh x&=1-\tanh ^{2}x=\operatorname {sech} ^{2}x={\frac {1}{\cosh ^{2}x}}\\{\frac {d}{dx}}\coth x&=1-\coth ^{2}x=-\operatorname {csch} ^{2}x=-{\frac {1}{\sinh ^{2}x}}&&x\neq 0\\{\frac {d}{dx}}\operatorname {sech} x&=-\tanh x\operatorname {sech} x\\{\frac {d}{dx}}\operatorname {csch} x&=-\coth x\operatorname {csch} x&&x\neq 0\end{aligned}}} d d x arsinh ⁡ x = 1 x 2 + 1 d d x arcosh ⁡ x = 1 x 2 − 1 1 < x d d x artanh ⁡ x = 1 1 − x 2 | x | < 1 d d x arcoth ⁡ x = 1 1 − x 2 1 < | x | d d x arsech ⁡ x = − 1 x 1 − x 2 0 < x < 1 d d x arcsch ⁡ x = − 1 | x | 1 + x 2 x ≠ 0 {\displaystyle {\begin{aligned}{\frac {d}{dx}}\operatorname {arsinh} x&={\frac {1}{\sqrt {x^{2}+1}}}\\{\frac {d}{dx}}\operatorname {arcosh} x&={\frac {1}{\sqrt {x^{2}-1}}}&&1<x\\{\frac {d}{dx}}\operatorname {artanh} x&={\frac {1}{1-x^{2}}}&&|x|<1\\{\frac {d}{dx}}\operatorname {arcoth} x&={\frac {1}{1-x^{2}}}&&1<|x|\\{\frac {d}{dx}}\operatorname {arsech} x&=-{\frac {1}{x{\sqrt {1-x^{2}}}}}&&0<x<1\\{\frac {d}{dx}}\operatorname {arcsch} x&=-{\frac {1}{|x|{\sqrt {1+x^{2}}}}}&&x\neq 0\end{aligned}}} == Second derivatives == Each of the functions sinh and cosh is equal to its second derivative, that is: d 2 d x 2 sinh ⁡ x = sinh ⁡ x {\displaystyle {\frac {d^{2}}{dx^{2}}}\sinh x=\sinh x} d 2 d x 2 cosh ⁡ x = cosh ⁡ x . {\displaystyle {\frac {d^{2}}{dx^{2}}}\cosh x=\cosh x\,.} All functions with this property are linear combinations of sinh and cosh, in particular the exponential functions e x {\displaystyle e^{x}} and e − x {\displaystyle e^{-x}} . 
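As a sanity check on the logarithmic forms and the derivative table, the sketch below (my own; tolerances and sample points are arbitrary) compares them with Python's built-in inverse hyperbolic functions and with a central difference quotient:

import math

def arsinh(x):
    # logarithmic form: ln(x + sqrt(x^2 + 1))
    return math.log(x + math.sqrt(x * x + 1))

def artanh(x):
    # logarithmic form, valid for |x| < 1
    return 0.5 * math.log((1 + x) / (1 - x))

for x in (-0.9, -0.3, 0.2, 0.7):
    assert math.isclose(artanh(x), math.atanh(x), rel_tol=1e-12, abs_tol=1e-12)
for x in (-10.0, -1.0, 0.5, 4.0):
    assert math.isclose(arsinh(x), math.asinh(x), rel_tol=1e-9)
    # derivative of arsinh via a central difference, against 1/sqrt(x^2 + 1)
    h = 1e-6
    numeric = (math.asinh(x + h) - math.asinh(x - h)) / (2 * h)
    assert math.isclose(numeric, 1 / math.sqrt(x * x + 1), rel_tol=1e-6)
print("logarithmic forms and derivative formulas check out")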
== Standard integrals == ∫ sinh ⁡ ( a x ) d x = a − 1 cosh ⁡ ( a x ) + C ∫ cosh ⁡ ( a x ) d x = a − 1 sinh ⁡ ( a x ) + C ∫ tanh ⁡ ( a x ) d x = a − 1 ln ⁡ ( cosh ⁡ ( a x ) ) + C ∫ coth ⁡ ( a x ) d x = a − 1 ln ⁡ | sinh ⁡ ( a x ) | + C ∫ sech ⁡ ( a x ) d x = a − 1 arctan ⁡ ( sinh ⁡ ( a x ) ) + C ∫ csch ⁡ ( a x ) d x = a − 1 ln ⁡ | tanh ⁡ ( a x 2 ) | + C = a − 1 ln ⁡ | coth ⁡ ( a x ) − csch ⁡ ( a x ) | + C = − a − 1 arcoth ⁡ ( cosh ⁡ ( a x ) ) + C {\displaystyle {\begin{aligned}\int \sinh(ax)\,dx&=a^{-1}\cosh(ax)+C\\\int \cosh(ax)\,dx&=a^{-1}\sinh(ax)+C\\\int \tanh(ax)\,dx&=a^{-1}\ln(\cosh(ax))+C\\\int \coth(ax)\,dx&=a^{-1}\ln \left|\sinh(ax)\right|+C\\\int \operatorname {sech} (ax)\,dx&=a^{-1}\arctan(\sinh(ax))+C\\\int \operatorname {csch} (ax)\,dx&=a^{-1}\ln \left|\tanh \left({\frac {ax}{2}}\right)\right|+C=a^{-1}\ln \left|\coth \left(ax\right)-\operatorname {csch} \left(ax\right)\right|+C=-a^{-1}\operatorname {arcoth} \left(\cosh \left(ax\right)\right)+C\end{aligned}}} The following integrals can be proved using hyperbolic substitution: ∫ 1 a 2 + u 2 d u = arsinh ⁡ ( u a ) + C ∫ 1 u 2 − a 2 d u = sgn ⁡ u arcosh ⁡ | u a | + C ∫ 1 a 2 − u 2 d u = a − 1 artanh ⁡ ( u a ) + C u 2 < a 2 ∫ 1 a 2 − u 2 d u = a − 1 arcoth ⁡ ( u a ) + C u 2 > a 2 ∫ 1 u a 2 − u 2 d u = − a − 1 arsech ⁡ | u a | + C ∫ 1 u a 2 + u 2 d u = − a − 1 arcsch ⁡ | u a | + C {\displaystyle {\begin{aligned}\int {{\frac {1}{\sqrt {a^{2}+u^{2}}}}\,du}&=\operatorname {arsinh} \left({\frac {u}{a}}\right)+C\\\int {{\frac {1}{\sqrt {u^{2}-a^{2}}}}\,du}&=\operatorname {sgn} {u}\operatorname {arcosh} \left|{\frac {u}{a}}\right|+C\\\int {\frac {1}{a^{2}-u^{2}}}\,du&=a^{-1}\operatorname {artanh} \left({\frac {u}{a}}\right)+C&&u^{2}<a^{2}\\\int {\frac {1}{a^{2}-u^{2}}}\,du&=a^{-1}\operatorname {arcoth} \left({\frac {u}{a}}\right)+C&&u^{2}>a^{2}\\\int {{\frac {1}{u{\sqrt {a^{2}-u^{2}}}}}\,du}&=-a^{-1}\operatorname {arsech} \left|{\frac {u}{a}}\right|+C\\\int {{\frac {1}{u{\sqrt {a^{2}+u^{2}}}}}\,du}&=-a^{-1}\operatorname {arcsch} \left|{\frac {u}{a}}\right|+C\end{aligned}}} where C is the constant of integration. == Taylor series expressions == It is possible to express explicitly the Taylor series at zero (or the Laurent series, if the function is not defined at zero) of the above functions. sinh ⁡ x = x + x 3 3 ! + x 5 5 ! + x 7 7 ! + ⋯ = ∑ n = 0 ∞ x 2 n + 1 ( 2 n + 1 ) ! {\displaystyle \sinh x=x+{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}+{\frac {x^{7}}{7!}}+\cdots =\sum _{n=0}^{\infty }{\frac {x^{2n+1}}{(2n+1)!}}} This series is convergent for every complex value of x. Since the function sinh x is odd, only odd exponents for x occur in its Taylor series. cosh ⁡ x = 1 + x 2 2 ! + x 4 4 ! + x 6 6 ! + ⋯ = ∑ n = 0 ∞ x 2 n ( 2 n ) ! {\displaystyle \cosh x=1+{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}+{\frac {x^{6}}{6!}}+\cdots =\sum _{n=0}^{\infty }{\frac {x^{2n}}{(2n)!}}} This series is convergent for every complex value of x. Since the function cosh x is even, only even exponents for x occur in its Taylor series. The sum of the sinh and cosh series is the infinite series expression of the exponential function. The following series are followed by a description of a subset of their domain of convergence, where the series is convergent and its sum equals the function. tanh ⁡ x = x − x 3 3 + 2 x 5 15 − 17 x 7 315 + ⋯ = ∑ n = 1 ∞ 2 2 n ( 2 2 n − 1 ) B 2 n x 2 n − 1 ( 2 n ) ! , | x | < π 2 coth ⁡ x = x − 1 + x 3 − x 3 45 + 2 x 5 945 + ⋯ = ∑ n = 0 ∞ 2 2 n B 2 n x 2 n − 1 ( 2 n ) ! 
, 0 < | x | < π sech ⁡ x = 1 − x 2 2 + 5 x 4 24 − 61 x 6 720 + ⋯ = ∑ n = 0 ∞ E 2 n x 2 n ( 2 n ) ! , | x | < π 2 csch ⁡ x = x − 1 − x 6 + 7 x 3 360 − 31 x 5 15120 + ⋯ = ∑ n = 0 ∞ 2 ( 1 − 2 2 n − 1 ) B 2 n x 2 n − 1 ( 2 n ) ! , 0 < | x | < π {\displaystyle {\begin{aligned}\tanh x&=x-{\frac {x^{3}}{3}}+{\frac {2x^{5}}{15}}-{\frac {17x^{7}}{315}}+\cdots =\sum _{n=1}^{\infty }{\frac {2^{2n}(2^{2n}-1)B_{2n}x^{2n-1}}{(2n)!}},\qquad \left|x\right|<{\frac {\pi }{2}}\\\coth x&=x^{-1}+{\frac {x}{3}}-{\frac {x^{3}}{45}}+{\frac {2x^{5}}{945}}+\cdots =\sum _{n=0}^{\infty }{\frac {2^{2n}B_{2n}x^{2n-1}}{(2n)!}},\qquad 0<\left|x\right|<\pi \\\operatorname {sech} x&=1-{\frac {x^{2}}{2}}+{\frac {5x^{4}}{24}}-{\frac {61x^{6}}{720}}+\cdots =\sum _{n=0}^{\infty }{\frac {E_{2n}x^{2n}}{(2n)!}},\qquad \left|x\right|<{\frac {\pi }{2}}\\\operatorname {csch} x&=x^{-1}-{\frac {x}{6}}+{\frac {7x^{3}}{360}}-{\frac {31x^{5}}{15120}}+\cdots =\sum _{n=0}^{\infty }{\frac {2(1-2^{2n-1})B_{2n}x^{2n-1}}{(2n)!}},\qquad 0<\left|x\right|<\pi \end{aligned}}} where: B n {\displaystyle B_{n}} is the nth Bernoulli number E n {\displaystyle E_{n}} is the nth Euler number == Infinite products and continued fractions == The following expansions are valid in the whole complex plane: sinh ⁡ x = x ∏ n = 1 ∞ ( 1 + x 2 n 2 π 2 ) = x 1 − x 2 2 ⋅ 3 + x 2 − 2 ⋅ 3 x 2 4 ⋅ 5 + x 2 − 4 ⋅ 5 x 2 6 ⋅ 7 + x 2 − ⋱ {\displaystyle \sinh x=x\prod _{n=1}^{\infty }\left(1+{\frac {x^{2}}{n^{2}\pi ^{2}}}\right)={\cfrac {x}{1-{\cfrac {x^{2}}{2\cdot 3+x^{2}-{\cfrac {2\cdot 3x^{2}}{4\cdot 5+x^{2}-{\cfrac {4\cdot 5x^{2}}{6\cdot 7+x^{2}-\ddots }}}}}}}}} cosh ⁡ x = ∏ n = 1 ∞ ( 1 + x 2 ( n − 1 / 2 ) 2 π 2 ) = 1 1 − x 2 1 ⋅ 2 + x 2 − 1 ⋅ 2 x 2 3 ⋅ 4 + x 2 − 3 ⋅ 4 x 2 5 ⋅ 6 + x 2 − ⋱ {\displaystyle \cosh x=\prod _{n=1}^{\infty }\left(1+{\frac {x^{2}}{(n-1/2)^{2}\pi ^{2}}}\right)={\cfrac {1}{1-{\cfrac {x^{2}}{1\cdot 2+x^{2}-{\cfrac {1\cdot 2x^{2}}{3\cdot 4+x^{2}-{\cfrac {3\cdot 4x^{2}}{5\cdot 6+x^{2}-\ddots }}}}}}}}} tanh ⁡ x = 1 1 x + 1 3 x + 1 5 x + 1 7 x + ⋱ {\displaystyle \tanh x={\cfrac {1}{{\cfrac {1}{x}}+{\cfrac {1}{{\cfrac {3}{x}}+{\cfrac {1}{{\cfrac {5}{x}}+{\cfrac {1}{{\cfrac {7}{x}}+\ddots }}}}}}}}} == Comparison with circular functions == The hyperbolic functions represent an expansion of trigonometry beyond the circular functions. Both types depend on an argument, either circular angle or hyperbolic angle. Since the area of a circular sector with radius r and angle u (in radians) is r2u/2, it will be equal to u when r = √2. In the diagram, such a circle is tangent to the hyperbola xy = 1 at (1,1). The yellow sector depicts an area and angle magnitude. Similarly, the yellow and red regions together depict a hyperbolic sector with area corresponding to hyperbolic angle magnitude. The legs of the two right triangles with hypotenuse on the ray defining the angles are of length √2 times the circular and hyperbolic functions. The hyperbolic angle is an invariant measure with respect to the squeeze mapping, just as the circular angle is invariant under rotation. The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic functions that does not involve complex numbers. The graph of the function a cosh(x/a) is the catenary, the curve formed by a uniform flexible chain, hanging freely between two fixed points under uniform gravity. 
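To see the convergence behaviour concretely, the following Python sketch (my own illustration; the truncation lengths are arbitrary choices) truncates the Taylor series and the infinite product for sinh, and the slow tail of the product is visible in the output:

import math

def sinh_taylor(x, terms=20):
    # partial sum of x^(2n+1) / (2n+1)!  for n = 0 .. terms-1
    return sum(x ** (2 * n + 1) / math.factorial(2 * n + 1) for n in range(terms))

def sinh_product(x, factors=100000):
    # x * product over n >= 1 of (1 + x^2 / (n^2 pi^2)), truncated
    p = x
    for n in range(1, factors + 1):
        p *= 1 + x * x / (n * n * math.pi * math.pi)
    return p

x = 2.0
print(math.sinh(x))      # 3.626860407847019
print(sinh_taylor(x))    # agrees to machine precision
print(sinh_product(x))   # agrees to roughly five digits; the product tail decays slowly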
== Relationship to the exponential function == The decomposition of the exponential function in its even and odd parts gives the identities e x = cosh ⁡ x + sinh ⁡ x , {\displaystyle e^{x}=\cosh x+\sinh x,} and e − x = cosh ⁡ x − sinh ⁡ x . {\displaystyle e^{-x}=\cosh x-\sinh x.} Combined with Euler's formula e i x = cos ⁡ x + i sin ⁡ x , {\displaystyle e^{ix}=\cos x+i\sin x,} this gives e x + i y = ( cosh ⁡ x + sinh ⁡ x ) ( cos ⁡ y + i sin ⁡ y ) {\displaystyle e^{x+iy}=(\cosh x+\sinh x)(\cos y+i\sin y)} for the general complex exponential function. Additionally, e x = 1 + tanh ⁡ x 1 − tanh ⁡ x = 1 + tanh ⁡ x 2 1 − tanh ⁡ x 2 {\displaystyle e^{x}={\sqrt {\frac {1+\tanh x}{1-\tanh x}}}={\frac {1+\tanh {\frac {x}{2}}}{1-\tanh {\frac {x}{2}}}}} == Hyperbolic functions for complex numbers == Since the exponential function can be defined for any complex argument, we can also extend the definitions of the hyperbolic functions to complex arguments. The functions sinh z and cosh z are then holomorphic. Relationships to ordinary trigonometric functions are given by Euler's formula for complex numbers: e i x = cos ⁡ x + i sin ⁡ x e − i x = cos ⁡ x − i sin ⁡ x {\displaystyle {\begin{aligned}e^{ix}&=\cos x+i\sin x\\e^{-ix}&=\cos x-i\sin x\end{aligned}}} so: cosh ⁡ ( i x ) = 1 2 ( e i x + e − i x ) = cos ⁡ x sinh ⁡ ( i x ) = 1 2 ( e i x − e − i x ) = i sin ⁡ x cosh ⁡ ( x + i y ) = cosh ⁡ ( x ) cos ⁡ ( y ) + i sinh ⁡ ( x ) sin ⁡ ( y ) sinh ⁡ ( x + i y ) = sinh ⁡ ( x ) cos ⁡ ( y ) + i cosh ⁡ ( x ) sin ⁡ ( y ) tanh ⁡ ( i x ) = i tan ⁡ x cosh ⁡ x = cos ⁡ ( i x ) sinh ⁡ x = − i sin ⁡ ( i x ) tanh ⁡ x = − i tan ⁡ ( i x ) {\displaystyle {\begin{aligned}\cosh(ix)&={\frac {1}{2}}\left(e^{ix}+e^{-ix}\right)=\cos x\\\sinh(ix)&={\frac {1}{2}}\left(e^{ix}-e^{-ix}\right)=i\sin x\\\cosh(x+iy)&=\cosh(x)\cos(y)+i\sinh(x)\sin(y)\\\sinh(x+iy)&=\sinh(x)\cos(y)+i\cosh(x)\sin(y)\\\tanh(ix)&=i\tan x\\\cosh x&=\cos(ix)\\\sinh x&=-i\sin(ix)\\\tanh x&=-i\tan(ix)\end{aligned}}} Thus, hyperbolic functions are periodic with respect to the imaginary component, with period 2 π i {\displaystyle 2\pi i} ( π i {\displaystyle \pi i} for hyperbolic tangent and cotangent). == See also == e (mathematical constant) Equal incircles theorem, based on sinh Hyperbolastic functions Hyperbolic growth Inverse hyperbolic functions List of integrals of hyperbolic functions Poinsot's spirals Sigmoid function Soboleva modified hyperbolic tangent Trigonometric functions == References == == External links == "Hyperbolic functions", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Hyperbolic functions on PlanetMath GonioLab: Visualization of the unit circle, trigonometric and hyperbolic functions (Java Web Start) Web-based calculator of hyperbolic functions
Wikipedia/Hyperbolic_functions
In mathematics, an elementary function is a function of a single variable (typically real or complex) that is defined as taking sums, products, roots and compositions of finitely many polynomial, rational, trigonometric, hyperbolic, and exponential functions, and their inverses (e.g., arcsin, log, or x1/n). All elementary functions are continuous on their domains. Elementary functions were introduced by Joseph Liouville in a series of papers from 1833 to 1841. An algebraic treatment of elementary functions was started by Joseph Fels Ritt in the 1930s. Many textbooks and dictionaries do not give a precise definition of the elementary functions, and mathematicians differ on it. == Examples == === Basic examples === Elementary functions of a single variable x include: Constant functions: 2 , π , e , {\displaystyle 2,\ \pi ,\ e,} etc. Rational powers of x: x , x 2 , x ( x 1 2 ) , x 2 3 , {\displaystyle x,\ x^{2},\ {\sqrt {x}}\ (x^{\frac {1}{2}}),\ x^{\frac {2}{3}},} etc. Exponential functions: e x , a x {\displaystyle e^{x},\ a^{x}} Logarithms: log ⁡ x , log a ⁡ x {\displaystyle \log x,\ \log _{a}x} Trigonometric functions: sin ⁡ x , cos ⁡ x , tan ⁡ x , {\displaystyle \sin x,\ \cos x,\ \tan x,} etc. Inverse trigonometric functions: arcsin ⁡ x , arccos ⁡ x , {\displaystyle \arcsin x,\ \arccos x,} etc. Hyperbolic functions: sinh ⁡ x , cosh ⁡ x , {\displaystyle \sinh x,\ \cosh x,} etc. Inverse hyperbolic functions: arsinh ⁡ x , arcosh ⁡ x , {\displaystyle \operatorname {arsinh} x,\ \operatorname {arcosh} x,} etc. All functions obtained by adding, subtracting, multiplying or dividing a finite number of any of the previous functions All functions obtained by root extraction of a polynomial with coefficients in elementary functions All functions obtained by composing a finite number of any of the previously listed functions Certain elementary functions of a single complex variable z, such as z {\displaystyle {\sqrt {z}}} and log ⁡ z {\displaystyle \log z} , may be multivalued. Additionally, certain classes of functions may be obtained by others using the final two rules. For example, the exponential function e z {\displaystyle e^{z}} composed with addition, subtraction, and division provides the hyperbolic functions, while initial composition with i z {\displaystyle iz} instead provides the trigonometric functions. === Composite examples === Examples of elementary functions include: Addition, e.g. (x + 1) Multiplication, e.g. (2x) Polynomial functions e tan ⁡ x 1 + x 2 sin ⁡ ( 1 + ( log ⁡ x ) 2 ) {\displaystyle {\frac {e^{\tan x}}{1+x^{2}}}\sin \left({\sqrt {1+(\log x)^{2}}}\right)} − i log ⁡ ( x + i 1 − x 2 ) {\displaystyle -i\log \left(x+i{\sqrt {1-x^{2}}}\right)} The last function is equal to arccos ⁡ x {\displaystyle \arccos x} , the inverse cosine, in the entire complex plane. All monomials, polynomials, rational functions and algebraic functions are elementary. The absolute value function, for real x {\displaystyle x} , is also elementary as it can be expressed as the composition of a power and root of x {\displaystyle x} : | x | = x 2 {\textstyle |x|={\sqrt {x^{2}}}} . === Non-elementary functions === Many mathematicians exclude non-analytic functions such as the absolute value function or discontinuous functions such as the step function, but others allow them. Some have proposed extending the set to include, for example, the Lambert W function. 
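Computer algebra systems make this boundary concrete. In the following SymPy sketch (my own illustration, anticipating the error function example below), differentiating an elementary function yields another elementary function, while integrating one may leave the class:

import sympy as sp

x = sp.symbols('x')
f = sp.exp(-x**2)        # an elementary function

print(sp.diff(f, x))      # -2*x*exp(-x**2): still elementary
print(sp.integrate(f, x)) # sqrt(pi)*erf(x)/2: erf is not elementary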
Some examples of functions that are not elementary: tetration; the gamma function; non-elementary Liouvillian functions, including the exponential integral (Ei), logarithmic integral (Li or li) and Fresnel integrals (S and C); the error function, e r f ( x ) = 2 π ∫ 0 x e − t 2 d t , {\displaystyle \mathrm {erf} (x)={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}e^{-t^{2}}\,dt,} a fact that may not be immediately obvious, but can be proven using the Risch algorithm; and other nonelementary integrals, including the Dirichlet integral and elliptic integral. == Closure == It follows directly from the definition that the set of elementary functions is closed under arithmetic operations, root extraction and composition. The elementary functions are closed under differentiation. They are not closed under limits and infinite sums. Importantly, the elementary functions are not closed under integration, as shown by Liouville's theorem; see nonelementary integral. The Liouvillian functions are defined as the elementary functions and, recursively, the integrals of the Liouvillian functions. == Differential algebra == The mathematical definition of an elementary function, or a function in elementary form, is considered in the context of differential algebra. A differential algebra is an algebra with the extra operation of derivation (algebraic version of differentiation). Using the derivation operation new equations can be written and their solutions used in extensions of the algebra. By starting with the field of rational functions, two special types of transcendental extensions (the logarithm and the exponential) can be added to the field, building a tower containing elementary functions. A differential field F is a field F0 (rational functions over the rationals Q for example) together with a derivation map u → ∂u. (Here ∂u is a new function. Sometimes the notation u′ is used.) The derivation captures the properties of differentiation, so that for any two elements of the base field, the derivation is linear ∂ ( u + v ) = ∂ u + ∂ v {\displaystyle \partial (u+v)=\partial u+\partial v} and satisfies the Leibniz product rule ∂ ( u ⋅ v ) = ∂ u ⋅ v + u ⋅ ∂ v . {\displaystyle \partial (u\cdot v)=\partial u\cdot v+u\cdot \partial v\,.} An element h is a constant if ∂h = 0. If the base field is over the rationals, care must be taken when extending the field to add the needed transcendental constants. A function u of a differential extension F[u] of a differential field F is an elementary function over F if the function u is algebraic over F, or is an exponential, that is, ∂u = u ∂a for a ∈ F, or is a logarithm, that is, ∂u = ∂a / a for a ∈ F. (see also Liouville's theorem) == See also == Algebraic function – Mathematical function Closed-form expression – Mathematical formula involving a given set of operations Differential Galois theory – Study of Galois symmetry groups of differential fields Elementary function arithmetic – System of arithmetic in proof theory Liouville's theorem (differential algebra) – Says when antiderivatives of elementary functions can be expressed as elementary functions Tarski's high school algebra problem – Mathematical problem Transcendental function – Analytic function that does not satisfy a polynomial equation Tupper's self-referential formula – Formula that visually represents itself when graphed == Notes == == References == Liouville, Joseph (1833a). "Premier mémoire sur la détermination des intégrales dont la valeur est algébrique". Journal de l'École Polytechnique. tome XIV: 124–148.
Liouville, Joseph (1833b). "Second mémoire sur la détermination des intégrales dont la valeur est algébrique". Journal de l'École Polytechnique. tome XIV: 149–193. Liouville, Joseph (1833c). "Note sur la détermination des intégrales dont la valeur est algébrique". Journal für die reine und angewandte Mathematik. 10: 347–359. Ritt, Joseph (1950). Differential Algebra. AMS. Rosenlicht, Maxwell (1972). "Integration in finite terms". American Mathematical Monthly. 79 (9): 963–972. doi:10.2307/2318066. JSTOR 2318066. == Further reading == Davenport, James H. (2007). "What Might "Understand a Function" Mean?". Towards Mechanized Mathematical Assistants. Lecture Notes in Computer Science. Vol. 4573. pp. 55–65. doi:10.1007/978-3-540-73086-6_5. ISBN 978-3-540-73083-5. S2CID 8049737. == External links == Elementary functions at Encyclopaedia of Mathematics Weisstein, Eric W. "Elementary function". MathWorld.
Wikipedia/Elementary_function
In algebraic geometry, Cramer's theorem on algebraic curves gives the necessary and sufficient number of points in the real plane falling on an algebraic curve to uniquely determine the curve in non-degenerate cases. This number is n ( n + 3 ) 2 , {\displaystyle {\frac {n(n+3)}{2}},} where n is the degree of the curve. The theorem is due to Gabriel Cramer, who published it in 1750. For example, a line (of degree 1) is determined by 2 distinct points on it: one and only one line goes through those two points. Likewise, a non-degenerate conic (polynomial equation in x and y with the sum of their powers in any term not exceeding 2, hence with degree 2) is uniquely determined by 5 points in general position (no three of which are on a straight line). The intuition of the conic case is this: Suppose the given points fall on, specifically, an ellipse. Then five pieces of information are necessary and sufficient to identify the ellipse—the horizontal location of the ellipse's center, the vertical location of the center, the major axis (the length of the longest chord), the minor axis (the length of the shortest chord through the center, perpendicular to the major axis), and the ellipse's rotational orientation (the extent to which the major axis departs from the horizontal). Five points in general position suffice to provide these five pieces of information, while four points do not. == Derivation of the formula == The number of distinct terms (including those with a zero coefficient) in an n-th degree equation in two variables is (n + 1)(n + 2) / 2. This is because the n-th degree terms are x n , x n − 1 y 1 , … , y n , {\displaystyle x^{n},\,x^{n-1}y^{1},\,\dots ,\,y^{n},} numbering n + 1 in total; the (n − 1) degree terms are x n − 1 , x n − 2 y 1 , … , y n − 1 , {\displaystyle x^{n-1},\,x^{n-2}y^{1},\,\dots ,\,y^{n-1},} numbering n in total; and so on through the first degree terms x {\displaystyle x} and y , {\displaystyle y,} numbering 2 in total, and the single zero degree term (the constant). The sum of these is (n + 1) + n + (n − 1) + ... + 2 + 1 = (n + 1)(n + 2) / 2 terms, each with its own coefficient. However, one of these coefficients is redundant in determining the curve, because we can always divide through the polynomial equation by any one of the coefficients, giving an equivalent equation with one coefficient fixed at 1, and thus [(n + 1)(n + 2) / 2] − 1 = n(n + 3) / 2 remaining coefficients. For example, a fourth-degree equation has the general form x 4 + c 1 x 3 y + c 2 x 2 y 2 + c 3 x y 3 + c 4 y 4 + c 5 x 3 + c 6 x 2 y + c 7 x y 2 + c 8 y 3 + c 9 x 2 + c 10 x y + c 11 y 2 + c 12 x + c 13 y + c 14 = 0 , {\displaystyle x^{4}+c_{1}x^{3}y+c_{2}x^{2}y^{2}+c_{3}xy^{3}+c_{4}y^{4}+c_{5}x^{3}+c_{6}x^{2}y+c_{7}xy^{2}+c_{8}y^{3}+c_{9}x^{2}+c_{10}xy+c_{11}y^{2}+c_{12}x+c_{13}y+c_{14}=0,} with 4(4+3)/2 = 14 coefficients. Determining an algebraic curve through a set of points consists of determining values for these coefficients in the algebraic equation such that each of the points satisfies the equation. Given n(n + 3) / 2 points (xi, yi), each of these points can be used to create a separate equation by substituting it into the general polynomial equation of degree n, giving n(n + 3) / 2 equations linear in the n(n + 3) / 2 unknown coefficients. If this system is non-degenerate in the sense of having a non-zero determinant, the unknown coefficients are uniquely determined and hence the polynomial equation and its curve are uniquely determined. 
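The following NumPy sketch (the sample points and the coefficient normalization are my own choices) carries out this procedure for a conic through five points, and also verifies numerically the rank deficiency behind the degenerate nine-point example discussed in the next section:

import numpy as np

# degree 2: five points determine a conic. Fix the leading coefficient, so the
# curve is  x^2 + c1*x*y + c2*y^2 + c3*x + c4*y + c5 = 0.
# The five points below lie on the ellipse x^2 + 4y^2 = 4.
pts = [(2, 0), (-2, 0), (0, 1), (0, -1), (np.sqrt(2), np.sqrt(2) / 2)]
A = np.array([[x * y, y * y, x, y, 1.0] for x, y in pts])
b = np.array([-x * x for x, y in pts])
c = np.linalg.solve(A, b)
print(np.round(c, 10))   # approximately [0, 4, 0, 0, -4]: x^2 + 4y^2 - 4 = 0

# degree 3: Cramer's paradox. The nine grid points {-1,0,1} x {-1,0,1} give a
# rank-deficient system, so they do not determine a unique cubic.
def cubic_monomials(x, y):
    return [x**3, x**2 * y, x * y**2, y**3, x**2, x * y, y**2, x, y, 1.0]

M = np.array([cubic_monomials(x, y) for x in (-1, 0, 1) for y in (-1, 0, 1)])
print(M.shape)                    # (9, 10): nine equations, ten coefficients
print(np.linalg.matrix_rank(M))   # 8, not 9: a two-parameter family of cubics fits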
More than this number of points would be redundant, and fewer would be insufficient to solve the system of equations uniquely for the coefficients. == Degenerate cases == An example of a degenerate case, in which n(n + 3) / 2 points on the curve are not sufficient to determine the curve uniquely, was provided by Cramer as part of Cramer's paradox. Let the degree be n = 3, and let nine points be all combinations of x = −1, 0, 1 and y = −1, 0, 1. More than one cubic contains all of these points, namely all cubics of the form a ( x 3 − x ) + b ( y 3 − y ) = 0. {\displaystyle a(x^{3}-x)+b(y^{3}-y)=0.} Thus these points do not determine a unique cubic, even though there are n(n + 3) / 2 = 9 of them. More generally, there are infinitely many cubics that pass through the nine intersection points of two cubics (Bézout's theorem implies that two cubics have, in general, nine intersection points). Likewise, for the conic case of n = 2, if three of five given points all fall on the same straight line, they may not uniquely determine the curve. == Restricted cases == If the curve is required to be in a particular sub-category of n-th degree polynomial equations, then fewer than n(n + 3) / 2 points may be necessary and sufficient to determine a unique curve. For example, three (non-collinear) points determine a circle: the generic circle is given by the equation ( x − a ) 2 + ( y − b ) 2 = r 2 {\displaystyle (x-a)^{2}+(y-b)^{2}=r^{2}} where the center is located at (a, b) and the radius is r. Equivalently, by expanding the squared terms, the generic equation is x 2 − 2 a x + y 2 − 2 b y = k , {\displaystyle x^{2}-2ax+y^{2}-2by=k,} where k = r 2 − a 2 − b 2 . {\displaystyle k=r^{2}-a^{2}-b^{2}.} Two restrictions have been imposed here compared to the general conic case of n = 2: the coefficient of the term in xy is restricted to equal 0, and the coefficient of y2 is restricted to equal the coefficient of x2. Thus instead of five points being needed, only 5 − 2 = 3 are needed, coinciding with the 3 parameters a, b, k (equivalently a, b, r) that need to be identified. == See also == Five points determine a conic == References ==
Wikipedia/Cramer's_theorem_(algebraic_curves)
In numerical analysis, a root-finding algorithm is an algorithm for finding zeros, also called "roots", of continuous functions. A zero of a function f is a number x such that f(x) = 0. As, generally, the zeros of a function cannot be computed exactly nor expressed in closed form, root-finding algorithms provide approximations to zeros. For functions from the real numbers to real numbers or from the complex numbers to the complex numbers, these are expressed either as floating-point numbers without error bounds or as floating-point values together with error bounds. The latter, approximations with error bounds, are equivalent to small isolating intervals for real roots or disks for complex roots. Solving an equation f(x) = g(x) is the same as finding the roots of the function h(x) = f(x) – g(x). Thus root-finding algorithms can be used to solve any equation of continuous functions. However, most root-finding algorithms do not guarantee that they will find all roots of a function, and if such an algorithm does not find any root, that does not necessarily mean that no root exists. Most numerical root-finding methods are iterative methods, producing a sequence of numbers that ideally converges towards a root as a limit. They require one or more initial guesses of the root as starting values, then each iteration of the algorithm produces a successively more accurate approximation to the root. Since the iteration must be stopped at some point, these methods produce an approximation to the root, not an exact solution. Many methods compute subsequent values by evaluating an auxiliary function on the preceding values. The limit is thus a fixed point of the auxiliary function, which is chosen for having the roots of the original equation as fixed points and for converging rapidly to these fixed points. The behavior of general root-finding algorithms is studied in numerical analysis. However, for polynomials specifically, the study of root-finding algorithms belongs to computer algebra, since algebraic properties of polynomials are fundamental for the most efficient algorithms. The efficiency and applicability of an algorithm may depend sensitively on the characteristics of the given functions. For example, many algorithms use the derivative of the input function, while others work on every continuous function. In general, numerical algorithms are not guaranteed to find all the roots of a function, so failing to find a root does not prove that there is no root. However, for polynomials, there are specific algorithms that use algebraic properties for certifying that no root is missed and for locating the roots in separate intervals (or disks for complex roots) that are small enough to ensure the convergence of numerical methods (typically Newton's method) to the unique root within each interval (or disk). == Bracketing methods == Bracketing methods determine successively smaller intervals (brackets) that contain a root. When the interval is small enough, then a root is considered found. These generally use the intermediate value theorem, which asserts that if a continuous function has values of opposite signs at the end points of an interval, then the function has at least one root in the interval. Therefore, they require starting with an interval such that the function takes opposite signs at the end points of the interval. 
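As a minimal illustration of this sign condition (a sketch, not from the article; find_brackets is an invented helper name), the following scans an interval on a uniform grid and keeps the subintervals on which a continuous function changes sign:

```python
def find_brackets(f, a, b, n=100):
    """Scan [a, b] on a uniform grid and return subintervals where f
    changes sign; by the intermediate value theorem each one contains
    a root of the continuous function f."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    brackets = []
    for lo, hi in zip(xs, xs[1:]):
        if f(lo) == 0.0:
            brackets.append((lo, lo))     # exact root on the grid
        elif f(lo) * f(hi) < 0.0:
            brackets.append((lo, hi))     # sign change: a root lies inside
    return brackets

print(find_brackets(lambda x: x * x - 2, 0.0, 2.0))
# one bracket around sqrt(2) ~ 1.4142, e.g. [(1.40, 1.42)]
```

Each returned bracket can then be handed to any of the bracketing methods below.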
However, in the case of polynomials there are other methods such as Descartes' rule of signs, Budan's theorem and Sturm's theorem for bounding or determining the number of roots in an interval. They lead to efficient algorithms for real-root isolation of polynomials, which find all real roots with a guaranteed accuracy. === Bisection method === The simplest root-finding algorithm is the bisection method. Let f be a continuous function for which one knows an interval [a, b] such that f(a) and f(b) have opposite signs (a bracket). Let c = (a + b)/2 be the middle of the interval (the midpoint or the point that bisects the interval). Then either f(a) and f(c), or f(c) and f(b) have opposite signs (unless f(c) = 0, in which case a root has been found), and the size of the interval has been halved. Although the bisection method is robust, it gains one and only one bit of accuracy with each iteration. Therefore, the number of function evaluations required for finding an ε-approximate root is log 2 ⁡ b − a ε {\displaystyle \log _{2}{\frac {b-a}{\varepsilon }}} . Other methods, under appropriate conditions, can gain accuracy faster. === False position (regula falsi) === The false position method, also called the regula falsi method, is similar to the bisection method, but instead of the midpoint used in bisection it uses the x-intercept of the line that connects the plotted function values at the endpoints of the interval, that is c = a f ( b ) − b f ( a ) f ( b ) − f ( a ) . {\displaystyle c={\frac {af(b)-bf(a)}{f(b)-f(a)}}.} False position is similar to the secant method, except that, instead of retaining the last two points, it makes sure to keep one point on either side of the root. The false position method can be faster than the bisection method and will never diverge like the secant method. However, it may fail to converge in some naive implementations due to roundoff errors that may lead to a wrong sign for f(c). Typically, this may occur if the derivative of f is large in the neighborhood of the root. == Interpolation == Many root-finding processes work by interpolation. This consists of using the last computed approximate values of the root to approximate the function by a polynomial of low degree, which takes the same values at these approximate roots. Then the root of the polynomial is computed and used as a new approximate value of the root of the function, and the process is iterated. Interpolating two values yields a line: a polynomial of degree one. This is the basis of the secant method. Regula falsi is also an interpolation method that interpolates two points at a time, but it differs from the secant method by using two points that are not necessarily the last two computed points. Three values define a parabolic curve: a quadratic function. This is the basis of Muller's method. == Iterative methods == Although all root-finding algorithms proceed by iteration, an iterative root-finding method generally uses a specific type of iteration, consisting of defining an auxiliary function, which is applied to the last computed approximations of a root to get a new approximation. The iteration stops when a fixed point of the auxiliary function is reached to the desired precision, i.e., when a new computed value is sufficiently close to the preceding ones. === Newton's method (and similar derivative-based methods) === Newton's method assumes the function f to have a continuous derivative. Newton's method may not converge if started too far away from a root.
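A minimal sketch of the iteration (illustrative only; the stopping tolerance and the iteration cap are arbitrary choices, and the derivative is supplied by hand rather than computed):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: repeatedly replace x by the root of the
    tangent line at x. Raises if a flat tangent is met or the
    iteration cap is exhausted (e.g. from a bad starting point)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        d = fprime(x)
        if d == 0.0:
            raise ZeroDivisionError("zero derivative at x = %g" % x)
        x = x - fx / d
    raise RuntimeError("no convergence from x0 = %g" % x0)

# Root of x^2 + x - 1 (the example reused in the fixed-point section below):
print(newton(lambda x: x**2 + x - 1, lambda x: 2 * x + 1, 1.0))  # ~0.6180339887
```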
However, when it does converge, it is faster than the bisection method; its order of convergence is usually quadratic whereas the bisection method's is linear. Newton's method is also important because it readily generalizes to higher-dimensional problems. Householder's methods are a class of Newton-like methods with higher orders of convergence. The first one after Newton's method is Halley's method with cubic order of convergence. === Secant method === Replacing the derivative in Newton's method with a finite difference, we get the secant method. This method does not require the computation (nor the existence) of a derivative, but the price is slower convergence (the order of convergence is the golden ratio, approximately 1.62). A generalization of the secant method in higher dimensions is Broyden's method. === Steffensen's method === If we use a polynomial fit to remove the quadratic part of the finite difference used in the secant method, so that it better approximates the derivative, we obtain Steffensen's method, which has quadratic convergence, and whose behavior (both good and bad) is essentially the same as Newton's method but does not require a derivative. === Fixed point iteration method === We can use the fixed-point iteration to find the root of a function. Given a function f ( x ) {\displaystyle f(x)} which we have set to zero to find the root ( f ( x ) = 0 {\displaystyle f(x)=0} ), we rewrite the equation in terms of x {\displaystyle x} so that f ( x ) = 0 {\displaystyle f(x)=0} becomes x = g ( x ) {\displaystyle x=g(x)} (note, there are often many g ( x ) {\displaystyle g(x)} functions for each f ( x ) = 0 {\displaystyle f(x)=0} function). Next, we relabel each side of the equation as x n + 1 = g ( x n ) {\displaystyle x_{n+1}=g(x_{n})} so that we can perform the iteration. Next, we pick a value for x 1 {\displaystyle x_{1}} and perform the iteration until it converges towards a root of the function. If the iteration converges, it will converge to a root. The iteration will only converge if | g ′ ( r o o t ) | < 1 {\displaystyle |g'(root)|<1} . As an example of converting f ( x ) = 0 {\displaystyle f(x)=0} to x = g ( x ) {\displaystyle x=g(x)} , if given the function f ( x ) = x 2 + x − 1 {\displaystyle f(x)=x^{2}+x-1} , we will rewrite it as one of the following equations. x n + 1 = ( 1 / x n ) − 1 {\displaystyle x_{n+1}=(1/x_{n})-1} , x n + 1 = 1 / ( x n + 1 ) {\displaystyle x_{n+1}=1/(x_{n}+1)} , x n + 1 = 1 − x n 2 {\displaystyle x_{n+1}=1-x_{n}^{2}} , x n + 1 = x n 2 + 2 x n − 1 {\displaystyle x_{n+1}=x_{n}^{2}+2x_{n}-1} , or x n + 1 = ± 1 − x n {\displaystyle x_{n+1}=\pm {\sqrt {1-x_{n}}}} . === Inverse interpolation === The appearance of complex values in interpolation methods can be avoided by interpolating the inverse of f, resulting in the inverse quadratic interpolation method. Again, convergence is asymptotically faster than the secant method, but inverse quadratic interpolation often behaves poorly when the iterates are not close to the root. == Combinations of methods == === Brent's method === Brent's method is a combination of the bisection method, the secant method and inverse quadratic interpolation. At every iteration, Brent's method decides which method out of these three is likely to do best, and proceeds by doing a step according to that method. This gives a robust and fast method, which therefore enjoys considerable popularity. 
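For comparison, a short sketch (assuming SciPy is available; scipy.optimize.brentq is SciPy's implementation of a Brent-type combined method) that finds the positive root of x² + x − 1 both with the combined method and with the fixed-point iteration x_{n+1} = 1/(x_n + 1) from the example above:

```python
from scipy.optimize import brentq

f = lambda x: x**2 + x - 1

# Brent-style combined method on the bracket [0, 1], where f(0) < 0 < f(1).
root = brentq(f, 0.0, 1.0)

# Fixed-point iteration with g(x) = 1/(x + 1); |g'(root)| < 1, so it converges.
x = 1.0
for _ in range(40):
    x = 1.0 / (x + 1.0)

print(root, x)  # both ~0.6180339887, the positive root of x^2 + x - 1
```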
=== Ridders' method === Ridders' method is a hybrid method that uses the value of the function at the midpoint of the interval to perform an exponential interpolation to the root. This gives fast convergence, while guaranteeing convergence in at most twice the number of iterations required by the bisection method. == Roots of polynomials == == Finding roots in higher dimensions == The bisection method has been generalized to higher dimensions; these methods are called generalized bisection methods. At each iteration, the domain is partitioned into two parts, and the algorithm decides, based on a small number of function evaluations, which of these two parts must contain a root. In one dimension, the criterion for decision is that the function has opposite signs at the two endpoints. The main challenge in extending the method to multiple dimensions is to find a criterion that can be computed easily and guarantees the existence of a root. The Poincaré–Miranda theorem gives a criterion for the existence of a root in a rectangle, but it is hard to verify because it requires evaluating the function on the entire boundary of the rectangle. Another criterion is given by a theorem of Kronecker. It says that, if the topological degree of a function f on a rectangle is non-zero, then the rectangle must contain at least one root of f. This criterion is the basis for several root-finding methods, such as those of Stenger and Kearfott. However, computing the topological degree can be time-consuming. A third criterion is based on a characteristic polyhedron. This criterion is used by a method called Characteristic Bisection. It does not require computing the topological degree; it only requires computing the signs of function values. The number of required evaluations is at least log 2 ⁡ ( D / ϵ ) {\displaystyle \log _{2}(D/\epsilon )} , where D is the length of the longest edge of the characteristic polyhedron. Note that Vrahatis and Iordanidis prove a lower bound on the number of evaluations, and not an upper bound. A fourth method uses an intermediate value theorem on simplices. Again, no upper bound on the number of queries is given. == See also == == References == == Further reading == Victor Yakovlevich Pan: "Solving a Polynomial Equation: Some History and Recent Progress", SIAM Review, Vol. 39, No. 2, pp. 187–220 (June 1997). John Michael McNamee: Numerical Methods for Roots of Polynomials – Part I, Elsevier, ISBN 978-0-444-52729-5 (2007). John Michael McNamee and Victor Yakovlevich Pan: Numerical Methods for Roots of Polynomials – Part II, Elsevier, ISBN 978-0-444-52730-1 (2013).
Wikipedia/Root-finding_algorithms
In the mathematical field of complex analysis, elliptic functions are special kinds of meromorphic functions that satisfy two periodicity conditions. They are named elliptic functions because they come from elliptic integrals. Those integrals are in turn named elliptic because they were first encountered in the calculation of the arc length of an ellipse. Important elliptic functions are Jacobi elliptic functions and the Weierstrass ℘ {\displaystyle \wp } -function. Further development of this theory led to hyperelliptic functions and modular forms. == Definition == A meromorphic function is called an elliptic function if there are two R {\displaystyle \mathbb {R} } -linearly independent complex numbers ω 1 , ω 2 ∈ C {\displaystyle \omega _{1},\omega _{2}\in \mathbb {C} } such that f ( z + ω 1 ) = f ( z ) {\displaystyle f(z+\omega _{1})=f(z)} and f ( z + ω 2 ) = f ( z ) , ∀ z ∈ C {\displaystyle f(z+\omega _{2})=f(z),\quad \forall z\in \mathbb {C} } . So elliptic functions have two periods and are therefore doubly periodic functions. == Period lattice and fundamental domain == If f {\displaystyle f} is an elliptic function with periods ω 1 , ω 2 {\displaystyle \omega _{1},\omega _{2}} , it also holds that f ( z + γ ) = f ( z ) {\displaystyle f(z+\gamma )=f(z)} for every linear combination γ = m ω 1 + n ω 2 {\displaystyle \gamma =m\omega _{1}+n\omega _{2}} with m , n ∈ Z {\displaystyle m,n\in \mathbb {Z} } . The abelian group Λ := ⟨ ω 1 , ω 2 ⟩ Z := Z ω 1 + Z ω 2 := { m ω 1 + n ω 2 ∣ m , n ∈ Z } {\displaystyle \Lambda :=\langle \omega _{1},\omega _{2}\rangle _{\mathbb {Z} }:=\mathbb {Z} \omega _{1}+\mathbb {Z} \omega _{2}:=\{m\omega _{1}+n\omega _{2}\mid m,n\in \mathbb {Z} \}} is called the period lattice. The parallelogram generated by ω 1 {\displaystyle \omega _{1}} and ω 2 {\displaystyle \omega _{2}} { μ ω 1 + ν ω 2 ∣ 0 ≤ μ , ν ≤ 1 } {\displaystyle \{\mu \omega _{1}+\nu \omega _{2}\mid 0\leq \mu ,\nu \leq 1\}} is a fundamental domain of Λ {\displaystyle \Lambda } acting on C {\displaystyle \mathbb {C} } . Geometrically, the complex plane is tiled by parallelograms. Everything that happens in one fundamental domain repeats in all the others. For that reason we can view elliptic functions as functions with the quotient group C / Λ {\displaystyle \mathbb {C} /\Lambda } as their domain. This quotient group, called an elliptic curve, can be visualised as a parallelogram where opposite sides are identified, which topologically is a torus. == Liouville's theorems == The following three theorems are known as Liouville's theorems (1847). === 1st theorem === A holomorphic elliptic function is constant. This is the original form of Liouville's theorem and can be derived from it. A holomorphic elliptic function is bounded, since it takes on all of its values on the fundamental domain, which is compact. So it is constant by Liouville's theorem. === 2nd theorem === Every elliptic function has finitely many poles in C / Λ {\displaystyle \mathbb {C} /\Lambda } and the sum of its residues is zero. This theorem implies that there is no elliptic function, not identically zero, with exactly one pole of order one or exactly one zero of order one in the fundamental domain. === 3rd theorem === A non-constant elliptic function takes on every value the same number of times in C / Λ {\displaystyle \mathbb {C} /\Lambda } , counted with multiplicity. == Weierstrass ℘-function == One of the most important elliptic functions is the Weierstrass ℘ {\displaystyle \wp } -function.
For a given period lattice Λ {\displaystyle \Lambda } it is defined by ℘ ( z ) = 1 z 2 + ∑ λ ∈ Λ ∖ { 0 } ( 1 ( z − λ ) 2 − 1 λ 2 ) . {\displaystyle \wp (z)={\frac {1}{z^{2}}}+\sum _{\lambda \in \Lambda \setminus \{0\}}\left({\frac {1}{(z-\lambda )^{2}}}-{\frac {1}{\lambda ^{2}}}\right).} It is constructed in such a way that it has a pole of order two at every lattice point. The term − 1 λ 2 {\displaystyle -{\frac {1}{\lambda ^{2}}}} is there to make the series convergent. ℘ {\displaystyle \wp } is an even elliptic function; that is, ℘ ( − z ) = ℘ ( z ) {\displaystyle \wp (-z)=\wp (z)} . Its derivative ℘ ′ ( z ) = − 2 ∑ λ ∈ Λ 1 ( z − λ ) 3 {\displaystyle \wp '(z)=-2\sum _{\lambda \in \Lambda }{\frac {1}{(z-\lambda )^{3}}}} is an odd function, i.e. ℘ ′ ( − z ) = − ℘ ′ ( z ) . {\displaystyle \wp '(-z)=-\wp '(z).} One of the main results of the theory of elliptic functions is the following: Every elliptic function with respect to a given period lattice Λ {\displaystyle \Lambda } can be expressed as a rational function in terms of ℘ {\displaystyle \wp } and ℘ ′ {\displaystyle \wp '} . The ℘ {\displaystyle \wp } -function satisfies the differential equation ℘ ′ ( z ) 2 = 4 ℘ ( z ) 3 − g 2 ℘ ( z ) − g 3 , {\displaystyle \wp '(z)^{2}=4\wp (z)^{3}-g_{2}\wp (z)-g_{3},} where g 2 {\displaystyle g_{2}} and g 3 {\displaystyle g_{3}} are constants that depend on Λ {\displaystyle \Lambda } . More precisely, g 2 ( ω 1 , ω 2 ) = 60 G 4 ( ω 1 , ω 2 ) {\displaystyle g_{2}(\omega _{1},\omega _{2})=60G_{4}(\omega _{1},\omega _{2})} and g 3 ( ω 1 , ω 2 ) = 140 G 6 ( ω 1 , ω 2 ) {\displaystyle g_{3}(\omega _{1},\omega _{2})=140G_{6}(\omega _{1},\omega _{2})} , where G 4 {\displaystyle G_{4}} and G 6 {\displaystyle G_{6}} are so called Eisenstein series. In algebraic language, the field of elliptic functions is isomorphic to the field C ( X ) [ Y ] / ( Y 2 − 4 X 3 + g 2 X + g 3 ) {\displaystyle \mathbb {C} (X)[Y]/(Y^{2}-4X^{3}+g_{2}X+g_{3})} , where the isomorphism maps ℘ {\displaystyle \wp } to X {\displaystyle X} and ℘ ′ {\displaystyle \wp '} to Y {\displaystyle Y} . == Relation to elliptic integrals == The relation to elliptic integrals has mainly a historical background. Elliptic integrals had been studied by Legendre, whose work was taken on by Niels Henrik Abel and Carl Gustav Jacobi. Abel discovered elliptic functions by taking the inverse function φ {\displaystyle \varphi } of the elliptic integral function α ( x ) = ∫ 0 x d t ( 1 − c 2 t 2 ) ( 1 + e 2 t 2 ) {\displaystyle \alpha (x)=\int _{0}^{x}{\frac {dt}{\sqrt {(1-c^{2}t^{2})(1+e^{2}t^{2})}}}} with x = φ ( α ) {\displaystyle x=\varphi (\alpha )} . Additionally he defined the functions f ( α ) = 1 − c 2 φ 2 ( α ) {\displaystyle f(\alpha )={\sqrt {1-c^{2}\varphi ^{2}(\alpha )}}} and F ( α ) = 1 + e 2 φ 2 ( α ) {\displaystyle F(\alpha )={\sqrt {1+e^{2}\varphi ^{2}(\alpha )}}} . After continuation to the complex plane they turned out to be doubly periodic and are known as Abel elliptic functions. Jacobi elliptic functions are similarly obtained as inverse functions of elliptic integrals. Jacobi considered the integral function ξ ( x ) = ∫ 0 x d t ( 1 − t 2 ) ( 1 − k 2 t 2 ) {\displaystyle \xi (x)=\int _{0}^{x}{\frac {dt}{\sqrt {(1-t^{2})(1-k^{2}t^{2})}}}} and inverted it: x = sn ⁡ ( ξ ) {\displaystyle x=\operatorname {sn} (\xi )} . sn {\displaystyle \operatorname {sn} } stands for sinus amplitudinis and is the name of the new function. 
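As a numerical illustration of this inversion (a sketch under the assumption that SciPy is available; note that scipy.special.ellipj takes the parameter m = k² rather than the modulus k), one can evaluate sn by inverting the integral ξ directly and compare with the library value:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import ellipj

k = 0.5  # modulus

def xi(x):
    """The incomplete integral xi(x) defined above."""
    integrand = lambda t: 1.0 / np.sqrt((1 - t**2) * (1 - (k * t)**2))
    return quad(integrand, 0.0, x)[0]

u = 0.7
# Invert xi numerically: find x with xi(x) = u, i.e. x = sn(u).
# The upper bracket stays slightly below 1 to keep the quadrature
# away from the integrable singularity at t = 1.
x = brentq(lambda s: xi(s) - u, 0.0, 0.999999)

sn, cn, dn, ph = ellipj(u, k**2)  # SciPy's parameter is m = k^2
print(x, sn)  # the two values agree to roughly quadrature accuracy
```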
Jacobi then introduced the functions cosinus amplitudinis and delta amplitudinis, which are defined as follows: cn ⁡ ( ξ ) := 1 − x 2 {\displaystyle \operatorname {cn} (\xi ):={\sqrt {1-x^{2}}}} dn ⁡ ( ξ ) := 1 − k 2 x 2 {\displaystyle \operatorname {dn} (\xi ):={\sqrt {1-k^{2}x^{2}}}} . Only by taking this step could Jacobi prove his general transformation formula for elliptic integrals in 1827. == History == Shortly after the development of infinitesimal calculus, the theory of elliptic functions was started by the Italian mathematician Giulio di Fagnano and the Swiss mathematician Leonhard Euler. When they tried to calculate the arc length of a lemniscate, they encountered problems involving integrals that contained the square root of polynomials of degree 3 and 4. It was clear that those so-called elliptic integrals could not be solved using elementary functions. Fagnano observed an algebraic relation between elliptic integrals, which he published in 1750. Euler immediately generalized Fagnano's results and posed his algebraic addition theorem for elliptic integrals. Except for a comment by Landen, his ideas were not pursued until 1786, when Legendre published his paper Mémoires sur les intégrations par arcs d'ellipse. Legendre subsequently studied elliptic integrals and called them elliptic functions. Legendre introduced a three-fold classification – three kinds – which was a crucial simplification of the rather complicated theory at that time. Other important works of Legendre are: Mémoire sur les transcendantes elliptiques (1792), Exercices de calcul intégral (1811–1817), Traité des fonctions elliptiques (1825–1832). Legendre's work was mostly left untouched by mathematicians until 1826. Subsequently, Niels Henrik Abel and Carl Gustav Jacobi resumed the investigations and quickly discovered new results. At first they inverted the elliptic integral function. Following a suggestion of Jacobi in 1829, these inverse functions are now called elliptic functions. One of Jacobi's most important works is Fundamenta nova theoriae functionum ellipticarum, which was published in 1829. The addition theorem Euler found was posed and proved in its general form by Abel in 1829. In those days the theory of elliptic functions and the theory of doubly periodic functions were considered to be different theories. They were brought together by Briot and Bouquet in 1856. Gauss discovered many of the properties of elliptic functions 30 years earlier but never published anything on the subject. == See also == Elliptic integral Elliptic curve Modular group Theta function == References == == Literature == Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 16". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. pp. 567, 627. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253. See also chapter 18. (only considers the case of real invariants). N. I. Akhiezer, Elements of the Theory of Elliptic Functions, (1970) Moscow, translated into English as AMS Translations of Mathematical Monographs Volume 79 (1990) AMS, Rhode Island ISBN 0-8218-4532-2 Tom M. Apostol, Modular Functions and Dirichlet Series in Number Theory, Springer-Verlag, New York, 1976. ISBN 0-387-97127-0 (See Chapter 1.) E. T.
Whittaker and G. N. Watson. A course of modern analysis, Cambridge University Press, 1952 == External links == "Elliptic function", Encyclopedia of Mathematics, EMS Press, 2001 [1994] MAA, Translation of Abel's paper on elliptic functions. Elliptic Functions and Elliptic Integrals on YouTube, lecture by William A. Schwalm (4 hours) Johansson, Fredrik (2018). "Numerical Evaluation of Elliptic Functions, Elliptic Integrals and Modular Forms". arXiv:1806.06725 [cs.NA].
Wikipedia/Elliptical_function
Aequationes Mathematicae is a mathematical journal. It is primarily devoted to functional equations, but also publishes papers in dynamical systems, combinatorics, and geometry. As well as publishing regular journal submissions on these topics, it also regularly reports on international symposia on functional equations and produces bibliographies on the subject. János Aczél founded the journal in 1968 at the University of Waterloo, in part because of the long publication delays of up to four years in other journals at the time of its founding. It is currently published by Springer Science+Business Media, with Zsolt Páles of the University of Debrecen as its editor in chief. János Aczél remains its honorary editor in chief. It is frequently listed as a second-quartile mathematics journal by SCImago Journal Rank. == References ==
Wikipedia/Aequationes_Mathematicae
In mathematics, and more specifically in homological algebra, a resolution (or left resolution; dually a coresolution or right resolution) is an exact sequence of modules (or, more generally, of objects of an abelian category) that is used to define invariants characterizing the structure of a specific module or object of this category. When, as usual, arrows are oriented to the right, the sequence is supposed to be infinite to the left for (left) resolutions, and to the right for right resolutions. However, a finite resolution is one where only finitely many of the objects in the sequence are non-zero; it is usually represented by a finite exact sequence in which the leftmost object (for resolutions) or the rightmost object (for coresolutions) is the zero-object. Generally, the objects in the sequence are restricted to have some property P (for example to be free). Thus one speaks of a P resolution. In particular, every module has free resolutions, projective resolutions and flat resolutions, which are left resolutions consisting, respectively, of free modules, projective modules, or flat modules. Similarly, every module has injective resolutions, which are right resolutions consisting of injective modules. == Resolutions of modules == === Definitions === Given a module M over a ring R, a left resolution (or simply resolution) of M is an exact sequence (possibly infinite) of R-modules ⋯ ⟶ d n + 1 E n ⟶ d n ⋯ ⟶ d 3 E 2 ⟶ d 2 E 1 ⟶ d 1 E 0 ⟶ ε M ⟶ 0. {\displaystyle \cdots {\overset {d_{n+1}}{\longrightarrow }}E_{n}{\overset {d_{n}}{\longrightarrow }}\cdots {\overset {d_{3}}{\longrightarrow }}E_{2}{\overset {d_{2}}{\longrightarrow }}E_{1}{\overset {d_{1}}{\longrightarrow }}E_{0}{\overset {\varepsilon }{\longrightarrow }}M\longrightarrow 0.} The homomorphisms di are called boundary maps. The map ε is called an augmentation map. For succinctness, the resolution above can be written as E ∙ ⟶ ε M ⟶ 0. {\displaystyle E_{\bullet }{\overset {\varepsilon }{\longrightarrow }}M\longrightarrow 0.} The dual notion is that of a right resolution (or coresolution, or simply resolution). Specifically, given a module M over a ring R, a right resolution is a possibly infinite exact sequence of R-modules 0 ⟶ M ⟶ ε C 0 ⟶ d 0 C 1 ⟶ d 1 C 2 ⟶ d 2 ⋯ ⟶ d n − 1 C n ⟶ d n ⋯ , {\displaystyle 0\longrightarrow M{\overset {\varepsilon }{\longrightarrow }}C^{0}{\overset {d^{0}}{\longrightarrow }}C^{1}{\overset {d^{1}}{\longrightarrow }}C^{2}{\overset {d^{2}}{\longrightarrow }}\cdots {\overset {d^{n-1}}{\longrightarrow }}C^{n}{\overset {d^{n}}{\longrightarrow }}\cdots ,} where each Ci is an R-module (it is common to use superscripts on the objects in the resolution and the maps between them to indicate the dual nature of such a resolution). For succinctness, the resolution above can be written as 0 ⟶ M ⟶ ε C ∙ . {\displaystyle 0\longrightarrow M{\overset {\varepsilon }{\longrightarrow }}C^{\bullet }.} A (co)resolution is said to be finite if only finitely many of the modules involved are non-zero. The length of a finite resolution is the maximum index n labeling a nonzero module in the finite resolution. === Free, projective, injective, and flat resolutions === In many circumstances conditions are imposed on the modules Ei resolving the given module M. For example, a free resolution of a module M is a left resolution in which all the modules Ei are free R-modules. Likewise, projective and flat resolutions are left resolutions such that all the Ei are projective and flat R-modules, respectively.
Injective resolutions are right resolutions whose Ci are all injective modules. Every R-module possesses a free left resolution. A fortiori, every module also admits projective and flat resolutions. The proof idea is to define E0 to be the free R-module generated by the elements of M, and then E1 to be the free R-module generated by the elements of the kernel of the natural map E0 → M, and so on. Dually, every R-module possesses an injective resolution. Projective resolutions (and, more generally, flat resolutions) can be used to compute Tor functors. A projective resolution of a module M is unique up to chain homotopy, i.e., given two projective resolutions P0 → M and P1 → M of M, there exists a chain homotopy between them. Resolutions are used to define homological dimensions. The minimal length of a finite projective resolution of a module M is called its projective dimension and denoted pd(M). For example, a module has projective dimension zero if and only if it is a projective module. If M does not admit a finite projective resolution then the projective dimension is infinite. For example, for a commutative local ring R, the projective dimension is finite if and only if R is regular and in this case it coincides with the Krull dimension of R. Analogously, the injective dimension id(M) and flat dimension fd(M) are also defined for modules. The injective and projective dimensions are used on the category of right R-modules to define a homological dimension for R called the right global dimension of R. Similarly, flat dimension is used to define weak global dimension. The behavior of these dimensions reflects characteristics of the ring. For example, a ring has right global dimension 0 if and only if it is a semisimple ring, and a ring has weak global dimension 0 if and only if it is a von Neumann regular ring. === Graded modules and algebras === Let M be a graded module over a graded algebra, which is generated over a field by its elements of positive degree. Then M has a free resolution in which the free modules Ei may be graded in such a way that the di and ε are graded linear maps. Among these graded free resolutions, the minimal free resolutions are those for which the number of basis elements of each Ei is minimal. The number of basis elements of each Ei and their degrees are the same for all the minimal free resolutions of a graded module. If I is a homogeneous ideal in a polynomial ring over a field, the Castelnuovo–Mumford regularity of the projective algebraic set defined by I is the minimal integer r such that the degrees of the basis elements of the Ei in a minimal free resolution of I are all lower than r-i. === Examples === A classic example of a free resolution is given by the Koszul complex of a regular sequence in a local ring or of a homogeneous regular sequence in a graded algebra finitely generated over a field (a small instance is written out below). Let X be an aspherical space, i.e., its universal cover E is contractible. Then every singular (or simplicial) chain complex of E is a free resolution of the module Z not only over the ring Z but also over the group ring Z [π1(X)]. == Resolutions in abelian categories == The definition of resolutions of an object M in an abelian category A is the same as above, but the Ei and Ci are objects in A, and all maps involved are morphisms in A. The analogous notions of projective and injective modules are projective and injective objects, and, accordingly, projective and injective resolutions. However, such resolutions need not exist in a general abelian category A.
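For concreteness, here is the smallest nontrivial instance of the Koszul example promised above (standard textbook material, not specific to this article's sources): the residue field k = R/(x, y) of the polynomial ring R = k[x, y] has the finite free resolution

```latex
\[
0 \longrightarrow R
  \xrightarrow{\begin{pmatrix} y \\ -x \end{pmatrix}} R^{2}
  \xrightarrow{\begin{pmatrix} x & y \end{pmatrix}} R
  \xrightarrow{\ \varepsilon\ } k
  \longrightarrow 0.
\]
% \varepsilon evaluates a polynomial at the origin; the composite of the
% two matrix maps is xy - yx = 0, and exactness is the Koszul property
% of the regular sequence (x, y).
```

The resolution has length 2, so pd_R(k) = 2, matching the Krull-dimension statement above for the regular local ring obtained by localizing R at (x, y).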
If every object of A has a projective (resp. injective) resolution, then A is said to have enough projectives (resp. enough injectives). Even if they do exist, such resolutions are often difficult to work with. For example, as pointed out above, every R-module has an injective resolution, but this resolution is not functorial, i.e., given a homomorphism M → M′, together with injective resolutions 0 → M → I ∗ , 0 → M ′ → I ∗ ′ , {\displaystyle 0\rightarrow M\rightarrow I_{*},\ \ 0\rightarrow M'\rightarrow I'_{*},} there is in general no functorial way of obtaining a map between I ∗ {\displaystyle I_{*}} and I ∗ ′ {\displaystyle I'_{*}} . === Abelian categories without projective resolutions in general === One class of examples of abelian categories without projective resolutions is given by the categories Coh ( X ) {\displaystyle {\text{Coh}}(X)} of coherent sheaves on a scheme X {\displaystyle X} . For example, if X = P S n {\displaystyle X=\mathbb {P} _{S}^{n}} is projective space, any coherent sheaf M {\displaystyle {\mathcal {M}}} on X {\displaystyle X} has a presentation given by an exact sequence ⨁ i , j = 0 O X ( s i , j ) → ⨁ i = 0 O X ( s i ) → M → 0. {\displaystyle \bigoplus _{i,j=0}{\mathcal {O}}_{X}(s_{i,j})\to \bigoplus _{i=0}{\mathcal {O}}_{X}(s_{i})\to {\mathcal {M}}\to 0.} The first two terms are not in general projective since H n ( P S n , O X ( s ) ) ≠ 0 {\displaystyle H^{n}(\mathbb {P} _{S}^{n},{\mathcal {O}}_{X}(s))\neq 0} for s > 0 {\displaystyle s>0} . But both terms are locally free and locally flat. Both classes of sheaves can be used in their place for certain computations, replacing projective resolutions when computing some derived functors. == Acyclic resolution == In many cases one is not really interested in the objects appearing in a resolution, but in the behavior of the resolution with respect to a given functor. Therefore, in many situations, the notion of acyclic resolutions is used: given a left exact functor F: A → B between two abelian categories, a resolution 0 → M → E 0 → E 1 → E 2 → ⋯ {\displaystyle 0\rightarrow M\rightarrow E_{0}\rightarrow E_{1}\rightarrow E_{2}\rightarrow \cdots } of an object M of A is called F-acyclic if the derived functors RiF(En) vanish for all i > 0 and n ≥ 0. Dually, a left resolution is acyclic with respect to a right exact functor if its derived functors vanish on the objects of the resolution. For example, given an R-module M, the tensor product ⊗ R M {\displaystyle \otimes _{R}M} is a right exact functor Mod(R) → Mod(R). Every flat resolution is acyclic with respect to this functor. A flat resolution is acyclic for the tensor product by every M. Similarly, resolutions that are acyclic for all the functors Hom( ⋅ , M) are the projective resolutions and those that are acyclic for the functors Hom(M, ⋅ ) are the injective resolutions. Any injective (projective) resolution is F-acyclic for any left exact (right exact, respectively) functor. The importance of acyclic resolutions lies in the fact that the derived functors RiF (of a left exact functor, and likewise LiF of a right exact functor) can be obtained as the homology of F-acyclic resolutions: given an acyclic resolution E ∗ {\displaystyle E_{*}} of an object M, we have R i F ( M ) = H i F ( E ∗ ) , {\displaystyle R_{i}F(M)=H_{i}F(E_{*}),} where the right-hand side is the i-th homology object of the complex F ( E ∗ ) . {\displaystyle F(E_{*}).} This applies in many situations.
For example, the constant sheaf R on a differentiable manifold M can be resolved by the sheaves C ∗ ( M ) {\displaystyle {\mathcal {C}}^{*}(M)} of smooth differential forms: 0 → R ⊂ C 0 ( M ) → d C 1 ( M ) → d ⋯ → d C dim ⁡ M ( M ) → 0. {\displaystyle 0\rightarrow R\subset {\mathcal {C}}^{0}(M){\stackrel {d}{\rightarrow }}{\mathcal {C}}^{1}(M){\stackrel {d}{\rightarrow }}\cdots {\stackrel {d}{\rightarrow }}{\mathcal {C}}^{\dim M}(M)\rightarrow 0.} The sheaves C ∗ ( M ) {\displaystyle {\mathcal {C}}^{*}(M)} are fine sheaves, which are known to be acyclic with respect to the global section functor Γ : F ↦ F ( M ) {\displaystyle \Gamma :{\mathcal {F}}\mapsto {\mathcal {F}}(M)} . Therefore, the sheaf cohomology, which is the derived functor of the global section functor Γ, is computed as H i ( M , R ) = H i ( C ∗ ( M ) ) . {\displaystyle \mathrm {H} ^{i}(M,\mathbf {R} )=\mathrm {H} ^{i}({\mathcal {C}}^{*}(M)).} Similarly, Godement resolutions are acyclic with respect to the global sections functor. == See also == Standard resolution Hilbert–Burch theorem Hilbert's syzygy theorem Free presentation Matrix factorizations (algebra) == Notes == == References == Iain T. Adamson (1972), Elementary rings and modules, University Mathematical Texts, Oliver and Boyd, ISBN 0-05-002192-3 Eisenbud, David (1995), Commutative algebra. With a view toward algebraic geometry, Graduate Texts in Mathematics, vol. 150, Berlin, New York: Springer-Verlag, ISBN 3-540-94268-8, MR 1322960, Zbl 0819.13001 Jacobson, Nathan (2009) [1985], Basic algebra II (Second ed.), Dover Publications, ISBN 978-0-486-47187-7 Lang, Serge (1993), Algebra (Third ed.), Reading, Mass.: Addison-Wesley, ISBN 978-0-201-55540-0, Zbl 0848.13001 Weibel, Charles A. (1994). An introduction to homological algebra. Cambridge Studies in Advanced Mathematics. Vol. 38. Cambridge University Press. ISBN 978-0-521-55987-4. MR 1269324. OCLC 36131259.
Wikipedia/Free_resolution
In linear algebra, a cone—sometimes called a linear cone to distinguish it from other sorts of cones—is a subset of a real vector space that is closed under positive scalar multiplication; that is, C {\displaystyle C} is a cone if x ∈ C {\displaystyle x\in C} implies s x ∈ C {\displaystyle sx\in C} for every positive scalar s {\displaystyle s} . This is a broad generalization of the standard cone in Euclidean space. A convex cone is a cone that is also closed under addition, or, equivalently, a subset of a vector space that is closed under linear combinations with positive coefficients. It follows that convex cones are convex sets. The definition of a convex cone makes sense in a vector space over any ordered field, although the field of real numbers is used most often. == Definition == A subset C {\displaystyle C} of a vector space is a cone if x ∈ C {\displaystyle x\in C} implies s x ∈ C {\displaystyle sx\in C} for every s > 0 {\displaystyle s>0} . Here s > 0 {\displaystyle s>0} refers to (strict) positivity in the scalar field. === Competing definitions === Some other authors require [ 0 , ∞ ) C ⊂ C {\displaystyle [0,\infty )C\subset C} or even 0 ∈ C {\displaystyle 0\in C} . Some require a cone to be convex and/or satisfy C ∩ − C ⊂ { 0 } {\displaystyle C\cap -C\subset \{0\}} . The conical hull of a set C {\displaystyle C} is defined as the smallest convex cone that contains C ∪ { 0 } {\displaystyle C\cup \{0\}} . Therefore, it need not be the smallest cone that contains C ∪ { 0 } {\displaystyle C\cup \{0\}} . Wedge may refer to what we call cones (when "cone" is reserved for something stronger), or just to a subset of them, depending on the author. === Cone: 0 or not === A subset C {\displaystyle C} of a vector space V {\displaystyle V} over an ordered field F {\displaystyle F} is a cone (or sometimes called a linear cone) if for each x {\displaystyle x} in C {\displaystyle C} and positive scalar α {\displaystyle \alpha } in F {\displaystyle F} , the product α x {\displaystyle \alpha x} is in C {\displaystyle C} . Note that some authors define cone with the scalar α {\displaystyle \alpha } ranging over all non-negative scalars (rather than all positive scalars, which does not include 0). Some authors even require 0 ∈ C {\displaystyle 0\in C} , thus excluding the empty set. Therefore, [ 0 , ∞ ) {\displaystyle [0,\infty )} is a cone, ∅ {\displaystyle \varnothing } is a cone only according to the 1st and 2nd definition above, and ( 0 , ∞ ) {\displaystyle (0,\infty )} is a cone only according to the 1st definition above. All of them are convex (see below). === Convex cone === A cone C {\displaystyle C} is a convex cone if α x + β y {\displaystyle \alpha x+\beta y} belongs to C {\displaystyle C} , for any positive scalars α {\displaystyle \alpha } , β {\displaystyle \beta } , and any x {\displaystyle x} , y {\displaystyle y} in C {\displaystyle C} . A cone C {\displaystyle C} is convex if and only if C + C ⊆ C {\displaystyle C+C\subseteq C} . This concept is meaningful for any vector space that allows the concept of "positive" scalar, such as spaces over the rational, algebraic, or (more commonly) the real numbers. Also note that the scalars in the definition are positive meaning that the origin does not have to belong to C {\displaystyle C} . Some authors use a definition that ensures the origin belongs to C {\displaystyle C} . Because of the scaling parameters α {\displaystyle \alpha } and β {\displaystyle \beta } , cones are infinite in extent and not bounded. 
If C {\displaystyle C} is a convex cone, then for any positive scalar α {\displaystyle \alpha } and any x {\displaystyle x} in C {\displaystyle C} the vector α x = α 2 x + α 2 x ∈ C . {\displaystyle \alpha x={\tfrac {\alpha }{2}}x+{\tfrac {\alpha }{2}}x\in C.} So a convex cone is a special case of a linear cone as defined above. It follows from the above property that a convex cone can also be defined as a linear cone that is closed under convex combinations, or just under addition. More succinctly, a set C {\displaystyle C} is a convex cone if and only if α C = C {\displaystyle \alpha C=C} for every positive scalar α {\displaystyle \alpha } and C + C = C {\displaystyle C+C=C} . === Face of a convex cone === A face of a convex cone C {\displaystyle C} is a subset F {\displaystyle F} of C {\displaystyle C} such that F {\displaystyle F} is also a convex cone, and for any vectors x , y {\displaystyle x,y} in C {\displaystyle C} with x + y {\displaystyle x+y} in F {\displaystyle F} , x {\displaystyle x} and y {\displaystyle y} must both be in F {\displaystyle F} . For example, C {\displaystyle C} itself is a face of C {\displaystyle C} . The origin { 0 } {\displaystyle \{0\}} is a face of C {\displaystyle C} if C {\displaystyle C} contains no line (so C {\displaystyle C} is "strictly convex", or "salient", as defined below). The origin and C {\displaystyle C} are sometimes called the trivial faces of C {\displaystyle C} . A ray (the set of nonnegative multiples of a nonzero vector) is called an extremal ray if it is a face of C {\displaystyle C} . Let C {\displaystyle C} be a closed, strictly convex cone in R n {\displaystyle \mathbb {R} ^{n}} . Suppose that C {\displaystyle C} is more than just the origin. Then C {\displaystyle C} is the convex hull of its extremal rays. == Examples == For a vector space V {\displaystyle V} , every linear subspace of V {\displaystyle V} is a convex cone. In particular, the space V {\displaystyle V} itself and the origin { 0 } {\displaystyle \{0\}} are convex cones in V {\displaystyle V} . For authors who do not require a convex cone to contain the origin, the empty set ∅ {\displaystyle \emptyset } is also a convex cone. The conical hull of a finite or infinite set of vectors in R n {\displaystyle \mathbb {R} ^{n}} is a convex cone. The tangent cones of a convex set are convex cones. The set { x ∈ R 2 ∣ x 2 ≥ 0 , x 1 = 0 } ∪ { x ∈ R 2 ∣ x 1 ≥ 0 , x 2 = 0 } {\displaystyle \left\{x\in \mathbb {R} ^{2}\mid x_{2}\geq 0,x_{1}=0\right\}\cup \left\{x\in \mathbb {R} ^{2}\mid x_{1}\geq 0,x_{2}=0\right\}} is a cone but not a convex cone. The norm cone C = { ( x , r ) ∈ R d + 1 ∣ ‖ x ‖ ≤ r } {\displaystyle C=\left\{(x,r)\in \mathbb {R} ^{d+1}\mid \|x\|\leq r\right\}} is a convex cone. (For d = 2 {\displaystyle d=2} , this is the round cone in the figure.) Each extremal ray of C {\displaystyle C} is spanned by a vector ( x , 1 ) {\displaystyle (x,1)} with ‖ x ‖ = 1 {\displaystyle \|x\|=1} (so x {\displaystyle x} is a point in the sphere S d − 1 {\displaystyle S^{d-1}} ). These rays are in fact the only nontrivial faces of C {\displaystyle C} . The intersection of two convex cones in the same vector space is again a convex cone, but their union may fail to be one. The class of convex cones is also closed under arbitrary linear maps. In particular, if C {\displaystyle C} is a convex cone, so is its opposite − C {\displaystyle -C} , and C ∩ − C {\displaystyle C\cap -C} is the largest linear subspace contained in C {\displaystyle C} . 
The set of positive semidefinite matrices is a convex cone. The set of nonnegative continuous functions is a convex cone. == Special examples == === Affine convex cones === An affine convex cone is the set resulting from applying an affine transformation to a convex cone. A common example is translating a convex cone by a point p: p + C. Technically, such transformations can produce non-cones. For example, unless p = 0, p + C is not a linear cone. However, it is still called an affine convex cone. === Half-spaces === A (linear) hyperplane is a set in the form { x ∈ V ∣ f ( x ) = c } {\displaystyle \{x\in V\mid f(x)=c\}} where f is a linear functional on the vector space V. A closed half-space is a set in the form { x ∈ V ∣ f ( x ) ≤ c } {\displaystyle \{x\in V\mid f(x)\leq c\}} or { x ∈ V ∣ f ( x ) ≥ c } , {\displaystyle \{x\in V\mid f(x)\geq c\},} and likewise an open half-space uses strict inequality. Half-spaces (open or closed) are affine convex cones. Moreover (in finite dimensions), any convex cone C that is not the whole space V must be contained in some closed half-space H of V; this is a special case of Farkas' lemma. === Polyhedral and finitely generated cones === Polyhedral cones are special kinds of cones that can be defined in several ways: A cone C {\displaystyle C} is polyhedral if it is the conical hull of finitely many vectors (this property is also called finitely-generated). I.e., there is a set of vectors { v 1 , … , v k } ⊂ R n {\displaystyle \{v_{1},\ldots ,v_{k}\}\subset \mathbb {R} ^{n}} so that C = { a 1 v 1 + ⋯ + a k v k ∣ a i ∈ R ≥ 0 } {\displaystyle C=\{a_{1}v_{1}+\cdots +a_{k}v_{k}\mid a_{i}\in \mathbb {R} _{\geq 0}\}} . A cone is polyhedral if it is the intersection of a finite number of half-spaces which have 0 on their boundary (the equivalence between these first two definitions was proved by Weyl in 1935). A cone C {\displaystyle C} is polyhedral if there is some matrix A ∈ R m × n {\displaystyle A\in \mathbb {R} ^{m\times n}} such that C = { x ∈ R n ∣ A x ≥ 0 } {\displaystyle C=\{x\in \mathbb {R} ^{n}\mid Ax\geq 0\}} . A cone is polyhedral if it is the solution set of a system of homogeneous linear inequalities. Algebraically, each inequality is defined by a row of the matrix A {\displaystyle A} . Geometrically, each inequality defines a halfspace that passes through the origin. Every finitely generated cone is a polyhedral cone, and every polyhedral cone is a finitely generated cone. Every polyhedral cone has a unique representation as a conical hull of its extremal generators, and a unique representation as an intersection of halfspaces, provided that each linear form associated with the halfspaces also defines a supporting hyperplane of a facet. Each face of a polyhedral cone is spanned by some subset of its extremal generators. As a result, a polyhedral cone has only finitely many faces. Polyhedral cones play a central role in the representation theory of polyhedra. For instance, the decomposition theorem for polyhedra states that every polyhedron can be written as the Minkowski sum of a convex polytope and a polyhedral cone. Polyhedral cones also play an important part in proving the related Finite Basis Theorem for polytopes, which shows that every polytope is a polyhedron and every bounded polyhedron is a polytope. The two representations of a polyhedral cone, by inequalities and by vectors, may have very different sizes. For example, consider the cone of all non-negative n {\displaystyle n} -by- n {\displaystyle n} matrices with equal row and column sums.
The inequality representation requires n 2 {\displaystyle n^{2}} inequalities and 2 n − 1 {\displaystyle 2n-1} equations, but the vector representation requires n ! {\displaystyle n!} vectors (see the Birkhoff–von Neumann theorem). The opposite can also happen: the number of vectors may be polynomial while the number of inequalities is exponential. The two representations together provide an efficient way to decide whether a given vector is in the cone: to show that it is in the cone, it is sufficient to present it as a conic combination of the defining vectors; to show that it is not in the cone, it is sufficient to present a single defining inequality that it violates. This fact is known as Farkas' lemma. A subtle point in the representation by vectors is that the number of vectors may be exponential in the dimension, so the proof that a vector is in the cone might be exponentially long. Fortunately, Carathéodory's theorem guarantees that every vector in the cone can be represented by at most d {\displaystyle d} defining vectors, where d {\displaystyle d} is the dimension of the space. === Blunt, pointed, flat, salient, and proper cones === According to the above definition, if C is a convex cone, then C ∪ {0} is a convex cone, too. A convex cone is said to be pointed if 0 is in C, and blunt if 0 is not in C. Some authors use "pointed" for C ∩ − C = { 0 } {\displaystyle C\cap -C=\{0\}} or salient (see below). Blunt cones can be excluded from the definition of convex cone by substituting "non-negative" for "positive" in the condition of α, β. A cone is called flat if it contains some nonzero vector x and its opposite −x, meaning C contains a linear subspace of dimension at least one, and salient (or strictly convex) otherwise. A blunt convex cone is necessarily salient, but the converse is not necessarily true. A convex cone C is salient if and only if C ∩ −C ⊆ {0}. A cone C is said to be generating if C − C = { x − y ∣ x ∈ C , y ∈ C } {\displaystyle C-C=\{x-y\mid x\in C,y\in C\}} equals the whole vector space. Some authors require salient cones to be pointed. The term "pointed" is also often used to refer to a closed cone that contains no complete line (i.e., no nontrivial subspace of the ambient vector space V, or what is called a salient cone). The term proper (convex) cone is variously defined, depending on the context and author. It often means a cone that satisfies other properties like being convex, closed, pointed, salient, and full-dimensional. Because of these varying definitions, the context or source should be consulted for the definition of these terms. === Rational cones === A type of cone of particular interest to pure mathematicians is the partially ordered set of rational cones. "Rational cones are important objects in toric algebraic geometry, combinatorial commutative algebra, geometric combinatorics, integer programming." This object arises when we study cones in R d {\displaystyle \mathbb {R} ^{d}} together with the lattice Z d {\displaystyle \mathbb {Z} ^{d}} . A cone is called rational (here we assume "pointed", as defined above) whenever its generators all have integer coordinates, i.e., if C {\displaystyle C} is a rational cone, then C = { a 1 v 1 + ⋯ + a k v k ∣ a i ∈ R + } {\displaystyle C=\{a_{1}v_{1}+\cdots +a_{k}v_{k}\mid a_{i}\in \mathbb {R} _{+}\}} for some v i ∈ Z d {\displaystyle v_{i}\in \mathbb {Z} ^{d}} . == Dual cone == Let C ⊂ V be a set, not necessarily a convex set, in a real vector space V equipped with an inner product.
The (continuous or topological) dual cone to C is the set C ∗ = { v ∈ V ∣ ∀ w ∈ C , ⟨ w , v ⟩ ≥ 0 } , {\displaystyle C^{*}=\{v\in V\mid \forall w\in C,\langle w,v\rangle \geq 0\},} which is always a convex cone. Here, ⟨ w , v ⟩ {\displaystyle \langle w,v\rangle } is the duality pairing between C and V, i.e. ⟨ w , v ⟩ = v ( w ) {\displaystyle \langle w,v\rangle =v(w)} . More generally, the (algebraic) dual cone to C ⊂ V in a linear space V is a subset of the dual space V* defined by: C ∗ := { v ∈ V ∗ ∣ ∀ w ∈ C , v ( w ) ≥ 0 } . {\displaystyle C^{*}:=\left\{v\in V^{*}\mid \forall w\in C,v(w)\geq 0\right\}.} In other words, if V* is the algebraic dual space of V, C* is the set of linear functionals that are nonnegative on the primal cone C. If we take V* to be the continuous dual space then it is the set of continuous linear functionals nonnegative on C. This notion does not require the specification of an inner product on V. In finite dimensions, the two notions of dual cone are essentially the same because every finite dimensional linear functional is continuous, and every continuous linear functional in an inner product space induces a linear isomorphism (nonsingular linear map) from V* to V, and this isomorphism will take the dual cone given by the second definition, in V*, onto the one given by the first definition; see the Riesz representation theorem. If C is equal to its dual cone, then C is called self-dual. A cone can be said to be self-dual without reference to any given inner product, if there exists an inner product with respect to which it is equal to its dual by the first definition. == Constructions == Given a closed, convex subset K of a Hilbert space V, the outward normal cone to the set K at the point x in K is given by N K ( x ) = { p ∈ V : ∀ x ∗ ∈ K , ⟨ p , x ∗ − x ⟩ ≤ 0 } . {\displaystyle N_{K}(x)=\left\{p\in V\colon \forall x^{*}\in K,\left\langle p,x^{*}-x\right\rangle \leq 0\right\}.} Given a closed, convex subset K of V, the tangent cone (or contingent cone) to the set K at the point x is given by T K ( x ) = ⋃ h > 0 K − x h ¯ . {\displaystyle T_{K}(x)={\overline {\bigcup _{h>0}{\frac {K-x}{h}}}}.} Given a closed, convex subset K of a Hilbert space V, the tangent cone to the set K at the point x in K can be defined as the polar cone to the outward normal cone N K ( x ) {\displaystyle N_{K}(x)} : T K ( x ) = N K o ( x ) = d e f { y ∈ V ∣ ∀ ξ ∈ N K ( x ) : ⟨ y , ξ ⟩ ⩽ 0 } {\displaystyle T_{K}(x)=N_{K}^{o}(x)\ {\overset {\underset {\mathrm {def} }{}}{=}}\ \{y\in V\mid \forall \xi \in N_{K}(x):\langle y,\xi \rangle \leqslant 0\}} Both the normal and tangent cone have the property of being closed and convex. They are important concepts in the fields of convex optimization, variational inequalities and projected dynamical systems. == Properties == If C is a non-empty convex cone in X, then the linear span of C is equal to C − C and the largest vector subspace of X contained in C is equal to C ∩ (−C). == Partial order defined by a convex cone == A pointed and salient convex cone C induces a partial ordering "≥" on V, defined so that x ≥ y {\displaystyle x\geq y} if and only if x − y ∈ C . {\displaystyle x-y\in C.} (If the cone is flat, the same definition gives merely a preorder.) Sums and positive scalar multiples of valid inequalities with respect to this order remain valid inequalities. A vector space with such an order is called an ordered vector space.
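A small sketch of this induced order (illustrative only; it fixes C to be the nonnegative orthant in R², for which the induced order is exactly the coordinatewise product order mentioned next):

```python
import numpy as np

def geq(x, y):
    """x >= y in the order induced by C = the nonnegative orthant:
    x >= y holds exactly when x - y lies in C."""
    return bool(np.all(np.asarray(x) - np.asarray(y) >= 0))

x, y, w = np.array([3., 2.]), np.array([1., 2.]), np.array([0., 5.])
z = np.array([1., 1.])

print(geq(x, y))              # True:  x - y = (2, 0) is in C
print(geq(x, w), geq(w, x))   # False False: x and w are incomparable
print(geq(x + z, y + z))      # True: sums of valid inequalities stay valid
print(geq(2.5 * x, 2.5 * y))  # True: so do positive scalar multiples
```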
Examples include the product order on real-valued vectors, R n , {\displaystyle \mathbb {R} ^{n},} and the Loewner order on positive semidefinite matrices. Such an ordering is commonly found in semidefinite programming. == See also == Cone (disambiguation) Cone (geometry) Cone (topology) Farkas' lemma Bipolar theorem Ordered vector space == Notes == == References == Bourbaki, Nicolas (1987). Topological Vector Spaces. Elements of Mathematics. Berlin, New York: Springer-Verlag. ISBN 978-3-540-13627-9. Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. Rockafellar, R. T. (1997) [1970]. Convex Analysis. Princeton, NJ: Princeton University Press. ISBN 1-4008-7317-7. Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322. Zălinescu, C. (2002). Convex Analysis in General Vector Spaces. River Edge, NJ: World Scientific. ISBN 981-238-067-1. MR 1921556.
Wikipedia/Cone_(linear_algebra)
In mathematics, the geometric topology is a topology one can put on the set H of hyperbolic 3-manifolds of finite volume. == Use == Convergence in this topology is a crucial ingredient of hyperbolic Dehn surgery, a fundamental tool in the theory of hyperbolic 3-manifolds. == Definition == The following is a definition due to Troels Jorgensen: A sequence { M i } {\displaystyle \{M_{i}\}} in H converges to M in H if there are a sequence of positive real numbers ϵ i {\displaystyle \epsilon _{i}} converging to 0, and a sequence of ( 1 + ϵ i ) {\displaystyle (1+\epsilon _{i})} -bi-Lipschitz diffeomorphisms ϕ i : M i , [ ϵ i , ∞ ) → M [ ϵ i , ∞ ) , {\displaystyle \phi _{i}:M_{i,[\epsilon _{i},\infty )}\rightarrow M_{[\epsilon _{i},\infty )},} where the domains and ranges of the maps are the ϵ i {\displaystyle \epsilon _{i}} -thick parts of either the M i {\displaystyle M_{i}} 's or M. == Alternate definition == There is an alternate definition due to Mikhail Gromov. Gromov's topology utilizes the Gromov–Hausdorff metric and is defined on pointed hyperbolic 3-manifolds. One essentially considers better and better bi-Lipschitz homeomorphisms on larger and larger balls. This results in the same notion of convergence as above as the thick part is always connected; thus, a large ball will eventually encompass all of the thick part. === On framed manifolds === As a further refinement, Gromov's metric can also be defined on framed hyperbolic 3-manifolds. This gives nothing new but this space can be explicitly identified with torsion-free Kleinian groups with the Chabauty topology. == See also == Algebraic topology (object) == References == William Thurston, The geometry and topology of 3-manifolds, Princeton lecture notes (1978–1981). Canary, R. D.; Epstein, D. B. A.; Green, P., Notes on notes of Thurston. Analytical and geometric aspects of hyperbolic space (Coventry/Durham, 1984), 3–92, London Math. Soc. Lecture Note Ser., 111, Cambridge Univ. Press, Cambridge, 1987.
Wikipedia/Geometric_topology_(object)
In mathematics, at the junction of singularity theory and differential topology, Cerf theory is the study of families of smooth real-valued functions f : M → R {\displaystyle f\colon M\to \mathbb {R} } on a smooth manifold M {\displaystyle M} , their generic singularities and the topology of the subspaces these singularities define, as subspaces of the function space. The theory is named after Jean Cerf, who initiated it in the late 1960s. == An example == Marston Morse proved that, provided M {\displaystyle M} is compact, any smooth function f : M → R {\displaystyle f\colon M\to \mathbb {R} } can be approximated by a Morse function. Thus, for many purposes, one can replace arbitrary functions on M {\displaystyle M} by Morse functions. As a next step, one could ask, 'if you have a one-parameter family of functions which start and end at Morse functions, can you assume the whole family is Morse?' In general, the answer is no. Consider, for example, the one-parameter family of functions on M = R {\displaystyle M=\mathbb {R} } given by f t ( x ) = ( 1 / 3 ) x 3 − t x . {\displaystyle f_{t}(x)=(1/3)x^{3}-tx.} At time t = − 1 {\displaystyle t=-1} , it has no critical points, but at time t = 1 {\displaystyle t=1} , it is a Morse function with two critical points at x = ± 1 {\displaystyle x=\pm 1} . Cerf showed that a one-parameter family of functions between two Morse functions can be approximated by one that is Morse at all but finitely many degenerate times. The degeneracies involve a birth/death transition of critical points, as in the above example when, at t = 0 {\displaystyle t=0} , an index 0 and index 1 critical point are created as t {\displaystyle t} increases. == A stratification of an infinite-dimensional space == Returning to the general case where M {\displaystyle M} is a compact manifold, let Morse ⁡ ( M ) {\displaystyle \operatorname {Morse} (M)} denote the space of Morse functions on M {\displaystyle M} , and Func ⁡ ( M ) {\displaystyle \operatorname {Func} (M)} the space of real-valued smooth functions on M {\displaystyle M} . Morse proved that Morse ⁡ ( M ) ⊂ Func ⁡ ( M ) {\displaystyle \operatorname {Morse} (M)\subset \operatorname {Func} (M)} is an open and dense subset in the C ∞ {\displaystyle C^{\infty }} topology. For the purposes of intuition, here is an analogy. Think of the Morse functions as the top-dimensional open stratum in a stratification of Func ⁡ ( M ) {\displaystyle \operatorname {Func} (M)} (we make no claim that such a stratification exists, but suppose one does). Notice that in stratified spaces, the co-dimension 0 open stratum is open and dense. For notational purposes, reverse the conventions for indexing the stratifications in a stratified space, and index the open strata not by their dimension, but by their co-dimension. This is convenient since Func ⁡ ( M ) {\displaystyle \operatorname {Func} (M)} is infinite-dimensional if M {\displaystyle M} is not a finite set. By assumption, the open co-dimension 0 stratum of Func ⁡ ( M ) {\displaystyle \operatorname {Func} (M)} is Morse ⁡ ( M ) {\displaystyle \operatorname {Morse} (M)} , i.e.: Func ⁡ ( M ) 0 = Morse ⁡ ( M ) {\displaystyle \operatorname {Func} (M)^{0}=\operatorname {Morse} (M)} . In a stratified space X {\displaystyle X} , frequently X 0 {\displaystyle X^{0}} is disconnected. 
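Before turning to the positive co-dimension strata, the birth/death transition in the example above can be checked directly with a computer algebra system. The following sketch uses sympy, which is an illustrative assumption and not part of the original article:

    import sympy as sp

    x, t = sp.symbols('x t', real=True)
    f = x**3 / 3 - t * x                 # the one-parameter family from the example

    crit = sp.solve(sp.diff(f, x), x)    # solve f_t'(x) = x**2 - t = 0
    print(crit)                          # [-sqrt(t), sqrt(t)]

    # For t < 0 the roots are imaginary (no real critical points); at t = 0 they
    # collide in a single degenerate critical point at x = 0, where f''(0) = 0,
    # so f_0 is not Morse; for t > 0 there are two nondegenerate critical points,
    # the min/max pair born as t increases.
    for val in [-1, 0, 1]:
        real_pts = [p.subs(t, val) for p in crit if p.subs(t, val).is_real]
        print(val, real_pts, [sp.diff(f, x, 2).subs({x: p, t: val}) for p in real_pts])

The degenerate cubic behaviour at t = 0 is exactly the codimension-one phenomenon that the stratification language is designed to capture.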
The essential property of the co-dimension 1 stratum X 1 {\displaystyle X^{1}} is that any path in X {\displaystyle X} which starts and ends in X 0 {\displaystyle X^{0}} can be approximated by a path that intersects X 1 {\displaystyle X^{1}} transversely in finitely many points, and does not intersect X i {\displaystyle X^{i}} for any i > 1 {\displaystyle i>1} . Thus Cerf theory is the study of the positive co-dimensional strata of Func ⁡ ( M ) {\displaystyle \operatorname {Func} (M)} , i.e.: Func ⁡ ( M ) i {\displaystyle \operatorname {Func} (M)^{i}} for i > 0 {\displaystyle i>0} . In the case of f t ( x ) = x 3 − t x {\displaystyle f_{t}(x)=x^{3}-tx} , only for t = 0 {\displaystyle t=0} is the function not Morse, and f 0 ( x ) = x 3 {\displaystyle f_{0}(x)=x^{3}} has a cubic degenerate critical point corresponding to the birth/death transition. == A single time parameter, statement of theorem == The Morse Theorem asserts that if f : M → R {\displaystyle f\colon M\to \mathbb {R} } is a Morse function, then near a critical point p {\displaystyle p} it is conjugate to a function g : R n → R {\displaystyle g\colon \mathbb {R} ^{n}\to \mathbb {R} } of the form g ( x 1 , x 2 , … , x n ) = f ( p ) + ϵ 1 x 1 2 + ϵ 2 x 2 2 + ⋯ + ϵ n x n 2 {\displaystyle g(x_{1},x_{2},\dotsc ,x_{n})=f(p)+\epsilon _{1}x_{1}^{2}+\epsilon _{2}x_{2}^{2}+\dotsb +\epsilon _{n}x_{n}^{2}} where ϵ i ∈ { ± 1 } {\displaystyle \epsilon _{i}\in \{\pm 1\}} . Cerf's one-parameter theorem asserts the essential property of the co-dimension one stratum. Precisely, if f t : M → R {\displaystyle f_{t}\colon M\to \mathbb {R} } is a one-parameter family of smooth functions on M {\displaystyle M} with t ∈ [ 0 , 1 ] {\displaystyle t\in [0,1]} , and f 0 , f 1 {\displaystyle f_{0},f_{1}} Morse, then there exists a smooth one-parameter family F t : M → R {\displaystyle F_{t}\colon M\to \mathbb {R} } such that F 0 = f 0 , F 1 = f 1 {\displaystyle F_{0}=f_{0},F_{1}=f_{1}} , F {\displaystyle F} is uniformly close to f {\displaystyle f} in the C k {\displaystyle C^{k}} -topology on functions M × [ 0 , 1 ] → R {\displaystyle M\times [0,1]\to \mathbb {R} } . Moreover, F t {\displaystyle F_{t}} is Morse at all but finitely many times. At a non-Morse time the function has only one degenerate critical point p {\displaystyle p} , and near that point the family F t {\displaystyle F_{t}} is conjugate to the family g t ( x 1 , x 2 , … , x n ) = f ( p ) + x 1 3 + ϵ 1 t x 1 + ϵ 2 x 2 2 + ⋯ + ϵ n x n 2 {\displaystyle g_{t}(x_{1},x_{2},\dotsc ,x_{n})=f(p)+x_{1}^{3}+\epsilon _{1}tx_{1}+\epsilon _{2}x_{2}^{2}+\dotsb +\epsilon _{n}x_{n}^{2}} where ϵ i ∈ { ± 1 } , t ∈ [ − 1 , 1 ] {\displaystyle \epsilon _{i}\in \{\pm 1\},t\in [-1,1]} . If ϵ 1 = − 1 {\displaystyle \epsilon _{1}=-1} this is a one-parameter family of functions where two critical points are created (as t {\displaystyle t} increases), and for ϵ 1 = 1 {\displaystyle \epsilon _{1}=1} it is a one-parameter family of functions where two critical points are destroyed. == Origins == The PL-Schoenflies problem for S 2 ⊂ R 3 {\displaystyle S^{2}\subset \mathbb {R} ^{3}} was solved by J. W. Alexander in 1924. His proof was adapted to the smooth case by Morse and Emilio Baiada. The essential property was used by Cerf in order to prove that every orientation-preserving diffeomorphism of S 3 {\displaystyle S^{3}} is isotopic to the identity, seen as a one-parameter extension of the Schoenflies theorem for S 2 ⊂ R 3 {\displaystyle S^{2}\subset \mathbb {R} ^{3}} . 
The corollary Γ 4 = 0 {\displaystyle \Gamma _{4}=0} at the time had wide implications in differential topology. The essential property was later used by Cerf to prove the pseudo-isotopy theorem for high-dimensional simply-connected manifolds. The proof is a one-parameter extension of Stephen Smale's proof of the h-cobordism theorem (the rewriting of Smale's proof into the functional framework was done by Morse, and also by John Milnor and by Cerf, André Gramain, and Bernard Morin following a suggestion of René Thom). Cerf's proof is built on the work of Thom and John Mather. A useful modern summary of Thom and Mather's work from that period is the book of Marty Golubitsky and Victor Guillemin. == Applications == Besides the above-mentioned applications, Robion Kirby used Cerf theory as a key step in justifying the Kirby calculus. == Generalization == A stratification of the complement of an infinite co-dimension subspace of the space of smooth maps { f : M → R } {\displaystyle \{f\colon M\to \mathbb {R} \}} was eventually developed by Francis Sergeraert. During the seventies, the classification problem for pseudo-isotopies of non-simply connected manifolds was solved by Allen Hatcher and John Wagoner, discovering algebraic K i {\displaystyle K_{i}} -obstructions on π 1 M {\displaystyle \pi _{1}M} ( i = 2 {\displaystyle i=2} ) and π 2 M {\displaystyle \pi _{2}M} ( i = 1 {\displaystyle i=1} ) and by Kiyoshi Igusa, discovering obstructions of a similar nature on π 1 M {\displaystyle \pi _{1}M} ( i = 3 {\displaystyle i=3} ). == References ==
Wikipedia/Cerf_theory
In mathematics, general topology (or point set topology) is the branch of topology that deals with the basic set-theoretic definitions and constructions used in topology. It is the foundation of most other branches of topology, including differential topology, geometric topology, and algebraic topology. The fundamental concepts in point-set topology are continuity, compactness, and connectedness: Continuous functions, intuitively, take nearby points to nearby points. Compact sets are those that can be covered by finitely many sets of arbitrarily small size. Connected sets are sets that cannot be divided into two pieces that are far apart. The terms 'nearby', 'arbitrarily small', and 'far apart' can all be made precise by using the concept of open sets. If we change the definition of 'open set', we change what continuous functions, compact sets, and connected sets are. Each choice of definition for 'open set' is called a topology. A set with a topology is called a topological space. Metric spaces are an important class of topological spaces where a real, non-negative distance, also called a metric, can be defined on pairs of points in the set. Having a metric simplifies many proofs, and many of the most common topological spaces are metric spaces. == History == General topology grew out of a number of areas, most importantly the following: the detailed study of subsets of the real line (once known as the topology of point sets; this usage is now obsolete) the introduction of the manifold concept the study of metric spaces, especially normed linear spaces, in the early days of functional analysis. General topology assumed its present form around 1940. It captures, one might say, almost everything in the intuition of continuity, in a technically adequate form that can be applied in any area of mathematics. == A topology on a set == Let X be a set and let τ be a family of subsets of X. Then τ is called a topology on X if: Both the empty set and X are elements of τ Any union of elements of τ is an element of τ Any intersection of finitely many elements of τ is an element of τ If τ is a topology on X, then the pair (X, τ) is called a topological space. The notation Xτ may be used to denote a set X endowed with the particular topology τ. The members of τ are called open sets in X. A subset of X is said to be closed if its complement is in τ (i.e., its complement is open). A subset of X may be open, closed, both (clopen set), or neither. The empty set and X itself are always both closed and open. === Basis for a topology === A base (or basis) B for a topological space X with topology T is a collection of open sets in T such that every open set in T can be written as a union of elements of B. We say that the base generates the topology T. Bases are useful because many properties of topologies can be reduced to statements about a base that generates that topology—and because many topologies are most easily defined in terms of a base that generates them. === Subspace and quotient === Every subset of a topological space can be given the subspace topology in which the open sets are the intersections of the open sets of the larger space with the subset. For any indexed family of topological spaces, the product can be given the product topology, which is generated by the inverse images of open sets of the factors under the projection mappings. For example, in finite products, a basis for the product topology consists of all products of open sets. 
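Before turning to infinite products, both the topology axioms and this finite-product basis can be checked mechanically on small examples. The following is a minimal sketch in plain Python; the code and names are illustrative assumptions, not part of the article.

    from itertools import combinations, product

    def is_topology(X, tau):
        # Check the three axioms on a finite set X, where tau is a set of frozensets.
        # For a finite collection, closure under pairwise unions and intersections
        # implies closure under all the required ones.
        if frozenset() not in tau or frozenset(X) not in tau:
            return False
        return all(A | B in tau and A & B in tau for A in tau for B in tau)

    def product_topology(tauX, tauY):
        # Basis for the product of two finite spaces: all products U x V of open sets.
        basis = [frozenset(product(U, V)) for U in tauX for V in tauY]
        # The open sets are all unions of basis elements (brute force over subsets).
        opens = set()
        for r in range(len(basis) + 1):
            for combo in combinations(basis, r):
                opens.add(frozenset().union(*combo))
        return opens

    # Sierpinski-like space on {1, 2} with open sets {}, {1}, {1, 2}:
    X = {1, 2}
    tauX = {frozenset(), frozenset({1}), frozenset({1, 2})}
    print(is_topology(X, tauX))                           # True
    XY = set(product(X, X))
    print(is_topology(XY, product_topology(tauX, tauX)))  # True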
For infinite products, there is the additional requirement that in a basic open set, all but finitely many of its projections are the entire space. A quotient space is defined as follows: if X is a topological space and Y is a set, and if f : X→ Y is a surjective function, then the quotient topology on Y is the collection of subsets of Y that have open inverse images under f. In other words, the quotient topology is the finest topology on Y for which f is continuous. A common example of a quotient topology is when an equivalence relation is defined on the topological space X. The map f is then the natural projection onto the set of equivalence classes. === Examples of topological spaces === A given set may have many different topologies. If a set is given a different topology, it is viewed as a different topological space. ==== Discrete and trivial topologies ==== Any set can be given the discrete topology, in which every subset is open. The only convergent sequences or nets in this topology are those that are eventually constant. Also, any set can be given the trivial topology (also called the indiscrete topology), in which only the empty set and the whole space are open. Every sequence and net in this topology converges to every point of the space. This example shows that in general topological spaces, limits of sequences need not be unique. However, often topological spaces must be Hausdorff spaces where limit points are unique. ==== Cofinite and cocountable topologies ==== Any set can be given the cofinite topology in which the open sets are the empty set and the sets whose complement is finite. This is the smallest T1 topology on any infinite set. Any set can be given the cocountable topology, in which a set is defined as open if it is either empty or its complement is countable. When the set is uncountable, this topology serves as a counterexample in many situations. ==== Topologies on the real and complex numbers ==== There are many ways to define a topology on R, the set of real numbers. The standard topology on R is generated by the open intervals. The set of all open intervals forms a base or basis for the topology, meaning that every open set is a union of some collection of sets from the base. In particular, this means that a set is open if there exists an open interval of nonzero radius about every point in the set. More generally, the Euclidean spaces Rn can be given a topology. In the usual topology on Rn the basic open sets are the open balls. Similarly, C, the set of complex numbers, and Cn have a standard topology in which the basic open sets are open balls. The real line can also be given the lower limit topology. Here, the basic open sets are the half-open intervals [a, b). This topology on R is strictly finer than the Euclidean topology defined above; a sequence converges to a point in this topology if and only if it converges from above in the Euclidean topology. This example shows that a set may have many distinct topologies defined on it. ==== The metric topology ==== Every metric space can be given a metric topology, in which the basic open sets are open balls defined by the metric. This is the standard topology on any normed vector space. On a finite-dimensional vector space this topology is the same for all norms. ==== Further examples ==== There exist numerous topologies on any given finite set. Such spaces are called finite topological spaces.
Finite spaces are sometimes used to provide examples or counterexamples to conjectures about topological spaces in general. Every manifold has a natural topology, since it is locally Euclidean. Similarly, every simplex and every simplicial complex inherits a natural topology from Rn. The Zariski topology is defined algebraically on the spectrum of a ring or an algebraic variety. On Rn or Cn, the closed sets of the Zariski topology are the solution sets of systems of polynomial equations. A linear graph has a natural topology that generalises many of the geometric aspects of graphs with vertices and edges. Many sets of linear operators in functional analysis are endowed with topologies that are defined by specifying when a particular sequence of functions converges to the zero function. Any local field has a topology native to it, and this can be extended to vector spaces over that field. The Sierpiński space is the simplest non-discrete topological space. It has important relations to the theory of computation and semantics. If Γ is an ordinal number, then the set Γ = [0, Γ) may be endowed with the order topology generated by the intervals (a, b), [0, b) and (a, Γ) where a and b are elements of Γ. == Continuous functions == Continuity is expressed in terms of neighborhoods: f is continuous at some point x ∈ X if and only if for any neighborhood V of f(x), there is a neighborhood U of x such that f(U) ⊆ V. Intuitively, continuity means no matter how "small" V becomes, there is always a U containing x that maps inside V and whose image under f contains f(x). This is equivalent to the condition that the preimages of the open (closed) sets in Y are open (closed) in X. In metric spaces, this definition is equivalent to the ε–δ-definition that is often used in analysis. An extreme example: if a set X is given the discrete topology, all functions f : X → T {\displaystyle f\colon X\rightarrow T} to any topological space T are continuous. On the other hand, if X is equipped with the indiscrete topology and the space T is at least T0, then the only continuous functions are the constant functions. Conversely, any function whose range is indiscrete is continuous. === Alternative definitions === Several equivalent definitions for a topological structure exist and thus there are several equivalent ways to define a continuous function. ==== Neighborhood definition ==== Definitions based on preimages are often difficult to use directly. The following criterion expresses continuity in terms of neighborhoods: f is continuous at some point x ∈ X if and only if for any neighborhood V of f(x), there is a neighborhood U of x such that f(U) ⊆ V. Intuitively, continuity means no matter how "small" V becomes, there is always a U containing x that maps inside V. If X and Y are metric spaces, it is equivalent to consider the neighborhood system of open balls centered at x and f(x) instead of all neighborhoods. This gives back the above ε–δ definition of continuity in the context of metric spaces. However, in general topological spaces, there is no notion of nearness or distance. Note, however, that if the target space is Hausdorff, it is still true that f is continuous at a if and only if the limit of f as x approaches a is f(a). At an isolated point, every function is continuous.
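The preimage characterization lends itself to a direct check on finite spaces. A small sketch in plain Python (an illustrative assumption, not part of the article) confirms the extreme examples above:

    def is_continuous(f, tauX, tauY):
        # f is a dict sending points of X to points of Y; by the preimage
        # characterization, f is continuous iff f^{-1}(V) is open for every open V.
        for V in tauY:
            preimage = frozenset(x for x in f if f[x] in V)
            if preimage not in tauX:
                return False
        return True

    # Sierpinski space S = {0, 1}, whose open sets are {}, {1}, {0, 1}; S is T0.
    tauS = {frozenset(), frozenset({1}), frozenset({0, 1})}
    # With the discrete topology on X = {a, b}, every map to S is continuous:
    tauD = {frozenset(), frozenset({'a'}), frozenset({'b'}), frozenset({'a', 'b'})}
    print(is_continuous({'a': 0, 'b': 1}, tauD, tauS))  # True
    # With the indiscrete topology on X, the same map fails to be continuous,
    # because the preimage of the open set {1} is {b}, which is not open:
    tauI = {frozenset(), frozenset({'a', 'b'})}
    print(is_continuous({'a': 0, 'b': 1}, tauI, tauS))  # False

==== Sequences and nets ==== In several contexts, the topology of a space is conveniently specified in terms of limit points.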
In many instances, this is accomplished by specifying when a point is the limit of a sequence, but for some spaces that are too large in some sense, one specifies also when a point is the limit of more general sets of points indexed by a directed set, known as nets. A function is continuous only if it takes limits of sequences to limits of sequences. In the former case, preservation of limits is also sufficient; in the latter, a function may preserve all limits of sequences yet still fail to be continuous, and preservation of nets is a necessary and sufficient condition. In detail, a function f: X → Y is sequentially continuous if whenever a sequence (xn) in X converges to a limit x, the sequence (f(xn)) converges to f(x). Thus sequentially continuous functions "preserve sequential limits". Every continuous function is sequentially continuous. If X is a first-countable space and countable choice holds, then the converse also holds: any function preserving sequential limits is continuous. In particular, if X is a metric space, sequential continuity and continuity are equivalent. For non first-countable spaces, sequential continuity might be strictly weaker than continuity. (The spaces for which the two properties are equivalent are called sequential spaces.) This motivates the consideration of nets instead of sequences in general topological spaces. Continuous functions preserve limits of nets, and in fact this property characterizes continuous functions. ==== Closure operator definition ==== Instead of specifying the open subsets of a topological space, the topology can also be determined by a closure operator (denoted cl), which assigns to any subset A ⊆ X its closure, or an interior operator (denoted int), which assigns to any subset A of X its interior. In these terms, a function f : ( X , c l ) → ( X ′ , c l ′ ) {\displaystyle f\colon (X,\mathrm {cl} )\to (X',\mathrm {cl} ')\,} between topological spaces is continuous in the sense above if and only if for all subsets A of X f ( c l ( A ) ) ⊆ c l ′ ( f ( A ) ) . {\displaystyle f(\mathrm {cl} (A))\subseteq \mathrm {cl} '(f(A)).} That is to say, given any element x of X that is in the closure of any subset A, f(x) belongs to the closure of f(A). This is equivalent to the requirement that for all subsets A' of X' f − 1 ( c l ′ ( A ′ ) ) ⊇ c l ( f − 1 ( A ′ ) ) . {\displaystyle f^{-1}(\mathrm {cl} '(A'))\supseteq \mathrm {cl} (f^{-1}(A')).} Moreover, f : ( X , i n t ) → ( X ′ , i n t ′ ) {\displaystyle f\colon (X,\mathrm {int} )\to (X',\mathrm {int} ')\,} is continuous if and only if f − 1 ( i n t ′ ( A ) ) ⊆ i n t ( f − 1 ( A ) ) {\displaystyle f^{-1}(\mathrm {int} '(A))\subseteq \mathrm {int} (f^{-1}(A))} for any subset A of X. === Properties === If f: X → Y and g: Y → Z are continuous, then so is the composition g ∘ f: X → Z. If f: X → Y is continuous, then: if X is compact, then f(X) is compact; if X is connected, then f(X) is connected; if X is path-connected, then f(X) is path-connected; if X is Lindelöf, then f(X) is Lindelöf; and if X is separable, then f(X) is separable. The possible topologies on a fixed set X are partially ordered: a topology τ1 is said to be coarser than another topology τ2 (notation: τ1 ⊆ τ2) if every open subset with respect to τ1 is also open with respect to τ2. Then, the identity map idX: (X, τ2) → (X, τ1) is continuous if and only if τ1 ⊆ τ2 (see also comparison of topologies).
More generally, a continuous function ( X , τ X ) → ( Y , τ Y ) {\displaystyle (X,\tau _{X})\rightarrow (Y,\tau _{Y})} stays continuous if the topology τY is replaced by a coarser topology and/or τX is replaced by a finer topology. === Homeomorphisms === Symmetric to the concept of a continuous map is an open map, for which images of open sets are open. In fact, if an open map f has an inverse function, that inverse is continuous, and if a continuous map g has an inverse, that inverse is open. Given a bijective function f between two topological spaces, the inverse function f−1 need not be continuous. A bijective continuous function with continuous inverse function is called a homeomorphism. If a continuous bijection has as its domain a compact space and its codomain is Hausdorff, then it is a homeomorphism. === Defining topologies via continuous functions === Given a function f : X → S , {\displaystyle f\colon X\rightarrow S,\,} where X is a topological space and S is a set (without a specified topology), the final topology on S is defined by letting the open sets of S be those subsets A of S for which f−1(A) is open in X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is coarser than the final topology on S. Thus the final topology can be characterized as the finest topology on S that makes f continuous. If f is surjective, this topology is canonically identified with the quotient topology under the equivalence relation defined by f. Dually, for a function f from a set S to a topological space X, the initial topology on S has a basis of open sets given by those sets of the form f−1(U) where U is open in X . If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is finer than the initial topology on S. Thus the initial topology can be characterized as the coarsest topology on S that makes f continuous. If f is injective, this topology is canonically identified with the subspace topology of S, viewed as a subset of X. A topology on a set S is uniquely determined by the class of all continuous functions S → X {\displaystyle S\rightarrow X} into all topological spaces X. Dually, a similar idea can be applied to maps X → S . {\displaystyle X\rightarrow S.} == Compact sets == Formally, a topological space X is called compact if each of its open covers has a finite subcover. Otherwise it is called non-compact. Explicitly, this means that for every arbitrary collection { U α } α ∈ A {\displaystyle \{U_{\alpha }\}_{\alpha \in A}} of open subsets of X such that X = ⋃ α ∈ A U α , {\displaystyle X=\bigcup _{\alpha \in A}U_{\alpha },} there is a finite subset J of A such that X = ⋃ i ∈ J U i . {\displaystyle X=\bigcup _{i\in J}U_{i}.} Some branches of mathematics, such as algebraic geometry, typically influenced by the French school of Bourbaki, use the term quasi-compact for the general notion, and reserve the term compact for topological spaces that are both Hausdorff and quasi-compact. A compact set is sometimes referred to as a compactum, plural compacta.
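As a toy illustration of extracting a finite subcover, consider the infinite open cover of [0, 1] by V = (−1, 0.01) together with U n = (1/(n+1), 2) for n = 1, 2, 3, and so on. The following plain-Python sketch (an illustrative assumption, not part of the article) computes a finite subcover explicitly:

    def finite_subcover(eps=0.01):
        # V = (-1, eps) covers [0, eps); a single U_n = (1/(n+1), 2) then covers
        # [eps, 1] as soon as 1/(n+1) < eps. Find the least such n.
        n = 1
        while 1.0 / (n + 1) >= eps:
            n += 1
        return ["V = (-1, 0.01)", "U_%d = (%.6f, 2)" % (n, 1.0 / (n + 1))]

    print(finite_subcover())  # ['V = (-1, 0.01)', 'U_100 = (0.009901, 2)']
    # By contrast, on the non-compact space (0, 1] the sets U_n alone form an
    # open cover with no finite subcover: finitely many of them miss (0, 1/(n+1)].

Every closed interval in R of finite length is compact. More is true: In Rn, a set is compact if and only if it is closed and bounded (see the Heine–Borel theorem). Every continuous image of a compact space is compact. A compact subset of a Hausdorff space is closed. Every continuous bijection from a compact space to a Hausdorff space is necessarily a homeomorphism.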
Every sequence of points in a compact metric space has a convergent subsequence. Every compact finite-dimensional manifold can be embedded in some Euclidean space Rn. == Connected sets == A topological space X is said to be disconnected if it is the union of two disjoint nonempty open sets. Otherwise, X is said to be connected. A subset of a topological space is said to be connected if it is connected under its subspace topology. Some authors exclude the empty set (with its unique topology) as a connected space, but this article does not follow that practice. For a topological space X the following conditions are equivalent: X is connected. X cannot be divided into two disjoint nonempty closed sets. The only subsets of X that are both open and closed (clopen sets) are X and the empty set. The only subsets of X with empty boundary are X and the empty set. X cannot be written as the union of two nonempty separated sets. The only continuous functions from X to {0,1}, the two-point space endowed with the discrete topology, are constant. Every interval in R is connected. The continuous image of a connected space is connected. === Connected components === The maximal connected subsets (ordered by inclusion) of a nonempty topological space are called the connected components of the space. The components of any topological space X form a partition of X: they are disjoint, nonempty, and their union is the whole space. Every component is a closed subset of the original space. It follows that, in the case where their number is finite, each component is also an open subset. However, if their number is infinite, this might not be the case; for instance, the connected components of the set of the rational numbers are the one-point sets, which are not open. Let Γ x {\displaystyle \Gamma _{x}} be the connected component of x in a topological space X, and Γ x ′ {\displaystyle \Gamma _{x}'} be the intersection of all open-closed sets containing x (called the quasi-component of x). Then Γ x ⊂ Γ x ′ {\displaystyle \Gamma _{x}\subset \Gamma '_{x}} where the equality holds if X is compact Hausdorff or locally connected. === Disconnected spaces === A space in which all components are one-point sets is called totally disconnected. Related to this property, a space X is called totally separated if, for any two distinct elements x and y of X, there exist disjoint open neighborhoods U of x and V of y such that X is the union of U and V. Clearly any totally separated space is totally disconnected, but the converse does not hold. For example, take two copies of the rational numbers Q, and identify them at every point except zero. The resulting space, with the quotient topology, is totally disconnected. However, by considering the two copies of zero, one sees that the space is not totally separated. In fact, it is not even Hausdorff, and the condition of being totally separated is strictly stronger than the condition of being Hausdorff. === Path-connected sets === A path from a point x to a point y in a topological space X is a continuous function f from the unit interval [0,1] to X with f(0) = x and f(1) = y. A path-component of X is an equivalence class of X under the equivalence relation which makes x equivalent to y if there is a path from x to y. The space X is said to be path-connected (or pathwise connected or 0-connected) if there is at most one path-component; that is, if there is a path joining any two points in X. Again, many authors exclude the empty space. Every path-connected space is connected.
The converse is not always true: examples of connected spaces that are not path-connected include the extended long line L* and the topologist's sine curve. However, subsets of the real line R are connected if and only if they are path-connected; these subsets are the intervals of R. Also, open subsets of Rn or Cn are connected if and only if they are path-connected. Additionally, connectedness and path-connectedness are the same for finite topological spaces. == Products of spaces == Let X := ∏ i ∈ I X i {\displaystyle X:=\prod _{i\in I}X_{i}} be the Cartesian product of the topological spaces Xi, indexed by i ∈ I {\displaystyle i\in I} , and let pi : X → Xi be the canonical projections. The product topology on X is defined as the coarsest topology (i.e. the topology with the fewest open sets) for which all the projections pi are continuous. The product topology is sometimes called the Tychonoff topology. The open sets in the product topology are unions (finite or infinite) of sets of the form ∏ i ∈ I U i {\displaystyle \prod _{i\in I}U_{i}} , where each Ui is open in Xi and Ui ≠ Xi for only finitely many i. In particular, for a finite product (in particular, for the product of two topological spaces), the products of base elements of the Xi give a basis for the product ∏ i ∈ I X i {\displaystyle \prod _{i\in I}X_{i}} . The product topology on X is the topology generated by sets of the form pi−1(U), where i is in I and U is an open subset of Xi. In other words, the sets {pi−1(U)} form a subbase for the topology on X. A subset of X is open if and only if it is a (possibly infinite) union of intersections of finitely many sets of the form pi−1(U). The pi−1(U) are sometimes called open cylinders, and their intersections are cylinder sets. In general, the product of the topologies of each Xi forms a basis for what is called the box topology on X. In general, the box topology is finer than the product topology, but for finite products they coincide. Related to compactness is Tychonoff's theorem: the (arbitrary) product of compact spaces is compact. == Separation axioms == Many of these names have alternative meanings in some of the mathematical literature, as explained on History of the separation axioms; for example, the meanings of "normal" and "T4" are sometimes interchanged, similarly "regular" and "T3", etc. Many of the concepts also have several names; however, the one listed first is always least likely to be ambiguous. Most of these axioms have alternative definitions with the same meaning; the definitions given here fall into a consistent pattern that relates the various notions of separation defined in the previous section. Other possible definitions can be found in the individual articles. In all of the following definitions, X is again a topological space. X is T0, or Kolmogorov, if any two distinct points in X are topologically distinguishable. (It is a common theme among the separation axioms to have one version of an axiom that requires T0 and one version that doesn't.) X is T1, or accessible or Fréchet, if any two distinct points in X are separated. Thus, X is T1 if and only if it is both T0 and R0. (Though one may say such things as "T1 space", "Fréchet topology", and "suppose that the topological space X is Fréchet", one should avoid saying "Fréchet space" in this context, since there is another entirely different notion of Fréchet space in functional analysis.) X is Hausdorff, or T2 or separated, if any two distinct points in X are separated by neighbourhoods.
Thus, X is Hausdorff if and only if it is both T0 and R1. A Hausdorff space must also be T1. X is T2½, or Urysohn, if any two distinct points in X are separated by closed neighbourhoods. A T2½ space must also be Hausdorff. X is regular, or T3, if it is T0 and if, given any point x and closed set F in X such that x does not belong to F, they are separated by neighbourhoods. (In fact, in a regular space, any such x and F are also separated by closed neighbourhoods.) X is Tychonoff, or T3½, completely T3, or completely regular, if it is T0 and if, given any point x and closed set F in X such that x does not belong to F, they are separated by a continuous function. X is normal, or T4, if it is Hausdorff and if any two disjoint closed subsets of X are separated by neighbourhoods. (In fact, a space is normal if and only if any two disjoint closed sets can be separated by a continuous function; this is Urysohn's lemma.) X is completely normal, or T5 or completely T4, if it is T1 and if any two separated sets are separated by neighbourhoods. A completely normal space must also be normal. X is perfectly normal, or T6 or perfectly T4, if it is T1 and if any two disjoint closed sets are precisely separated by a continuous function. A perfectly normal Hausdorff space must also be completely normal Hausdorff. The Tietze extension theorem: In a normal space, every continuous real-valued function defined on a closed subspace can be extended to a continuous map defined on the whole space. == Countability axioms == An axiom of countability is a property of certain mathematical objects (usually in a category) that requires the existence of a countable set with certain properties, while without it such sets might not exist. Important countability axioms for topological spaces: sequential space: a set is open if every sequence convergent to a point in the set is eventually in the set; first-countable space: every point has a countable neighbourhood basis (local base); second-countable space: the topology has a countable base; separable space: there exists a countable dense subspace; Lindelöf space: every open cover has a countable subcover; σ-compact space: there exists a countable cover by compact spaces. Relations: Every first countable space is sequential. Every second-countable space is first-countable, separable, and Lindelöf. Every σ-compact space is Lindelöf. A metric space is first-countable. For metric spaces second-countability, separability, and the Lindelöf property are all equivalent. == Metric spaces == A metric space is an ordered pair ( M , d ) {\displaystyle (M,d)} where M {\displaystyle M} is a set and d {\displaystyle d} is a metric on M {\displaystyle M} , i.e., a function d : M × M → R {\displaystyle d\colon M\times M\rightarrow \mathbb {R} } such that for any x , y , z ∈ M {\displaystyle x,y,z\in M} , the following holds: d ( x , y ) ≥ 0 {\displaystyle d(x,y)\geq 0} (non-negativity), d ( x , y ) = 0 {\displaystyle d(x,y)=0\,} iff x = y {\displaystyle x=y\,} (identity of indiscernibles), d ( x , y ) = d ( y , x ) {\displaystyle d(x,y)=d(y,x)\,} (symmetry) and d ( x , z ) ≤ d ( x , y ) + d ( y , z ) {\displaystyle d(x,z)\leq d(x,y)+d(y,z)} (triangle inequality). The function d {\displaystyle d} is also called distance function or simply distance. Often, d {\displaystyle d} is omitted and one just writes M {\displaystyle M} for a metric space if it is clear from the context what metric is used. Every metric space is paracompact and Hausdorff, and thus normal.
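The four axioms above can be spot-checked numerically for a candidate distance function. The sketch below (plain Python, an illustrative assumption rather than part of the article) tests the Manhattan distance on a few sample points of the plane; a finite check is of course only evidence, not a proof:

    from itertools import product

    def d(p, q):
        # Manhattan (taxicab) distance on R^2, a standard metric.
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    pts = [(0, 0), (1, 2), (-3, 1), (2, -2)]
    for x, y, z in product(pts, repeat=3):
        assert d(x, y) >= 0                  # non-negativity
        assert (d(x, y) == 0) == (x == y)    # identity of indiscernibles
        assert d(x, y) == d(y, x)            # symmetry
        assert d(x, z) <= d(x, y) + d(y, z)  # triangle inequality
    print("all four metric axioms hold on the sample points")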
The metrization theorems provide necessary and sufficient conditions for a topology to come from a metric. == Baire category theorem == The Baire category theorem says: If X is a complete metric space or a locally compact Hausdorff space, then the interior of every union of countably many nowhere dense sets is empty. Any open subspace of a Baire space is itself a Baire space. == Main areas of research == === Continuum theory === A continuum (plural: continua) is a nonempty compact connected metric space, or less frequently, a compact connected Hausdorff space. Continuum theory is the branch of topology devoted to the study of continua. These objects arise frequently in nearly all areas of topology and analysis, and their properties are strong enough to yield many 'geometric' features. === Dynamical systems === Topological dynamics concerns the behavior of a space and its subspaces over time when subjected to continuous change. Examples with applications to physics and other areas of mathematics include fluid dynamics, billiards and flows on manifolds. The topological characteristics of fractals in fractal geometry, of Julia sets and the Mandelbrot set arising in complex dynamics, and of attractors in differential equations are often critical to understanding these systems. === Pointless topology === Pointless topology (also called point-free or pointfree topology) is an approach to topology that avoids mentioning points. The name 'pointless topology' is due to John von Neumann. The ideas of pointless topology are closely related to mereotopology, in which regions (sets) are treated as foundational without explicit reference to underlying point sets. === Dimension theory === Dimension theory is a branch of general topology dealing with dimensional invariants of topological spaces. === Topological algebras === A topological algebra A over a topological field K is a topological vector space together with a continuous multiplication ⋅ : A × A ⟶ A {\displaystyle \cdot :A\times A\longrightarrow A} ( a , b ) ⟼ a ⋅ b {\displaystyle (a,b)\longmapsto a\cdot b} that makes it an algebra over K. A unital associative topological algebra is a topological ring. The term was coined by David van Dantzig; it appears in the title of his doctoral dissertation (1931). === Metrizability theory === In topology and related areas of mathematics, a metrizable space is a topological space that is homeomorphic to a metric space. That is, a topological space ( X , τ ) {\displaystyle (X,\tau )} is said to be metrizable if there is a metric d : X × X → [ 0 , ∞ ) {\displaystyle d\colon X\times X\to [0,\infty )} such that the topology induced by d is τ {\displaystyle \tau } . Metrization theorems are theorems that give sufficient conditions for a topological space to be metrizable. === Set-theoretic topology === Set-theoretic topology is a subject that combines set theory and general topology. It focuses on topological questions that are independent of Zermelo–Fraenkel set theory (ZFC). A famous problem is the normal Moore space question, a question in general topology that was the subject of intense research. The answer to the normal Moore space question was eventually proved to be independent of ZFC.
== See also == List of examples in general topology Glossary of general topology for detailed definitions List of general topology topics for related articles Category of topological spaces == References == == Further reading == Some standard books on general topology include: Bourbaki, Topologie Générale (General Topology), ISBN 0-387-19374-X. Kelley, John L. (1975) [1955]. General Topology. Graduate Texts in Mathematics. Vol. 27 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-90125-1. OCLC 1365153. Stephen Willard, General Topology, ISBN 0-486-43479-6. James Munkres, Topology, ISBN 0-13-181629-2. George F. Simmons, Introduction to Topology and Modern Analysis, ISBN 1-575-24238-9. Paul L. Shick, Topology: Point-Set and Geometric, ISBN 0-470-09605-5. Ryszard Engelking, General Topology, ISBN 3-88538-006-4. Steen, Lynn Arthur; Seebach, J. Arthur Jr. (1995) [1978], Counterexamples in Topology (Dover reprint of 1978 ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-486-68735-3, MR 0507446 O.Ya. Viro, O.A. Ivanov, V.M. Kharlamov and N.Yu. Netsvetaev, Elementary Topology: Textbook in Problems, ISBN 978-0-8218-4506-6. The arXiv subject code is math.GN. == External links == Media related to General topology at Wikimedia Commons
Wikipedia/Point-set_topology
In classical mechanics, the parameters that define the configuration of a system are called generalized coordinates, and the space defined by these coordinates is called the configuration space of the physical system. It is often the case that these parameters satisfy mathematical constraints, such that the set of actual configurations of the system is a manifold in the space of generalized coordinates. This manifold is called the configuration manifold of the system. Notice that this is a notion of "unrestricted" configuration space, i.e. in which different point particles may occupy the same position. In mathematics, in particular in topology, a notion of "restricted" configuration space is mostly used, in which the diagonals, representing "colliding" particles, are removed. == Examples == === A particle in 3D space === The position of a single particle moving in ordinary Euclidean 3-space is defined by the vector q = ( x , y , z ) {\displaystyle q=(x,y,z)} , and therefore its configuration space is Q = R 3 {\displaystyle Q=\mathbb {R} ^{3}} . It is conventional to use the symbol q {\displaystyle q} for a point in configuration space; this is the convention in both the Hamiltonian formulation of classical mechanics, and in Lagrangian mechanics. The symbol p {\displaystyle p} is used to denote momenta; the symbol q ˙ = d q / d t {\displaystyle {\dot {q}}=dq/dt} refers to velocities. A particle might be constrained to move on a specific manifold. For example, if the particle is attached to a rigid linkage, free to swing about the origin, it is effectively constrained to lie on a sphere. Its configuration space is the subset of coordinates in R 3 {\displaystyle \mathbb {R} ^{3}} that define points on the sphere S 2 {\displaystyle S^{2}} . In this case, one says that the manifold Q {\displaystyle Q} is the sphere, i.e. Q = S 2 {\displaystyle Q=S^{2}} . For n disconnected, non-interacting point particles, the configuration space is R 3 n {\displaystyle \mathbb {R} ^{3n}} . In general, however, one is interested in the case where the particles interact: for example, they are specific locations in some assembly of gears, pulleys, rolling balls, etc. often constrained to move without slipping. In this case, the configuration space is not all of R 3 n {\displaystyle \mathbb {R} ^{3n}} , but the subspace (submanifold) of allowable positions that the points can take. === Rigid body in 3D space === The set of coordinates that define the position of a reference point and the orientation of a coordinate frame attached to a rigid body in three-dimensional space form its configuration space, often denoted R 3 × S O ( 3 ) {\displaystyle \mathbb {R} ^{3}\times \mathrm {SO} (3)} where R 3 {\displaystyle \mathbb {R} ^{3}} represents the coordinates of the origin of the frame attached to the body, and S O ( 3 ) {\displaystyle \mathrm {SO} (3)} represents the rotation matrices that define the orientation of this frame relative to a ground frame. A configuration of the rigid body is defined by six parameters, three from R 3 {\displaystyle \mathbb {R} ^{3}} and three from S O ( 3 ) {\displaystyle \mathrm {SO} (3)} , and is said to have six degrees of freedom. In this case, the configuration space Q = R 3 × S O ( 3 ) {\displaystyle Q=\mathbb {R} ^{3}\times \mathrm {SO} (3)} is six-dimensional, and a point q ∈ Q {\displaystyle q\in Q} is just a point in that space. 
The "location" of q {\displaystyle q} in that configuration space is described using generalized coordinates; thus, three of the coordinates might describe the location of the center of mass of the rigid body, while three more might be the Euler angles describing its orientation. There is no canonical choice of coordinates; one could also choose some tip or endpoint of the rigid body, instead of its center of mass; one might choose to use quaternions instead of Euler angles, and so on. However, the parameterization does not change the mechanical characteristics of the system; all of the different parameterizations ultimately describe the same (six-dimensional) manifold, the same set of possible positions and orientations. Some parameterizations are easier to work with than others, and many important statements can be made by working in a coordinate-free fashion. Examples of coordinate-free statements are that the tangent space T Q {\displaystyle TQ} corresponds to the velocities of the points q ∈ Q {\displaystyle q\in Q} , while the cotangent space T ∗ Q {\displaystyle T^{*}Q} corresponds to momenta. (Velocities and momenta can be connected; for the most general, abstract case, this is done with the rather abstract notion of the tautological one-form.) === Robotic arm === For a robotic arm consisting of numerous rigid linkages, the configuration space consists of the location of each linkage (taken to be a rigid body, as in the section above), subject to the constraints of how the linkages are attached to each other, and their allowed range of motion. Thus, for n {\displaystyle n} linkages, one might consider the total space [ R 3 × S O ( 3 ) ] n {\displaystyle \left[\mathbb {R} ^{3}\times \mathrm {SO} (3)\right]^{n}} except that all of the various attachments and constraints mean that not every point in this space is reachable. Thus, the configuration space Q {\displaystyle Q} is necessarily a subspace of the n {\displaystyle n} -rigid-body configuration space. Note, however, that in robotics, the term configuration space can also refer to a further-reduced subset: the set of reachable positions by a robot's end-effector. This definition, however, leads to complexities described by the holonomy: that is, there may be several different ways of arranging a robot arm to obtain a particular end-effector location, and it is even possible to have the robot arm move while keeping the end effector stationary. Thus, a complete description of the arm, suitable for use in kinematics, requires the specification of all of the joint positions and angles, and not just some of them. The joint parameters of the robot are used as generalized coordinates to define configurations. The set of joint parameter values is called the joint space. A robot's forward and inverse kinematics equations define maps between configurations and end-effector positions, or between joint space and configuration space. Robot motion planning uses this mapping to find a path in joint space that provides an achievable route in the configuration space of the end-effector. == Formal definition == In classical mechanics, the configuration of a system refers to the position of all constituent point particles of the system. == Phase space == The configuration space is insufficient to completely describe a mechanical system: it fails to take into account velocities. The set of velocities available to a system defines a plane tangent to the configuration manifold of the system. 
At a point q ∈ Q {\displaystyle q\in Q} , that tangent plane is denoted by T q Q {\displaystyle T_{q}Q} . Momentum vectors are linear functionals of the tangent plane, known as cotangent vectors; for a point q ∈ Q {\displaystyle q\in Q} , that cotangent plane is denoted by T q ∗ Q {\displaystyle T_{q}^{*}Q} . The set of positions and momenta of a mechanical system forms the cotangent bundle T ∗ Q {\displaystyle T^{*}Q} of the configuration manifold Q {\displaystyle Q} . This larger manifold is called the phase space of the system. == Quantum state space == In quantum mechanics, configuration space can be used (see for example the Mott problem), but the classical mechanics extension to phase space cannot. Instead, a rather different set of formalisms and notation are used in the analogous concept called quantum state space. The analog of a "point particle" becomes a single point in C P 1 {\displaystyle \mathbb {C} \mathbf {P} ^{1}} , the complex projective line, also known as the Bloch sphere. It is complex, because a quantum-mechanical wave function has a complex phase; it is projective because the wave-function is normalized to unit probability. That is, given a wave-function ψ {\displaystyle \psi } one is free to normalize it by the total probability ∫ ψ ∗ ψ {\textstyle \int \psi ^{*}\psi } , thus making it projective. == See also == Feature space (topic in pattern recognition) Parameter space Configuration space (mathematics) == References == == External links == Intuitive Explanation of Classical Configuration Spaces. Interactive Visualization of the C-space for a Robot Arm with Two Rotational Links from UC Berkeley. Configuration Space Visualization from Free University of Berlin Configuration Spaces, Braids, and Robotics from Robert Ghrist
Wikipedia/Configuration_space_(physics)
In mathematics, a topological space is, roughly speaking, a geometrical space in which closeness is defined but cannot necessarily be measured by a numeric distance. More specifically, a topological space is a set whose elements are called points, along with an additional structure called a topology, which can be defined as a set of neighbourhoods for each point that satisfy some axioms formalizing the concept of closeness. There are several equivalent definitions of a topology, the most commonly used of which is the definition through open sets, which is easier than the others to manipulate. A topological space is the most general type of a mathematical space that allows for the definition of limits, continuity, and connectedness. Common types of topological spaces include Euclidean spaces, metric spaces and manifolds. Although very general, the concept of topological spaces is fundamental, and used in virtually every branch of modern mathematics. The study of topological spaces in their own right is called general topology (or point-set topology). == History == Around 1735, Leonhard Euler discovered the formula V − E + F = 2 {\displaystyle V-E+F=2} relating the number of vertices (V), edges (E) and faces (F) of a convex polyhedron, and hence of a planar graph. The study and generalization of this formula, specifically by Cauchy (1789–1857) and L'Huilier (1750–1840), boosted the study of topology. In 1827, Carl Friedrich Gauss published General investigations of curved surfaces, which in section 3 defines the curved surface in a similar manner to the modern topological understanding: "A curved surface is said to possess continuous curvature at one of its points A, if the direction of all the straight lines drawn from A to points of the surface at an infinitesimal distance from A are deflected infinitesimally from one and the same plane passing through A." Yet, "until Riemann's work in the early 1850s, surfaces were always dealt with from a local point of view (as parametric surfaces) and topological issues were never considered". "Möbius and Jordan seem to be the first to realize that the main problem about the topology of (compact) surfaces is to find invariants (preferably numerical) to decide the equivalence of surfaces, that is, to decide whether two surfaces are homeomorphic or not." The subject is clearly defined by Felix Klein in his "Erlangen Program" (1872): the study of the invariants of arbitrary continuous transformation, a kind of geometry. The term "topology" was introduced by Johann Benedict Listing in 1847, although he had used the term in correspondence some years earlier instead of the previously used "Analysis situs". The foundation of this science, for a space of any dimension, was created by Henri Poincaré. His first article on this topic appeared in 1894. In the 1930s, James Waddell Alexander II and Hassler Whitney first expressed the idea that a surface is a topological space that is locally like a Euclidean plane. Topological spaces were first defined by Felix Hausdorff in 1914 in his seminal "Principles of Set Theory". Metric spaces had been defined earlier in 1906 by Maurice Fréchet, though it was Hausdorff who popularised the term "metric space" (German: metrischer Raum). == Definitions == The utility of the concept of a topology is shown by the fact that there are several equivalent definitions of this mathematical structure. Thus one chooses the axiomatization suited for the application.
The most commonly used is that in terms of open sets, but perhaps more intuitive is that in terms of neighbourhoods and so this is given first. === Definition via neighbourhoods === This axiomatization is due to Felix Hausdorff. Let X {\displaystyle X} be a (possibly empty) set. The elements of X {\displaystyle X} are usually called points, though they can be any mathematical object. Let N {\displaystyle {\mathcal {N}}} be a function assigning to each x {\displaystyle x} (point) in X {\displaystyle X} a non-empty collection N ( x ) {\displaystyle {\mathcal {N}}(x)} of subsets of X . {\displaystyle X.} The elements of N ( x ) {\displaystyle {\mathcal {N}}(x)} will be called neighbourhoods of x {\displaystyle x} with respect to N {\displaystyle {\mathcal {N}}} (or, simply, neighbourhoods of x {\displaystyle x} ). The function N {\displaystyle {\mathcal {N}}} is called a neighbourhood topology if the axioms below are satisfied; and then X {\displaystyle X} with N {\displaystyle {\mathcal {N}}} is called a topological space. If N {\displaystyle N} is a neighbourhood of x {\displaystyle x} (i.e., N ∈ N ( x ) {\displaystyle N\in {\mathcal {N}}(x)} ), then x ∈ N . {\displaystyle x\in N.} In other words, each point of the set X {\displaystyle X} belongs to every one of its neighbourhoods with respect to N {\displaystyle {\mathcal {N}}} . If N {\displaystyle N} is a subset of X {\displaystyle X} and includes a neighbourhood of x , {\displaystyle x,} then N {\displaystyle N} is a neighbourhood of x . {\displaystyle x.} I.e., every superset of a neighbourhood of a point x ∈ X {\displaystyle x\in X} is again a neighbourhood of x . {\displaystyle x.} The intersection of two neighbourhoods of x {\displaystyle x} is a neighbourhood of x . {\displaystyle x.} Any neighbourhood N {\displaystyle N} of x {\displaystyle x} includes a neighbourhood M {\displaystyle M} of x {\displaystyle x} such that N {\displaystyle N} is a neighbourhood of each point of M . {\displaystyle M.} The first three axioms for neighbourhoods have a clear meaning. The fourth axiom has a very important use in the structure of the theory, that of linking together the neighbourhoods of different points of X . {\displaystyle X.} A standard example of such a system of neighbourhoods is for the real line R , {\displaystyle \mathbb {R} ,} where a subset N {\displaystyle N} of R {\displaystyle \mathbb {R} } is defined to be a neighbourhood of a real number x {\displaystyle x} if it includes an open interval containing x . {\displaystyle x.} Given such a structure, a subset U {\displaystyle U} of X {\displaystyle X} is defined to be open if U {\displaystyle U} is a neighbourhood of all points in U . {\displaystyle U.} The open sets then satisfy the axioms given below in the next definition of a topological space. Conversely, when given the open sets of a topological space, the neighbourhoods satisfying the above axioms can be recovered by defining N {\displaystyle N} to be a neighbourhood of x {\displaystyle x} if N {\displaystyle N} includes an open set U {\displaystyle U} such that x ∈ U . {\displaystyle x\in U.} === Definition via open sets === A topology on a set X may be defined as a collection τ {\displaystyle \tau } of subsets of X, called open sets and satisfying the following axioms: The empty set and X {\displaystyle X} itself belong to τ . {\displaystyle \tau .} Any arbitrary (finite or infinite) union of members of τ {\displaystyle \tau } belongs to τ . 
{\displaystyle \tau .}
3. The intersection of any finite number of members of τ {\displaystyle \tau } belongs to τ . {\displaystyle \tau .}
As this definition of a topology is the most commonly used, the set τ {\displaystyle \tau } of the open sets is commonly called a topology on X . {\displaystyle X.} A subset C ⊆ X {\displaystyle C\subseteq X} is said to be closed in ( X , τ ) {\displaystyle (X,\tau )} if its complement X ∖ C {\displaystyle X\setminus C} is an open set. ==== Examples of topologies ==== Given X = { 1 , 2 , 3 , 4 } , {\displaystyle X=\{1,2,3,4\},} the trivial or indiscrete topology on X {\displaystyle X} is the family τ = { { } , { 1 , 2 , 3 , 4 } } = { ∅ , X } {\displaystyle \tau =\{\{\},\{1,2,3,4\}\}=\{\varnothing ,X\}} consisting of only the two subsets of X {\displaystyle X} required by the axioms. Given X = { 1 , 2 , 3 , 4 } , {\displaystyle X=\{1,2,3,4\},} the family τ = { ∅ , { 2 } , { 1 , 2 } , { 2 , 3 } , { 1 , 2 , 3 } , X } {\displaystyle \tau =\{\varnothing ,\{2\},\{1,2\},\{2,3\},\{1,2,3\},X\}} of six subsets of X {\displaystyle X} forms another topology on X . {\displaystyle X.} Given X = { 1 , 2 , 3 , 4 } , {\displaystyle X=\{1,2,3,4\},} the discrete topology on X {\displaystyle X} is the power set of X , {\displaystyle X,} which is the family τ = ℘ ( X ) {\displaystyle \tau =\wp (X)} consisting of all possible subsets of X . {\displaystyle X.} In this case the topological space ( X , τ ) {\displaystyle (X,\tau )} is called a discrete space. Given X = Z , {\displaystyle X=\mathbb {Z} ,} the set of integers, the family τ {\displaystyle \tau } of all finite subsets of the integers plus Z {\displaystyle \mathbb {Z} } itself is not a topology: the union of all finite sets not containing zero is not finite, so it is not a member of the family of finite sets, and it is not all of Z , {\displaystyle \mathbb {Z} ,} either, so it cannot be in τ . {\displaystyle \tau .} === Definition via closed sets === Using de Morgan's laws, the above axioms defining open sets become axioms defining closed sets:
1. The empty set and X {\displaystyle X} are closed.
2. The intersection of any collection of closed sets is also closed.
3. The union of any finite number of closed sets is also closed.
Using these axioms, another way to define a topological space is as a set X {\displaystyle X} together with a collection τ {\displaystyle \tau } of closed subsets of X . {\displaystyle X.} Thus the sets in the topology τ {\displaystyle \tau } are the closed sets, and their complements in X {\displaystyle X} are the open sets. === Other definitions === There are many other equivalent ways to define a topological space: in other words, the concepts of neighbourhood and of open and closed sets can be reconstructed from other starting points that satisfy the correct axioms. Another way to define a topological space is by using the Kuratowski closure axioms, which define the closed sets as the fixed points of an operator on the power set of X . {\displaystyle X.} A net is a generalisation of the concept of sequence. A topology is completely determined if for every net in X {\displaystyle X} the set of its accumulation points is specified. == Comparison of topologies == Many topologies can be defined on a set to form a topological space.
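For a finite set, the open-set axioms can be checked exhaustively. The following Python sketch (an illustration added here, not part of the standard presentation) verifies them for the three four-point examples above; for a finite family, closure under pairwise unions and intersections implies closure under all finite ones by induction, and every union of members of a finite family is a finite union.

```python
from itertools import combinations

def is_topology(X, family):
    """Check the three open-set axioms for a candidate topology on X."""
    X = frozenset(X)
    tau = {frozenset(U) for U in family}
    if frozenset() not in tau or X not in tau:   # axiom 1: empty set and X itself
        return False
    for U, V in combinations(tau, 2):
        if U | V not in tau:                     # axiom 2: unions (pairwise suffices here)
            return False
        if U & V not in tau:                     # axiom 3: finite intersections
            return False
    return True

X = {1, 2, 3, 4}
indiscrete = [set(), X]
tau = [set(), {2}, {1, 2}, {2, 3}, {1, 2, 3}, X]
discrete = [set(c) for n in range(5) for c in combinations(X, n)]

for name, family in [("indiscrete", indiscrete), ("tau", tau), ("discrete", discrete)]:
    print(name, is_topology(X, family))          # prints True for all three
```

The three families are nested by inclusion (indiscrete ⊆ τ ⊆ discrete), which leads directly to the idea of comparing topologies.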
When every open set of a topology τ 1 {\displaystyle \tau _{1}} is also open for a topology τ 2 , {\displaystyle \tau _{2},} one says that τ 2 {\displaystyle \tau _{2}} is finer than τ 1 , {\displaystyle \tau _{1},} and τ 1 {\displaystyle \tau _{1}} is coarser than τ 2 . {\displaystyle \tau _{2}.} A proof that relies only on the existence of certain open sets will also hold for any finer topology, and similarly a proof that relies only on certain sets not being open applies to any coarser topology. The terms larger and smaller are sometimes used in place of finer and coarser, respectively. The terms stronger and weaker are also used in the literature, but with little agreement on the meaning, so one should always be sure of an author's convention when reading. The collection of all topologies on a given fixed set X {\displaystyle X} forms a complete lattice: if F = { τ α : α ∈ A } {\displaystyle F=\left\{\tau _{\alpha }:\alpha \in A\right\}} is a collection of topologies on X , {\displaystyle X,} then the meet of F {\displaystyle F} is the intersection of F , {\displaystyle F,} and the join of F {\displaystyle F} is the meet of the collection of all topologies on X {\displaystyle X} that contain every member of F . {\displaystyle F.} == Continuous functions == A function f : X → Y {\displaystyle f:X\to Y} between topological spaces is called continuous if for every x ∈ X {\displaystyle x\in X} and every neighbourhood N {\displaystyle N} of f ( x ) {\displaystyle f(x)} there is a neighbourhood M {\displaystyle M} of x {\displaystyle x} such that f ( M ) ⊆ N . {\displaystyle f(M)\subseteq N.} This relates easily to the usual definition in analysis. Equivalently, f {\displaystyle f} is continuous if the inverse image of every open set is open. This is an attempt to capture the intuition that there are no "jumps" or "separations" in the function. A homeomorphism is a bijection that is continuous and whose inverse is also continuous. Two spaces are called homeomorphic if there exists a homeomorphism between them. From the standpoint of topology, homeomorphic spaces are essentially identical. In category theory, one of the fundamental categories is Top, the category whose objects are topological spaces and whose morphisms are continuous functions. The attempt to classify the objects of this category (up to homeomorphism) by invariants has motivated areas of research, such as homotopy theory, homology theory, and K-theory. == Examples of topological spaces == A given set may have many different topologies. If a set is given a different topology, it is viewed as a different topological space. Any set can be given the discrete topology, in which every subset is open. The only convergent sequences or nets in this topology are those that are eventually constant. Also, any set can be given the trivial topology (also called the indiscrete topology), in which only the empty set and the whole space are open. Every sequence and net in this topology converges to every point of the space. This example shows that in general topological spaces, limits of sequences need not be unique. However, topological spaces are often required to be Hausdorff spaces, in which limits of sequences and nets are unique. There exist numerous topologies on any given finite set. Such spaces are called finite topological spaces. Finite spaces are sometimes used to provide examples or counterexamples to conjectures about topological spaces in general.
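Finite spaces also make the inverse-image criterion for continuity stated above directly testable. The sketch below (again purely illustrative) checks two maps from the four-point space with the six-set topology given earlier into a two-point space with one nontrivial open set.

```python
def preimage(f, X, V):
    """Inverse image of V under f, where f is given as a dict on X."""
    return frozenset(x for x in X if f[x] in V)

def is_continuous(f, X, tau_X, Y, tau_Y):
    """f : (X, tau_X) -> (Y, tau_Y) is continuous iff the preimage of
    every open set of Y is open in X."""
    opens_X = {frozenset(U) for U in tau_X}
    return all(preimage(f, X, V) in opens_X for V in tau_Y)

X = {1, 2, 3, 4}
tau = [set(), {2}, {1, 2}, {2, 3}, {1, 2, 3}, X]   # topology from the examples above
Y = {"a", "b"}
tau_Y = [set(), {"a"}, Y]                          # two-point space, {"a"} open

f = {1: "a", 2: "a", 3: "b", 4: "b"}   # preimage of {"a"} is {1, 2}: open
g = {1: "a", 2: "b", 3: "b", 4: "b"}   # preimage of {"a"} is {1}: not open

print(is_continuous(f, X, tau, Y, tau_Y))   # True
print(is_continuous(g, X, tau, Y, tau_Y))   # False
```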
Any set can be given the cofinite topology in which the open sets are the empty set and the sets whose complement is finite. This is the smallest T1 topology on any infinite set. Any set can be given the cocountable topology, in which a set is defined as open if it is either empty or its complement is countable. When the set is uncountable, this topology serves as a counterexample in many situations. The real line can also be given the lower limit topology. Here, the basic open sets are the half open intervals [ a , b ) . {\displaystyle [a,b).} This topology on R {\displaystyle \mathbb {R} } is strictly finer than the Euclidean topology; a sequence converges to a point in this topology if and only if it converges from above in the Euclidean topology. This example shows that a set may have many distinct topologies defined on it. If γ {\displaystyle \gamma } is an ordinal number, then the set γ = [ 0 , γ ) {\displaystyle \gamma =[0,\gamma )} may be endowed with the order topology generated by the intervals ( α , β ) , {\displaystyle (\alpha ,\beta ),} [ 0 , β ) , {\displaystyle [0,\beta ),} and ( α , γ ) {\displaystyle (\alpha ,\gamma )} where α {\displaystyle \alpha } and β {\displaystyle \beta } are elements of γ . {\displaystyle \gamma .} Every manifold has a natural topology since it is locally Euclidean. Similarly, every simplex and every simplicial complex inherits a natural topology from the Euclidean space in which it is embedded. The Sierpiński space is the simplest non-discrete topological space. It has important relations to the theory of computation and semantics. === Topology from other topologies === Every subset of a topological space can be given the subspace topology in which the open sets are the intersections of the open sets of the larger space with the subset. For any indexed family of topological spaces, the product can be given the product topology, which is generated by the inverse images of open sets of the factors under the projection mappings. For example, in finite products, a basis for the product topology consists of all products of open sets. For infinite products, there is the additional requirement that in a basic open set, all but finitely many of its projections are the entire space. This construction is a special case of an initial topology. A quotient space is defined as follows: if X {\displaystyle X} is a topological space and Y {\displaystyle Y} is a set, and if f : X → Y {\displaystyle f:X\to Y} is a surjective function, then the quotient topology on Y {\displaystyle Y} is the collection of subsets of Y {\displaystyle Y} that have open inverse images under f . {\displaystyle f.} In other words, the quotient topology is the finest topology on Y {\displaystyle Y} for which f {\displaystyle f} is continuous. A common example of a quotient topology is when an equivalence relation is defined on the topological space X . {\displaystyle X.} The map f {\displaystyle f} is then the natural projection onto the set of equivalence classes. This construction is a special case of a final topology. The Vietoris topology on the set of all non-empty subsets of a topological space X , {\displaystyle X,} named for Leopold Vietoris, is generated by the following basis: for every n {\displaystyle n} -tuple U 1 , … , U n {\displaystyle U_{1},\ldots ,U_{n}} of open sets in X , {\displaystyle X,} we construct a basis set consisting of all subsets of the union of the U i {\displaystyle U_{i}} that have non-empty intersections with each U i .
{\displaystyle U_{i}.} The Fell topology on the set of all non-empty closed subsets of a locally compact Polish space X {\displaystyle X} is a variant of the Vietoris topology, and is named after mathematician James Fell. It is generated by the following basis: for every n {\displaystyle n} -tuple U 1 , … , U n {\displaystyle U_{1},\ldots ,U_{n}} of open sets in X {\displaystyle X} and for every compact set K , {\displaystyle K,} the set of all subsets of X {\displaystyle X} that are disjoint from K {\displaystyle K} and have nonempty intersections with each U i {\displaystyle U_{i}} is a member of the basis. === Metric spaces === Metric spaces embody a metric, a precise notion of distance between points. Every metric space can be given a metric topology, in which the basic open sets are open balls defined by the metric. This is the standard topology on any normed vector space. On a finite-dimensional vector space this topology is the same for all norms. There are many ways of defining a topology on R , {\displaystyle \mathbb {R} ,} the set of real numbers. The standard topology on R {\displaystyle \mathbb {R} } is generated by the open intervals. The set of all open intervals forms a base or basis for the topology, meaning that every open set is a union of some collection of sets from the base. In particular, this means that a set is open if there exists an open interval of nonzero radius about every point in the set. More generally, the Euclidean spaces R n {\displaystyle \mathbb {R} ^{n}} can be given a topology. In the usual topology on R n {\displaystyle \mathbb {R} ^{n}} the basic open sets are the open balls. Similarly, C , {\displaystyle \mathbb {C} ,} the set of complex numbers, and C n {\displaystyle \mathbb {C} ^{n}} have a standard topology in which the basic open sets are open balls. === Topology from algebraic structure === For any algebraic object we can introduce the discrete topology, under which the algebraic operations are continuous functions. For any such structure that is not finite, we often have a natural topology compatible with the algebraic operations, in the sense that the algebraic operations are still continuous. This leads to concepts such as topological groups, topological rings, topological fields and topological vector spaces over the latter. Local fields are topological fields important in number theory. The Zariski topology is defined algebraically on the spectrum of a ring or an algebraic variety. On R n {\displaystyle \mathbb {R} ^{n}} or C n , {\displaystyle \mathbb {C} ^{n},} the closed sets of the Zariski topology are the solution sets of systems of polynomial equations. === Topological spaces with order structure === Spectral: A space is spectral if and only if it is the prime spectrum of a ring (Hochster theorem). Specialization preorder: In a space the specialization preorder (or canonical preorder) is defined by x ≤ y {\displaystyle x\leq y} if and only if cl ⁡ { x } ⊆ cl ⁡ { y } , {\displaystyle \operatorname {cl} \{x\}\subseteq \operatorname {cl} \{y\},} where cl {\displaystyle \operatorname {cl} } denotes an operator satisfying the Kuratowski closure axioms. === Topology from other structure === If Γ {\displaystyle \Gamma } is a filter on a set X {\displaystyle X} then { ∅ } ∪ Γ {\displaystyle \{\varnothing \}\cup \Gamma } is a topology on X .
{\displaystyle X.} Many sets of linear operators in functional analysis are endowed with topologies that are defined by specifying when a particular sequence of functions converges to the zero function. A linear graph has a natural topology that generalizes many of the geometric aspects of graphs with vertices and edges. Outer space of a free group F n {\displaystyle F_{n}} consists of the so-called "marked metric graph structures" of volume 1 on F n . {\displaystyle F_{n}.} == Classification of topological spaces == Topological spaces can be broadly classified, up to homeomorphism, by their topological properties. A topological property is a property of spaces that is invariant under homeomorphisms. To prove that two spaces are not homeomorphic it is sufficient to find a topological property not shared by them. Examples of such properties include connectedness, compactness, and various separation axioms. For algebraic invariants see algebraic topology. == See also == Complete Heyting algebra – The system of all open sets of a given topological space ordered by inclusion is a complete Heyting algebra. Compact space – Type of mathematical space Convergence space – Generalization of the notion of convergence that is found in general topology Exterior space Hausdorff space – Type of topological space Hilbert space – Type of vector space in math Hemicontinuity – Semicontinuity for set-valued functions Linear subspace – In mathematics, vector subspace Pointless topology Quasitopological space – Function in topology Relatively compact subspace – Subset of a topological space whose closure is compact Space (mathematics) – Mathematical set with some added structure == Citations == == Bibliography == Armstrong, M. A. (1983) [1979]. Basic Topology. Undergraduate Texts in Mathematics. Springer. ISBN 0-387-90839-0. Bredon, Glen E., Topology and Geometry (Graduate Texts in Mathematics), Springer; 1st edition (October 17, 1997). ISBN 0-387-97926-3. Bourbaki, Nicolas; Elements of Mathematics: General Topology, Addison-Wesley (1966). Brown, Ronald (2006). Topology and Groupoids. Booksurge. ISBN 1-4196-2722-8. (3rd edition of differently titled books) Čech, Eduard; Point Sets, Academic Press (1969). Fulton, William, Algebraic Topology, (Graduate Texts in Mathematics), Springer; 1st edition (September 5, 1997). ISBN 0-387-94327-7. Gallier, Jean; Xu, Dianna (2013). A Guide to the Classification Theorem for Compact Surfaces. Springer. Gauss, Carl Friedrich (1827). General investigations of curved surfaces. Lipschutz, Seymour; Schaum's Outline of General Topology, McGraw-Hill; 1st edition (June 1, 1968). ISBN 0-07-037988-2. Munkres, James; Topology, Prentice Hall; 2nd edition (December 28, 1999). ISBN 0-13-181629-2. Runde, Volker; A Taste of Topology (Universitext), Springer; 1st edition (July 6, 2005). ISBN 0-387-25790-X. Schubert, Horst (1968), Topology, Macdonald Technical & Scientific, ISBN 0-356-02077-0 Steen, Lynn A. and Seebach, J. Arthur Jr.; Counterexamples in Topology, Holt, Rinehart and Winston (1970). ISBN 0-03-079485-4. Vaidyanathaswamy, R. (1999). Set Topology. Chelsea Publishing Co. ISBN 0486404560. Willard, Stephen (2004). General Topology. Dover Publications. ISBN 0-486-43479-6. == External links == "Topological space", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Geometry & Topology is a peer-refereed, international mathematics research journal devoted to geometry and topology, and their applications. It is currently based at the University of Warwick, United Kingdom, and published by Mathematical Sciences Publishers, a nonprofit academic publishing organisation. It was founded in 1997 by a group of topologists who were dissatisfied with recent substantial rises in subscription prices of journals published by major publishing corporations. The aim was to set up a high-quality journal, capable of competing with existing journals, but with substantially lower subscription fees. The journal was open-access for its first ten years of existence and was available free to individual users, although institutions were required to pay modest subscription fees for both online access and for printed volumes. At present, an online subscription is required to view full-text PDF copies of articles in the most recent three volumes; articles older than that are open-access, at which point copies of the published articles are uploaded to the arXiv. A traditional printed version is also published, at present on an annual basis. The journal has grown to be well respected in its field, and has in recent years published a number of important papers, in particular proofs of the Property P conjecture and the Birman conjecture. == References == Walter Neumann on the Success of Geometry & Topology, May 2010, Sciencewatch.com, Thomson Reuters == External links == Geometry & Topology MSP Open Access Policy
Mathematical Sciences Publishers is a nonprofit publishing company run by and for mathematicians. It publishes several journals and the book series Geometry & Topology Monographs. It is run from a central office in the Department of Mathematics at the University of California, Berkeley. == Journals owned and published == Algebra & Number Theory Algebraic & Geometric Topology Analysis & PDE Annals of K-Theory Communications in Applied Mathematics and Computational Science Geometry & Topology Innovations in Incidence Geometry—Algebraic, Topological and Combinatorial Involve: A Journal of Mathematics Journal of Algebraic Statistics Journal of Mechanics of Materials and Structures Journal of Software for Algebra and Geometry Mathematics and Mechanics of Complex Systems Moscow Journal of Combinatorics and Number Theory Pacific Journal of Mathematics Probability and Mathematical Physics Pure and Applied Analysis Tunisian Journal of Mathematics == Journals distributed == Annals of Mathematics == Online publications == Mathematical Sciences Publishers produces Celebratio Mathematica, a publicly supported online journal that honors mathematicians and their contributions. It provides a comprehensive collection of their works, including biographical information, bibliographic data, photographs, testimonials and commentary. The editorial board carefully selects mathematicians based on their scientific achievements and scholarly impact. == Book series == Open Book Series Geometry & Topology Monographs == References ==
Contact mechanics is the study of the deformation of solids that touch each other at one or more points. A central distinction in contact mechanics is between stresses acting perpendicular to the contacting bodies' surfaces (known as normal stress) and frictional stresses acting tangentially between the surfaces (shear stress). Normal contact mechanics or frictionless contact mechanics focuses on normal stresses caused by applied normal forces and by the adhesion present on surfaces in close contact, even if they are clean and dry. Frictional contact mechanics emphasizes the effect of friction forces. Contact mechanics is part of mechanical engineering. The physical and mathematical formulation of the subject is built upon the mechanics of materials and continuum mechanics and focuses on computations involving elastic, viscoelastic, and plastic bodies in static or dynamic contact. Contact mechanics provides necessary information for the safe and energy efficient design of technical systems and for the study of tribology, contact stiffness, electrical contact resistance and indentation hardness. Principles of contact mechanics are implemented towards applications such as locomotive wheel-rail contact, coupling devices, braking systems, tires, bearings, combustion engines, mechanical linkages, gasket seals, metalworking, metal forming, ultrasonic welding, electrical contacts, and many others. Current challenges faced in the field may include stress analysis of contact and coupling members and the influence of lubrication and material design on friction and wear. Applications of contact mechanics further extend into the micro- and nanotechnological realm. The original work in contact mechanics dates back to 1881 with the publication of the paper "On the contact of elastic solids" ("Über die Berührung fester elastischer Körper") by Heinrich Hertz. Hertz attempted to understand how the optical properties of multiple, stacked lenses might change with the force holding them together. Hertzian contact stress refers to the localized stresses that develop as two curved surfaces come in contact and deform slightly under the imposed loads. This amount of deformation is dependent on the modulus of elasticity of the material in contact. It gives the contact stress as a function of the normal contact force, the radii of curvature of both bodies and the modulus of elasticity of both bodies. Hertzian contact stress forms the foundation for the equations for load bearing capabilities and fatigue life in bearings, gears, and any other bodies where two surfaces are in contact. == History == Classical contact mechanics is most notably associated with Heinrich Hertz. In 1882, Hertz solved the contact problem of two elastic bodies with curved surfaces. This still-relevant classical solution provides a foundation for modern problems in contact mechanics. For example, in mechanical engineering and tribology, Hertzian contact stress is a description of the stress within mating parts. The Hertzian contact stress usually refers to the stress close to the area of contact between two spheres of different radii. It was not until nearly one hundred years later that Kenneth L. Johnson, Kevin Kendall, and Alan D. Roberts found a similar solution for the case of adhesive contact. This theory was rejected by Boris Derjaguin and co-workers who proposed a different theory of adhesion in the 1970s. The Derjaguin model came to be known as the Derjaguin–Muller–Toporov (DMT) model (after Derjaguin, M. V. Muller and Yu. P.
Toporov), and the Johnson et al. model came to be known as the Johnson–Kendall–Roberts (JKR) model for adhesive elastic contact. This rejection proved to be instrumental in the development of the David Tabor and later Daniel Maugis parameters that quantify which contact model (of the JKR and DMT models) represents adhesive contact better for specific materials. Further advancement in the field of contact mechanics in the mid-twentieth century may be attributed to names such as Frank Philip Bowden and Tabor. Bowden and Tabor were the first to emphasize the importance of surface roughness for bodies in contact. Through investigation of the surface roughness, the true contact area between friction partners is found to be less than the apparent contact area. Such understanding also drastically changed the direction of undertakings in tribology. The works of Bowden and Tabor yielded several theories in contact mechanics of rough surfaces. The contributions of J. F. Archard (1957) must also be mentioned in discussion of pioneering works in this field. Archard concluded that, even for rough elastic surfaces, the contact area is approximately proportional to the normal force. Further important insights along these lines were provided by James A. Greenwood and J. B. P. Williamson (1966), A. W. Bush (1975), and Bo N. J. Persson (2002). The main findings of these works were that the true contact surface in rough materials is generally proportional to the normal force, while the parameters of individual micro-contacts (pressure and size of the micro-contact) are only weakly dependent upon the load. == Classical solutions for non-adhesive elastic contact == The theory of contact between elastic bodies can be used to find contact areas and indentation depths for simple geometries. Some commonly used solutions are listed below. The theory used to compute these solutions is discussed later in the article. Solutions for a multitude of other technically relevant shapes, e.g. the truncated cone, the worn sphere, rough profiles, hollow cylinders, etc., can be found in the literature. === Contact between a sphere and a half-space === An elastic sphere of radius R {\displaystyle R} indents an elastic half-space where the total deformation is d {\displaystyle d} , causing a contact area of radius a = R d {\displaystyle a={\sqrt {Rd}}} The applied force F {\displaystyle F} is related to the displacement d {\displaystyle d} by F = 4 3 E ∗ R 1 2 d 3 2 {\displaystyle F={\frac {4}{3}}E^{*}R^{\frac {1}{2}}d^{\frac {3}{2}}} where 1 E ∗ = 1 − ν 1 2 E 1 + 1 − ν 2 2 E 2 {\displaystyle {\frac {1}{E^{*}}}={\frac {1-\nu _{1}^{2}}{E_{1}}}+{\frac {1-\nu _{2}^{2}}{E_{2}}}} and E 1 {\displaystyle E_{1}} , E 2 {\displaystyle E_{2}} are the elastic moduli and ν 1 {\displaystyle \nu _{1}} , ν 2 {\displaystyle \nu _{2}} the Poisson's ratios associated with each body.
The distribution of normal pressure in the contact area as a function of distance from the center of the circle is p ( r ) = p 0 ( 1 − r 2 a 2 ) 1 2 {\displaystyle p(r)=p_{0}\left(1-{\frac {r^{2}}{a^{2}}}\right)^{\frac {1}{2}}} where p 0 {\displaystyle p_{0}} is the maximum contact pressure given by p 0 = 3 F 2 π a 2 = 1 π ( 6 F E ∗ 2 R 2 ) 1 3 {\displaystyle p_{0}={\frac {3F}{2\pi a^{2}}}={\frac {1}{\pi }}\left({\frac {6F{E^{*}}^{2}}{R^{2}}}\right)^{\frac {1}{3}}} The radius of the circle is related to the applied load F {\displaystyle F} by the equation a 3 = 3 F R 4 E ∗ {\displaystyle a^{3}={\cfrac {3FR}{4E^{*}}}} The total deformation d {\displaystyle d} is related to the maximum contact pressure by d = a 2 R = ( 9 F 2 16 E ∗ 2 R ) 1 3 {\displaystyle d={\frac {a^{2}}{R}}=\left({\frac {9F^{2}}{16{E^{*}}^{2}R}}\right)^{\frac {1}{3}}} The maximum shear stress occurs in the interior at z ≈ 0.49 a {\displaystyle z\approx 0.49a} for ν = 0.33 {\displaystyle \nu =0.33} . === Contact between two spheres === For contact between two spheres of radii R 1 {\displaystyle R_{1}} and R 2 {\displaystyle R_{2}} , the area of contact is a circle of radius a {\displaystyle a} . The equations are the same as for a sphere in contact with a half-space except that the effective radius R {\displaystyle R} is defined as 1 R = 1 R 1 + 1 R 2 {\displaystyle {\frac {1}{R}}={\frac {1}{R_{1}}}+{\frac {1}{R_{2}}}} === Contact between two crossed cylinders of equal radius === This is equivalent to contact between a sphere of radius R {\displaystyle R} and a plane. === Contact between a rigid cylinder with flat end and an elastic half-space === If a rigid cylinder is pressed into an elastic half-space, it creates a pressure distribution described by p ( r ) = p 0 ( 1 − r 2 R 2 ) − 1 2 {\displaystyle p(r)=p_{0}\left(1-{\frac {r^{2}}{R^{2}}}\right)^{-{\frac {1}{2}}}} where R {\displaystyle R} is the radius of the cylinder and p 0 = 1 π E ∗ d R {\displaystyle p_{0}={\frac {1}{\pi }}E^{*}{\frac {d}{R}}} The relationship between the indentation depth and the normal force is given by F = 2 R E ∗ d {\displaystyle F=2RE^{*}d} === Contact between a rigid conical indenter and an elastic half-space === In the case of indentation of an elastic half-space of Young's modulus E {\displaystyle E} using a rigid conical indenter, the depth of the contact region ϵ {\displaystyle \epsilon } and contact radius a {\displaystyle a} are related by ϵ = a tan ⁡ ( θ ) {\displaystyle \epsilon =a\tan(\theta )} with θ {\displaystyle \theta } defined as the angle between the plane and the side surface of the cone. The total indentation depth d {\displaystyle d} is given by: d = π 2 ϵ {\displaystyle d={\frac {\pi }{2}}\epsilon } The total force is F = π E 2 ( 1 − ν 2 ) a 2 tan ⁡ ( θ ) = 2 E π ( 1 − ν 2 ) d 2 tan ⁡ ( θ ) {\displaystyle F={\frac {\pi E}{2\left(1-\nu ^{2}\right)}}a^{2}\tan(\theta )={\frac {2E}{\pi \left(1-\nu ^{2}\right)}}{\frac {d^{2}}{\tan(\theta )}}} The pressure distribution is given by p ( r ) = E d π a ( 1 − ν 2 ) ln ⁡ ( a r + ( a r ) 2 − 1 ) = E d π a ( 1 − ν 2 ) cosh − 1 ⁡ ( a r ) {\displaystyle p\left(r\right)={\frac {Ed}{\pi a\left(1-\nu ^{2}\right)}}\ln \left({\frac {a}{r}}+{\sqrt {\left({\frac {a}{r}}\right)^{2}-1}}\right)={\frac {Ed}{\pi a\left(1-\nu ^{2}\right)}}\cosh ^{-1}\left({\frac {a}{r}}\right)} The stress has a logarithmic singularity at the tip of the cone.
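As a concrete numerical illustration of the sphere-on-half-space relations above, the following Python sketch evaluates the effective modulus, contact radius, peak pressure, and total deformation for a given load. The material and geometry values are arbitrary choices for the example (roughly steel on steel), not reference data.

```python
import math

def hertz_sphere(F, R, E1, nu1, E2, nu2):
    """Hertzian contact of an elastic sphere (radius R) on an elastic
    half-space under normal load F; returns (a, p0, d)."""
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)  # effective modulus
    a = (3 * F * R / (4 * E_star)) ** (1 / 3)               # contact radius
    p0 = 3 * F / (2 * math.pi * a**2)                       # maximum contact pressure
    d = a**2 / R                                            # total deformation
    return a, p0, d

# Illustrative numbers: a 10 mm radius sphere pressed onto a flat with 100 N,
# both bodies with E = 210 GPa and nu = 0.3 (assumed, steel-like values).
a, p0, d = hertz_sphere(F=100.0, R=0.01, E1=210e9, nu1=0.3, E2=210e9, nu2=0.3)
print(f"contact radius a = {a * 1e6:.0f} um")
print(f"peak pressure p0 = {p0 / 1e9:.2f} GPa")
print(f"deformation d    = {d * 1e6:.2f} um")
# Consistency check against F = (4/3) E* sqrt(R) d^(3/2):
E_star = 210e9 / (2 * (1 - 0.3**2))
print(f"recovered load   = {4 / 3 * E_star * math.sqrt(0.01) * d**1.5:.1f} N")
```

For these inputs the sketch gives a contact radius of roughly 190 µm and a peak pressure of about 1.4 GPa, and the recovered load reproduces the applied 100 N, confirming the mutual consistency of the listed formulas.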
=== Contact between two cylinders with parallel axes === In contact between two cylinders with parallel axes, the force is linearly proportional to the length of cylinders L and to the indentation depth d: F ≈ π 4 E ∗ L d {\displaystyle F\approx {\frac {\pi }{4}}E^{*}Ld} The radii of curvature are entirely absent from this relationship. The contact radius is described through the usual relationship a = R d {\displaystyle a={\sqrt {Rd}}} with 1 R = 1 R 1 + 1 R 2 {\displaystyle {\frac {1}{R}}={\frac {1}{R_{1}}}+{\frac {1}{R_{2}}}} as in contact between two spheres. The maximum pressure is equal to p 0 = ( E ∗ F π L R ) 1 2 {\displaystyle p_{0}=\left({\frac {E^{*}F}{\pi LR}}\right)^{\frac {1}{2}}} === Bearing contact === The contact in the case of bearings is often a contact between a convex surface (male cylinder or sphere) and a concave surface (female cylinder or sphere: bore or hemispherical cup). === Method of dimensionality reduction === Some contact problems can be solved with the method of dimensionality reduction (MDR). In this method, the initial three-dimensional system is replaced with a contact of a body with a linear elastic or viscoelastic foundation. The properties of one-dimensional systems coincide exactly with those of the original three-dimensional system, if the form of the bodies is modified and the elements of the foundation are defined according to the rules of the MDR. MDR is based on the solution to axisymmetric contact problems first obtained by Ludwig Föppl (1941) and Gerhard Schubert (1942). However, for exact analytical results, it is required that the contact problem is axisymmetric and the contacts are compact. == Hertzian theory of non-adhesive elastic contact == The classical theory of contact focused primarily on non-adhesive contact where no tension force is allowed to occur within the contact area, i.e., contacting bodies can be separated without adhesion forces. Several analytical and numerical approaches have been used to solve contact problems that satisfy the no-adhesion condition. Complex forces and moments are transmitted between the bodies where they touch, so problems in contact mechanics can become quite sophisticated. In addition, the contact stresses are usually a nonlinear function of the deformation. To simplify the solution procedure, a frame of reference is usually defined in which the objects (possibly in motion relative to one another) are static. They interact through surface tractions (or pressures/stresses) at their interface. As an example, consider two objects which meet at some surface S {\displaystyle S} in the ( x {\displaystyle x} , y {\displaystyle y} )-plane with the z {\displaystyle z} -axis assumed normal to the surface. One of the bodies will experience a normally-directed pressure distribution p z = p ( x , y ) = q z ( x , y ) {\displaystyle p_{z}=p(x,y)=q_{z}(x,y)} and in-plane surface traction distributions q x = q x ( x , y ) {\displaystyle q_{x}=q_{x}(x,y)} and q y = q y ( x , y ) {\displaystyle q_{y}=q_{y}(x,y)} over the region S {\displaystyle S} . In terms of a Newtonian force balance, the forces: P z = ∫ S p ( x , y ) d A ; Q x = ∫ S q x ( x , y ) d A ; Q y = ∫ S q y ( x , y ) d A {\displaystyle P_{z}=\int _{S}p(x,y)~\mathrm {d} A~;~~Q_{x}=\int _{S}q_{x}(x,y)~\mathrm {d} A~;~~Q_{y}=\int _{S}q_{y}(x,y)~\mathrm {d} A} must be equal and opposite to the forces established in the other body.
The moments corresponding to these forces: M x = ∫ S y q z ( x , y ) d A ; M y = ∫ S − x q z ( x , y ) d A ; M z = ∫ S [ x q y ( x , y ) − y q x ( x , y ) ] d A {\displaystyle M_{x}=\int _{S}y~q_{z}(x,y)~\mathrm {d} A~;~~M_{y}=\int _{S}-x~q_{z}(x,y)~\mathrm {d} A~;~~M_{z}=\int _{S}[x~q_{y}(x,y)-y~q_{x}(x,y)]~\mathrm {d} A} are also required to cancel between bodies so that they are kinematically immobile. === Assumptions in Hertzian theory === The following assumptions are made in determining the solutions of Hertzian contact problems: The strains are small and within the elastic limit. The surfaces are continuous and non-conforming (implying that the area of contact is much smaller than the characteristic dimensions of the contacting bodies). Each body can be considered an elastic half-space. The surfaces are frictionless. Additional complications arise when some or all these assumptions are violated and such contact problems are usually called non-Hertzian. === Analytical solution techniques === Analytical solution methods for non-adhesive contact problem can be classified into two types based on the geometry of the area of contact. A conforming contact is one in which the two bodies touch at multiple points before any deformation takes place (i.e., they just "fit together"). A non-conforming contact is one in which the shapes of the bodies are dissimilar enough that, under zero load, they only touch at a point (or possibly along a line). In the non-conforming case, the contact area is small compared to the sizes of the objects and the stresses are highly concentrated in this area. Such a contact is called concentrated, otherwise it is called diversified. A common approach in linear elasticity is to superpose a number of solutions each of which corresponds to a point load acting over the area of contact. For example, in the case of loading of a half-plane, the Flamant solution is often used as a starting point and then generalized to various shapes of the area of contact. The force and moment balances between the two bodies in contact act as additional constraints to the solution. ==== Point contact on a (2D) half-plane ==== A starting point for solving contact problems is to understand the effect of a "point-load" applied to an isotropic, homogeneous, and linear elastic half-plane. The problem may be either plane stress or plane strain. This is a boundary value problem of linear elasticity subject to the traction boundary conditions: σ x z ( x , 0 ) = 0 ; σ z ( x , z ) = − P δ ( x , z ) {\displaystyle \sigma _{xz}(x,0)=0~;~~\sigma _{z}(x,z)=-P\delta (x,z)} where δ ( x , z ) {\displaystyle \delta (x,z)} is the Dirac delta function. The boundary conditions state that there are no shear stresses on the surface and a singular normal force P is applied at (0, 0). Applying these conditions to the governing equations of elasticity produces the result σ x x = − 2 P π x 2 z ( x 2 + z 2 ) 2 σ z z = − 2 P π z 3 ( x 2 + z 2 ) 2 σ x z = − 2 P π x z 2 ( x 2 + z 2 ) 2 {\displaystyle {\begin{aligned}\sigma _{xx}&=-{\frac {2P}{\pi }}{\frac {x^{2}z}{\left(x^{2}+z^{2}\right)^{2}}}\\\sigma _{zz}&=-{\frac {2P}{\pi }}{\frac {z^{3}}{\left(x^{2}+z^{2}\right)^{2}}}\\\sigma _{xz}&=-{\frac {2P}{\pi }}{\frac {xz^{2}}{\left(x^{2}+z^{2}\right)^{2}}}\end{aligned}}} for some point, ( x , z ) {\displaystyle (x,z)} , in the half-plane. The surfaces on which the maximum shear stress is constant form circles passing through the point of application of the load.
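The constancy of the maximum in-plane shear stress on such circles is easy to verify numerically. The following Python sketch (an added illustration, not part of the classical derivation) evaluates the Flamant stresses at several points of one such circle and shows that the maximum shear stress equals P/π everywhere on it.

```python
import math

def flamant_stresses(P, x, z):
    """Stresses at (x, z), z > 0, under a normal line load P applied at the
    origin of a 2D elastic half-plane (Flamant solution)."""
    r2 = x**2 + z**2
    c = -2 * P / (math.pi * r2**2)
    return c * x**2 * z, c * z**3, c * x * z**2   # sigma_xx, sigma_zz, sigma_xz

def max_shear(P, x, z):
    """In-plane maximum shear stress from the 2D stress components."""
    s_xx, s_zz, s_xz = flamant_stresses(P, x, z)
    return math.sqrt(((s_xx - s_zz) / 2) ** 2 + s_xz**2)

P = 1.0
# Points on the circle x^2 + (z - 1/2)^2 = 1/4, which passes through the load
# point and is tangent to the surface: tau_max should be constant, equal to P/pi.
for t in [0.3, 0.8, 1.2, 1.9, 2.6]:
    x, z = 0.5 * math.sin(t), 0.5 * (1 - math.cos(t))
    print(f"({x:+.3f}, {z:.3f})  tau_max = {max_shear(P, x, z):.6f}")
print("P/pi =", 1 / math.pi)
```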
From this stress field, the strain components and thus the displacements of all material points may be determined. ==== Line contact on a (2D) half-plane ==== ===== Normal loading over a region ===== Suppose, rather than a point load P {\displaystyle P} , a distributed load p ( x ) {\displaystyle p(x)} is applied to the surface, over the range a < x < b {\displaystyle a<x<b} . The principle of linear superposition can be applied to determine the resulting stress field as the solution to the integral equations: σ x x = − 2 z π ∫ a b p ( x ′ ) ( x − x ′ ) 2 d x ′ [ ( x − x ′ ) 2 + z 2 ] 2 ; σ z z = − 2 z 3 π ∫ a b p ( x ′ ) d x ′ [ ( x − x ′ ) 2 + z 2 ] 2 σ x z = − 2 z 2 π ∫ a b p ( x ′ ) ( x − x ′ ) d x ′ [ ( x − x ′ ) 2 + z 2 ] 2 {\displaystyle {\begin{aligned}\sigma _{xx}&=-{\frac {2z}{\pi }}\int _{a}^{b}{\frac {p\left(x'\right)\left(x-x'\right)^{2}\,dx'}{\left[\left(x-x'\right)^{2}+z^{2}\right]^{2}}}~;~~\sigma _{zz}=-{\frac {2z^{3}}{\pi }}\int _{a}^{b}{\frac {p\left(x'\right)\,dx'}{\left[\left(x-x'\right)^{2}+z^{2}\right]^{2}}}\\[3pt]\sigma _{xz}&=-{\frac {2z^{2}}{\pi }}\int _{a}^{b}{\frac {p\left(x'\right)\left(x-x'\right)\,dx'}{\left[\left(x-x'\right)^{2}+z^{2}\right]^{2}}}\end{aligned}}} ===== Shear loading over a region ===== The same principle applies for loading on the surface in the plane of the surface. These kinds of tractions would tend to arise as a result of friction. The solution is similar to the above (for both singular loads Q {\displaystyle Q} and distributed loads q ( x ) {\displaystyle q(x)} ) but altered slightly: σ x x = − 2 π ∫ a b q ( x ′ ) ( x − x ′ ) 3 d x ′ [ ( x − x ′ ) 2 + z 2 ] 2 ; σ z z = − 2 z 2 π ∫ a b q ( x ′ ) ( x − x ′ ) d x ′ [ ( x − x ′ ) 2 + z 2 ] 2 σ x z = − 2 z π ∫ a b q ( x ′ ) ( x − x ′ ) 2 d x ′ [ ( x − x ′ ) 2 + z 2 ] 2 {\displaystyle {\begin{aligned}\sigma _{xx}&=-{\frac {2}{\pi }}\int _{a}^{b}{\frac {q\left(x'\right)\left(x-x'\right)^{3}\,dx'}{\left[\left(x-x'\right)^{2}+z^{2}\right]^{2}}}~;~~\sigma _{zz}=-{\frac {2z^{2}}{\pi }}\int _{a}^{b}{\frac {q\left(x'\right)\left(x-x'\right)\,dx'}{\left[\left(x-x'\right)^{2}+z^{2}\right]^{2}}}\\[3pt]\sigma _{xz}&=-{\frac {2z}{\pi }}\int _{a}^{b}{\frac {q\left(x'\right)\left(x-x'\right)^{2}\,dx'}{\left[\left(x-x'\right)^{2}+z^{2}\right]^{2}}}\end{aligned}}} These results may themselves be superposed onto those given above for normal loading to deal with more complex loads. ==== Point contact on a (3D) half-space ==== Analogously to the Flamant solution for the 2D half-plane, fundamental solutions are known for the linearly elastic 3D half-space as well. These were found by Boussinesq for a concentrated normal load and by Cerruti for a tangential load. See the section on this in Linear elasticity. === Numerical solution techniques === Distinctions between conforming and non-conforming contact do not have to be made when numerical solution schemes are employed to solve contact problems. These methods do not rely on further assumptions within the solution process since they are based solely on the general formulation of the underlying equations. Besides the standard equations describing the deformation and motion of bodies, two additional inequalities can be formulated. The first simply restricts the motion and deformation of the bodies by the assumption that no penetration can occur. Hence the gap h {\displaystyle h} between two bodies can only be positive or zero h ≥ 0 {\displaystyle h\geq 0} where h = 0 {\displaystyle h=0} denotes contact.
The second assumption in contact mechanics is related to the fact that no tension force is allowed to occur within the contact area (contacting bodies can be lifted up without adhesion forces). This leads to an inequality which the stresses have to obey at the contact interface. It is formulated for the normal stress σ n = t ⋅ n {\displaystyle \sigma _{n}=\mathbf {t} \cdot \mathbf {n} } . At locations where there is contact between the surfaces the gap is zero, i.e. h = 0 {\displaystyle h=0} , and there the normal stress is different from zero; indeed, σ n < 0 {\displaystyle \sigma _{n}<0} . At locations where the surfaces are not in contact the normal stress is identical to zero; σ n = 0 {\displaystyle \sigma _{n}=0} , while the gap is positive; i.e., h > 0 {\displaystyle h>0} . This type of complementarity formulation can be expressed in the so-called Kuhn–Tucker form, viz. h ≥ 0 , σ n ≤ 0 , σ n h = 0 . {\displaystyle h\geq 0\,,\quad \sigma _{n}\leq 0\,,\quad \sigma _{n}\,h=0\,.} These conditions are valid in a general way. The mathematical formulation of the gap depends upon the kinematics of the underlying theory of the solid (e.g., linear or nonlinear solid in two- or three dimensions, beam or shell model). By restating the normal stress σ n {\displaystyle \sigma _{n}} in terms of the contact pressure, p {\displaystyle p} ; i.e., p = − σ n {\displaystyle p=-\sigma _{n}} the Kuhn-Tucker problem can be restated in standard complementarity form, i.e. h ≥ 0 , p ≥ 0 , p h = 0 . {\displaystyle h\geq 0\,,\quad p\geq 0\,,\quad p\,h=0\,.} In the linear elastic case the gap can be formulated as h = h 0 + g + u , {\displaystyle {h}=h_{0}+{g}+u,} where h 0 {\displaystyle h_{0}} is the rigid body separation, g {\displaystyle g} is the geometry/topography of the contact (cylinder and roughness) and u {\displaystyle u} is the elastic deformation/deflection. If the contacting bodies are approximated as linear elastic half spaces, the Boussinesq-Cerruti integral equation solution can be applied to express the deformation ( u {\displaystyle u} ) as a function of the contact pressure ( p {\displaystyle p} ); i.e., u = ∫ − ∞ ∞ K ( x − s ) p ( s ) d s , {\displaystyle u=\int _{-\infty }^{\infty }K(x-s)p(s)ds,} where K ( x − s ) = 2 π E ∗ ln ⁡ | x − s | {\displaystyle K(x-s)={\frac {2}{\pi E^{*}}}\ln |x-s|} for line loading of an elastic half space and K ( x − s ) = 1 π E ∗ 1 ( x 1 − s 1 ) 2 + ( x 2 − s 2 ) 2 {\displaystyle K(x-s)={\frac {1}{\pi E^{*}}}{\frac {1}{\sqrt {\left(x_{1}-s_{1}\right)^{2}+\left(x_{2}-s_{2}\right)^{2}}}}} for point loading of an elastic half-space. After discretization the linear elastic contact mechanics problem can be stated in standard Linear Complementarity Problem (LCP) form. h = h 0 + g + C p , h ⋅ p = 0 , p ≥ 0 , h ≥ 0 , {\displaystyle {\begin{aligned}\mathbf {h} &=\mathbf {h} _{0}+\mathbf {g} +\mathbf {Cp} ,\\\mathbf {h} \cdot \mathbf {p} &=0,\,\,\,\mathbf {p} \geq 0,\,\,\,\mathbf {h} \geq 0,\\\end{aligned}}} where C {\displaystyle \mathbf {C} } is a matrix, whose elements are so-called influence coefficients relating the contact pressure and the deformation. The strict LCP formulation of the CM problem presented above allows for direct application of well-established numerical solution techniques such as Lemke's pivoting algorithm. The Lemke algorithm has the advantage that it finds the numerically exact solution within a finite number of iterations. The MATLAB implementation presented by Almqvist et al.
is one example that can be employed to solve the problem numerically. In addition, an example code for an LCP solution of a 2D linear elastic contact mechanics problem has also been made public at MATLAB file exchange by Almqvist et al. == Contact between rough surfaces == When two bodies with rough surfaces are pressed against each other, the true contact area formed between the two bodies, A {\displaystyle A} , is much smaller than the apparent or nominal contact area A 0 {\displaystyle A_{0}} . The mechanics of contacting rough surfaces are discussed in terms of normal contact mechanics and static frictional interactions. Natural and engineering surfaces typically exhibit roughness features, known as asperities, across a broad range of length scales down to the molecular level, with surface structures exhibiting self-affinity, also known as surface fractality. It is recognized that the self-affine structure of surfaces is the origin of the linear scaling of true contact area with applied pressure. Assuming a model of shearing welded contacts in tribological interactions, this ubiquitously observed linearity between contact area and pressure can also be considered the origin of the linearity of the relationship between static friction and applied normal force. In contact between a "random rough" surface and an elastic half-space, the true contact area is related to the normal force F {\displaystyle F} by A = κ E ∗ h ′ F {\displaystyle A={\frac {\kappa }{E^{*}h'}}F} with h ′ {\displaystyle h'} equal to the root mean square (also known as the quadratic mean) of the surface slope and κ ≈ 2 {\displaystyle \kappa \approx 2} . The mean pressure in the true contact surface p a v = F A ≈ 1 2 E ∗ h ′ {\displaystyle p_{\mathrm {av} }={\frac {F}{A}}\approx {\frac {1}{2}}E^{*}h'} can be reasonably estimated as half of the effective elastic modulus E ∗ {\displaystyle E^{*}} multiplied by the root mean square of the surface slope h ′ {\displaystyle h'} . === An overview of the GW model === Greenwood and Williamson in 1966 (GW) proposed a theory of elastic contact mechanics of rough surfaces which is today the foundation of many theories in tribology (friction, adhesion, thermal and electrical conductance, wear, etc.). They considered the contact between a smooth rigid plane and a nominally flat deformable rough surface covered with round-tip asperities of the same radius R. Their theory assumes that the deformation of each asperity is independent of that of its neighbours and is described by the Hertz model. The heights of asperities have a random distribution. The probability that asperity height is between z {\displaystyle z} and z + d z {\displaystyle z+dz} is ϕ ( z ) d z {\displaystyle \phi (z)dz} . The authors calculated the number of contact spots n, the total contact area A r {\displaystyle A_{r}} and the total load P in the general case. They gave those formulas in two forms: in basic variables and in terms of standardized variables.
If one assumes that N asperities cover a rough surface, then the expected number of contacts is n = N ∫ d ∞ ϕ ( z ) d z {\displaystyle n=N\int _{d}^{\infty }\phi (z)dz} The expected total area of contact can be calculated from the formula A a = N π R ∫ d ∞ ( z − d ) ϕ ( z ) d z {\displaystyle A_{a}=N\pi R\int _{d}^{\infty }(z-d)\phi (z)dz} and the expected total force is given by P = 4 3 N E r R ∫ d ∞ ( z − d ) 3 2 ϕ ( z ) d z {\displaystyle P={\frac {4}{3}}NE_{r}{\sqrt {R}}\int _{d}^{\infty }(z-d)^{\frac {3}{2}}\phi (z)dz} where: R, radius of curvature of the microasperity, z, height of the microasperity measured from the profile line, d, separation between the surfaces, E r = ( 1 − ν 1 2 E 1 + 1 − ν 2 2 E 2 ) − 1 {\displaystyle E_{r}=\left({\frac {1-\nu _{1}^{2}}{E_{1}}}+{\frac {1-\nu _{2}^{2}}{E_{2}}}\right)^{-1}} , composite Young's modulus of elasticity, E i {\displaystyle E_{i}} , modulus of elasticity of each surface, ν i {\displaystyle \nu _{i}} , Poisson's ratio of each surface. Greenwood and Williamson introduced standardized separation h = d / σ {\displaystyle h=d/\sigma } and standardized height distribution ϕ ∗ ( s ) {\displaystyle \phi ^{*}(s)} whose standard deviation is equal to one. Below are presented the formulas in the standardized form. F n ( h ) = ∫ h ∞ ( s − h ) n ϕ ∗ ( s ) d s n = η A n F 0 ( h ) A a = π η A R σ F 1 ( h ) P = 4 3 η A E r R σ 3 2 F 3 2 ( h ) {\displaystyle {\begin{aligned}F_{n}(h)&=\int _{h}^{\infty }(s-h)^{n}\phi ^{*}(s)ds\\n&=\eta A_{n}F_{0}(h)\\A_{a}&=\pi \eta AR\sigma F_{1}(h)\\P&={\frac {4}{3}}\eta AE_{r}{\sqrt {R}}\sigma ^{\frac {3}{2}}F_{\frac {3}{2}}(h)\end{aligned}}} where: d is the separation, A {\displaystyle A} is the nominal contact area, η {\displaystyle \eta } is the surface density of asperities, E ∗ {\displaystyle E^{*}} is the effective Young's modulus. A {\displaystyle A} and P {\displaystyle P} can be determined when the F n ( h ) {\displaystyle F_{n}(h)} terms are calculated for the given surfaces using the convolution of the surface roughness ϕ ∗ ( s ) {\displaystyle \phi ^{*}(s)} . Several studies have proposed curve fits for F n ( h ) {\displaystyle F_{n}(h)} assuming a Gaussian surface height distribution, with fits presented by Arcoumanis et al. and Jedynak among others. It has been repeatedly observed that engineering surfaces do not demonstrate Gaussian surface height distributions, e.g. Peklenik. Leighton et al. presented fits for crosshatched IC engine cylinder liner surfaces together with a process for determining the F n ( h ) {\displaystyle F_{n}(h)} terms for any measured surfaces. Leighton et al. demonstrated that Gaussian fit data is not accurate for modelling any engineered surfaces and went on to demonstrate that early running of the surfaces results in a gradual transition which significantly changes the surface topography, load carrying capacity and friction. Recently the exact approximants to A r {\displaystyle A_{r}} and P {\displaystyle P} were published by Jedynak. They are given by the following rational formulas, which are approximants to the integrals F n ( h ) {\displaystyle F_{n}(h)} . They are calculated for the Gaussian distribution of asperities, which has been shown to be unrealistic for engineering surfaces but can be assumed where friction, load carrying capacity or real contact area results are not critical to the analysis.
F n ( h ) = a 0 + a 1 h + a 2 h 2 + a 3 h 3 1 + b 1 h + b 2 h 2 + b 3 h 3 + b 4 h 4 + b 5 h 5 + b 6 h 6 exp ⁡ ( − h 2 2 ) {\displaystyle F_{n}(h)={\frac {a_{0}+a_{1}h+a_{2}h^{2}+a_{3}h^{3}}{1+b_{1}h+b_{2}h^{2}+b_{3}h^{3}+b_{4}h^{4}+b_{5}h^{5}+b_{6}h^{6}}}\exp \left(-{\frac {h^{2}}{2}}\right)} For F 1 ( h ) {\displaystyle F_{1}(h)} the coefficients are [ a 0 , a 1 , a 2 , a 3 ] = [ 0.398942280401 , 0.159773702775 , 0.0389687688311 , 0.00364356495452 ] [ b 1 , b 2 , b 3 , b 4 , b 5 , b 6 ] = [ 1.653807476138 , 1.170419428529 , 0.448892964428 , 0.0951971709160 , 0.00931642803836 , − 6.383774657279 × 10 − 6 ] {\displaystyle {\begin{aligned}[][a_{0},a_{1},a_{2},a_{3}]&=[0.398942280401,0.159773702775,0.0389687688311,0.00364356495452]\\[][b_{1},b_{2},b_{3},b_{4},b_{5},b_{6}]&=\left[1.653807476138,1.170419428529,0.448892964428,0.0951971709160,0.00931642803836,-6.383774657279\times 10^{-6}\right]\end{aligned}}} The maximum relative error is 9.93 × 10 − 8 % {\displaystyle 9.93\times 10^{-8}\%} . For F 3 2 ( h ) {\displaystyle F_{\frac {3}{2}}(h)} the coefficients are [ a 0 , a 1 , a 2 , a 3 ] = [ 0.430019993662 , 0.101979509447 , 0.0229040629580 , 0.000688602924 ] [ b 1 , b 2 , b 3 , b 4 , b 5 , b 6 ] = [ 1.671117125984 , 1.199586555505 , 0.46936532151 , 0.102632881122 , 0.010686348714 , 0.0000517200271 ] {\displaystyle {\begin{aligned}[][a_{0},a_{1},a_{2},a_{3}]&=[0.430019993662,0.101979509447,0.0229040629580,0.000688602924]\\[][b_{1},b_{2},b_{3},b_{4},b_{5},b_{6}]&=[1.671117125984,1.199586555505,0.46936532151,0.102632881122,0.010686348714,0.0000517200271]\end{aligned}}} The maximum relative error is 1.91 × 10 − 7 % {\displaystyle 1.91\times 10^{-7}\%} . The paper also contains the exact expressions for F n ( h ) {\displaystyle F_{n}(h)} F 1 ( h ) = 1 2 π exp ⁡ ( − 1 2 h 2 ) − 1 2 h erfc ⁡ ( h 2 ) F 3 2 ( h ) = 1 4 π exp ⁡ ( − h 2 4 ) h ( ( h 2 + 1 ) K 1 4 ( h 2 4 ) − h 2 K 3 4 ( h 2 4 ) ) {\displaystyle {\begin{aligned}F_{1}(h)&={\frac {1}{\sqrt {2\pi }}}\exp \left(-{\frac {1}{2}}h^{2}\right)-{\frac {1}{2}}h\,\operatorname {erfc} \left({\frac {h}{\sqrt {2}}}\right)\\F_{\frac {3}{2}}(h)&={\frac {1}{4{\sqrt {\pi }}}}\exp \left(-{\frac {h^{2}}{4}}\right){\sqrt {h}}\left(\left(h^{2}+1\right)K_{\frac {1}{4}}\left({\frac {h^{2}}{4}}\right)-h^{2}K_{\frac {3}{4}}\left({\frac {h^{2}}{4}}\right)\right)\end{aligned}}} where erfc(z) means the complementary error function and K ν ( z ) {\displaystyle K_{\nu }(z)} is the modified Bessel function of the second kind. For the situation where the asperities on the two surfaces have a Gaussian height distribution and the peaks can be assumed to be spherical, the average contact pressure is sufficient to cause yield when p av = 1.1 σ y ≈ 0.39 σ 0 {\displaystyle p_{\text{av}}=1.1\sigma _{y}\approx 0.39\sigma _{0}} where σ y {\displaystyle \sigma _{y}} is the uniaxial yield stress and σ 0 {\displaystyle \sigma _{0}} is the indentation hardness. Greenwood and Williamson defined a dimensionless parameter Ψ {\displaystyle \Psi } called the plasticity index that could be used to determine whether contact would be elastic or plastic. The Greenwood-Williamson model requires knowledge of two statistically dependent quantities; the standard deviation of the surface roughness and the curvature of the asperity peaks. An alternative definition of the plasticity index has been given by Mikic. Yield occurs when the pressure is greater than the uniaxial yield stress. 
Since the yield stress is proportional to the indentation hardness σ 0 {\displaystyle \sigma _{0}} , Mikic defined the plasticity index for elastic-plastic contact to be Ψ = E ∗ h ′ σ 0 > 2 3 . {\displaystyle \Psi ={\frac {E^{*}h'}{\sigma _{0}}}>{\frac {2}{3}}~.} In this definition Ψ {\displaystyle \Psi } represents the micro-roughness in a state of complete plasticity and only one statistical quantity, the rms slope, is needed, which can be calculated from surface measurements. For Ψ < 2 3 {\displaystyle \Psi <{\frac {2}{3}}} , the surface behaves elastically during contact. In both the Greenwood-Williamson and Mikic models the load is assumed to be proportional to the deformed area. Hence, whether the system behaves plastically or elastically is independent of the applied normal force. === An overview of the GT model === The model proposed by John A. Greenwood and John H. Tripp (GT) extended the GW model to contact between two rough surfaces. The GT model is widely used in the field of elastohydrodynamic analysis. The most frequently cited equations given by the GT model are for the asperity contact area A a = π 2 ( η β σ ) 2 A F 2 ( λ ) , {\displaystyle A_{a}=\pi ^{2}(\eta \beta \sigma )^{2}AF_{2}(\lambda ),} and load carried by asperities P = 8 2 15 π ( η β σ ) 2 σ β E ′ A F 5 2 ( λ ) , {\displaystyle P={\frac {8{\sqrt {2}}}{15}}\pi (\eta \beta \sigma )^{2}{\sqrt {\frac {\sigma }{\beta }}}E'AF_{\frac {5}{2}}(\lambda ),} where: η β σ {\displaystyle \eta \beta \sigma } , roughness parameter, A {\displaystyle A} , nominal contact area, λ {\displaystyle \lambda } , Stribeck oil film parameter, first defined by Stribeck as λ = h / σ {\displaystyle \lambda =h/\sigma } , E ′ {\displaystyle E'} , effective elastic modulus, F 2 , F 5 2 ( λ ) {\displaystyle F_{2},F_{\frac {5}{2}}(\lambda )} , statistical functions introduced to match the assumed Gaussian distribution of asperities. As noted above for the GW model, Leighton et al. presented fits of such statistical functions for crosshatched IC engine cylinder liner surfaces, together with a process for determining them for any measured surface, and showed that Gaussian fits are not accurate for engineered surfaces, whose topography, load carrying capacity and friction change significantly during early running. The exact solutions for A a {\displaystyle A_{a}} and P {\displaystyle P} were first presented by Jedynak. They are expressed in terms of F n {\displaystyle F_{n}} as follows. They are calculated for the Gaussian distribution of asperities, which has been shown to be unrealistic for engineering surfaces but can be assumed where friction, load carrying capacity or real contact area results are not critical to the analysis.
F 2 = 1 2 ( h 2 + 1 ) erfc ⁡ ( h 2 ) − h 2 π exp ⁡ ( − h 2 2 ) F 5 2 = 1 8 π exp ⁡ ( − h 2 4 ) h 3 2 ( ( 2 h 2 + 3 ) K 3 4 ( h 2 4 ) − ( 2 h 2 + 5 ) K 1 4 ( h 2 4 ) ) {\displaystyle {\begin{aligned}F_{2}&={\frac {1}{2}}\left(h^{2}+1\right)\operatorname {erfc} \left({\frac {h}{\sqrt {2}}}\right)-{\frac {h}{\sqrt {2\pi }}}\exp \left(-{\frac {h^{2}}{2}}\right)\\F_{\frac {5}{2}}&={\frac {1}{8{\sqrt {\pi }}}}\exp \left(-{\frac {h^{2}}{4}}\right)h^{\frac {3}{2}}\left(\left(2h^{2}+3\right)K_{\frac {3}{4}}\left({\frac {h^{2}}{4}}\right)-\left(2h^{2}+5\right)K_{\frac {1}{4}}\left({\frac {h^{2}}{4}}\right)\right)\end{aligned}}} where erfc(z) means the complementary error function and K ν ( z ) {\displaystyle K_{\nu }(z)} is the modified Bessel function of the second kind. The same paper also contains a comprehensive review of existing approximants to F 5 2 {\displaystyle F_{\frac {5}{2}}} . Its new proposals give the most accurate approximants to F 5 2 {\displaystyle F_{\frac {5}{2}}} and F 2 {\displaystyle F_{2}} reported in the literature. They are given by the following rational formulas, which are highly accurate approximants to the integrals F n ( h ) {\displaystyle F_{n}(h)} . They are calculated for the Gaussian distribution of asperities F n ( h ) = a 0 + a 1 h + a 2 h 2 + a 3 h 3 1 + b 1 h + b 2 h 2 + b 3 h 3 + b 4 h 4 + b 5 h 5 + b 6 h 6 exp ⁡ ( − h 2 2 ) {\displaystyle F_{n}(h)={\frac {a_{0}+a_{1}h+a_{2}h^{2}+a_{3}h^{3}}{1+b_{1}h+b_{2}h^{2}+b_{3}h^{3}+b_{4}h^{4}+b_{5}h^{5}+b_{6}h^{6}}}\exp \left(-{\frac {h^{2}}{2}}\right)} For F 2 ( h ) {\displaystyle F_{2}(h)} the coefficients are [ a 0 , a 1 , a 2 , a 3 ] = [ 0.5 , 0.182536384941 , 0.039812283118 , 0.003684879001 ] [ b 1 , b 2 , b 3 , b 4 , b 5 , b 6 ] = [ 1.960841785003 , 1.708677456715 , 0.856592986083 , 0.264996791567 , 0.049257843893 , 0.004640740133 ] {\displaystyle {\begin{aligned}[][a_{0},a_{1},a_{2},a_{3}]&=[0.5,0.182536384941,0.039812283118,0.003684879001]\\[][b_{1},b_{2},b_{3},b_{4},b_{5},b_{6}]&=[1.960841785003,1.708677456715,0.856592986083,0.264996791567,0.049257843893,0.004640740133]\end{aligned}}} The maximum relative error is 1.68 × 10 − 7 % {\displaystyle 1.68\times 10^{-7}\%} . For F 5 2 ( h ) {\displaystyle F_{\frac {5}{2}}(h)} the coefficients are [ a 0 , a 1 , a 2 , a 3 ] = [ 0.616634218997 , 0.108855827811 , 0.023453835635 , 0.000449332509 ] [ b 1 , b 2 , b 3 , b 4 , b 5 , b 6 ] = [ 1.919948267476 , 1.635304362591 , 0.799392556572 , 0.240278859212 , 0.043178653945 , 0.003863334276 ] {\displaystyle {\begin{aligned}[][a_{0},a_{1},a_{2},a_{3}]&=[0.616634218997,0.108855827811,0.023453835635,0.000449332509]\\[][b_{1},b_{2},b_{3},b_{4},b_{5},b_{6}]&=[1.919948267476,1.635304362591,0.799392556572,0.240278859212,0.043178653945,0.003863334276]\end{aligned}}} The maximum relative error is 4.98 × 10 − 8 % {\displaystyle 4.98\times 10^{-8}\%} . == Adhesive contact between elastic bodies == When two solid surfaces are brought into close proximity, they experience attractive van der Waals forces. R. S. Bradley's van der Waals model provides a means of calculating the tensile force between two rigid spheres with perfectly smooth surfaces. The Hertzian model of contact does not account for adhesion. However, in the late 1960s, several contradictions were observed when the Hertz theory was compared with experiments involving contact between rubber and glass spheres.
It was observed that, though Hertz theory applied at large loads, at low loads the area of contact was larger than that predicted by Hertz theory, the area of contact had a non-zero value even when the load was removed, and there was even strong adhesion if the contacting surfaces were clean and dry. This indicated that adhesive forces were at work. The Johnson-Kendall-Roberts (JKR) model and the Derjaguin-Muller-Toporov (DMT) model were the first to incorporate adhesion into Hertzian contact. === Bradley model of rigid contact === It is commonly assumed that the surface force between two atomic planes at a distance z {\displaystyle z} from each other can be derived from the Lennard-Jones potential. With this assumption F ( z ) = 16 γ 3 z 0 [ ( z z 0 ) − 9 − ( z z 0 ) − 3 ] {\displaystyle F(z)={\cfrac {16\gamma }{3z_{0}}}\left[\left({\cfrac {z}{z_{0}}}\right)^{-9}-\left({\cfrac {z}{z_{0}}}\right)^{-3}\right]} where F {\displaystyle F} is the force (positive in compression), 2 γ {\displaystyle 2\gamma } is the total surface energy of both surfaces per unit area, and z 0 {\displaystyle z_{0}} is the equilibrium separation of the two atomic planes. The Bradley model applied the Lennard-Jones potential to find the force of adhesion between two rigid spheres. The total force between the spheres is found to be F a ( z ) = 16 γ π R 3 [ 1 4 ( z z 0 ) − 8 − ( z z 0 ) − 2 ] ; 1 R = 1 R 1 + 1 R 2 {\displaystyle F_{a}(z)={\cfrac {16\gamma \pi R}{3}}\left[{\cfrac {1}{4}}\left({\cfrac {z}{z_{0}}}\right)^{-8}-\left({\cfrac {z}{z_{0}}}\right)^{-2}\right]~;~~{\frac {1}{R}}={\frac {1}{R_{1}}}+{\frac {1}{R_{2}}}} where R 1 , R 2 {\displaystyle R_{1},R_{2}} are the radii of the two spheres. The two spheres separate completely when the pull-off force is achieved at z = z 0 {\displaystyle z=z_{0}} , at which point F a = F c = − 4 γ π R . {\displaystyle F_{a}=F_{c}=-4\gamma \pi R.} === JKR model of elastic contact === To incorporate the effect of adhesion in Hertzian contact, Johnson, Kendall, and Roberts formulated the JKR theory of adhesive contact using a balance between the stored elastic energy and the loss in surface energy. The JKR model considers the effect of contact pressure and adhesion only inside the area of contact. The general solution for the pressure distribution in the contact area in the JKR model is p ( r ) = p 0 ( 1 − r 2 a 2 ) 1 2 + p 0 ′ ( 1 − r 2 a 2 ) − 1 2 {\displaystyle p(r)=p_{0}\left(1-{\frac {r^{2}}{a^{2}}}\right)^{\frac {1}{2}}+p_{0}'\left(1-{\frac {r^{2}}{a^{2}}}\right)^{-{\frac {1}{2}}}} Note that in the original Hertz theory, the term containing p 0 ′ {\displaystyle p_{0}'} was neglected on the ground that tension could not be sustained in the contact zone.
For contact between two spheres p 0 = 2 a E ∗ π R ; p 0 ′ = − ( 4 γ E ∗ π a ) 1 2 {\displaystyle p_{0}={\frac {2aE^{*}}{\pi R}};\quad p_{0}'=-\left({\frac {4\gamma E^{*}}{\pi a}}\right)^{\frac {1}{2}}} where a {\displaystyle a\,} is the radius of the area of contact, F {\displaystyle F} is the applied force, 2 γ {\displaystyle 2\gamma } is the total surface energy of both surfaces per unit contact area, R i , E i , ν i , i = 1 , 2 {\displaystyle R_{i},\,E_{i},\,\nu _{i},~~i=1,2} are the radii, Young's moduli, and Poisson's ratios of the two spheres, and 1 R = 1 R 1 + 1 R 2 ; 1 E ∗ = 1 − ν 1 2 E 1 + 1 − ν 2 2 E 2 {\displaystyle {\frac {1}{R}}={\frac {1}{R_{1}}}+{\frac {1}{R_{2}}};\quad {\frac {1}{E^{*}}}={\frac {1-\nu _{1}^{2}}{E_{1}}}+{\frac {1-\nu _{2}^{2}}{E_{2}}}} The approach distance between the two spheres is given by d = π a 2 E ∗ ( p 0 + 2 p 0 ′ ) = a 2 R − ( 4 π γ a E ∗ ) 1 2 {\displaystyle d={\frac {\pi a}{2E^{*}}}\left(p_{0}+2p_{0}'\right)={\frac {a^{2}}{R}}-\left({\frac {4\pi \gamma a}{E^{*}}}\right)^{\frac {1}{2}}} The Hertz equation for the area of contact between two spheres, modified to take into account the surface energy, has the form a 3 = 3 R 4 E ∗ ( F + 6 γ π R + 12 γ π R F + ( 6 γ π R ) 2 ) {\displaystyle a^{3}={\frac {3R}{4E^{*}}}\left(F+6\gamma \pi R+{\sqrt {12\gamma \pi RF+(6\gamma \pi R)^{2}}}\right)} When the surface energy is zero, γ = 0 {\displaystyle \gamma =0} , the Hertz equation for contact between two spheres is recovered. When the applied load is zero, the contact radius is a 3 = 9 R 2 γ π E ∗ {\displaystyle a^{3}={\frac {9R^{2}\gamma \pi }{E^{*}}}} The tensile load at which the spheres are separated (i.e., a = 0 {\displaystyle a=0} ) is predicted to be F c = − 3 γ π R {\displaystyle F_{\text{c}}=-3\gamma \pi R\,} This force is also called the pull-off force. Note that this force is independent of the moduli of the two spheres. However, there is another possible solution for the value of a {\displaystyle a} at this load. This is the critical contact area a c {\displaystyle a_{\text{c}}} , given by a c 3 = 9 R 2 γ π 4 E ∗ {\displaystyle a_{\text{c}}^{3}={\frac {9R^{2}\gamma \pi }{4E^{*}}}} If we define the work of adhesion as Δ γ = γ 1 + γ 2 − γ 12 {\displaystyle \Delta \gamma =\gamma _{1}+\gamma _{2}-\gamma _{12}} where γ 1 , γ 2 {\displaystyle \gamma _{1},\gamma _{2}} are the adhesive energies of the two surfaces and γ 12 {\displaystyle \gamma _{12}} is an interaction term, we can write the JKR contact radius as a 3 = 3 R 4 E ∗ ( F + 3 Δ γ π R + 6 Δ γ π R F + ( 3 Δ γ π R ) 2 ) {\displaystyle a^{3}={\frac {3R}{4E^{*}}}\left(F+3\Delta \gamma \pi R+{\sqrt {6\Delta \gamma \pi RF+(3\Delta \gamma \pi R)^{2}}}\right)} The tensile load at separation is F = − 3 2 Δ γ π R {\displaystyle F=-{\frac {3}{2}}\Delta \gamma \pi R\,} and the critical contact radius is given by a c 3 = 9 R 2 Δ γ π 8 E ∗ {\displaystyle a_{\text{c}}^{3}={\frac {9R^{2}\Delta \gamma \pi }{8E^{*}}}} The critical depth of penetration is d c = a c 2 R = ( R 1 2 9 Δ γ π 4 E ∗ ) 2 3 {\displaystyle d_{\text{c}}={\frac {a_{c}^{2}}{R}}=\left(R^{\frac {1}{2}}{\frac {9\Delta \gamma \pi }{4E^{*}}}\right)^{\frac {2}{3}}}
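A minimal numeric sketch of the JKR relations above, in the work-of-adhesion form; the material parameters below are illustrative assumptions, not values from the text.

```python
import numpy as np

# Illustrative inputs (assumed): effective radius, effective modulus,
# and work of adhesion for the sphere pair.
R = 1e-3          # effective radius 1/R = 1/R1 + 1/R2, in metres
E_star = 1e9      # effective modulus E*, in Pa
dgamma = 0.05     # work of adhesion, in J/m^2

def jkr_radius(F):
    # a^3 = 3R/(4E*) * (F + 3*dg*pi*R + sqrt(6*dg*pi*R*F + (3*dg*pi*R)^2))
    t = 3 * dgamma * np.pi * R
    return (3 * R / (4 * E_star) * (F + t + np.sqrt(2 * t * F + t**2))) ** (1 / 3)

F_pulloff = -1.5 * dgamma * np.pi * R   # tensile load at separation
a0 = jkr_radius(0.0)                    # contact radius at zero applied load
print(f"pull-off force  : {F_pulloff:.3e} N")
print(f"radius at F = 0 : {a0:.3e} m")
```

At the pull-off load the square root in the radius formula vanishes, and the returned radius equals the critical contact radius a_c quoted above.

=== DMT model of elastic contact === The Derjaguin–Muller–Toporov (DMT) model is an alternative model for adhesive contact which assumes that the contact profile remains the same as in Hertzian contact but with additional attractive interactions outside the area of contact.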
The radius of contact between two spheres from DMT theory is a 3 = 3 R 4 E ∗ ( F + 4 γ π R ) {\displaystyle a^{3}={\cfrac {3R}{4E^{*}}}\left(F+4\gamma \pi R\right)} and the pull-off force is F c = − 4 γ π R {\displaystyle F_{c}=-4\gamma \pi R\,} When the pull-off force is achieved the contact area becomes zero and there is no singularity in the contact stresses at the edge of the contact area. In terms of the work of adhesion Δ γ {\displaystyle \Delta \gamma } a 3 = 3 R 4 E ∗ ( F + 2 Δ γ π R ) {\displaystyle a^{3}={\cfrac {3R}{4E^{*}}}\left(F+2\Delta \gamma \pi R\right)} and F c = − 2 Δ γ π R {\displaystyle F_{c}=-2\Delta \gamma \pi R\,} === Tabor parameter === In 1977, Tabor showed that the apparent contradiction between the JKR and DMT theories could be resolved by noting that the two theories were the extreme limits of a single theory parametrized by the Tabor parameter ( μ {\displaystyle \mu } ) defined as μ := d c z 0 ≈ [ R ( Δ γ ) 2 E ∗ 2 z 0 3 ] 1 3 {\displaystyle \mu :={\frac {d_{c}}{z_{0}}}\approx \left[{\frac {R(\Delta \gamma )^{2}}{{E^{*}}^{2}z_{0}^{3}}}\right]^{\frac {1}{3}}} where z 0 {\displaystyle z_{0}} is the equilibrium separation between the two surfaces in contact. The JKR theory applies to large, compliant spheres for which μ {\displaystyle \mu } is large. The DMT theory applies for small, stiff spheres with small values of μ {\displaystyle \mu } . Subsequently, Derjaguin and his collaborators, by applying Bradley's surface force law to an elastic half space, confirmed that as the Tabor parameter increases, the pull-off force falls from the Bradley value 2 π R Δ γ {\displaystyle 2\pi R\Delta \gamma } to the JKR value ( 3 / 2 ) π R Δ γ {\displaystyle (3/2)\pi R\Delta \gamma } . More detailed calculations were later done by Greenwood revealing the S-shaped load/approach curve which explains the jumping-on effect. A more efficient method of doing the calculations and additional results were given by Feng. === Maugis–Dugdale model of elastic contact === Further improvement to the Tabor idea was provided by Maugis who represented the surface force in terms of a Dugdale cohesive zone approximation such that the work of adhesion is given by Δ γ = σ 0 h 0 {\displaystyle \Delta \gamma =\sigma _{0}~h_{0}} where σ 0 {\displaystyle \sigma _{0}} is the maximum stress predicted by the Lennard-Jones potential and h 0 {\displaystyle h_{0}} is the maximum separation obtained by matching the areas under the Dugdale and Lennard-Jones curves. This means that the attractive force is constant for z 0 ≤ z ≤ z 0 + h 0 {\displaystyle z_{0}\leq z\leq z_{0}+h_{0}} . There is no further penetration in compression. Perfect contact occurs in an area of radius a {\displaystyle a} and adhesive forces of magnitude σ 0 {\displaystyle \sigma _{0}} extend to an area of radius c > a {\displaystyle c>a} . In the region a < r < c {\displaystyle a<r<c} , the two surfaces are separated by a distance h ( r ) {\displaystyle h(r)} with h ( a ) = 0 {\displaystyle h(a)=0} and h ( c ) = h 0 {\displaystyle h(c)=h_{0}} . The ratio m {\displaystyle m} is defined as m := c a {\displaystyle m:={\frac {c}{a}}} . In the Maugis–Dugdale theory, the surface traction distribution is divided into two parts: one due to the Hertz contact pressure and the other from the Dugdale adhesive stress. Hertz contact is assumed in the region − a < r < a {\displaystyle -a<r<a} .
The contribution to the surface traction from the Hertz pressure is given by p H ( r ) = ( 3 F H 2 π a 2 ) ( 1 − r 2 a 2 ) 1 2 {\displaystyle p^{H}(r)=\left({\frac {3F^{H}}{2\pi a^{2}}}\right)\left(1-{\frac {r^{2}}{a^{2}}}\right)^{\frac {1}{2}}} where the Hertz contact force F H {\displaystyle F^{H}} is given by F H = 4 E ∗ a 3 3 R {\displaystyle F^{H}={\frac {4E^{*}a^{3}}{3R}}} The penetration due to elastic compression is d H = a 2 R {\displaystyle d^{H}={\frac {a^{2}}{R}}} The vertical displacement at r = c {\displaystyle r=c} is u H ( c ) = 1 π R [ a 2 ( 2 − m 2 ) sin − 1 ⁡ ( 1 m ) + a 2 m 2 − 1 ] {\displaystyle u^{H}(c)={\cfrac {1}{\pi R}}\left[a^{2}\left(2-m^{2}\right)\sin ^{-1}\left({\frac {1}{m}}\right)+a^{2}{\sqrt {m^{2}-1}}\right]} and the separation between the two surfaces at r = c {\displaystyle r=c} is h H ( c ) = c 2 2 R − d H + u H ( c ) {\displaystyle h^{H}(c)={\frac {c^{2}}{2R}}-d^{H}+u^{H}(c)} The surface traction distribution due to the adhesive Dugdale stress is p D ( r ) = { − σ 0 π cos − 1 ⁡ [ 2 − m 2 − r 2 a 2 m 2 ( 1 − r 2 m 2 a 2 ) ] for r ≤ a − σ 0 for a ≤ r ≤ c {\displaystyle p^{D}(r)={\begin{cases}-{\frac {\sigma _{0}}{\pi }}\cos ^{-1}\left[{\frac {2-m^{2}-{\frac {r^{2}}{a^{2}}}}{m^{2}\left(1-{\frac {r^{2}}{m^{2}a^{2}}}\right)}}\right]&\quad {\text{for}}\quad r\leq a\\-\sigma _{0}&\quad {\text{for}}\quad a\leq r\leq c\end{cases}}} The total adhesive force is then given by F D = − 2 σ 0 m 2 a 2 [ cos − 1 ⁡ ( 1 m ) + 1 m 2 m 2 − 1 ] {\displaystyle F^{D}=-2\sigma _{0}m^{2}a^{2}\left[\cos ^{-1}\left({\frac {1}{m}}\right)+{\frac {1}{m^{2}}}{\sqrt {m^{2}-1}}\right]} The compression due to Dugdale adhesion is d D = − ( 2 σ 0 a E ∗ ) m 2 − 1 {\displaystyle d^{D}=-\left({\frac {2\sigma _{0}a}{E^{*}}}\right){\sqrt {m^{2}-1}}} and the gap at r = c {\displaystyle r=c} is h D ( c ) = ( 4 σ 0 a π E ∗ ) [ m 2 − 1 cos − 1 ⁡ ( 1 m ) + 1 − m ] {\displaystyle h^{D}(c)=\left({\frac {4\sigma _{0}a}{\pi E^{*}}}\right)\left[{\sqrt {m^{2}-1}}\cos ^{-1}\left({\frac {1}{m}}\right)+1-m\right]} The net traction on the contact area is then given by p ( r ) = p H ( r ) + p D ( r ) {\displaystyle p(r)=p^{H}(r)+p^{D}(r)} and the net contact force is F = F H + F D {\displaystyle F=F^{H}+F^{D}} . When h ( c ) = h H ( c ) + h D ( c ) = h 0 {\displaystyle h(c)=h^{H}(c)+h^{D}(c)=h_{0}} the adhesive traction drops to zero. Non-dimensionalized values of a , c , F , d {\displaystyle a,c,F,d} are introduced at this stage, defined as a ¯ = α a ; c ¯ := α c ; d ¯ := α 2 R d ; α := ( 4 E ∗ 3 π Δ γ R 2 ) 1 3 ; A ¯ := π c 2 ; F ¯ = F π Δ γ R {\displaystyle {\bar {a}}=\alpha a~;~~{\bar {c}}:=\alpha c~;~~{\bar {d}}:=\alpha ^{2}Rd~;~~\alpha :=\left({\frac {4E^{*}}{3\pi \Delta \gamma R^{2}}}\right)^{\frac {1}{3}}~;~~{\bar {A}}:=\pi c^{2}~;~~{\bar {F}}={\frac {F}{\pi \Delta \gamma R}}} In addition, Maugis proposed a parameter λ {\displaystyle \lambda } which is equivalent to the Tabor parameter μ {\displaystyle \mu } .
This parameter is defined as λ := σ 0 ( 9 R 2 π Δ γ E ∗ 2 ) 1 3 ≈ 1.16 μ {\displaystyle \lambda :=\sigma _{0}\left({\frac {9R}{2\pi \Delta \gamma {E^{*}}^{2}}}\right)^{\frac {1}{3}}\approx 1.16\mu } where the step cohesive stress σ 0 {\displaystyle \sigma _{0}} is equal to the theoretical stress of the Lennard-Jones potential σ th = 16 Δ γ 9 3 z 0 {\displaystyle \sigma _{\text{th}}={\frac {16\Delta \gamma }{9{\sqrt {3}}z_{0}}}} Zheng and Yu suggested another value for the step cohesive stress σ 0 = exp ⁡ ( − 223 420 ) ⋅ Δ γ z 0 ≈ 0.588 Δ γ z 0 {\displaystyle \sigma _{0}=\exp \left(-{\frac {223}{420}}\right)\cdot {\frac {\Delta \gamma }{z_{0}}}\approx 0.588{\frac {\Delta \gamma }{z_{0}}}} to match the Lennard-Jones potential, which leads to λ ≈ 0.663 μ {\displaystyle \lambda \approx 0.663\mu } Then the net contact force may be expressed as F ¯ = a ¯ 3 − λ a ¯ 2 [ m 2 − 1 + m 2 sec − 1 ⁡ m ] {\displaystyle {\bar {F}}={\bar {a}}^{3}-\lambda {\bar {a}}^{2}\left[{\sqrt {m^{2}-1}}+m^{2}\sec ^{-1}m\right]} and the elastic compression as d ¯ = a ¯ 2 − 4 3 λ a ¯ m 2 − 1 {\displaystyle {\bar {d}}={\bar {a}}^{2}-{\frac {4}{3}}~\lambda {\bar {a}}{\sqrt {m^{2}-1}}} The equation for the cohesive gap between the two bodies takes the form λ a ¯ 2 2 [ ( m 2 − 2 ) sec − 1 ⁡ m + m 2 − 1 ] + 4 λ a ¯ 3 [ m 2 − 1 sec − 1 ⁡ m − m + 1 ] = 1 {\displaystyle {\frac {\lambda {\bar {a}}^{2}}{2}}\left[\left(m^{2}-2\right)\sec ^{-1}m+{\sqrt {m^{2}-1}}\right]+{\frac {4\lambda {\bar {a}}}{3}}\left[{\sqrt {m^{2}-1}}\sec ^{-1}m-m+1\right]=1} This equation can be solved to obtain values of c {\displaystyle c} for various values of a {\displaystyle a} and λ {\displaystyle \lambda } . For large values of λ {\displaystyle \lambda } , m → 1 {\displaystyle m\rightarrow 1} and the JKR model is obtained. For small values of λ {\displaystyle \lambda } the DMT model is retrieved. === Carpick–Ogletree–Salmeron (COS) model === The Maugis–Dugdale model can only be solved iteratively if the value of λ {\displaystyle \lambda } is not known a priori. The Carpick–Ogletree–Salmeron (COS) approximate solution (after Robert Carpick, D. Frank Ogletree and Miquel Salmeron) simplifies the process by using the following relation to determine the contact radius a {\displaystyle a} : a = a 0 ( β ) ( β + 1 − F / F c ( β ) 1 + β ) 2 3 {\displaystyle a=a_{0}(\beta )\left({\frac {\beta +{\sqrt {1-F/F_{c}(\beta )}}}{1+\beta }}\right)^{\frac {2}{3}}} where a 0 {\displaystyle a_{0}} is the contact radius at zero load, and β {\displaystyle \beta } is a transition parameter that is related to λ {\displaystyle \lambda } by λ ≈ − 0.924 ln ⁡ ( 1 − 1.02 β ) {\displaystyle \lambda \approx -0.924\ln(1-1.02\beta )} The case β = 1 {\displaystyle \beta =1} corresponds exactly to JKR theory while β = 0 {\displaystyle \beta =0} corresponds to DMT theory. For intermediate cases 0 < β < 1 {\displaystyle 0<\beta <1} the COS model corresponds closely to the Maugis–Dugdale solution for 0.1 < λ < 5 {\displaystyle 0.1<\lambda <5} .
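A minimal numeric sketch of the COS relation above; the parameter values below are illustrative assumptions, not values from the text.

```python
import numpy as np

# COS relation: a = a0 * ((beta + sqrt(1 - F/Fc)) / (1 + beta))^(2/3)
beta = 0.5        # transition parameter, 0 (DMT-like) .. 1 (JKR-like), assumed
a0 = 1.0          # contact radius at zero load (arbitrary units, assumed)
Fc = -1.0         # pull-off force (arbitrary units, assumed)

def cos_radius(F):
    return a0 * ((beta + np.sqrt(1.0 - F / Fc)) / (1.0 + beta)) ** (2.0 / 3.0)

lam = -0.924 * np.log(1.0 - 1.02 * beta)   # corresponding Maugis parameter
print(f"lambda ~ {lam:.3f}")
for F in [Fc, 0.0, 1.0, 5.0]:               # from pull-off to compressive loads
    print(f"F = {F:+.1f} -> a = {cos_radius(F):.4f}")
```

Note that at zero load the relation returns a = a0 by construction, and at the pull-off load F = Fc the contact radius remains finite for beta > 0, as in JKR theory.

=== Influence of contact shape === Even in the presence of perfectly smooth surfaces, geometry can come into play in the form of the macroscopic shape of the contacting region. When a rigid punch with a flat but oddly shaped face is carefully pulled off its soft counterpart, detachment occurs not instantaneously but rather detachment fronts start at pointed corners and travel inwards, until the final configuration is reached, which for macroscopically isotropic shapes is almost circular.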
The main parameter determining the adhesive strength of flat contacts turns out to be the maximum linear size of the contact. The process of detachment, as observed experimentally, can be seen in the film. == See also == == References == == External links == [1]: A MATLAB routine to solve the linear elastic contact mechanics problem entitled "An LCP solution of the linear elastic contact mechanics problem" is provided at the file exchange at MATLAB Central. [2]: Contact mechanics calculator. [3]: detailed calculations and formulae of JKR theory for two spheres. [5]: A Matlab code for Hertz contact analysis (includes line, point and elliptical cases). [6]: JKR, MD, and DMT models of adhesion (Matlab routines).
Wikipedia/Contact_mechanics
Topology is a branch of mathematics concerned with geometric properties preserved under continuous deformation (stretching without tearing or gluing). Topology may also refer to: == Mathematics == A topology is the collection of open sets used to define a topological space == Electronics == Topology (electronics), a configuration of electronic components == Computing == Network topology, configurations of computer networks Logical topology, the arrangement of devices on a computer network and how they communicate with one another == Geospatial data == Geospatial topology, the study or science of places with applications in earth science, geography, human geography, and geomorphology In geographic information systems and their data structures, topology and planar enforcement are the storing of a border line between two neighboring areas (and the border point between two connecting lines) only once. Thus, any rounding errors might move the border, but will not lead to gaps or overlaps between the areas. Also in cartography, a topological map is a greatly simplified map that preserves the mathematical topology while sacrificing scale and shape Topology is often confused with the geographic meaning of topography (originally the study of places). This confusion may be one reason topography has itself become conflated with terrain or relief, to the point that the terms are treated as essentially synonymous. == Biology == The specific orientation of transmembrane proteins In phylogenetics, the branching pattern of a phylogenetic tree == Music == Topology (musical ensemble), an Australian post-classical quintet Topology (album), 1981 album by Joe McPhee == Other == Topology (journal), a mathematical journal, with an emphasis on subject areas related to topology and geometry Spatial effects that cannot be described by topography, i.e., social, economic, spatial, or phenomenological interactions
Wikipedia/Topology_(disambiguation)
In the field of solid mechanics, torsion is the twisting of an object due to an applied torque. Torsion can be defined as strain or angular deformation, and is measured by the angle a chosen section is rotated from its equilibrium position. The resulting stress (torsional shear stress) is expressed in either the pascal (Pa), an SI unit for newtons per square metre, or in pounds per square inch (psi), while torque is expressed in newton metres (N·m) or foot-pound force (ft·lbf). In sections perpendicular to the torque axis, the resultant shear stress in this section is perpendicular to the radius. In non-circular cross-sections, twisting is accompanied by a distortion called warping, in which transverse sections do not remain plane. For shafts of uniform cross-section unrestrained against warping, the torsion-related physical properties are expressed as: T = J T r τ = J T ℓ G φ {\displaystyle T={\frac {J_{\text{T}}}{r}}\tau ={\frac {J_{\text{T}}}{\ell }}G\varphi } where: T is the applied torque or moment of torsion in N·m. τ {\displaystyle \tau } (tau) is the maximum shear stress at the outer surface. JT is the torsion constant for the section. For circular rods, and tubes with constant wall thickness, it is equal to the polar moment of inertia of the section, but for other shapes, or split sections, it can be much less. For more accuracy, finite element analysis (FEA) is the best method. Other calculation methods include membrane analogy and shear flow approximation. r is the perpendicular distance between the rotational axis and the farthest point in the section (at the outer surface). ℓ is the length of the object to or over which the torque is being applied. φ (phi) is the angle of twist in radians. G is the shear modulus, also called the modulus of rigidity, and is usually given in gigapascals (GPa), lbf/in2 (psi), or lbf/ft2, or in ISO units N/mm2. The product JTG is called the torsional rigidity wT. == Properties == The shear stress at a point within a shaft is: τ φ z ( r ) = T r J T {\displaystyle \tau _{\varphi _{z}}(r)={Tr \over J_{\text{T}}}} Note that the highest shear stress occurs on the surface of the shaft, where the radius is maximum. High stresses at the surface may be compounded by stress concentrations such as rough spots. Thus, shafts for use in high torsion are polished to a fine surface finish to reduce the maximum stress in the shaft and increase their service life. The angle of twist can be found by using: φ = T ℓ G J T . {\displaystyle \varphi ={\frac {T\ell }{GJ_{\text{T}}}}.} == Sample calculation == Calculation of the steam turbine shaft radius for a turboset: Assumptions: Power carried by the shaft is 1000 MW; this is typical for a large nuclear power plant. Yield stress of the steel used to make the shaft (τyield) is: 250 × 106 N/m2. Electricity has a frequency of 50 Hz; this is the typical frequency in Europe. In North America, the frequency is 60 Hz. The angular frequency can be calculated with the following formula: ω = 2 π f {\displaystyle \omega =2\pi f} The torque carried by the shaft is related to the power by the following equation: P = T ω {\displaystyle P=T\omega } The angular frequency is therefore 314.16 rad/s and the torque 3.1831 × 106 N·m.
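These steps can be checked numerically. A minimal Python sketch reproducing the numbers of this sample calculation, using the diameter formula that is derived just below:

```python
import math

# Values stated in the sample calculation above
P = 1000e6          # shaft power, W
f = 50.0            # electrical frequency, Hz
tau_yield = 250e6   # yield (shear) stress of the shaft steel, N/m^2

omega = 2 * math.pi * f    # angular frequency, rad/s      -> 314.16
T = P / omega              # torque T = P / omega, N*m     -> 3.1831e6

# D = (16 T_max / (pi * tau_max))^(1/3), from substituting the torsion
# constant of a solid circular shaft (derived below in the text).
D = (16 * T / (math.pi * tau_yield)) ** (1 / 3)              # -> ~0.40 m
# Factor of safety 5: maximum stress = yield stress / 5,
# which is equivalent to multiplying the torque by 5 here.
D_safe = (16 * (5 * T) / (math.pi * tau_yield)) ** (1 / 3)   # -> ~0.69 m

print(f"omega = {omega:.2f} rad/s, T = {T:.4e} N*m")
print(f"D = {D:.2f} m, with factor of safety 5: {D_safe:.2f} m")
```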
The maximal torque is: T max = τ max J zz r {\displaystyle T_{\max }={\frac {{\tau }_{\max }J_{\text{zz}}}{r}}} After substitution of the torsion constant, the following expression is obtained: D = ( 16 T max π τ max ) 1 / 3 {\displaystyle D=\left({\frac {16T_{\max }}{\pi {\tau }_{\max }}}\right)^{1/3}} The diameter is 40 cm. If one adds a factor of safety of 5 and re-calculates the radius with the maximum stress equal to the yield stress/5, the result is a diameter of 69 cm, the approximate size of a turboset shaft in a nuclear power plant. == Failure mode == The shear stress in the shaft may be resolved into principal stresses via Mohr's circle. If the shaft is loaded only in torsion, then one of the principal stresses will be in tension and the other in compression. These stresses are oriented at a 45-degree helical angle around the shaft. If the shaft is made of brittle material, then the shaft will fail by a crack initiating at the surface and propagating through to the core of the shaft, fracturing in a 45-degree angle helical shape. This is often demonstrated by twisting a piece of blackboard chalk between one's fingers. In the case of thin hollow shafts, a twisting buckling mode can result from excessive torsional load, with wrinkles forming at 45° to the shaft axis. == See also == List of area moments of inertia Saint-Venant's theorem Second moment of area Structural rigidity Torque tester Torsion siege engine Torsion spring or -bar Torsional vibration == References == == External links == The dictionary definition of torsion at Wiktionary Solid Mechanics at Wikibooks
Wikipedia/Torsion_(mechanics)
In mathematics, equivariant topology is the study of topological spaces that possess certain symmetries. In studying topological spaces, one often considers continuous maps f : X → Y {\displaystyle f:X\to Y} , and while equivariant topology also considers such maps, there is the additional constraint that each map "respects symmetry" in both its domain and target space. The notion of symmetry is usually captured by considering a group action of a group G {\displaystyle G} on X {\displaystyle X} and Y {\displaystyle Y} and requiring that f {\displaystyle f} is equivariant under this action, so that f ( g ⋅ x ) = g ⋅ f ( x ) {\displaystyle f(g\cdot x)=g\cdot f(x)} for all x ∈ X {\displaystyle x\in X} , a property usually denoted by f : X → G Y {\displaystyle f:X\to _{G}Y} . Heuristically speaking, standard topology views two spaces as equivalent "up to deformation," while equivariant topology considers spaces equivalent up to deformation so long as it pays attention to any symmetry possessed by both spaces. A famous theorem of equivariant topology is the Borsuk–Ulam theorem, which asserts that every Z 2 {\displaystyle \mathbf {Z} _{2}} -equivariant map f : S n → R n {\displaystyle f:S^{n}\to \mathbb {R} ^{n}} necessarily vanishes at some point. == Induced G-bundles == An important construction used in equivariant cohomology and other applications includes a naturally occurring group bundle (see principal bundle for details). Let us first consider the case where G {\displaystyle G} acts freely on X {\displaystyle X} . Then, given a G {\displaystyle G} -equivariant map f : X → G Y {\displaystyle f:X\to _{G}Y} , we obtain sections s f : X / G → ( X × Y ) / G {\displaystyle s_{f}:X/G\to (X\times Y)/G} given by [ x ] ↦ [ x , f ( x ) ] {\displaystyle [x]\mapsto [x,f(x)]} , where X × Y {\displaystyle X\times Y} gets the diagonal action g ( x , y ) = ( g x , g y ) {\displaystyle g(x,y)=(gx,gy)} , and the bundle is p : ( X × Y ) / G → X / G {\displaystyle p:(X\times Y)/G\to X/G} , with fiber Y {\displaystyle Y} and projection given by p ( [ x , y ] ) = [ x ] {\displaystyle p([x,y])=[x]} . Often, the total space is written X × G Y {\displaystyle X\times _{G}Y} . In general, however, the assignment s f {\displaystyle s_{f}} does not map to ( X × Y ) / G {\displaystyle (X\times Y)/G} . Since f {\displaystyle f} is equivariant, if g ∈ G x {\displaystyle g\in G_{x}} (the isotropy subgroup), then by equivariance, we have that g ⋅ f ( x ) = f ( g ⋅ x ) = f ( x ) {\displaystyle g\cdot f(x)=f(g\cdot x)=f(x)} , so in fact s f {\displaystyle s_{f}} will map to the collection of { [ x , y ] ∈ ( X × Y ) / G ∣ G x ⊂ G y } {\displaystyle \{[x,y]\in (X\times Y)/G\mid G_{x}\subset G_{y}\}} . In this case, one can replace the bundle by a homotopy quotient where G {\displaystyle G} acts freely and is bundle homotopic to the induced bundle on X {\displaystyle X} by f {\displaystyle f} . == Applications to discrete geometry == In the same way that one can deduce the ham sandwich theorem from the Borsuk–Ulam theorem, one can find many applications of equivariant topology to problems of discrete geometry. This is accomplished by using the configuration-space test-map paradigm: Given a geometric problem P {\displaystyle P} , we define the configuration space, X {\displaystyle X} , which parametrizes all associated solutions to the problem (such as points, lines, or arcs).
Additionally, we consider a test space Z ⊂ V {\displaystyle Z\subset V} and a map f : X → V {\displaystyle f:X\to V} such that p ∈ X {\displaystyle p\in X} is a solution to the problem if and only if f ( p ) ∈ Z {\displaystyle f(p)\in Z} . Finally, it is usual to consider natural symmetries in a discrete problem by some group G {\displaystyle G} that acts on X {\displaystyle X} and V {\displaystyle V} so that f {\displaystyle f} is equivariant under these actions. The problem is solved if we can show the nonexistence of an equivariant map f : X → V ∖ Z {\displaystyle f:X\to V\setminus Z} . Obstructions to the existence of such maps are often formulated algebraically from the topological data of X {\displaystyle X} and V ∖ Z {\displaystyle V\setminus Z} . An archetypal example of such an obstruction can be derived by taking V {\displaystyle V} to be a vector space and Z = { 0 } {\displaystyle Z=\{0\}} . In this case, a nonvanishing map would also induce a nonvanishing section s f : x ↦ [ x , f ( x ) ] {\displaystyle s_{f}:x\mapsto [x,f(x)]} from the discussion above, so ω n ( X × G Y ) {\displaystyle \omega _{n}(X\times _{G}Y)} , the top Stiefel–Whitney class, would need to vanish. == Examples == The identity map i : X → X {\displaystyle i:X\to X} will always be equivariant. If we let Z 2 {\displaystyle \mathbf {Z} _{2}} act antipodally on the unit circle, then z ↦ z 3 {\displaystyle z\mapsto z^{3}} is equivariant, since it is an odd function. Any map h : X → X / G {\displaystyle h:X\to X/G} is equivariant when G {\displaystyle G} acts trivially on the quotient, since h ( g ⋅ x ) = h ( x ) {\displaystyle h(g\cdot x)=h(x)} for all x {\displaystyle x} . == See also == Equivariant cohomology Equivariant stable homotopy theory G-spectrum == References ==
Wikipedia/Equivariant_topology
Topography is the study of the forms and features of land surfaces. The topography of an area may refer to the landforms and features themselves, or a description or depiction in maps. Topography is a field of geoscience and planetary science and is concerned with local detail in general, including not only relief, but also natural, artificial, and cultural features such as roads, land boundaries, and buildings. In the United States, topography often means specifically relief, even though the USGS topographic maps record not just elevation contours, but also roads, populated places, structures, land boundaries, and so on. Topography in a narrow sense involves the recording of relief or terrain, the three-dimensional quality of the surface, and the identification of specific landforms; this is also known as geomorphometry. In modern usage, this involves generation of elevation data in digital form (DEM). It is often considered to include the graphic representation of the landform on a map by a variety of cartographic relief depiction techniques, including contour lines, hypsometric tints, and relief shading. == Etymology == The term topography originated in ancient Greece and continued in ancient Rome, as the detailed description of a place. The word comes from the Greek τόπος (topos, "place") and -γραφία (-graphia, "writing"). In classical literature this refers to writing about a place or places, what is now largely called 'local history'. In Britain and in Europe in general, the word topography is still sometimes used in its original sense. Detailed military surveys in Britain (beginning in the late eighteenth century) were called Ordnance Surveys, and this term was used into the 20th century as generic for topographic surveys and maps. The earliest scientific surveys in France were the Cassini maps, named after the family who produced them over four generations. The term "topographic surveys" appears to be American in origin. The earliest detailed surveys in the United States were made by the "Topographical Bureau of the Army", formed during the War of 1812, which became the Corps of Topographical Engineers in 1838. After the work of national mapping was assumed by the United States Geological Survey in 1878, the term topographical remained as a general term for detailed surveys and mapping programs, and has been adopted by most other nations as standard. In the 20th century, the term topography started to be used for surface description in other fields where mapping in a broader sense is used, particularly in medical fields such as neurology. == Objectives == An objective of topography is to determine the position of any feature or more generally any point in terms of both a horizontal coordinate system (such as latitude and longitude) and altitude. Identifying (naming) features and recognizing typical landform patterns are also part of the field. A topographic study may be made for a variety of reasons: military planning and geological exploration have been primary motivators to start survey programs, but detailed information about terrain and surface features is essential for the planning and construction of any major civil engineering, public works, or reclamation projects. == Techniques == There are a variety of approaches to studying topography. Which method(s) to use depends on the scale and size of the area under study, its accessibility, and the quality of existing surveys.
=== Field survey === Surveying helps accurately determine the terrestrial or three-dimensional position of points and the distances and angles between them using leveling instruments such as theodolites, dumpy levels and clinometers. GPS and other global navigation satellite systems (GNSS) are also used. Work on one of the first topographic maps was begun in France by Giovanni Domenico Cassini, the great Italian astronomer. Even though remote sensing has greatly sped up the process of gathering information, and has allowed greater accuracy control over long distances, the direct survey still provides the basic control points and framework for all topographic work, whether manual or GIS-based. In areas where there has been an extensive direct survey and mapping program (most of Europe and the Continental U.S., for example), the compiled data forms the basis of basic digital elevation datasets such as USGS DEM data. This data must often be "cleaned" to eliminate discrepancies between surveys, but it still forms a valuable set of information for large-scale analysis. The original American topographic surveys (or the British "Ordnance" surveys) involved not only recording of relief, but identification of landmark features and vegetative land cover. === Remote sensing === Remote sensing is a general term for geodata collection at a distance from the subject area. ==== Passive sensor methodologies ==== Besides their role in photogrammetry, aerial and satellite imagery can be used to identify and delineate terrain features and more general land-cover features. They have become an increasingly important part of geovisualization, whether in maps or GIS systems. False-color and non-visible spectra imaging can also help determine the lie of the land by delineating vegetation and other land-use information more clearly. Images can be taken in visible colours and in other parts of the spectrum. ==== Photogrammetry ==== Photogrammetry is a measurement technique in which the three-dimensional co-ordinates of points on an object are determined from measurements made in two or more photographic images taken from different positions, usually from different passes of an aerial photography flight. In this technique, the common points are identified on each image. A line of sight (or ray) can be constructed from each camera location to the point on the object. It is the intersection of these rays (triangulation) that determines the relative three-dimensional position of the point. Known control points can be used to give these relative positions absolute values. More sophisticated algorithms can exploit other information on the scene known a priori (for example, symmetries that in certain cases allow the reconstruction of three-dimensional co-ordinates from only one camera position). ==== Active sensor methodologies ==== Satellite RADAR mapping is one of the major techniques of generating Digital Elevation Models (see below). Similar techniques are applied in bathymetric surveys using sonar to determine the terrain of the ocean floor. In recent years, LIDAR (LIght Detection And Ranging), a remote sensing technique that uses a laser instead of radio waves, has increasingly been employed for complex mapping needs such as charting canopies and monitoring glaciers. == Forms of topographic data == Terrain is commonly modelled either using vector (triangulated irregular network or TIN) or gridded (raster image) mathematical models.
In most applications in the environmental sciences, the land surface is represented and modelled using gridded models. In civil engineering and the entertainment business, most representations of the land surface employ some variant of TIN models. In geostatistics, the land surface is commonly modelled as a combination of two signals – the smooth (spatially correlated) and the rough (noise) signal. In practice, surveyors first sample heights in an area, then use these to produce a Digital Land Surface Model in the form of a TIN. The DLSM can then be used to visualize terrain, drape remote sensing images, quantify ecological properties of a surface or extract land surface objects. Contour data or any other sampled elevation datasets are not a DLSM. A DLSM implies that elevation is available continuously at each location in the study area, i.e. that the map represents a complete surface. Digital Land Surface Models should not be confused with Digital Surface Models, which can be surfaces of the canopy, buildings and similar objects. For example, in the case of surface models produced using lidar technology, one can have several surfaces – starting from the top of the canopy to the actual solid earth. The difference between the two surface models can then be used to derive volumetric measures (height of trees etc.). === Raw survey data === Topographic survey information is historically based upon the notes of surveyors. They may derive naming and cultural information from other local sources (for example, boundary delineation may be derived from local cadastral mapping). While of historical interest, these field notes inherently include errors and contradictions that later stages in map production resolve. === Remote sensing data === As with field notes, remote sensing data (aerial and satellite photography, for example) is raw and uninterpreted. It may contain holes (due to cloud cover for example) or inconsistencies (due to the timing of specific image captures). Most modern topographic mapping includes a large component of remotely sensed data in its compilation process. === Topographic mapping === In its contemporary definition, topographic mapping shows relief. In the United States, USGS topographic maps show relief using contour lines. The USGS calls maps based on topographic surveys, but without contours, "planimetric maps." These maps show not only the contours, but also any significant streams or other bodies of water, forest cover, built-up areas or individual buildings (depending on scale), and other features and points of interest. While not officially "topographic" maps, the national surveys of other nations share many of the same features, and so they are often called "topographic maps." Existing topographic survey maps, because of their comprehensive and encyclopedic coverage, form the basis for much derived topographic work. Digital Elevation Models, for example, have often been created not from new remote sensing data but from existing paper topographic maps. Many government and private publishers use the artwork (especially the contour lines) from existing topographic map sheets as the basis for their own specialized or updated topographic maps. Topographic mapping should not be confused with geological mapping. The latter is concerned with underlying structures and processes beneath the surface, rather than with identifiable surface features.
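Returning to the gridded (raster) models discussed at the start of this section: in practice such a model is simply a two-dimensional array of elevations from which derived quantities can be computed. A minimal Python sketch, using a synthetic elevation array and an assumed grid resolution purely for illustration:

```python
import numpy as np

# A gridded land-surface model: a 2D array of elevations. The array below is
# synthetic; a real DEM would be loaded from survey or remote sensing data.
cell = 30.0                                     # grid resolution in metres (assumed)
x, y = np.meshgrid(np.arange(50), np.arange(50))
dem = 100.0 + 0.5 * x + 5.0 * np.sin(y / 5.0)   # synthetic elevations, metres

# Finite-difference gradients along the two grid directions, then slope
dz_dy, dz_dx = np.gradient(dem, cell)
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

print(f"elevation range: {dem.min():.1f}-{dem.max():.1f} m")
print(f"mean slope     : {slope_deg.mean():.2f} degrees")
```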
=== Digital elevation modeling === The digital elevation model (DEM) is a raster-based digital dataset of the topography (hypsometry and/or bathymetry) of all or part of the Earth (or a telluric planet). The pixels of the dataset are each assigned an elevation value, and a header portion of the dataset defines the area of coverage, the units each pixel covers, and the units of elevation (and the zero-point). DEMs may be derived from existing paper maps and survey data, or they may be generated from new satellite or other remotely sensed radar or sonar data. === Topological modeling === A geographic information system (GIS) can recognize and analyze the spatial relationships that exist within digitally stored spatial data. These topological relationships allow complex spatial modelling and analysis to be performed. Topological relationships between geometric entities traditionally include adjacency (what adjoins what), containment (what encloses what), and proximity (how close something is to something else). Such models can be used, for example, to reconstitute a view of the ground in synthesized images, to determine a trajectory for overflight of the ground, to calculate surface areas or volumes, or to trace topographic profiles. == Topography in other fields == Topography has been applied in various scientific fields. In neuroscience, the neuroimaging discipline uses techniques such as EEG topography for brain mapping. In ophthalmology, corneal topography is used as a technique for mapping the surface curvature of the cornea. In tissue engineering, atomic force microscopy is used to map nanotopography. In human anatomy, topography is superficial human anatomy. In mathematics, the concept of topography is used to indicate the patterns or general organization of features on a map or as a term referring to the pattern in which variables (or their values) are distributed in a space. == Topographers == Topographers are experts in topography. They study and describe the surface features of a place or region. == See also == Cartography Digital elevation model Fall line (topography) Geomorphology Global Relief Model Hypsography Marine topography Sea-surface topography Topographic map Orography == References ==
Wikipedia/Topography
The circuit topology of a folded linear polymer refers to the arrangement of its intra-molecular contacts. Examples of linear polymers with intra-molecular contacts are nucleic acids and proteins. Proteins fold via the formation of contacts of various natures, including hydrogen bonds, disulfide bonds, and beta-beta interactions. RNA molecules fold by forming hydrogen bonds between nucleotides, forming nested or non-nested structures. Contacts in the genome are established via protein bridges including CTCF and cohesins and are measured by technologies including Hi-C. Circuit topology categorises the topological arrangement of these physical contacts, that are referred to as hard contacts (or h-contacts). Furthermore, chains can fold via knotting (or the formation of "soft" contacts (s-contacts)). Circuit topology uses a similar language to categorise both "soft" and "hard" contacts, and provides a full description of a folded linear chain. In this framework, a "circuit" refers to a segment of the chain where each contact site within the segment forms connections with other contact sites within the same segment, and thus is not left unpaired. A folded chain can thus be studied based on its constituting circuits. A simple example of a folded chain is a chain with two hard contacts. For a chain with two binary contacts, three arrangements are available: parallel (P), series (S), and crossed (X). For a chain with n contacts, the topology can be described by an n by n matrix in which each element illustrates the relation between a pair of contacts and may take one of the three states, P, S and X. Multivalent contacts can also be categorised in full or via decomposition into several binary contacts. Similarly, circuit topology allows for the classification of the pairwise arrangements of chain crossings and tangles, thus providing a complete 3D description of folded chains. Furthermore, one can apply circuit topology operations to soft and hard contacts to generate complex folds, using a bottom-up engineering approach. Both knot theory and circuit topology aim to describe chain entanglement, making it important to understand their relationship. Knot theory considers any entangled chain as a connected sum of prime knots, which are themselves undecomposable. Circuit topology splits any entangled chains (including prime knots) into basic structural units called soft contacts, and lists simple rules on how soft contacts can be put together. An advantage of circuit topology is that it can be applied to open linear chains with intra-chain interactions, so-called hard contacts. This enabled topological analysis of proteins and genomes, which are often described as "unknot" in knot theory. Finally, circuit topology enables studying interactions between hard contacts and entanglements and can identify slip knots, while knot theory typically overlooks hard contacts and split knots. Thus, circuit topology serves as a complementary approach to knot theory. Circuit topology has implications for folding kinetics and molecular evolution and has been applied to engineer polymers including molecular origami. Circuit topology along with contact order and size are determinants of the folding rate of linear polymers. The approach can also be used for medical applications including the prediction of pathogenicity of mutations. == Further reading == Scalvini, Barbara; Sheikhhassani, Vahid; Mashaghi, Alireza (2021). "Topological principles of protein folding". Physical Chemistry Chemical Physics. 
23 (37): 21316–21328. Bibcode:2021PCCP...2321316S. doi:10.1039/D1CP03390E. hdl:1887/3277889. PMID 34545868. S2CID 237583577. Golovnev, Anatoly; Mashaghi, Alireza (September 2020). "Generalized Circuit Topology of Folded Linear Chains". iScience. 23 (9): 101492. Bibcode:2020iSci...23j1492G. doi:10.1016/j.isci.2020.101492. PMC 7481252. PMID 32896769. Heidari, Maziar; Schiessel, Helmut; Mashaghi, Alireza (24 June 2020). "Circuit Topology Analysis of Polymer Folding Reactions". ACS Central Science. 6 (6): 839–847. doi:10.1021/acscentsci.0c00308. PMC 7318069. PMID 32607431. == References == == See also == Molecular topology
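The parallel/series/crossed classification of two binary contacts described above can be computed directly from the positions of the contact sites along the chain. A minimal Python sketch, assuming the standard circuit topology convention (not spelled out in the text) that nested contacts are parallel (P), sequential disjoint contacts are in series (S), and interleaved contacts are crossed (X):

```python
def classify(c1, c2):
    """Classify two binary contacts on a chain as P (parallel/nested),
    S (series/disjoint) or X (crossed/interleaved).

    Each contact is a pair (i, j) of contact-site positions along the chain;
    the four sites are assumed to be distinct.
    """
    (i1, j1), (i2, j2) = sorted([tuple(sorted(c1)), tuple(sorted(c2))])
    if j1 < i2:        # [i1, j1] lies entirely before [i2, j2]
        return "S"
    if j2 < j1:        # [i2, j2] is nested inside [i1, j1]
        return "P"
    return "X"         # i1 < i2 < j1 < j2: the contacts interleave

print(classify((10, 20), (30, 40)))  # S: disjoint, sequential contacts
print(classify((10, 40), (20, 30)))  # P: one contact nested in the other
print(classify((10, 30), (20, 40)))  # X: crossed contacts
```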
Wikipedia/Circuit_topology
The Journal of Topology is a peer-reviewed scientific journal which publishes papers of high quality and significance in topology, geometry, and adjacent areas of mathematics. It was established in 2008, when the editorial board of Topology resigned due to the increasing costs of Elsevier's subscriptions. The journal is owned and managed by the London Mathematical Society and produced, distributed, sold and marketed by John Wiley & Sons. It appears quarterly with articles published individually online prior to appearing in a printed issue. == Editorial board == Arthur Bartels (University of Münster) Andrew Blumberg (University of Texas at Austin) Jeffrey Brock (Yale University) Simon Donaldson (Imperial College London) Cornelia Druţu Badea (University of Oxford) Mark Gross (University of Cambridge) Lars Hesselholt (University of Copenhagen) Misha Kapovich (UC Davis) Frances Kirwan (University of Oxford) Marc Lackenby (University of Oxford) Oscar Randal-Williams (University of Cambridge) Jacob Rasmussen (University of Cambridge) Ivan Smith (University of Cambridge) Constantin Teleman (University of California, Berkeley) == Abstracting and indexing == The journal is abstracted and indexed in Mathematical Reviews, Science Citation Index, and Zentralblatt MATH. == References == == External links == Official website
Wikipedia/Journal_of_Topology
In computer science and operations research, approximation algorithms are efficient algorithms that find approximate solutions to optimization problems (in particular NP-hard problems) with provable guarantees on the distance of the returned solution to the optimal one. Approximation algorithms naturally arise in the field of theoretical computer science as a consequence of the widely believed P ≠ NP conjecture. Under this conjecture, a wide class of optimization problems cannot be solved exactly in polynomial time. The field of approximation algorithms, therefore, tries to understand how closely it is possible to approximate optimal solutions to such problems in polynomial time. In an overwhelming majority of the cases, the guarantee of such algorithms is a multiplicative one expressed as an approximation ratio or approximation factor, i.e., the optimal solution is always guaranteed to be within a (predetermined) multiplicative factor of the returned solution. However, there are also many approximation algorithms that provide an additive guarantee on the quality of the returned solution. A notable example of an approximation algorithm that provides both is the classic approximation algorithm of Lenstra, Shmoys and Tardos for scheduling on unrelated parallel machines. The design and analysis of approximation algorithms crucially involves a mathematical proof certifying the quality of the returned solutions in the worst case. This distinguishes them from heuristics such as annealing or genetic algorithms, which find reasonably good solutions on some inputs, but provide no clear indication at the outset on when they may succeed or fail. There is widespread interest in theoretical computer science to better understand the limits to which we can approximate certain famous optimization problems. For example, one of the long-standing open questions in computer science is to determine whether there is an algorithm that outperforms the 2-approximation for the Steiner Forest problem by Agrawal et al. The desire to understand hard optimization problems from the perspective of approximability is motivated by the discovery of surprising mathematical connections and broadly applicable techniques to design algorithms for hard optimization problems. One well-known example of the former is the Goemans–Williamson algorithm for maximum cut, which solves a graph theoretic problem using high dimensional geometry. == Introduction == A simple example of an approximation algorithm is one for the minimum vertex cover problem, where the goal is to choose the smallest set of vertices such that every edge in the input graph contains at least one chosen vertex. One way to find a vertex cover is to repeat the following process: find an uncovered edge, add both its endpoints to the cover, and remove all edges incident to either vertex from the graph. As any vertex cover of the input graph must use a distinct vertex to cover each edge that was considered in the process (since it forms a matching), the vertex cover produced, therefore, is at most twice as large as the optimal one. In other words, this is a constant-factor approximation algorithm with an approximation factor of 2. Under the unique games conjecture, this factor is even the best possible one.
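A minimal sketch of the greedy procedure just described; the chosen endpoints form a matching, which is what yields the factor-2 guarantee. The example graph is an illustrative assumption.

```python
# Maximal-matching 2-approximation for vertex cover: repeatedly pick an
# uncovered edge and add both of its endpoints to the cover.
def vertex_cover_2approx(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # this edge is still uncovered
            cover.add(u)
            cover.add(v)
    return cover

# Example: a path 0-1-2-3. An optimal cover is {1, 2} (size 2); the greedy
# procedure may return all four vertices, exactly twice the optimum.
edges = [(0, 1), (1, 2), (2, 3)]
print(vertex_cover_2approx(edges))   # e.g. {0, 1, 2, 3}
```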
NP-hard problems vary greatly in their approximability; some, such as the knapsack problem, can be approximated within a multiplicative factor 1 + ϵ {\displaystyle 1+\epsilon } , for any fixed ϵ > 0 {\displaystyle \epsilon >0} , and therefore produce solutions arbitrarily close to the optimum (such a family of approximation algorithms is called a polynomial-time approximation scheme or PTAS). Others are impossible to approximate within any constant, or even polynomial, factor unless P = NP, as in the case of the maximum clique problem. Therefore, an important benefit of studying approximation algorithms is a fine-grained classification of the difficulty of various NP-hard problems beyond the one afforded by the theory of NP-completeness. In other words, although NP-complete problems may be equivalent (under polynomial-time reductions) to each other from the perspective of exact solutions, the corresponding optimization problems behave very differently from the perspective of approximate solutions. == Algorithm design techniques == By now there are several established techniques to design approximation algorithms. These include the following. Greedy algorithm Local search Enumeration and dynamic programming (which is also often used for parameterized approximations) Solving a convex programming relaxation to get a fractional solution. Then converting this fractional solution into a feasible solution by some appropriate rounding. The popular relaxations include the following. Linear programming relaxations Semidefinite programming relaxations Primal-dual methods Dual fitting Embedding the problem in some metric and then solving the problem on the metric. This is also known as metric embedding. Random sampling and the use of randomness in general in conjunction with the methods above. == A posteriori guarantees == While approximation algorithms always provide an a priori worst case guarantee (be it additive or multiplicative), in some cases they also provide an a posteriori guarantee that is often much better. This is often the case for algorithms that work by solving a convex relaxation of the optimization problem on the given input. For example, there is a different approximation algorithm for minimum vertex cover that solves a linear programming relaxation to find a vertex cover that is at most twice the value of the relaxation. Since the value of the relaxation is never larger than the size of the optimal vertex cover, this yields another 2-approximation algorithm. While this is similar to the a priori guarantee of the previous approximation algorithm, the guarantee of the latter can be much better (indeed when the value of the LP relaxation is far from the size of the optimal vertex cover).
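A minimal sketch of this LP-based vertex cover algorithm; the use of scipy's linprog for the relaxation and the example graph are illustrative assumptions. Any vertex whose fractional value is at least 1/2 is taken into the cover, and since every edge constraint x_u + x_v >= 1 forces at least one endpoint to reach 1/2, the result is a valid cover of size at most twice the LP value.

```python
import numpy as np
from scipy.optimize import linprog

# LP-rounding 2-approximation for vertex cover:
# minimize sum(x_v) subject to x_u + x_v >= 1 for every edge, 0 <= x_v <= 1,
# then take every vertex with x_v >= 1/2.
def vertex_cover_lp(n, edges):
    c = np.ones(n)                       # objective: minimise sum of x_v
    A = np.zeros((len(edges), n))
    for k, (u, v) in enumerate(edges):   # x_u + x_v >= 1  ->  -x_u - x_v <= -1
        A[k, u] = A[k, v] = -1.0
    b = -np.ones(len(edges))
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, 1)] * n)
    cover = {v for v in range(n) if res.x[v] >= 0.5}
    return cover, res.fun                # res.fun lower-bounds the optimum

edges = [(0, 1), (0, 2), (0, 3), (1, 2)]   # small example graph (assumed)
cover, lp_value = vertex_cover_lp(4, edges)
print(cover, lp_value)   # cover size is at most 2 * lp_value <= 2 * OPT
```

== Hardness of approximation == Approximation algorithms as a research area is closely related to and informed by inapproximability theory where the non-existence of efficient algorithms with certain approximation ratios is proved (conditioned on widely believed hypotheses such as the P ≠ NP conjecture) by means of reductions. In the case of the metric traveling salesman problem, the best known inapproximability result rules out algorithms with an approximation ratio less than 123/122 ≈ 1.008196 unless P = NP (Karpinski, Lampis, Schmied). Coupled with the knowledge of the existence of Christofides' 1.5 approximation algorithm, this tells us that the threshold of approximability for metric traveling salesman (if it exists) is somewhere between 123/122 and 1.5.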
While inapproximability results have been proved since the 1970s, such results were obtained by ad hoc means and no systematic understanding was available at the time. It is only since the 1990 result of Feige, Goldwasser, Lovász, Safra and Szegedy on the inapproximability of Independent Set and the famous PCP theorem, that modern tools for proving inapproximability results were uncovered. The PCP theorem, for example, shows that Johnson's 1974 approximation algorithms for Max SAT, set cover, independent set and coloring all achieve the optimal approximation ratio, assuming P ≠ NP. == Practicality == Not all approximation algorithms are suitable for direct practical applications. Some involve solving non-trivial linear programming/semidefinite relaxations (which may themselves invoke the ellipsoid algorithm), complex data structures, or sophisticated algorithmic techniques, leading to difficult implementation issues or improved running time performance (over exact algorithms) only on impractically large inputs. Implementation and running time issues aside, the guarantees provided by approximation algorithms may themselves not be strong enough to justify their consideration in practice. Despite their inability to be used "out of the box" in practical applications, the ideas and insights behind the design of such algorithms can often be incorporated in other ways in practical algorithms. In this way, the study of even very expensive algorithms is not a completely theoretical pursuit as they can yield valuable insights. In other cases, even if the initial results are of purely theoretical interest, over time, with an improved understanding, the algorithms may be refined to become more practical. One such example is the initial PTAS for Euclidean TSP by Sanjeev Arora (and independently by Joseph Mitchell) which had a prohibitive running time of n O ( 1 / ϵ ) {\displaystyle n^{O(1/\epsilon )}} for a 1 + ϵ {\displaystyle 1+\epsilon } approximation. Yet, within a year these ideas were incorporated into a near-linear time O ( n log ⁡ n ) {\displaystyle O(n\log n)} algorithm for any constant ϵ > 0 {\displaystyle \epsilon >0} . == Structure of approximation algorithms == Given an optimization problem Π ⊆ I × S {\displaystyle \Pi \subseteq I\times S} , where I {\displaystyle I} is the set of inputs and S {\displaystyle S} the set of solutions, we can define the cost function c : S → R + {\displaystyle c:S\rightarrow \mathbb {R} ^{+}} and, for each input, the set of feasible solutions ∀ i ∈ I , S ( i ) = { s ∈ S : i Π s } {\displaystyle \forall i\in I,\;S(i)=\{s\in S:i\,\Pi \,s\}} The goal is to find the best solution s ∗ {\displaystyle s^{*}} for a maximization or a minimization problem: s ∗ ∈ S ( i ) {\displaystyle s^{*}\in S(i)} , c ( s ∗ ) = min / max c ( S ( i ) ) {\displaystyle c(s^{*})=\min /\max \ c(S(i))} Given a feasible solution s ∈ S ( i ) {\displaystyle s\in S(i)} , with s ≠ s ∗ {\displaystyle s\neq s^{*}} , we want a guarantee on the quality of the solution, i.e., a bound on its approximation factor. Specifically, having A Π ( i ) ∈ S ( i ) {\displaystyle A_{\Pi }(i)\in S(i)} , the algorithm has an approximation factor (or approximation ratio) of ρ ( n ) {\displaystyle \rho (n)} if ∀ i ∈ I s . t . 
| i | = n {\displaystyle \forall i\in I\ s.t.|i|=n} , we have: for a minimization problem: c ( A Π ( i ) ) c ( s ∗ ( i ) ) ≤ ρ ( n ) {\displaystyle {\frac {c(A_{\Pi }(i))}{c(s^{*}(i))}}\leq \rho (n)} , which means the cost of the solution returned by the algorithm, divided by the optimal cost, is at most ρ ( n ) {\displaystyle \rho (n)} ; for a maximization problem: c ( s ∗ ( i ) ) c ( A Π ( i ) ) ≤ ρ ( n ) {\displaystyle {\frac {c(s^{*}(i))}{c(A_{\Pi }(i))}}\leq \rho (n)} , which means the optimal value, divided by the value of the solution returned by the algorithm, is at most ρ ( n ) {\displaystyle \rho (n)} . An approximation ratio can be proven tight (a tight approximation) by demonstrating that there exist instances where the algorithm performs exactly at the approximation limit. In this case, it is enough to construct an input instance designed to force the algorithm into a worst-case scenario. == Performance guarantees == For some approximation algorithms it is possible to prove certain properties about the approximation of the optimum result. For example, a ρ-approximation algorithm A is defined to be an algorithm for which it has been proven that the value/cost, f(x), of the approximate solution A(x) to an instance x will not be more (or less, depending on the situation) than a factor ρ times the value, OPT, of an optimum solution. { O P T ≤ f ( x ) ≤ ρ O P T , if ρ > 1 ; ρ O P T ≤ f ( x ) ≤ O P T , if ρ < 1. {\displaystyle {\begin{cases}\mathrm {OPT} \leq f(x)\leq \rho \mathrm {OPT} ,\qquad {\mbox{if }}\rho >1;\\\rho \mathrm {OPT} \leq f(x)\leq \mathrm {OPT} ,\qquad {\mbox{if }}\rho <1.\end{cases}}} The factor ρ is called the relative performance guarantee. An approximation algorithm has an absolute performance guarantee or bounded error c, if it has been proven for every instance x that ( O P T − c ) ≤ f ( x ) ≤ ( O P T + c ) . {\displaystyle (\mathrm {OPT} -c)\leq f(x)\leq (\mathrm {OPT} +c).} Similarly, the performance guarantee, R(x,y), of a solution y to an instance x is defined as R ( x , y ) = max ( O P T f ( y ) , f ( y ) O P T ) , {\displaystyle R(x,y)=\max \left({\frac {OPT}{f(y)}},{\frac {f(y)}{OPT}}\right),} where f(y) is the value/cost of the solution y for the instance x. Clearly, the performance guarantee is greater than or equal to 1 and equal to 1 if and only if y is an optimal solution. If an algorithm A guarantees to return solutions with a performance guarantee of at most r(n), then A is said to be an r(n)-approximation algorithm and has an approximation ratio of r(n). Likewise, a problem with an r(n)-approximation algorithm is said to be r(n)-approximable or have an approximation ratio of r(n). For minimization problems, the two different guarantees provide the same result, while for maximization problems, a relative performance guarantee of ρ is equivalent to a performance guarantee of r = ρ − 1 {\displaystyle r=\rho ^{-1}} . In the literature, both definitions are common, but it is clear which definition is used since, for maximization problems, ρ ≤ 1 while r ≥ 1. The absolute performance guarantee P A {\displaystyle \mathrm {P} _{A}} of some approximation algorithm A, where x refers to an instance of a problem, and where R A ( x ) {\displaystyle R_{A}(x)} is the performance guarantee of A on x (i.e. ρ for problem instance x) is: P A = inf { r ≥ 1 ∣ R A ( x ) ≤ r , ∀ x } . 
== See also == Domination analysis considers guarantees in terms of the rank of the computed solution. PTAS - a type of approximation algorithm that takes the approximation ratio as a parameter Parameterized approximation algorithm - a type of approximation algorithm that runs in FPT time APX is the class of problems with some constant-factor approximation algorithm Approximation-preserving reduction Exact algorithm == External links == Pierluigi Crescenzi, Viggo Kann, Magnús Halldórsson, Marek Karpinski and Gerhard Woeginger, A compendium of NP optimization problems.
Wikipedia/Approximation_algorithms
The Annual ACM Symposium on Theory of Computing (STOC) is an academic conference in the field of theoretical computer science. STOC has been organized annually since 1969, typically in May or June; the conference is sponsored by the Association for Computing Machinery special interest group SIGACT. The acceptance rate of STOC, averaged from 1970 to 2012, is 31%; in 2012 it was 29%. As Fich (1996) writes, STOC and its annual IEEE counterpart FOCS (the Symposium on Foundations of Computer Science) are considered the two top conferences in theoretical computer science, broadly construed: they “are forums for some of the best work throughout theory of computing that promote breadth among theory of computing researchers and help to keep the community together.” Johnson (1984) includes regular attendance at STOC and FOCS as one of several defining characteristics of theoretical computer scientists. == Awards == The Gödel Prize for outstanding papers in theoretical computer science is presented alternately at STOC and at the International Colloquium on Automata, Languages and Programming (ICALP); the Knuth Prize for outstanding contributions to the foundations of computer science is presented alternately at STOC and at FOCS. Since 2003, STOC has presented one or more Best Paper Awards to recognize papers of the highest quality at the conference. In addition, the Danny Lewin Best Student Paper Award is awarded to the author(s) of the best student-only-authored paper in STOC. The award is named in honor of Daniel M. Lewin, an American-Israeli mathematician and entrepreneur who co-founded Internet company Akamai Technologies, and was one of the first victims of the September 11 attacks. == History == STOC was first organized on 5–7 May 1969, in Marina del Rey, California, United States. The conference chairman was Patrick C. Fischer, and the program committee consisted of Michael A. Harrison, Robert W. Floyd, Juris Hartmanis, Richard M. Karp, Albert R. Meyer, and Jeffrey D. Ullman. Early seminal papers in STOC include Cook (1971), which introduced the concept of NP-completeness (see also Cook–Levin theorem). == Location == STOC was organized in Canada in 1992, 1994, 2002, 2008, and 2017; in Greece in 2001; as a virtual/online conference in 2020 and 2021; and in Italy in 2022. All other meetings in 1969–2023 have been held in the United States. STOC was part of the Federated Computing Research Conference (FCRC) in 1993, 1996, 1999, 2003, 2007, 2011, 2015, 2019, and 2023. == Invited speakers == 2004 Éva Tardos (2004), "Network games", Proceedings of the thirty-sixth annual ACM symposium on Theory of computing - STOC '04, pp. 341–342, doi:10.1145/1007352.1007356, ISBN 978-1581138528, S2CID 18249534 Avi Wigderson (2004), "Depth through breadth, or why should we attend talks in other areas?", Proceedings of the thirty-sixth annual ACM symposium on Theory of computing - STOC '04, p. 579, doi:10.1145/1007352.1007359, ISBN 978-1581138528, S2CID 27563516 2005 Lance Fortnow (2005), "Beyond NP: the work and legacy of Larry Stockmeyer", Proceedings of the thirty-seventh annual ACM symposium on Theory of computing - STOC '05, p. 120, doi:10.1145/1060590.1060609, ISBN 978-1581139600, S2CID 16558679 2006 Prabhakar Raghavan (2006), "The changing face of web search: algorithms, auctions and advertising", Proceedings of the thirty-eighth annual ACM symposium on Theory of computing - STOC '06, p.
129, doi:10.1145/1132516.1132535, ISBN 978-1595931344, S2CID 19222958 Russell Impagliazzo (2006), "Can every randomized algorithm be derandomized?", Proceedings of the thirty-eighth annual ACM symposium on Theory of computing - STOC '06, pp. 373–374, doi:10.1145/1132516.1132571, ISBN 978-1595931344, S2CID 22433370 2007 Nancy Lynch (2007), "Distributed computing theory: algorithms, impossibility results, models, and proofs", Proceedings of the thirty-ninth annual ACM symposium on Theory of computing - STOC '07, p. 247, doi:10.1145/1250790.1250826, ISBN 9781595936318, S2CID 22140755 2008 Jennifer Rexford (2008), "Rethinking internet routing", Proceedings of the fortieth annual ACM symposium on Theory of computing - STOC 08, pp. 55–56, doi:10.1145/1374376.1374386, ISBN 9781605580470, S2CID 10958242 David Haussler (2008), "Computing how we became human", Proceedings of the fortieth annual ACM symposium on Theory of computing - STOC 08, pp. 639–640, doi:10.1145/1374376.1374468, ISBN 9781605580470, S2CID 30452365 Ryan O'Donnell (2008), "Some topics in analysis of boolean functions", Proceedings of the fortieth annual ACM symposium on Theory of computing - STOC 08, pp. 569–578, doi:10.1145/1374376.1374458, ISBN 9781605580470, S2CID 1241681 2009 Shafi Goldwasser (2009), "Athena lecture: Controlling Access to Programs?", Proceedings of the 41st annual ACM symposium on Symposium on theory of computing - STOC '09, pp. 167–168, doi:10.1145/1536414.1536416, ISBN 9781605585062 2010 David S. Johnson (2010), "Approximation Algorithms in Theory and Practice" (Knuth Prize Lecture) 2011 Leslie G. Valiant (2011), "The Extent and Limitations of Mechanistic Explanations of Nature" (2010 ACM Turing Award Lecture) Ravi Kannan (2011), "Algorithms: Recent Highlights and Challenges" (2011 Knuth Prize Lecture) David A. Ferrucci (2011), "IBM's Watson/DeepQA" (FCRC Plenary Talk) Luiz Andre Barroso (2011), "Warehouse-Scale Computing: Entering the Teenage Decade" (FCRC Plenary Talk) 2013 Gary Miller (2013), Knuth Prize Lecture Prabhakar Raghavan (2013), Plenary talk 2014 Thomas Rothvoss (2014), "The matching polytope has exponential extension complexity" Shafi Goldwasser (2014), "The Cryptographic Lens" (Turing Award Lecture) video Silvio Micali (2014), "Proofs according to Silvio" (Turing Award Lecture) video 2015 Michael Stonebraker (2015), Turing Award Lecture video Andrew Yao (2015), FCRC Keynote Lecture László Babai (2015), Knuth Prize Lecture Olivier Temam (2015), FCRC Keynote Lecture 2016 Santosh Vempala (2016), "The Interplay of Sampling and Optimization in High Dimension" (Invited Talk) Timothy Chan (2016), "Computational Geometry, from Low to High Dimensions" (Invited Talk) 2017 Avi Wigderson (2017), "On the Nature and Future of ToC" (Keynote Talk) Orna Kupferman (2017), "Examining classical graph-theory problems from the viewpoint of formal-verification methods" (Keynote Talk) Oded Goldreich (2017), Knuth Prize Lecture == See also == Conferences in theoretical computer science. List of computer science conferences contains other academic conferences in computer science. List of computer science awards == References == Cook, Stephen (1971), "The complexity of theorem proving procedures" (PDF), Proc. STOC 1971, pp. 151–158, doi:10.1145/800157.805047, S2CID 7573663. Fich, Faith (1996), "Infrastructure issues related to theory of computing research", ACM Computing Surveys, 28 (4es): 217–es, doi:10.1145/242224.242502, S2CID 195706843. Johnson, D. S.
(1984), "The genealogy of theoretical computer science: a preliminary report", ACM SIGACT News, 16 (2): 36–49, doi:10.1145/1008959.1008960, S2CID 26789249. == External links == Official website STOC proceedings information in DBLP. STOC proceedings in the ACM Digital Library. Citation Statistics for FOCS/STOC/SODA, Piotr Indyk and Suresh Venkatasubramanian, July 2007.
Wikipedia/Symposium_on_Theory_of_Computing
First-order logic, also called predicate logic, predicate calculus, or quantificational logic, is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects, and allows the use of sentences that contain variables. Rather than propositions such as "all humans are mortal", in first-order logic one can have expressions in the form "for all x, if x is a human, then x is mortal", where "for all x" is a quantifier, x is a variable, and "... is a human" and "... is mortal" are predicates. This distinguishes it from propositional logic, which does not use quantifiers or relations;: 161  in this sense, propositional logic is the foundation of first-order logic. A theory about a topic, such as set theory, a theory for groups, or a formal theory of arithmetic, is usually a first-order logic together with a specified domain of discourse (over which the quantified variables range), finitely many functions from that domain to itself, finitely many predicates defined on that domain, and a set of axioms believed to hold about them. "Theory" is sometimes understood in a more formal sense as just a set of sentences in first-order logic. The term "first-order" distinguishes first-order logic from higher-order logic, in which there are predicates having predicates or functions as arguments, or in which quantification over predicates, functions, or both, is permitted.: 56  In first-order theories, predicates are often associated with sets. In interpreted higher-order theories, predicates may be interpreted as sets of sets. There are many deductive systems for first-order logic which are both sound, i.e. all provable statements are true in all models, and complete, i.e. all statements which are true in all models are provable. Although the logical consequence relation is only semidecidable, much progress has been made in automated theorem proving in first-order logic. First-order logic also satisfies several metalogical theorems that make it amenable to analysis in proof theory, such as the Löwenheim–Skolem theorem and the compactness theorem. First-order logic is the standard for the formalization of mathematics into axioms, and is studied in the foundations of mathematics. Peano arithmetic and Zermelo–Fraenkel set theory are axiomatizations of number theory and set theory, respectively, into first-order logic. No first-order theory, however, has the strength to uniquely describe a structure with an infinite domain, such as the natural numbers or the real line. Axiom systems that do fully describe these two structures, i.e. categorical axiom systems, can be obtained in stronger logics such as second-order logic. The foundations of first-order logic were developed independently by Gottlob Frege and Charles Sanders Peirce. For a history of first-order logic and how it came to dominate formal logic, see José Ferreirós (2001). == Introduction == While propositional logic deals with simple declarative propositions, first-order logic additionally covers predicates and quantification. A predicate evaluates to true or false for an entity or entities in the domain of discourse. Consider the two sentences "Socrates is a philosopher" and "Plato is a philosopher". In propositional logic, these sentences themselves are viewed as the individuals of study, and might be denoted, for example, by variables such as p and q.
They are not viewed as an application of a predicate, such as isPhil {\displaystyle {\text{isPhil}}} , to any particular objects in the domain of discourse; instead, they are viewed purely as utterances which are either true or false. However, in first-order logic, these two sentences may be framed as statements that a certain individual or non-logical object has a property. In this example, both sentences happen to have the common form isPhil ( x ) {\displaystyle {\text{isPhil}}(x)} for some individual x {\displaystyle x} : in the first sentence the value of the variable x is "Socrates", and in the second sentence it is "Plato". Due to the ability to speak about non-logical individuals along with the original logical connectives, first-order logic includes propositional logic.: 29–30  The truth of a formula such as "x is a philosopher" depends on which object is denoted by x and on the interpretation of the predicate "is a philosopher". Consequently, "x is a philosopher" alone does not have a definite truth value of true or false, and is akin to a sentence fragment. Relationships between predicates can be stated using logical connectives. For example, the first-order formula "if x is a philosopher, then x is a scholar" is a conditional statement with "x is a philosopher" as its hypothesis, and "x is a scholar" as its conclusion, which again needs specification of x in order to have a definite truth value. Quantifiers can be applied to variables in a formula. The variable x in the previous formula can be universally quantified, for instance, with the first-order sentence "For every x, if x is a philosopher, then x is a scholar". The universal quantifier "for every" in this sentence expresses the idea that the claim "if x is a philosopher, then x is a scholar" holds for all choices of x. The negation of the sentence "For every x, if x is a philosopher, then x is a scholar" is logically equivalent to the sentence "There exists x such that x is a philosopher and x is not a scholar". The existential quantifier "there exists" expresses the idea that the claim "x is a philosopher and x is not a scholar" holds for some choice of x. The predicates "is a philosopher" and "is a scholar" each take a single variable. In general, predicates can take several variables. In the first-order sentence "Socrates is the teacher of Plato", the predicate "is the teacher of" takes two variables. An interpretation (or model) of a first-order formula specifies what each predicate means, and the entities that can instantiate the variables. These entities form the domain of discourse or universe, which is usually required to be a nonempty set. For example, consider the sentence "There exists x such that x is a philosopher." This sentence is seen as being true in an interpretation such that the domain of discourse consists of all human beings, and that the predicate "is a philosopher" is understood as "was the author of the Republic." It is true, as witnessed by Plato in that text. There are two key parts of first-order logic. The syntax determines which finite sequences of symbols are well-formed expressions in first-order logic, while the semantics determines the meanings behind these expressions. == Syntax == Unlike natural languages, such as English, the language of first-order logic is completely formal, so that it can be mechanically determined whether a given expression is well formed.
There are two key types of well-formed expressions: terms, which intuitively represent objects, and formulas, which intuitively express statements that can be true or false. The terms and formulas of first-order logic are strings of symbols, where all the symbols together form the alphabet of the language. === Alphabet === As with all formal languages, the nature of the symbols themselves is outside the scope of formal logic; they are often regarded simply as letters and punctuation symbols. It is common to divide the symbols of the alphabet into logical symbols, which always have the same meaning, and non-logical symbols, whose meaning varies by interpretation. For example, the logical symbol ∧ {\displaystyle \land } always represents "and"; it is never interpreted as "or", which is represented by the logical symbol ∨ {\displaystyle \lor } . However, a non-logical predicate symbol such as Phil(x) could be interpreted to mean "x is a philosopher", "x is a man named Philip", or any other unary predicate depending on the interpretation at hand. ==== Logical symbols ==== Logical symbols are a set of characters that vary by author, but usually include the following: Quantifier symbols: ∀ for universal quantification, and ∃ for existential quantification Logical connectives: ∧ for conjunction, ∨ for disjunction, → for implication, ↔ for biconditional, ¬ for negation. Some authors use Cpq instead of → and Epq instead of ↔, especially in contexts where → is used for other purposes. Moreover, the horseshoe ⊃ may replace →; the triple-bar ≡ may replace ↔; a tilde (~), Np, or Fp may replace ¬; a double bar ‖ {\displaystyle \|} , + {\displaystyle +} , or Apq may replace ∨; and an ampersand &, Kpq, or the middle dot ⋅ may replace ∧, especially if these symbols are not available for technical reasons. Parentheses, brackets, and other punctuation symbols. The choice of such symbols varies depending on context. An infinite set of variables, often denoted by lowercase letters at the end of the alphabet x, y, z, ... . Subscripts are often used to distinguish variables: x0, x1, x2, ... . An equality symbol (sometimes, identity symbol) = (see § Equality and its axioms below). Not all of these symbols are required in first-order logic. Either one of the quantifiers along with negation, conjunction (or disjunction), variables, brackets, and equality suffices. Other logical symbols include the following: Truth constants: T, or ⊤ for "true" and F, or ⊥ for "false". Without any such logical operators of valence 0, these two constants can only be expressed using quantifiers. Additional logical connectives such as the Sheffer stroke, Dpq (NAND), and exclusive or, Jpq. ==== Non-logical symbols ==== Non-logical symbols represent predicates (relations), functions and constants. It used to be standard practice to use a fixed, infinite set of non-logical symbols for all purposes: For every integer n ≥ 0, there is a collection of n-ary, or n-place, predicate symbols. Because they represent relations between n elements, they are also called relation symbols. For each arity n, there is an infinite supply of them: P 0 n , P 1 n , P 2 n , P 3 n , … {\displaystyle P_{0}^{n},P_{1}^{n},P_{2}^{n},P_{3}^{n},\ldots } For every integer n ≥ 0, there are infinitely many n-ary function symbols: f 0 n , f 1 n , f 2 n , f 3 n , … {\displaystyle f_{0}^{n},f_{1}^{n},f_{2}^{n},f_{3}^{n},\ldots } When the arity of a predicate symbol or function symbol is clear from context, the superscript n is often omitted. In this traditional approach, there is only one language of first-order logic. This approach is still common, especially in philosophically oriented books.
A more recent practice is to use different non-logical symbols according to the application one has in mind. Therefore, it has become necessary to name the set of all non-logical symbols used in a particular application. This choice is made via a signature. Typical signatures in mathematics are {1, ×} or just {×} for groups, or {0, 1, +, ×, <} for ordered fields. There are no restrictions on the number of non-logical symbols. The signature can be empty, finite, or infinite, even uncountable. Uncountable signatures occur for example in modern proofs of the Löwenheim–Skolem theorem. Though signatures might in some cases imply how non-logical symbols are to be interpreted, interpretation of the non-logical symbols in the signature is separate (and not necessarily fixed). Signatures concern syntax rather than semantics. In this approach, every non-logical symbol is of one of the following types: A predicate symbol (or relation symbol) with some valence (or arity, number of arguments) greater than or equal to 0. These are often denoted by uppercase letters such as P, Q and R. Examples: In P(x), P is a predicate symbol of valence 1. One possible interpretation is "x is a man". In Q(x,y), Q is a predicate symbol of valence 2. Possible interpretations include "x is greater than y" and "x is the father of y". Relations of valence 0 can be identified with propositional variables, which can stand for any statement. One possible interpretation of R is "Socrates is a man". A function symbol, with some valence greater than or equal to 0. These are often denoted by lowercase roman letters such as f, g and h. Examples: f(x) may be interpreted as "the father of x". In arithmetic, it may stand for "-x". In set theory, it may stand for "the power set of x". In arithmetic, g(x,y) may stand for "x+y". In set theory, it may stand for "the union of x and y". Function symbols of valence 0 are called constant symbols, and are often denoted by lowercase letters at the beginning of the alphabet such as a, b and c. The symbol a may stand for Socrates. In arithmetic, it may stand for 0. In set theory, it may stand for the empty set. The traditional approach can be recovered in the modern approach, by simply specifying the "custom" signature to consist of the traditional sequences of non-logical symbols. === Formation rules === The formation rules define the terms and formulas of first-order logic. When terms and formulas are represented as strings of symbols, these rules can be used to write a formal grammar for terms and formulas. These rules are generally context-free (each production has a single symbol on the left side), except that the set of symbols may be allowed to be infinite and there may be many start symbols, for example the variables in the case of terms. ==== Terms ==== The set of terms is inductively defined by the following rules: Variables. Any variable symbol is a term. Functions. If f is an n-ary function symbol, and t1, ..., tn are terms, then f(t1,...,tn) is a term. In particular, symbols denoting individual constants are nullary function symbols, and thus are terms. Only expressions which can be obtained by finitely many applications of rules 1 and 2 are terms. For example, no expression involving a predicate symbol is a term. ==== Formulas ==== The set of formulas (also called well-formed formulas or WFFs) is inductively defined by the following rules: Predicate symbols. If P is an n-ary predicate symbol and t1, ..., tn are terms then P(t1,...,tn) is a formula. Equality. 
If the equality symbol is considered part of logic, and t1 and t2 are terms, then t1 = t2 is a formula. Negation. If φ {\displaystyle \varphi } is a formula, then ¬ φ {\displaystyle \lnot \varphi } is a formula. Binary connectives. If φ {\displaystyle \varphi } and ψ {\displaystyle \psi } are formulas, then ( φ → ψ {\displaystyle \varphi \rightarrow \psi } ) is a formula. Similar rules apply to other binary logical connectives. Quantifiers. If φ {\displaystyle \varphi } is a formula and x is a variable, then ∀ x φ {\displaystyle \forall x\varphi } (for all x, φ {\displaystyle \varphi } holds) and ∃ x φ {\displaystyle \exists x\varphi } (there exists x such that φ {\displaystyle \varphi } ) are formulas. Only expressions which can be obtained by finitely many applications of rules 1–5 are formulas. The formulas obtained from the first two rules are said to be atomic formulas. For example: ∀ x ∀ y ( P ( f ( x ) ) → ¬ ( P ( x ) → Q ( f ( y ) , x , z ) ) ) {\displaystyle \forall x\forall y(P(f(x))\rightarrow \neg (P(x)\rightarrow Q(f(y),x,z)))} is a formula, if f is a unary function symbol, P a unary predicate symbol, and Q a ternary predicate symbol. However, ∀ x x → {\displaystyle \forall x\,x\rightarrow } is not a formula, although it is a string of symbols from the alphabet. The role of the parentheses in the definition is to ensure that any formula can only be obtained in one way—by following the inductive definition (i.e., there is a unique parse tree for each formula). This property is known as unique readability of formulas. There are many conventions for where parentheses are used in formulas. For example, some authors use colons or full stops instead of parentheses, or change the places in which parentheses are inserted. Each author's particular definition must be accompanied by a proof of unique readability. ==== Notational conventions ==== For convenience, conventions have been developed about the precedence of the logical operators, to avoid the need to write parentheses in some cases. These rules are similar to the order of operations in arithmetic. A common convention is: ¬ {\displaystyle \lnot } is evaluated first ∧ {\displaystyle \land } and ∨ {\displaystyle \lor } are evaluated next Quantifiers are evaluated next → {\displaystyle \to } is evaluated last. Moreover, extra punctuation not required by the definition may be inserted—to make formulas easier to read. Thus the formula: ¬ ∀ x P ( x ) → ∃ x ¬ P ( x ) {\displaystyle \lnot \forall xP(x)\to \exists x\lnot P(x)} might be written as: ( ¬ [ ∀ x P ( x ) ] ) → ∃ x [ ¬ P ( x ) ] . {\displaystyle (\lnot [\forall xP(x)])\to \exists x[\lnot P(x)].} === Free and bound variables === In a formula, a variable may occur free or bound (or both). One formalization of this notion is due to Quine: first the concept of a variable occurrence is defined, then whether a variable occurrence is free or bound, and finally whether a variable symbol overall is free or bound. In order to distinguish different occurrences of the identical symbol x, each occurrence of a variable symbol x in a formula φ is identified with the initial substring of φ up to the point at which said instance of the symbol x appears.: 297  Then, an occurrence of x is said to be bound if that occurrence of x lies within the scope of at least one of either ∃ x {\displaystyle \exists x} or ∀ x {\displaystyle \forall x} . Finally, x is bound in φ if all occurrences of x in φ are bound.: 142–143  Intuitively, a variable symbol is free in a formula if at no point is it quantified:: 142–143  in ∀y P(x, y), the sole occurrence of variable x is free while that of y is bound. The free and bound variable occurrences in a formula are defined inductively as follows. Atomic formulas If φ is an atomic formula, then x occurs free in φ if and only if x occurs in φ. Moreover, there are no bound variables in any atomic formula. Negation x occurs free in ¬φ if and only if x occurs free in φ. x occurs bound in ¬φ if and only if x occurs bound in φ. Binary connectives x occurs free in (φ → ψ) if and only if x occurs free in either φ or ψ. x occurs bound in (φ → ψ) if and only if x occurs bound in either φ or ψ. The same rule applies to any other binary connective in place of →. Quantifiers x occurs free in ∀y φ, if and only if x occurs free in φ and x is a different symbol from y. Also, x occurs bound in ∀y φ, if and only if x is y or x occurs bound in φ. The same rule holds with ∃ in place of ∀. For example, in ∀x ∀y (P(x) → Q(x,f(x),z)), x and y occur only bound, z occurs only free, and w is neither because it does not occur in the formula. Free and bound variables of a formula need not be disjoint sets: in the formula P(x) → ∀x Q(x), the first occurrence of x, as argument of P, is free while the second one, as argument of Q, is bound. A formula in first-order logic with no free variable occurrences is called a first-order sentence. These are the formulas that will have well-defined truth values under an interpretation. For example, whether a formula such as Phil(x) is true must depend on what x represents. But the sentence ∃x Phil(x) will be either true or false in a given interpretation.
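The inductive definitions of terms, formulas, and free variables translate directly into code. The sketch below is purely illustrative (the class and function names are ours, not a standard API); it represents expressions as trees, restricted to the connectives ¬, → and the quantifier ∀ for brevity:

```python
# Illustrative sketch of the inductive definitions above; class and
# function names are hypothetical, not a standard library API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:               # a variable is a term
    name: str

@dataclass(frozen=True)
class Func:              # f(t1, ..., tn); arity 0 gives a constant symbol
    name: str
    args: tuple

@dataclass(frozen=True)
class Pred:              # atomic formula P(t1, ..., tn)
    name: str
    args: tuple

@dataclass(frozen=True)
class Not:               # negation
    sub: object

@dataclass(frozen=True)
class Implies:           # binary connective ->
    left: object
    right: object

@dataclass(frozen=True)
class ForAll:            # quantifier binding one variable
    var: str
    body: object

def free_vars(phi):
    """Free variables, following the inductive free/bound rules above."""
    if isinstance(phi, Var):
        return {phi.name}
    if isinstance(phi, (Func, Pred)):
        return set().union(*map(free_vars, phi.args)) if phi.args else set()
    if isinstance(phi, Not):
        return free_vars(phi.sub)
    if isinstance(phi, Implies):
        return free_vars(phi.left) | free_vars(phi.right)
    if isinstance(phi, ForAll):
        return free_vars(phi.body) - {phi.var}   # the quantifier binds var
    raise TypeError(phi)

# P(x) -> forall x Q(x): the occurrence in P(x) is free, the rest are bound
phi = Implies(Pred("P", (Var("x"),)), ForAll("x", Pred("Q", (Var("x"),))))
print(free_vars(phi))  # {'x'}
```

For the formula P(x) → ∀x Q(x) discussed in the text, the function reports that x is free, exactly because of its first occurrence.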
=== Example: ordered abelian groups === In mathematics, the language of ordered abelian groups has one constant symbol 0, one unary function symbol −, one binary function symbol +, and one binary relation symbol ≤. Then: The expressions +(x, y) and +(x, +(y, −(z))) are terms. These are usually written as x + y and x + y − z. The expressions +(x, y) = 0 and ≤(+(x, +(y, −(z))), +(x, y)) are atomic formulas. These are usually written as x + y = 0 and x + y − z ≤ x + y. The expression ∀ x ∀ y [ ≤ ( + ( x , y ) , z ) → ∀ x ∀ y ( + ( x , y ) = 0 ) ] {\displaystyle \forall x\forall y\,[\mathop {\leq } (\mathop {+} (x,y),z)\to \forall x\,\forall y\,(\mathop {+} (x,y)=0)]} is a formula, which is usually written as ∀ x ∀ y ( x + y ≤ z ) → ∀ x ∀ y ( x + y = 0 ) . {\displaystyle \forall x\forall y(x+y\leq z)\to \forall x\forall y(x+y=0).} This formula has one free variable, z. The axioms for ordered abelian groups can be expressed as a set of sentences in the language. For example, the axiom stating that the group is commutative is usually written ( ∀ x ) ( ∀ y ) [ x + y = y + x ] . {\displaystyle (\forall x)(\forall y)[x+y=y+x].} == Semantics == An interpretation of a first-order language assigns a denotation to each non-logical symbol (predicate symbol, function symbol, or constant symbol) in that language. It also determines a domain of discourse that specifies the range of the quantifiers. The result is that each term is assigned an object that it represents, each predicate is assigned a property of objects, and each sentence is assigned a truth value. In this way, an interpretation provides semantic meaning to the terms, predicates, and formulas of the language.
The study of the interpretations of formal languages is called formal semantics. What follows is a description of the standard or Tarskian semantics for first-order logic. (It is also possible to define game semantics for first-order logic, but aside from requiring the axiom of choice, game semantics agree with Tarskian semantics for first-order logic, so game semantics will not be elaborated herein.) === First-order structures === The most common way of specifying an interpretation (especially in mathematics) is to specify a structure (also called a model; see below). The structure consists of a domain of discourse D and an interpretation function I mapping non-logical symbols to predicates, functions, and constants. The domain of discourse D is a nonempty set of "objects" of some kind. Intuitively, given an interpretation, a first-order formula becomes a statement about these objects; for example, ∃ x P ( x ) {\displaystyle \exists xP(x)} states the existence of some object in D for which the predicate P is true (or, more precisely, for which the predicate assigned to the predicate symbol P by the interpretation is true). For example, one can take D to be the set of integers. Non-logical symbols are interpreted as follows: The interpretation of an n-ary function symbol is a function from Dn to D. For example, if the domain of discourse is the set of integers, a function symbol f of arity 2 can be interpreted as the function that gives the sum of its arguments. In other words, the symbol f is associated with the function ⁠ I ( f ) {\displaystyle I(f)} ⁠ which, in this interpretation, is addition. The interpretation of a constant symbol (a function symbol of arity 0) is a function from D0 (a set whose only member is the empty tuple) to D, which can be simply identified with an object in D. For example, an interpretation may assign the value I ( c ) = 10 {\displaystyle I(c)=10} to the constant symbol c {\displaystyle c} . The interpretation of an n-ary predicate symbol is a set of n-tuples of elements of D, giving the arguments for which the predicate is true. For example, an interpretation I ( P ) {\displaystyle I(P)} of a binary predicate symbol P may be the set of pairs of integers such that the first one is less than the second. According to this interpretation, the predicate P would be true if its first argument is less than its second argument. Equivalently, predicate symbols may be assigned Boolean-valued functions from Dn to { t r u e , f a l s e } {\displaystyle \{\mathrm {true,false} \}} . === Evaluation of truth values === A formula evaluates to true or false given an interpretation and a variable assignment μ that associates an element of the domain of discourse with each variable. The reason that a variable assignment is required is to give meanings to formulas with free variables, such as y = x {\displaystyle y=x} . The truth value of this formula changes depending on the values that x and y denote. First, the variable assignment μ can be extended to all terms of the language, with the result that each term maps to a single element of the domain of discourse. The following rules are used to make this assignment: Variables. Each variable x evaluates to μ(x) Functions. 
Given terms t 1 , … , t n {\displaystyle t_{1},\ldots ,t_{n}} that have been evaluated to elements d 1 , … , d n {\displaystyle d_{1},\ldots ,d_{n}} of the domain of discourse, and an n-ary function symbol f, the term f ( t 1 , … , t n ) {\displaystyle f(t_{1},\ldots ,t_{n})} evaluates to ( I ( f ) ) ( d 1 , … , d n ) {\displaystyle (I(f))(d_{1},\ldots ,d_{n})} . Next, each formula is assigned a truth value. The inductive definition used to make this assignment is called the T-schema. Atomic formulas (1). A formula P ( t 1 , … , t n ) {\displaystyle P(t_{1},\ldots ,t_{n})} is assigned the value true or false depending on whether ⟨ v 1 , … , v n ⟩ ∈ I ( P ) {\displaystyle \langle v_{1},\ldots ,v_{n}\rangle \in I(P)} , where v 1 , … , v n {\displaystyle v_{1},\ldots ,v_{n}} are the evaluations of the terms t 1 , … , t n {\displaystyle t_{1},\ldots ,t_{n}} and I ( P ) {\displaystyle I(P)} is the interpretation of P {\displaystyle P} , which by assumption is a subset of D n {\displaystyle D^{n}} . Atomic formulas (2). A formula t 1 = t 2 {\displaystyle t_{1}=t_{2}} is assigned true if t 1 {\displaystyle t_{1}} and t 2 {\displaystyle t_{2}} evaluate to the same object of the domain of discourse (see the section on equality below). Logical connectives. A formula in the form ¬ φ {\displaystyle \neg \varphi } , φ → ψ {\displaystyle \varphi \rightarrow \psi } , etc. is evaluated according to the truth table for the connective in question, as in propositional logic. Existential quantifiers. A formula ∃ x φ ( x ) {\displaystyle \exists x\varphi (x)} is true according to M and μ {\displaystyle \mu } if there exists an evaluation μ ′ {\displaystyle \mu '} of the variables that differs from μ {\displaystyle \mu } at most regarding the evaluation of x and such that φ is true according to the interpretation M and the variable assignment μ ′ {\displaystyle \mu '} . This formal definition captures the idea that ∃ x φ ( x ) {\displaystyle \exists x\varphi (x)} is true if and only if there is a way to choose a value for x such that φ(x) is satisfied. Universal quantifiers. A formula ∀ x φ ( x ) {\displaystyle \forall x\varphi (x)} is true according to M and μ {\displaystyle \mu } if φ(x) is true for every pair composed by the interpretation M and some variable assignment μ ′ {\displaystyle \mu '} that differs from μ {\displaystyle \mu } at most on the value of x. This captures the idea that ∀ x φ ( x ) {\displaystyle \forall x\varphi (x)} is true if every possible choice of a value for x causes φ(x) to be true. If a formula does not contain free variables, and so is a sentence, then the initial variable assignment does not affect its truth value. In other words, a sentence is true according to M and μ {\displaystyle \mu } if and only if it is true according to M and every other variable assignment μ ′ {\displaystyle \mu '} . There is a second common approach to defining truth values that does not rely on variable assignment functions. Instead, given an interpretation M, one first adds to the signature a collection of constant symbols, one for each element of the domain of discourse in M; say that for each d in the domain the constant symbol cd is fixed. The interpretation is extended so that each new constant symbol is assigned to its corresponding element of the domain. One now defines truth for quantified formulas syntactically, as follows: Existential quantifiers (alternate). A formula ∃ x φ ( x ) {\displaystyle \exists x\varphi (x)} is true according to M if there is some d in the domain of discourse such that φ ( c d ) {\displaystyle \varphi (c_{d})} holds. Here φ ( c d ) {\displaystyle \varphi (c_{d})} is the result of substituting cd for every free occurrence of x in φ. Universal quantifiers (alternate). A formula ∀ x φ ( x ) {\displaystyle \forall x\varphi (x)} is true according to M if, for every d in the domain of discourse, φ ( c d ) {\displaystyle \varphi (c_{d})} is true according to M. This alternate approach gives exactly the same truth values to all sentences as the approach via variable assignments.
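Over a finite domain, the T-schema can be run directly as a model checker. The following sketch (illustrative names again, reusing the toy Var/Func/Pred/Not/Implies/ForAll representation from the earlier snippet) evaluates a formula given a domain D, an interpretation I, and a variable assignment μ:

```python
# Sketch of the T-schema over a finite structure, reusing the hypothetical
# Var/Func/Pred/Not/Implies/ForAll classes from the earlier snippet.

def eval_term(t, I, mu):
    if isinstance(t, Var):
        return mu[t.name]                       # Variables: mu(x)
    args = tuple(eval_term(a, I, mu) for a in t.args)
    return I["funcs"][t.name](*args)            # Functions: I(f)(d1,...,dn)

def eval_formula(phi, D, I, mu):
    if isinstance(phi, Pred):                   # atomic: is the tuple in I(P)?
        vals = tuple(eval_term(t, I, mu) for t in phi.args)
        return vals in I["preds"][phi.name]
    if isinstance(phi, Not):
        return not eval_formula(phi.sub, D, I, mu)
    if isinstance(phi, Implies):                # truth table for ->
        return (not eval_formula(phi.left, D, I, mu)) or \
               eval_formula(phi.right, D, I, mu)
    if isinstance(phi, ForAll):                 # vary mu only at phi.var
        return all(eval_formula(phi.body, D, I, {**mu, phi.var: d})
                   for d in D)
    raise TypeError(phi)

# Domain {0, 1, 2}, P interpreted as "is even": check forall x (P(x) -> P(x))
D = {0, 1, 2}
I = {"funcs": {}, "preds": {"P": {(0,), (2,)}}}
phi = ForAll("x", Implies(Pred("P", (Var("x"),)), Pred("P", (Var("x"),))))
print(eval_formula(phi, D, I, {}))  # True
```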
=== Validity, satisfiability, and logical consequence === If a sentence φ evaluates to true under a given interpretation M, one says that M satisfies φ; this is denoted M ⊨ φ {\displaystyle M\vDash \varphi } . A sentence is satisfiable if there is some interpretation under which it is true. This usage is slightly different from the usage of the symbol ⊨ {\displaystyle \vDash } in model theory, where M ⊨ ϕ {\displaystyle M\vDash \phi } denotes satisfiability in a model, i.e. "there is a suitable assignment of values in M {\displaystyle M} 's domain to the variable symbols of ϕ {\displaystyle \phi } ". Satisfiability of formulas with free variables is more complicated, because an interpretation on its own does not determine the truth value of such a formula. The most common convention is that a formula φ with free variables x 1 {\displaystyle x_{1}} , ..., x n {\displaystyle x_{n}} is said to be satisfied by an interpretation if the formula φ remains true regardless of which individuals from the domain of discourse are assigned to its free variables x 1 {\displaystyle x_{1}} , ..., x n {\displaystyle x_{n}} . This has the same effect as saying that a formula φ is satisfied if and only if its universal closure ∀ x 1 … ∀ x n ϕ ( x 1 , … , x n ) {\displaystyle \forall x_{1}\dots \forall x_{n}\phi (x_{1},\dots ,x_{n})} is satisfied. A formula is logically valid (or simply valid) if it is true in every interpretation. These formulas play a role similar to tautologies in propositional logic. A formula φ is a logical consequence of a formula ψ if every interpretation that makes ψ true also makes φ true. In this case one says that φ is logically implied by ψ. === Algebraizations === An alternate approach to the semantics of first-order logic proceeds via abstract algebra. This approach generalizes the Lindenbaum–Tarski algebras of propositional logic. There are three ways of eliminating quantified variables from first-order logic that do not involve replacing quantifiers with other variable binding term operators: Cylindric algebra, by Alfred Tarski, et al.; Polyadic algebra, by Paul Halmos; Predicate functor logic, primarily by Willard Quine. These algebras are all lattices that properly extend the two-element Boolean algebra. Tarski and Givant (1987) showed that the fragment of first-order logic that has no atomic sentence lying in the scope of more than three quantifiers has the same expressive power as relation algebra.: 32–33  This fragment is of great interest because it suffices for Peano arithmetic and most axiomatic set theory, including the canonical Zermelo–Fraenkel set theory (ZFC).
They also prove that first-order logic with a primitive ordered pair is equivalent to a relation algebra with two ordered pair projection functions.: 803  === First-order theories, models, and elementary classes === A first-order theory of a particular signature is a set of axioms, which are sentences consisting of symbols from that signature. The set of axioms is often finite or recursively enumerable, in which case the theory is called effective. Some authors require theories to also include all logical consequences of the axioms. The axioms are considered to hold within the theory and from them other sentences that hold within the theory can be derived. A first-order structure that satisfies all sentences in a given theory is said to be a model of the theory. An elementary class is the set of all structures satisfying a particular theory. These classes are a main subject of study in model theory. Many theories have an intended interpretation, a certain model that is kept in mind when studying the theory. For example, the intended interpretation of Peano arithmetic consists of the usual natural numbers with their usual operations. However, the Löwenheim–Skolem theorem shows that most first-order theories will also have other, nonstandard models. A theory is consistent (within a deductive system) if it is not possible to prove a contradiction from the axioms of the theory. A theory is complete if, for every formula in its signature, either that formula or its negation is a logical consequence of the axioms of the theory. Gödel's incompleteness theorem shows that effective first-order theories that include a sufficient portion of the theory of the natural numbers can never be both consistent and complete. === Empty domains === The definition above requires that the domain of discourse of any interpretation must be nonempty. There are settings, such as inclusive logic, where empty domains are permitted. Moreover, if a class of algebraic structures includes an empty structure (for example, there is an empty poset), that class can only be an elementary class in first-order logic if empty domains are permitted or the empty structure is removed from the class. There are several difficulties with empty domains, however: Many common rules of inference are valid only when the domain of discourse is required to be nonempty. One example is the rule stating that φ ∨ ∃ x ψ {\displaystyle \varphi \lor \exists x\psi } implies ∃ x ( φ ∨ ψ ) {\displaystyle \exists x(\varphi \lor \psi )} when x is not a free variable in φ {\displaystyle \varphi } . This rule, which is used to put formulas into prenex normal form, is sound in nonempty domains, but unsound if the empty domain is permitted. The definition of truth in an interpretation that uses a variable assignment function cannot work with empty domains, because there are no variable assignment functions whose range is empty. (Similarly, one cannot assign interpretations to constant symbols.) This truth definition requires that one must select a variable assignment function (μ above) before truth values for even atomic formulas can be defined. Then the truth value of a sentence is defined to be its truth value under any variable assignment, and it is proved that this truth value does not depend on which assignment is chosen. This technique does not work if there are no assignment functions at all; it must be changed to accommodate empty domains. Thus, when the empty domain is permitted, it must often be treated as a special case. 
Most authors, however, simply exclude the empty domain by definition. == Deductive systems == A deductive system is used to demonstrate, on a purely syntactic basis, that one formula is a logical consequence of another formula. There are many such systems for first-order logic, including Hilbert-style deductive systems, natural deduction, the sequent calculus, the tableaux method, and resolution. These share the common property that a deduction is a finite syntactic object; the format of this object, and the way it is constructed, vary widely. These finite deductions themselves are often called derivations in proof theory. They are also often called proofs but, unlike natural-language mathematical proofs, they are completely formalized. A deductive system is sound if any formula that can be derived in the system is logically valid. Conversely, a deductive system is complete if every logically valid formula is derivable. All of the systems discussed in this article are both sound and complete. They also share the property that it is possible to effectively verify that a purportedly valid deduction is actually a deduction; such deduction systems are called effective. A key property of deductive systems is that they are purely syntactic, so that derivations can be verified without considering any interpretation. Thus, a sound argument is correct in every possible interpretation of the language, regardless of whether that interpretation is about mathematics, economics, or some other area. In general, logical consequence in first-order logic is only semidecidable: if a sentence A logically implies a sentence B, then this can be discovered (for example, by searching for a proof until one is found, using some effective, sound, complete proof system). However, if A does not logically imply B, this does not mean that A logically implies the negation of B; moreover, no such proof search is guaranteed to terminate, and there is no effective procedure that, given formulas A and B, always correctly decides whether A logically implies B. === Rules of inference === A rule of inference states that, given a particular formula (or set of formulas) with a certain property as a hypothesis, another specific formula (or set of formulas) can be derived as a conclusion. The rule is sound (or truth-preserving) if it preserves validity in the sense that whenever any interpretation satisfies the hypothesis, that interpretation also satisfies the conclusion. For example, one common rule of inference is the rule of substitution. If t is a term and φ is a formula possibly containing the variable x, then φ[t/x] is the result of replacing all free instances of x by t in φ. The substitution rule states that for any φ and any term t, one can conclude φ[t/x] from φ provided that no free variable of t becomes bound during the substitution process. (If some free variable of t becomes bound, then to substitute t for x it is first necessary to change the bound variables of φ to differ from the free variables of t.) To see why the restriction on bound variables is necessary, consider the logically valid formula φ given by ∃ x ( x = y ) {\displaystyle \exists x(x=y)} , in the signature of (0,1,+,×,=) of arithmetic. If t is the term "x + 1", the formula φ[t/y] is ∃ x ( x = x + 1 ) {\displaystyle \exists x(x=x+1)} , which will be false in many interpretations. The problem is that the free variable x of t became bound during the substitution. The intended replacement can be obtained by renaming the bound variable x of φ to something else, say z, so that the formula after substitution is ∃ z ( z = x + 1 ) {\displaystyle \exists z(z=x+1)} , which is again logically valid. The substitution rule demonstrates several common aspects of rules of inference. It is entirely syntactical; one can tell whether it was correctly applied without appeal to any interpretation. It has (syntactically defined) limitations on when it can be applied, which must be respected to preserve the correctness of derivations. Moreover, as is often the case, these limitations are necessary because of interactions between free and bound variables that occur during syntactic manipulations of the formulas involved in the inference rule.
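The substitution rule and its side condition can be stated operationally. The sketch below (continuing the toy representation above; the renaming scheme is deliberately simplistic) computes φ[t/x] and renames a bound variable whenever a free variable of t would otherwise be captured, which is exactly the repair described in the text:

```python
# Sketch of capture-avoiding substitution phi[t/x], continuing the toy
# Var/Func/Pred/Not/Implies/ForAll representation; fresh-name generation
# here is simplistic on purpose and could itself clash in larger examples.

def subst_term(t, x, s):
    if isinstance(t, Var):
        return s if t.name == x else t
    return Func(t.name, tuple(subst_term(a, x, s) for a in t.args))

def subst(phi, x, t):
    """phi[t/x], renaming bound variables to avoid capture."""
    if isinstance(phi, Pred):
        return Pred(phi.name, tuple(subst_term(a, x, t) for a in phi.args))
    if isinstance(phi, Not):
        return Not(subst(phi.sub, x, t))
    if isinstance(phi, Implies):
        return Implies(subst(phi.left, x, t), subst(phi.right, x, t))
    if isinstance(phi, ForAll):
        if phi.var == x:                # x is bound here: nothing to replace
            return phi
        if phi.var in free_vars(t):     # a free variable of t would be
            fresh = phi.var + "'"       # captured: rename the bound variable
            renamed = subst(phi.body, phi.var, Var(fresh))
            return ForAll(fresh, subst(renamed, x, t))
        return ForAll(phi.var, subst(phi.body, x, t))
    raise TypeError(phi)

# As in the text's example: substituting a term containing x for y under a
# quantifier on x triggers the rename, preserving logical validity.
```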
=== Hilbert-style systems and natural deduction === A deduction in a Hilbert-style deductive system is a list of formulas, each of which is a logical axiom, a hypothesis that has been assumed for the derivation at hand, or follows from previous formulas via a rule of inference. The logical axioms consist of several axiom schemas of logically valid formulas; these encompass a significant amount of propositional logic. The rules of inference enable the manipulation of quantifiers. Typical Hilbert-style systems have a small number of rules of inference, along with several infinite schemas of logical axioms. It is common to have only modus ponens and universal generalization as rules of inference. Natural deduction systems resemble Hilbert-style systems in that a deduction is a finite list of formulas. However, natural deduction systems have no logical axioms; they compensate by adding additional rules of inference that can be used to manipulate the logical connectives in formulas in the proof. === Sequent calculus === The sequent calculus was developed to study the properties of natural deduction systems. Instead of working with one formula at a time, it uses sequents, which are expressions of the form: A 1 , … , A n ⊢ B 1 , … , B k , {\displaystyle A_{1},\ldots ,A_{n}\vdash B_{1},\ldots ,B_{k},} where A1, ..., An, B1, ..., Bk are formulas and the turnstile symbol ⊢ {\displaystyle \vdash } is used as punctuation to separate the two halves. Intuitively, a sequent expresses the idea that ( A 1 ∧ ⋯ ∧ A n ) {\displaystyle (A_{1}\land \cdots \land A_{n})} implies ( B 1 ∨ ⋯ ∨ B k ) {\displaystyle (B_{1}\lor \cdots \lor B_{k})} . === Tableaux method === Unlike the methods just described, the derivations in the tableaux method are not lists of formulas. Instead, a derivation is a tree of formulas. To show that a formula A is provable, the tableaux method attempts to demonstrate that the negation of A is unsatisfiable. The tree of the derivation has ¬ A {\displaystyle \lnot A} at its root; the tree branches in a way that reflects the structure of the formula. For example, to show that C ∨ D {\displaystyle C\lor D} is unsatisfiable requires showing that C and D are each unsatisfiable; this corresponds to a branching point in the tree with parent C ∨ D {\displaystyle C\lor D} and children C and D. === Resolution === The resolution rule is a single rule of inference that, together with unification, is sound and complete for first-order logic. As with the tableaux method, a formula is proved by showing that the negation of the formula is unsatisfiable. Resolution is commonly used in automated theorem proving. The resolution method works only with formulas that are disjunctions of literals (atomic formulas and their negations); arbitrary formulas must first be converted to this form through Skolemization. The resolution rule states that from the hypotheses A 1 ∨ ⋯ ∨ A k ∨ C {\displaystyle A_{1}\lor \cdots \lor A_{k}\lor C} and B 1 ∨ ⋯ ∨ B l ∨ ¬ C {\displaystyle B_{1}\lor \cdots \lor B_{l}\lor \lnot C} , the conclusion A 1 ∨ ⋯ ∨ A k ∨ B 1 ∨ ⋯ ∨ B l {\displaystyle A_{1}\lor \cdots \lor A_{k}\lor B_{1}\lor \cdots \lor B_{l}} can be obtained.
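At the ground (propositional) level, where no unification is needed, the resolution rule is a one-line operation on clauses represented as sets of literals; this is a simplification of the first-order rule just stated:

```python
# Ground (propositional) resolution only — the first-order rule adds
# unification on top of this. Clauses are frozensets of integer literals,
# with -L denoting the negation of L.

def resolve(c1, c2, lit):
    """Resolve c1 (containing lit) with c2 (containing -lit)."""
    assert lit in c1 and -lit in c2
    return (c1 - {lit}) | (c2 - {-lit})

c1 = frozenset({1, 2, 5})    # A1 v A2 v C   (with C encoded as 5)
c2 = frozenset({3, -5})      # B1 v not-C
print(sorted(resolve(c1, c2, 5)))  # [1, 2, 3], i.e. A1 v A2 v B1
```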
=== Provable identities === Many identities can be proved, which establish equivalences between particular formulas. These identities allow for rearranging formulas by moving quantifiers across other connectives, and are useful for putting formulas in prenex normal form. Some provable identities include: ¬ ∀ x P ( x ) ⇔ ∃ x ¬ P ( x ) {\displaystyle \lnot \forall x\,P(x)\Leftrightarrow \exists x\,\lnot P(x)} ¬ ∃ x P ( x ) ⇔ ∀ x ¬ P ( x ) {\displaystyle \lnot \exists x\,P(x)\Leftrightarrow \forall x\,\lnot P(x)} ∀ x ∀ y P ( x , y ) ⇔ ∀ y ∀ x P ( x , y ) {\displaystyle \forall x\,\forall y\,P(x,y)\Leftrightarrow \forall y\,\forall x\,P(x,y)} ∃ x ∃ y P ( x , y ) ⇔ ∃ y ∃ x P ( x , y ) {\displaystyle \exists x\,\exists y\,P(x,y)\Leftrightarrow \exists y\,\exists x\,P(x,y)} ∀ x P ( x ) ∧ ∀ x Q ( x ) ⇔ ∀ x ( P ( x ) ∧ Q ( x ) ) {\displaystyle \forall x\,P(x)\land \forall x\,Q(x)\Leftrightarrow \forall x\,(P(x)\land Q(x))} ∃ x P ( x ) ∨ ∃ x Q ( x ) ⇔ ∃ x ( P ( x ) ∨ Q ( x ) ) {\displaystyle \exists x\,P(x)\lor \exists x\,Q(x)\Leftrightarrow \exists x\,(P(x)\lor Q(x))} P ∧ ∃ x Q ( x ) ⇔ ∃ x ( P ∧ Q ( x ) ) {\displaystyle P\land \exists x\,Q(x)\Leftrightarrow \exists x\,(P\land Q(x))} (where x {\displaystyle x} must not occur free in P {\displaystyle P} ) P ∨ ∀ x Q ( x ) ⇔ ∀ x ( P ∨ Q ( x ) ) {\displaystyle P\lor \forall x\,Q(x)\Leftrightarrow \forall x\,(P\lor Q(x))} (where x {\displaystyle x} must not occur free in P {\displaystyle P} ) == Equality and its axioms == There are several different conventions for using equality (or identity) in first-order logic. The most common convention, known as first-order logic with equality, includes the equality symbol as a primitive logical symbol which is always interpreted as the real equality relation between members of the domain of discourse, such that the "two" given members are the same member. This approach also adds certain axioms about equality to the deductive system employed. These equality axioms are:: 198–200  Reflexivity. For each variable x, x = x. Substitution for functions. For all variables x and y, and any function symbol f, x = y → f(..., x, ...) = f(..., y, ...). Substitution for formulas. For any variables x and y and any formula φ(z) with a free variable z: x = y → (φ(x) → φ(y)). These are axiom schemas, each of which specifies an infinite set of axioms. The third schema is known as Leibniz's law, "the principle of substitutivity", "the indiscernibility of identicals", or "the replacement property". The second schema, involving the function symbol f, is (equivalent to) a special case of the third schema, using the formula φ(z): f(..., x, ...) = f(..., z, ...). Then x = y → (f(..., x, ...) = f(..., x, ...) → f(..., x, ...) = f(..., y, ...)); since x = y is given, and f(..., x, ...) = f(..., x, ...) is true by reflexivity, we have f(..., x, ...) = f(..., y, ...). Many other properties of equality are consequences of the axioms above, for example: Symmetry. If x = y then y = x. Transitivity. If x = y and y = z then x = z.
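For instance, symmetry follows from reflexivity and the third schema by instantiating φ(z) as z = x; the routine derivation, spelled out here for concreteness, is:

```latex
% Deriving symmetry from reflexivity and Leibniz's law,
% with phi(z) taken to be (z = x):
\begin{align*}
  &x = y \rightarrow (\varphi(x) \rightarrow \varphi(y))
      && \text{Leibniz's law with } \varphi(z) :\equiv (z = x)\\
  &x = y \rightarrow (x = x \rightarrow y = x)
      && \text{unfolding } \varphi\\
  &x = x
      && \text{reflexivity}\\
  &x = y \rightarrow y = x
      && \text{propositional reasoning}
\end{align*}
```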
=== First-order logic without equality === An alternate approach considers the equality relation to be a non-logical symbol. This convention is known as first-order logic without equality. If an equality relation is included in the signature, the axioms of equality must now be added to the theories under consideration, if desired, instead of being considered rules of logic. The main difference between this method and first-order logic with equality is that an interpretation may now interpret two distinct individuals as "equal" (although, by Leibniz's law, these will satisfy exactly the same formulas under any interpretation). That is, the equality relation may now be interpreted by an arbitrary equivalence relation on the domain of discourse that is congruent with respect to the functions and relations of the interpretation. When this second convention is followed, the term normal model is used to refer to an interpretation where no distinct individuals a and b satisfy a = b. In first-order logic with equality, only normal models are considered, and so there is no term for a model other than a normal model. When first-order logic without equality is studied, it is necessary to amend the statements of results such as the Löwenheim–Skolem theorem so that only normal models are considered. First-order logic without equality is often employed in the context of second-order arithmetic and other higher-order theories of arithmetic, where the equality relation between sets of natural numbers is usually omitted. === Defining equality within a theory === If a theory has a binary formula A(x,y) which satisfies reflexivity and Leibniz's law, the theory is said to have equality, or to be a theory with equality. The theory may not have all instances of the above schemas as axioms, but rather as derivable theorems. For example, in theories with no function symbols and a finite number of relations, it is possible to define equality in terms of the relations, by defining the two terms s and t to be equal if any relation is unchanged by changing s to t in any argument. Some theories allow other ad hoc definitions of equality: In the theory of partial orders with one relation symbol ≤, one could define s = t to be an abbreviation for s ≤ t ∧ {\displaystyle \wedge } t ≤ s. In set theory with one relation ∈, one may define s = t to be an abbreviation for ∀x (s ∈ x ↔ t ∈ x) ∧ {\displaystyle \wedge } ∀x (x ∈ s ↔ x ∈ t). This definition of equality then automatically satisfies the axioms for equality. In this case, one should replace the usual axiom of extensionality, which can be stated as ∀ x ∀ y [ ∀ z ( z ∈ x ⇔ z ∈ y ) ⇒ x = y ] {\displaystyle \forall x\forall y[\forall z(z\in x\Leftrightarrow z\in y)\Rightarrow x=y]} , with an alternative formulation ∀ x ∀ y [ ∀ z ( z ∈ x ⇔ z ∈ y ) ⇒ ∀ z ( x ∈ z ⇔ y ∈ z ) ] {\displaystyle \forall x\forall y[\forall z(z\in x\Leftrightarrow z\in y)\Rightarrow \forall z(x\in z\Leftrightarrow y\in z)]} , which says that if sets x and y have the same elements, then they also belong to the same sets. == Metalogical properties == One motivation for the use of first-order logic, rather than higher-order logic, is that first-order logic has many metalogical properties that stronger logics do not have. These results concern general properties of first-order logic itself, rather than properties of individual theories. They provide fundamental tools for the construction of models of first-order theories. 
=== Completeness and undecidability === Gödel's completeness theorem, proved by Kurt Gödel in 1929, establishes that there are sound, complete, effective deductive systems for first-order logic, and thus the first-order logical consequence relation is captured by finite provability. Naively, the statement that a formula φ logically implies a formula ψ depends on every model of φ; these models will in general be of arbitrarily large cardinality, and so logical consequence cannot be effectively verified by checking every model. However, it is possible to enumerate all finite derivations and search for a derivation of ψ from φ. If ψ is logically implied by φ, such a derivation will eventually be found. Thus first-order logical consequence is semidecidable: it is possible to make an effective enumeration of all pairs of sentences (φ,ψ) such that ψ is a logical consequence of φ. Unlike propositional logic, first-order logic is undecidable (although semidecidable), provided that the language has at least one predicate of arity at least 2 (other than equality). This means that there is no decision procedure that determines whether arbitrary formulas are logically valid. This result was established independently by Alonzo Church and Alan Turing in 1936 and 1937, respectively, giving a negative answer to the Entscheidungsproblem posed by David Hilbert and Wilhelm Ackermann in 1928. Their proofs demonstrate a connection between the unsolvability of the decision problem for first-order logic and the unsolvability of the halting problem. There are systems weaker than full first-order logic for which the logical consequence relation is decidable. These include propositional logic and monadic predicate logic, which is first-order logic restricted to unary predicate symbols and no function symbols. Other logics with no function symbols which are decidable are the guarded fragment of first-order logic, as well as two-variable logic. The Bernays–Schönfinkel class of first-order formulas is also decidable. Decidable subsets of first-order logic are also studied in the framework of description logics. === The Löwenheim–Skolem theorem === The Löwenheim–Skolem theorem shows that if a first-order theory of cardinality λ has an infinite model, then it has models of every infinite cardinality greater than or equal to λ. One of the earliest results in model theory, it implies that it is not possible to characterize countability or uncountability in a first-order language with a countable signature. That is, there is no first-order formula φ(x) such that an arbitrary structure M satisfies φ if and only if the domain of discourse of M is countable (or, in the second case, uncountable). The Löwenheim–Skolem theorem implies that infinite structures cannot be categorically axiomatized in first-order logic. For example, there is no first-order theory whose only model is the real line: any first-order theory with an infinite model also has a model of cardinality larger than the continuum. Since the real line is infinite, any theory satisfied by the real line is also satisfied by some nonstandard models. When the Löwenheim–Skolem theorem is applied to first-order set theories, the nonintuitive consequences are known as Skolem's paradox. === The compactness theorem === The compactness theorem states that a set of first-order sentences has a model if and only if every finite subset of it has a model. 
This implies that if a formula is a logical consequence of an infinite set of first-order axioms, then it is a logical consequence of some finite number of those axioms. This theorem was first proved by Kurt Gödel as a consequence of the completeness theorem, but many additional proofs have been obtained over time. It is a central tool in model theory, providing a fundamental method for constructing models. The compactness theorem has a limiting effect on which collections of first-order structures are elementary classes. For example, the compactness theorem implies that any theory that has arbitrarily large finite models has an infinite model. Thus, the class of all finite graphs is not an elementary class (the same holds for many other algebraic structures). There are also more subtle limitations of first-order logic that are implied by the compactness theorem. For example, in computer science, many situations can be modeled as a directed graph of states (nodes) and connections (directed edges). Validating such a system may require showing that no "bad" state can be reached from any "good" state. Thus, one seeks to determine whether the good and bad states are in different connected components of the graph. However, the compactness theorem can be used to show that connected graphs are not an elementary class in first-order logic: there is no formula φ(x,y) of first-order logic, in the logic of graphs, that expresses the idea that there is a path from x to y. Connectedness can be expressed in second-order logic, but not with only existential set quantifiers, since Σ¹₁ also enjoys compactness.

=== Lindström's theorem ===
Per Lindström showed that the metalogical properties just discussed actually characterize first-order logic in the sense that no stronger logic can also have those properties (Ebbinghaus and Flum 1994, Chapter XIII). Lindström defined a class of abstract logical systems, and a rigorous definition of the relative strength of a member of this class. He established two theorems for systems of this type:

A logical system satisfying Lindström's definition that contains first-order logic and satisfies both the Löwenheim–Skolem theorem and the compactness theorem must be equivalent to first-order logic.
A logical system satisfying Lindström's definition that has a semidecidable logical consequence relation and satisfies the Löwenheim–Skolem theorem must be equivalent to first-order logic.

== Limitations ==
Although first-order logic is sufficient for formalizing much of mathematics and is commonly used in computer science and other fields, it has certain limitations. These include limitations on its expressiveness and limitations of the fragments of natural languages that it can describe. For instance, first-order logic is undecidable, meaning a sound, complete and terminating decision algorithm for provability is impossible. This has led to the study of interesting decidable fragments, such as C², first-order logic with two variables and the counting quantifiers ∃≥n and ∃≤n.

=== Expressiveness ===
The Löwenheim–Skolem theorem shows that if a first-order theory has any infinite model, then it has infinite models of every cardinality. In particular, no first-order theory with an infinite model can be categorical.
Thus, there is no first-order theory whose only model has the set of natural numbers as its domain, or whose only model has the set of real numbers as its domain. Many extensions of first-order logic, including infinitary logics and higher-order logics, are more expressive in the sense that they do permit categorical axiomatizations of the natural numbers or real numbers. This expressiveness comes at a metalogical cost, however: by Lindström's theorem, the compactness theorem and the downward Löwenheim–Skolem theorem cannot both hold in any logic stronger than first-order.

=== Formalizing natural languages ===
First-order logic is able to formalize many simple quantifier constructions in natural language, such as "every person who lives in Perth lives in Australia". Hence, first-order logic is used as a basis for knowledge representation languages, such as FO(.). Still, there are complicated features of natural language that cannot be expressed in first-order logic. "Any logical system which is appropriate as an instrument for the analysis of natural language needs a much richer structure than first-order predicate logic".

== Restrictions, extensions, and variations ==
There are many variations of first-order logic. Some of these are inessential in the sense that they merely change notation without affecting the semantics. Others change the expressive power more significantly, by extending the semantics through additional quantifiers or other new logical symbols. For example, infinitary logics permit formulas of infinite size, and modal logics add symbols for possibility and necessity.

=== Restricted languages ===
First-order logic can be studied in languages with fewer logical symbols than were described above:

Because ∃x φ(x) can be expressed as ¬∀x ¬φ(x), and ∀x φ(x) can be expressed as ¬∃x ¬φ(x), either of the two quantifiers ∃ and ∀ can be dropped.
Since φ ∨ ψ can be expressed as ¬(¬φ ∧ ¬ψ) and φ ∧ ψ can be expressed as ¬(¬φ ∨ ¬ψ), either ∨ or ∧ can be dropped. In other words, it is sufficient to have ¬ and ∨, or ¬ and ∧, as the only logical connectives. Similarly, it is sufficient to have only ¬ and → as logical connectives, or to have only the Sheffer stroke (NAND) or the Peirce arrow (NOR) operator.
It is possible to entirely avoid function symbols and constant symbols, rewriting them via predicate symbols in an appropriate way. For example, instead of using a constant symbol 0 one may use a predicate 0(x) (interpreted as x = 0) and replace every predicate such as P(0, y) with ∀x (0(x) → P(x, y)). A function such as f(x₁, x₂, ..., xₙ) will similarly be replaced by a predicate F(x₁, x₂, ..., xₙ, y) interpreted as y = f(x₁, x₂, ..., xₙ). This change requires adding additional axioms to the theory at hand, so that interpretations of the predicate symbols used have the correct semantics.

Restrictions such as these are useful as a technique to reduce the number of inference rules or axiom schemas in deductive systems, which leads to shorter proofs of metalogical results. The cost of the restrictions is that it becomes more difficult to express natural-language statements in the formal system at hand, because the logical connectives used in the natural-language statements must be replaced by their (longer) definitions in terms of the restricted collection of logical connectives. Similarly, derivations in the limited systems may be longer than derivations in systems that include additional connectives. There is thus a trade-off between the ease of working within the formal system and the ease of proving results about the formal system.

It is also possible to restrict the arities of function symbols and predicate symbols, in sufficiently expressive theories. One can in principle dispense entirely with functions of arity greater than 2 and predicates of arity greater than 1 in theories that include a pairing function. This is a function of arity 2 that takes pairs of elements of the domain and returns an ordered pair containing them. It is also sufficient to have two predicate symbols of arity 2 that define projection functions from an ordered pair to its components. In either case it is necessary that the natural axioms for a pairing function and its projections are satisfied.

=== Many-sorted logic ===
Ordinary first-order interpretations have a single domain of discourse over which all quantifiers range. Many-sorted first-order logic allows variables to have different sorts, which have different domains. This is also called typed first-order logic, with the sorts called types (as in data type), but it is not the same as first-order type theory. Many-sorted first-order logic is often used in the study of second-order arithmetic. When there are only finitely many sorts in a theory, many-sorted first-order logic can be reduced to single-sorted first-order logic.: 296–299  One introduces into the single-sorted theory a unary predicate symbol for each sort in the many-sorted theory and adds an axiom saying that these unary predicates partition the domain of discourse. For example, if there are two sorts, one adds predicate symbols P₁(x) and P₂(x) and the axiom

∀x (P₁(x) ∨ P₂(x)) ∧ ¬∃x (P₁(x) ∧ P₂(x)).

Then the elements satisfying P₁ are thought of as elements of the first sort, and elements satisfying P₂ as elements of the second sort. One can quantify over each sort by using the corresponding predicate symbol to limit the range of quantification. For example, to say there is an element of the first sort satisfying formula φ(x), one writes

∃x (P₁(x) ∧ φ(x)).

=== Additional quantifiers ===
Additional quantifiers can be added to first-order logic. Sometimes it is useful to say that "P(x) holds for exactly one x", which can be expressed as ∃!x P(x).
This notation, called uniqueness quantification, may be taken to abbreviate a formula such as ∃x (P(x) ∧ ∀y (P(y) → (x = y))). First-order logic with extra quantifiers has new quantifiers Qx, ..., with meanings such as "there are many x such that ...". Also see branching quantifiers and the plural quantifiers of George Boolos and others. Bounded quantifiers are often used in the study of set theory or arithmetic.

=== Infinitary logics ===
Infinitary logic allows infinitely long sentences. For example, one may allow a conjunction or disjunction of infinitely many formulas, or quantification over infinitely many variables. Infinitely long sentences arise in areas of mathematics including topology and model theory. Infinitary logic generalizes first-order logic to allow formulas of infinite length. The most common way in which formulas can become infinite is through infinite conjunctions and disjunctions. However, it is also possible to admit generalized signatures in which function and relation symbols are allowed to have infinite arities, or in which quantifiers can bind infinitely many variables. Because an infinite formula cannot be represented by a finite string, it is necessary to choose some other representation of formulas; the usual representation in this context is a tree. Thus, formulas are, essentially, identified with their parse trees, rather than with the strings being parsed.

The most commonly studied infinitary logics are denoted Lαβ, where α and β are each either cardinal numbers or the symbol ∞. In this notation, ordinary first-order logic is Lωω. In the logic L∞ω, arbitrary conjunctions or disjunctions are allowed when building formulas, and there is an unlimited supply of variables. More generally, the logic that permits conjunctions or disjunctions with fewer than κ constituents is known as Lκω. For example, Lω₁ω permits countable conjunctions and disjunctions. The set of free variables in a formula of Lκω can have any cardinality strictly less than κ, yet only finitely many of them can be in the scope of any quantifier when a formula appears as a subformula of another. In other infinitary logics, a subformula may be in the scope of infinitely many quantifiers. For example, in Lκ∞, a single universal or existential quantifier may bind arbitrarily many variables simultaneously. Similarly, the logic Lκλ permits simultaneous quantification over fewer than λ variables, as well as conjunctions and disjunctions of size less than κ.

=== Non-classical and modal logics ===
Intuitionistic first-order logic uses intuitionistic rather than classical reasoning; for example, ¬¬φ need not be equivalent to φ, and ¬∀x φ is in general not equivalent to ∃x ¬φ. First-order modal logic allows one to describe other possible worlds as well as this contingently true world which we inhabit. In some versions, the set of possible worlds varies depending on which possible world one inhabits. Modal logic has extra modal operators with meanings which can be characterized informally as, for example, "it is necessary that φ" (true in all possible worlds) and "it is possible that φ" (true in some possible world). With standard first-order logic we have a single domain, and each predicate is assigned one extension. With first-order modal logic we have a domain function that assigns each possible world its own domain, so that each predicate gets an extension only relative to these possible worlds.
This allows us to model cases where, for example, Alex is a philosopher, but might have been a mathematician, and might not have existed at all. In the first possible world P(a) is true, in the second P(a) is false, and in the third possible world there is no a in the domain at all. First-order fuzzy logics are first-order extensions of propositional fuzzy logics rather than classical propositional calculus.

=== Fixpoint logic ===
Fixpoint logic extends first-order logic by adding the closure under the least fixed points of positive operators.

=== Higher-order logics ===
The characteristic feature of first-order logic is that individuals can be quantified, but not predicates. Thus ∃a (Phil(a)) is a legal first-order formula, but ∃Phil (Phil(a)) is not, in most formalizations of first-order logic. Second-order logic extends first-order logic by adding the latter type of quantification. Other higher-order logics allow quantification over even higher types than second-order logic permits. These higher types include relations between relations, functions from relations to relations between relations, and other higher-type objects. Thus the "first" in first-order logic describes the type of objects that can be quantified. Unlike first-order logic, for which only one semantics is studied, there are several possible semantics for second-order logic. The most commonly employed semantics for second-order and higher-order logic is known as full semantics. The combination of additional quantifiers and the full semantics for these quantifiers makes higher-order logic stronger than first-order logic. In particular, the (semantic) logical consequence relation for second-order and higher-order logic is not semidecidable; there is no effective deduction system for second-order logic that is sound and complete under full semantics. Second-order logic with full semantics is more expressive than first-order logic. For example, it is possible to create axiom systems in second-order logic that uniquely characterize the natural numbers and the real line. The cost of this expressiveness is that second-order and higher-order logics have fewer attractive metalogical properties than first-order logic. For example, the Löwenheim–Skolem theorem and compactness theorem of first-order logic become false when generalized to higher-order logics with full semantics.

== Automated theorem proving and formal methods ==
Automated theorem proving refers to the development of computer programs that search and find derivations (formal proofs) of mathematical theorems. Finding derivations is a difficult task because the search space can be very large; an exhaustive search of every possible derivation is theoretically possible but computationally infeasible for many systems of interest in mathematics. Thus complicated heuristic functions are developed to attempt to find a derivation in less time than a blind search. The related area of automated proof verification uses computer programs to check that human-created proofs are correct. Unlike complicated automated theorem provers, verification systems may be small enough that their correctness can be checked both by hand and through automated software verification. This validation of the proof verifier is needed to give confidence that any derivation labeled as "correct" is actually correct.
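The division of labor just described, a small trusted checker examining a candidate derivation, can be illustrated in miniature. The following Python sketch verifies Hilbert-style derivations in which every line is either a stated premise or follows from two earlier lines by modus ponens; the encoding of formulas as strings and nested tuples is an illustrative assumption for this article, not any particular system's format.

```python
def check_proof(premises, steps):
    """Return True iff every step is a premise or follows by modus ponens.

    A formula is a string (atom) or a tuple ("->", antecedent, consequent).
    Each step is (formula, justification); a justification is "premise"
    or ("mp", i, j), citing two earlier zero-indexed lines.
    """
    derived = []
    for formula, justification in steps:
        if justification == "premise":
            ok = formula in premises
        else:                       # ("mp", i, j): line i is A, line j is A -> formula
            _, i, j = justification
            ok = (i < len(derived) and j < len(derived)
                  and derived[j] == ("->", derived[i], formula))
        if not ok:
            return False
        derived.append(formula)
    return True

premises = ["p", ("->", "p", "q")]
proof = [("p", "premise"),
         (("->", "p", "q"), "premise"),
         ("q", ("mp", 0, 1))]       # q follows from lines 0 and 1
assert check_proof(premises, proof)
```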
Some proof verifiers, such as Metamath, insist on having a complete derivation as input. Others, such as Mizar and Isabelle, take a well-formatted proof sketch (which may still be very long and detailed) and fill in the missing pieces by doing simple proof searches or applying known decision procedures: the resulting derivation is then verified by a small core "kernel". Many such systems are primarily intended for interactive use by human mathematicians: these are known as proof assistants. They may also use formal logics that are stronger than first-order logic, such as type theory. Because a full derivation of any nontrivial result in a first-order deductive system will be extremely long for a human to write, results are often formalized as a series of lemmas, for which derivations can be constructed separately. Automated theorem provers are also used to implement formal verification in computer science. In this setting, theorem provers are used to verify the correctness of programs and of hardware such as processors with respect to a formal specification. Because such analysis is time-consuming and thus expensive, it is usually reserved for projects in which a malfunction would have grave human or financial consequences. For the problem of model checking, efficient algorithms are known to decide whether an input finite structure satisfies a first-order formula, in addition to computational complexity bounds: see Model checking § First-order logic.

== References ==
Andrews, Peter B. (2002). An Introduction to Mathematical Logic and Type Theory: To Truth Through Proof (2nd ed.). Berlin: Kluwer Academic Publishers. Available from Springer.
Avigad, Jeremy; Donnelly, Kevin; Gray, David; Raff, Paul (2007). "A formally verified proof of the prime number theorem". ACM Transactions on Computational Logic, vol. 9, no. 1. doi:10.1145/1297658.1297660
Barwise, Jon (1977). "An Introduction to First-Order Logic". In Barwise, Jon (ed.). Handbook of Mathematical Logic. Studies in Logic and the Foundations of Mathematics. Amsterdam, NL: North-Holland (published 1982). ISBN 978-0-444-86388-1.
Barwise, Jon; Etchemendy, John (2000). Language, Proof and Logic. Stanford, CA: CSLI Publications (distributed by the University of Chicago Press).
Bocheński, Józef Maria (2007). A Précis of Mathematical Logic. Dordrecht, NL: D. Reidel. Translated from the French and German editions by Otto Bird.
Ebbinghaus, Heinz-Dieter; Flum, Jörg; Thomas, Wolfgang (1994). Mathematical Logic (2nd ed.). Undergraduate Texts in Mathematics. Berlin/New York: Springer-Verlag. ISBN 978-0-387-94258-2.
Ferreirós, José (2001). "The Road to Modern Logic — An Interpretation". Bulletin of Symbolic Logic, vol. 7, no. 4, pp. 441–484. doi:10.2307/2687794. JSTOR 2687794
Gamut, L. T. F. (1991). Logic, Language, and Meaning, Volume 2: Intensional Logic and Logical Grammar. Chicago, IL: University of Chicago Press. ISBN 0-226-28088-8.
Hilbert, David; Ackermann, Wilhelm (1950). Principles of Mathematical Logic. Chelsea. (English translation of Grundzüge der theoretischen Logik, 1928 German first edition.)
Hodges, Wilfrid (2001). "Classical Logic I: First-Order Logic". In Goble, Lou (ed.). The Blackwell Guide to Philosophical Logic. Blackwell.
Monk, J. Donald (1976). Mathematical Logic. New York, NY: Springer. doi:10.1007/978-1-4684-9452-5. ISBN 978-1-4684-9454-9.
Rautenberg, Wolfgang (2010). A Concise Introduction to Mathematical Logic (3rd ed.). New York, NY: Springer Science+Business Media. doi:10.1007/978-1-4419-1221-3. ISBN 978-1-4419-1220-6.
Tarski, Alfred; Givant, Steven (1987). A Formalization of Set Theory without Variables. Vol. 41 of American Mathematical Society Colloquium Publications. Providence, RI: American Mathematical Society. ISBN 978-0821810415.

== External links ==
"Predicate calculus", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Stanford Encyclopedia of Philosophy (2000): Shapiro, S., "Classical Logic". Covers syntax, model theory, and metatheory for first-order logic in the natural deduction style.
Magnus, P. D., forall x: an introduction to formal logic. Covers formal semantics and proof theory for first-order logic.
Metamath: an ongoing online project to reconstruct mathematics as a huge first-order theory, using first-order logic and the axiomatic set theory ZFC. Principia Mathematica modernized.
Podnieks, Karl, Introduction to Mathematical Logic.
Cambridge Mathematical Tripos notes (typeset by John Fremlin). These notes cover part of a past Cambridge Mathematical Tripos course taught to undergraduate students (usually) within their third year. The course was entitled "Logic, Computation and Set Theory" and covered ordinals and cardinals, posets and Zorn's lemma, propositional logic, predicate logic, set theory, and consistency issues related to ZFC and other set theories.
Tree Proof Generator can validate or invalidate formulas of first-order logic through the semantic tableaux method.
Wikipedia/Quantification_theory
In mathematical logic and computer science, a general recursive function, partial recursive function, or μ-recursive function is a partial function from natural numbers to natural numbers that is "computable" in an intuitive sense – as well as in a formal one. If the function is total, it is also called a total recursive function (sometimes shortened to recursive function). In computability theory, it is shown that the μ-recursive functions are precisely the functions that can be computed by Turing machines (this is one of the theorems that supports the Church–Turing thesis). The μ-recursive functions are closely related to primitive recursive functions, and their inductive definition (below) builds upon that of the primitive recursive functions. However, not every total recursive function is a primitive recursive function – the most famous example is the Ackermann function. Other equivalent classes of functions are the functions of lambda calculus and the functions that can be computed by Markov algorithms. The subset of all total recursive functions with values in {0,1} is known in computational complexity theory as the complexity class R.

== Definition ==
The μ-recursive functions (or general recursive functions) are partial functions that take finite tuples of natural numbers and return a single natural number. They are the smallest class of partial functions that includes the initial functions and is closed under composition, primitive recursion, and the minimization operator μ.

The smallest class of functions including the initial functions and closed under composition and primitive recursion (i.e. without minimisation) is the class of primitive recursive functions. While all primitive recursive functions are total, this is not true of partial recursive functions; for example, the minimisation of the successor function is undefined. The primitive recursive functions are a subset of the total recursive functions, which are a subset of the partial recursive functions. For example, the Ackermann function can be proven to be total recursive, and to be non-primitive recursive.

Primitive or "basic" functions:

Constant functions Cᵏₙ: for each natural number n and every k,
Cᵏₙ(x₁, ..., xₖ) := n.
Alternative definitions use instead a zero function as a primitive function that always returns zero, and build the constant functions from the zero function, the successor function and the composition operator.

Successor function S:
S(x) := x + 1.

Projection function Pᵏᵢ (also called the identity function): for all natural numbers i, k such that 1 ≤ i ≤ k,
Pᵏᵢ(x₁, ..., xₖ) := xᵢ.

Operators (the domain of a function defined by an operator is the set of the values of the arguments such that every function application that must be done during the computation provides a well-defined result):

Composition operator ∘ (also called the substitution operator): given an m-ary function h(x₁, ..., xₘ) and m k-ary functions g₁(x₁, ..., xₖ), ..., gₘ(x₁, ..., xₖ):
h ∘ (g₁, ..., gₘ) := f, where f(x₁, ..., xₖ) = h(g₁(x₁, ..., xₖ), ..., gₘ(x₁, ..., xₖ)).
This means that f(x₁, ..., xₖ) is defined only if g₁(x₁, ..., xₖ), ..., gₘ(x₁, ..., xₖ), and h(g₁(x₁, ..., xₖ), ..., gₘ(x₁, ..., xₖ)) are all defined.

Primitive recursion operator ρ: given the k-ary function g(x₁, ..., xₖ) and the (k+2)-ary function h(y, z, x₁, ..., xₖ):
ρ(g, h) := f, where the (k+1)-ary function f is defined by
f(0, x₁, ..., xₖ) = g(x₁, ..., xₖ),
f(S(y), x₁, ..., xₖ) = h(y, f(y, x₁, ..., xₖ), x₁, ..., xₖ).
This means that f(y, x₁, ..., xₖ) is defined only if g(x₁, ..., xₖ) and h(z, f(z, x₁, ..., xₖ), x₁, ..., xₖ) are defined for all z < y.

Minimization operator μ: given a (k+1)-ary function f(y, x₁, ..., xₖ), the k-ary function μ(f) is defined by
μ(f)(x₁, ..., xₖ) = z if and only if f(i, x₁, ..., xₖ) > 0 for i = 0, ..., z−1 and f(z, x₁, ..., xₖ) = 0.
Intuitively, minimisation seeks – beginning the search from 0 and proceeding upwards – the smallest argument that causes the function to return zero; if there is no such argument, or if one encounters an argument for which f is not defined, then the search never terminates, and μ(f) is not defined for the argument (x₁, ..., xₖ).

While some textbooks use the μ-operator as defined here, others demand that the μ-operator is applied to total functions f only.
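The three operators can be made concrete in a short program. The following Python sketch is only illustrative (the names compose, primitive_recursion and mu are not standard); partiality shows up as nontermination, exactly as in the formal definition: mu loops forever when no zero exists.

```python
def compose(h, *gs):
    """Composition: (h o (g1, ..., gm))(xs) = h(g1(xs), ..., gm(xs))."""
    return lambda *xs: h(*(g(*xs) for g in gs))

def primitive_recursion(g, h):
    """rho(g, h): f(0, xs) = g(xs); f(y+1, xs) = h(y, f(y, xs), xs)."""
    def f(y, *xs):
        acc = g(*xs)
        for z in range(y):
            acc = h(z, acc, *xs)
        return acc
    return f

def mu(f):
    """Minimization: the least z with f(z, xs) = 0; diverges if none exists."""
    def m(*xs):
        z = 0
        while f(z, *xs) != 0:
            z += 1
        return z
    return m

S = lambda x: x + 1                                   # successor
add = primitive_recursion(lambda a: a,                # g(a) = a
                          lambda y, c, a: S(c))       # h(y, f(y,a), a) = f(y,a) + 1
assert add(3, 4) == 7

# Subtraction of a from b via minimization: the least z with z + a = b.
# mu makes this a *partial* function: sub(5, 3) would search forever.
sub = mu(lambda z, a, b: 0 if z + a == b else 1)
assert sub(3, 7) == 4
```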
Although this restricts the μ-operator as compared to the definition given here, the class of μ-recursive functions remains the same, which follows from Kleene's normal form theorem (see below). The only difference is that it becomes undecidable whether a specific function definition defines a μ-recursive function, as it is undecidable whether a computable (i.e. μ-recursive) function is total.

The strong equality relation ≃ can be used to compare partial μ-recursive functions. This is defined for all partial functions f and g so that f(x₁, ..., xₖ) ≃ g(x₁, ..., xₗ) holds if and only if for any choice of arguments either both functions are defined and their values are equal or both functions are undefined.

== Examples ==
Examples not involving the minimization operator can be found at Primitive recursive function § Examples. The following examples are intended just to demonstrate the use of the minimization operator; they could also be defined without it, albeit in a more complicated way, since they are all primitive recursive. The following examples define general recursive functions that are not primitive recursive; hence they cannot avoid using the minimization operator.

== Total recursive function ==
A general recursive function is called a total recursive function if it is defined for every input, or, equivalently, if it can be computed by a total Turing machine. There is no way to computably tell whether a given general recursive function is total – see Halting problem.

== Equivalence with other models of computability ==
In the equivalence of models of computability, a parallel is drawn between Turing machines that do not terminate for certain inputs and an undefined result for that input in the corresponding partial recursive function. The unbounded search operator is not definable by the rules of primitive recursion, as those do not provide a mechanism for "infinite loops" (undefined values).

== Normal form theorem ==
A normal form theorem due to Kleene says that for each k there are primitive recursive functions U(y) and T(y, e, x₁, ..., xₖ) such that for any μ-recursive function f(x₁, ..., xₖ) with k free variables there is an e such that
f(x₁, ..., xₖ) ≃ U(μ(T)(e, x₁, ..., xₖ)).
The number e is called an index or Gödel number for the function f.: 52–53  A consequence of this result is that any μ-recursive function can be defined using a single instance of the μ operator applied to a (total) primitive recursive function.

Minsky observes that the U defined above is in essence the μ-recursive equivalent of the universal Turing machine:
To construct U is to write down the definition of a general-recursive function U(n, x) that correctly interprets the number n and computes the appropriate function of x. To construct U directly would involve essentially the same amount of effort, and essentially the same ideas, as we have invested in constructing the universal Turing machine.

== Symbolism ==
A number of different symbolisms are used in the literature. An advantage of using the symbolism is that a derivation of a function by "nesting" of the operators one inside the other is easier to write in a compact form.
In the following the string of parameters x₁, ..., xₙ is abbreviated as x:

Constant function: Kleene uses "Cⁿq(x) = q" and Boolos–Burgess–Jeffrey (2002) (B-B-J) use the abbreviation "constₙ(x) = n":
e.g. C⁷₁₃(r, s, t, u, v, w, x) = 13
e.g. const₁₃(r, s, t, u, v, w, x) = 13

Successor function: Kleene uses x′ and S for "successor". As "successor" is considered to be primitive, most texts use the apostrophe as follows:
S(a) = a + 1 =def a′, where 1 =def 0′, 2 =def 0′′, etc.

Identity function: Kleene (1952) uses "Uⁿᵢ" to indicate the identity function over the variables xᵢ; B-B-J use the identity function idⁿᵢ over the variables x₁ to xₙ:
Uⁿᵢ(x) = idⁿᵢ(x) = xᵢ
e.g. U⁷₃ = id⁷₃(r, s, t, u, v, w, x) = t

Composition (substitution) operator: Kleene uses a bold-face Sᵐₙ (not to be confused with his S for "successor"!). The superscript "m" refers to the mth function "fₘ", whereas the subscript "n" refers to the nth variable "xₙ". If we are given h(x) = g(f₁(x), ..., fₘ(x)), then
h(x) = Sᵐₙ(g, f₁, ..., fₘ).
In a similar manner, but without the sub- and superscripts, B-B-J write:
h(x) = Cn[g, f₁, ..., fₘ](x).

Primitive recursion: Kleene uses the symbol "Rⁿ(base step, induction step)", where n indicates the number of variables; B-B-J use "Pr(base step, induction step)(x)". Given:
base step: h(0, x) = f(x), and
induction step: h(y+1, x) = g(y, h(y, x), x)
Example: primitive recursion definition of a + b:
base step: f(0, a) = a = U¹₁(a)
induction step: f(b′, a) = (f(b, a))′ = g(b, f(b, a), a) = g(b, c, a) = c′ = S(U³₂(b, c, a))
R²{U¹₁(a), S[U³₂(b, c, a)]}
Pr{U¹₁(a), S[U³₂(b, c, a)]}

Example: Kleene gives an example of how to perform the recursive derivation of f(b, a) = b + a (notice the reversal of the variables a and b). He starts with three initial functions
S(a) = a′
U¹₁(a) = a
U³₂(b, c, a) = c
and defines g(b, c, a) = S(U³₂(b, c, a)) = c′, with
base step: h(0, a) = U¹₁(a)
induction step: h(b′, a) = g(b, h(b, a), a)
He arrives at: a + b = R²[U¹₁, S³₁(S, U³₂)]

== Examples ==
Fibonacci number
McCarthy 91 function

== See also ==
Recursion theory
Recursion
Recursion (computer science)

== External links ==
Stanford Encyclopedia of Philosophy entry
A compiler for transforming a recursive function into an equivalent Turing machine
Wikipedia/Μ-recursive_function
In theoretical computer science, a Markov algorithm is a string rewriting system that uses grammar-like rules to operate on strings of symbols. Markov algorithms have been shown to be Turing-complete, which means that they are suitable as a general model of computation: despite their simple notation, they can express any computable process. Markov algorithms are named after the Soviet mathematician Andrey Markov, Jr. Refal is a programming language based on Markov algorithms.

== Description ==
Normal algorithms are verbal, that is, intended to be applied to strings in different alphabets. The definition of any normal algorithm consists of two parts: an alphabet, which is a set of symbols, and a scheme. The algorithm is applied to strings of symbols of the alphabet. The scheme is a finite ordered set of substitution formulas. Each formula can be either simple or final. Simple substitution formulas are represented by strings of the form L → D, where L and D are two arbitrary strings in the alphabet. Similarly, final substitution formulas are represented by strings of the form L → ·D. Here is an example of a normal algorithm scheme in the five-letter alphabet |*abc:

|b → ba|
ab → ba
b → (empty string)
*| → b*
* → c
|c → c
ac → c|
c → · (final, empty replacement)

The process of applying the normal algorithm to an arbitrary string V in the alphabet of this algorithm is a discrete sequence of elementary steps, consisting of the following. Let us assume that V′ is the word obtained in the previous step of the algorithm (or the original word V, if the current step is the first). If none of the left-hand sides of the substitution formulas occurs in V′, then the algorithm terminates, and the result of its work is considered to be the string V′. Otherwise, the first of the substitution formulas whose left-hand side occurs in V′ is selected. If this substitution formula is of the form L → ·D, then out of all possible representations of the string V′ in the form RLS (where R and S are arbitrary strings) the one with the shortest R is chosen. Then the algorithm terminates, and the result of its work is considered to be RDS. However, if this substitution formula is of the form L → D, then out of all possible representations of the string V′ in the form RLS the one with the shortest R is chosen, after which the string RDS is considered to be the result of the current step, subject to further processing in the next step.
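The step rule described above is short enough to implement directly. Below is a minimal Python interpreter for such schemes, written for this article rather than taken from any library; rules are (pattern, replacement, is_terminating) triples in priority order, and a step limit stands in for genuine nontermination.

```python
def run_markov(rules, word, max_steps=10_000):
    """Run a Markov algorithm scheme on `word` and return the result."""
    for _ in range(max_steps):
        for pattern, replacement, terminating in rules:
            if pattern in word:
                word = word.replace(pattern, replacement, 1)  # leftmost occurrence
                if terminating:
                    return word
                break                  # restart the scan from the first rule
        else:
            return word                # no rule applies: the algorithm halts
    raise RuntimeError("step limit exceeded; the scheme may not terminate")

# The binary-to-unary scheme from the second example below:
rules = [("|0", "0||", False), ("1", "0|", False), ("0", "", False)]
assert run_markov(rules, "101") == "|||||"
```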
For example, the process of applying the algorithm described above to the word |*|| results in the sequence of words |b*|, ba|*|, a|*|, a|b*, aba|*, baa|*, aa|*, aa|c, aac, ac| and c||, after which the algorithm stops with the result ||. For other examples, see below. Any normal algorithm is equivalent to some Turing machine, and vice versa – any Turing machine is equivalent to some normal algorithm. A version of the Church–Turing thesis formulated in relation to the normal algorithm is called the "principle of normalization". Normal algorithms have proved to be a convenient means for the construction of many sections of constructive mathematics. Moreover, inherent in the definition of a normal algorithm are a number of ideas used in programming languages aimed at handling symbolic information – for example, in Refal.

== Algorithm ==
The Rules are a sequence of pairs of strings, usually presented in the form of pattern → replacement. Each rule may be either ordinary or terminating. Given an input string:
1. Check the Rules in order from top to bottom to see whether any of the patterns can be found in the input string.
2. If none is found, the algorithm stops.
3. If one (or more) is found, use the first of them to replace the leftmost occurrence of matched text in the input string with its replacement.
4. If the rule just applied was a terminating one, the algorithm stops.
5. Go to step 1.
Note that after each rule application the search starts over from the first rule.

== Example ==
The following example shows the basic operation of a Markov algorithm.

=== Rules ===
"A" -> "apple"
"B" -> "bag"
"S" -> "shop"
"T" -> "the"
"the shop" -> "my brother"
"a never used" -> ."terminating rule"

=== Symbol string ===
"I bought a B of As from T S."

=== Execution ===
If the algorithm is applied to the above example, the Symbol string will change in the following manner.
"I bought a B of As from T S."
"I bought a B of apples from T S."
"I bought a bag of apples from T S."
"I bought a bag of apples from T shop."
"I bought a bag of apples from the shop."
"I bought a bag of apples from my brother."
The algorithm will then terminate.

== Another example ==
These rules give a more interesting example. They rewrite binary numbers to their unary counterparts. For example, 101 will be rewritten to a string of 5 consecutive bars.

=== Rules ===
"|0" -> "0||"
"1" -> "0|"
"0" -> ""

=== Symbol string ===
"101"

=== Execution ===
If the algorithm is applied to the above example, it will terminate after the following steps.
"101"
"0|01"
"00||1"
"00||0|"
"00|0|||"
"000|||||"
"00|||||"
"0|||||"
"|||||"

== See also ==
Formal grammar

== External links ==
Yad Studio - Markov algorithms IDE and interpreter (Open Source)
Markov algorithm interpreter
Markov algorithm interpreters at Rosetta-Code
A=B, a game about writing substitution rules for a Markov algorithm
Wikipedia/Markov_algorithm
Computability theory, also known as recursion theory, is a branch of mathematical logic, computer science, and the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees. The field has since expanded to include the study of generalized computability and definability. In these areas, computability theory overlaps with proof theory and effective descriptive set theory. Basic questions addressed by computability theory include: What does it mean for a function on the natural numbers to be computable? How can noncomputable functions be classified into a hierarchy based on their level of noncomputability? Although there is considerable overlap in terms of knowledge and methods, mathematical computability theorists study the theory of relative computability, reducibility notions, and degree structures; those in the computer science field focus on the theory of subrecursive hierarchies, formal methods, and formal languages. The study of which mathematical constructions can be effectively performed is sometimes called recursive mathematics.

== Introduction ==
Computability theory originated in the 1930s, with the work of Kurt Gödel, Alonzo Church, Rózsa Péter, Alan Turing, Stephen Kleene, and Emil Post. The fundamental results the researchers obtained established Turing computability as the correct formalization of the informal idea of effective calculation. In 1952, these results led Kleene to coin the two names "Church's thesis": 300  and "Turing's thesis".: 376  Nowadays these are often considered as a single hypothesis, the Church–Turing thesis, which states that any function that is computable by an algorithm is a computable function. Although initially skeptical, by 1946 Gödel argued in favor of this thesis:: 84  "Tarski has stressed in his lecture (and I think justly) the great importance of the concept of general recursiveness (or Turing's computability). It seems to me that this importance is largely due to the fact that with this concept one has for the first time succeeded in giving an absolute definition of an interesting epistemological notion, i.e., one not depending on the formalism chosen.": 84 

With a definition of effective calculation came the first proofs that there are problems in mathematics that cannot be effectively decided. In 1936, Church and Turing, inspired by techniques that Gödel had used in 1931 to prove his incompleteness theorems, independently demonstrated that the Entscheidungsproblem is not effectively decidable. This result showed that there is no algorithmic procedure that can correctly decide whether arbitrary mathematical propositions are true or false. Many problems in mathematics have been shown to be undecidable after these initial examples were established. In 1947, Markov and Post published independent papers showing that the word problem for semigroups cannot be effectively decided. Extending this result, Pyotr Novikov and William Boone showed independently in the 1950s that the word problem for groups is not effectively solvable: there is no effective procedure that, given a word in a finitely presented group, will decide whether the element represented by the word is the identity element of the group.
In 1970, Yuri Matiyasevich proved (using results of Julia Robinson) Matiyasevich's theorem, which implies that Hilbert's tenth problem has no effective solution; this problem asked whether there is an effective procedure to decide whether a Diophantine equation over the integers has a solution in the integers.

== Turing computability ==
The main form of computability studied in the field was introduced by Turing in 1936. A set of natural numbers is said to be a computable set (also called a decidable, recursive, or Turing computable set) if there is a Turing machine that, given a number n, halts with output 1 if n is in the set and halts with output 0 if n is not in the set. A function f from natural numbers to natural numbers is a (Turing) computable, or recursive function if there is a Turing machine that, on input n, halts and returns output f(n). The use of Turing machines here is not necessary; there are many other models of computation that have the same computing power as Turing machines, for example the μ-recursive functions obtained from primitive recursion and the μ operator.

The terminology for computable functions and sets is not completely standardized. The definition in terms of μ-recursive functions, as well as a different definition of rekursiv functions by Gödel, led to the traditional name recursive for sets and functions computable by a Turing machine. The word decidable stems from the German word Entscheidungsproblem, which was used in the original papers of Turing and others. In contemporary use, the term "computable function" has various definitions: according to Nigel J. Cutland, it is a partial recursive function (which can be undefined for some inputs), while according to Robert I. Soare it is a total recursive (equivalently, general recursive) function. This article follows the second of these conventions. In 1996, Soare gave additional comments about the terminology.

Not every set of natural numbers is computable. The halting problem, which is the set of (descriptions of) Turing machines that halt on input 0, is a well-known example of a noncomputable set. The existence of many noncomputable sets follows from the facts that there are only countably many Turing machines, and thus only countably many computable sets, but by Cantor's theorem, there are uncountably many sets of natural numbers.

Although the halting problem is not computable, it is possible to simulate program execution and produce an infinite list of the programs that do halt. Thus the halting problem is an example of a computably enumerable (c.e.) set, which is a set that can be enumerated by a Turing machine (other terms for computably enumerable include recursively enumerable and semidecidable). Equivalently, a set is c.e. if and only if it is the range of some computable function. The c.e. sets, although not decidable in general, have been studied in detail in computability theory.

== Areas of research ==
Beginning with the theory of computable sets and functions described above, the field of computability theory has grown to include the study of many closely related topics. These are not independent areas of research: each of these areas draws ideas and results from the others, and most computability theorists are familiar with the majority of them.
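The claim above that the halting programs can be listed, even though halting cannot be decided, rests on dovetailing: running ever more programs for ever more steps. The following Python sketch shows the idea; runs_for(i, n) is a hypothetical stand-in for simulating program i for n steps, since the sketch does not include a real machine simulator.

```python
import itertools

def enumerate_halting(runs_for):
    """Yield the index of every program that halts (in some order).

    runs_for(i, n) must return True iff program i halts within n steps.
    Every halting program is eventually listed, but no such enumeration
    exists for the complementary set of non-halting programs.
    """
    seen = set()
    for bound in itertools.count(1):       # stage 1, 2, 3, ...
        for i in range(bound):             # simulate programs 0..bound-1
            if i not in seen and runs_for(i, bound):
                seen.add(i)
                yield i

# Toy model: "program i" halts after exactly i steps iff i is even.
halts_within = lambda i, n: i % 2 == 0 and n >= i
gen = enumerate_halting(halts_within)
assert [next(gen) for _ in range(3)] == [0, 2, 4]
```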
=== Relative computability and the Turing degrees === Computability theory in mathematical logic has traditionally focused on relative computability, a generalization of Turing computability defined using oracle Turing machines, introduced by Turing in 1939. An oracle Turing machine is a hypothetical device which, in addition to performing the actions of a regular Turing machine, is able to ask questions of an oracle, which is a particular set of natural numbers. The oracle machine may only ask questions of the form "Is n in the oracle set?". Each question will be immediately answered correctly, even if the oracle set is not computable. Thus an oracle machine with a noncomputable oracle will be able to compute sets that a Turing machine without an oracle cannot. Informally, a set of natural numbers A is Turing reducible to a set B if there is an oracle machine that correctly tells whether numbers are in A when run with B as the oracle set (in this case, the set A is also said to be (relatively) computable from B and recursive in B). If a set A is Turing reducible to a set B and B is Turing reducible to A then the sets are said to have the same Turing degree (also called degree of unsolvability). The Turing degree of a set gives a precise measure of how uncomputable the set is. The natural examples of sets that are not computable, including many different sets that encode variants of the halting problem, have two properties in common: They are computably enumerable, and Each can be translated into any other via a many-one reduction. That is, given such sets A and B, there is a total computable function f such that A = {x : f(x) ∈ B}. These sets are said to be many-one equivalent (or m-equivalent). Many-one reductions are "stronger" than Turing reductions: if a set A is many-one reducible to a set B, then A is Turing reducible to B, but the converse does not always hold. Although the natural examples of noncomputable sets are all many-one equivalent, it is possible to construct computably enumerable sets A and B such that A is Turing reducible to B but not many-one reducible to B. It can be shown that every computably enumerable set is many-one reducible to the halting problem, and thus the halting problem is the most complicated computably enumerable set with respect to many-one reducibility and with respect to Turing reducibility. In 1944, Post asked whether every computably enumerable set is either computable or Turing equivalent to the halting problem, that is, whether there is no computably enumerable set with a Turing degree intermediate between those two. As intermediate results, Post defined natural types of computably enumerable sets like the simple, hypersimple and hyperhypersimple sets. Post showed that these sets are strictly between the computable sets and the halting problem with respect to many-one reducibility. Post also showed that some of them are strictly intermediate under other reducibility notions stronger than Turing reducibility. But Post left open the main problem of the existence of computably enumerable sets of intermediate Turing degree; this problem became known as Post's problem. After ten years, Kleene and Post showed in 1954 that there are intermediate Turing degrees between those of the computable sets and the halting problem, but they failed to show that any of these degrees contains a computably enumerable set. 
Very soon after this, Friedberg and Muchnik independently solved Post's problem by establishing the existence of computably enumerable sets of intermediate degree. This groundbreaking result opened a wide study of the Turing degrees of the computably enumerable sets, which turned out to possess a very complicated and non-trivial structure. There are uncountably many sets that are not computably enumerable, and the investigation of the Turing degrees of all sets is as central in computability theory as the investigation of the computably enumerable Turing degrees. Many degrees with special properties were constructed: hyperimmune-free degrees, where every function computable relative to that degree is majorized by an (unrelativized) computable function; high degrees, relative to which one can compute a function f which dominates every computable function g in the sense that there is a constant c depending on g such that g(x) < f(x) for all x > c; random degrees containing algorithmically random sets; 1-generic degrees of 1-generic sets; and the degrees below the halting problem of limit-computable sets.

The study of arbitrary (not necessarily computably enumerable) Turing degrees involves the study of the Turing jump. Given a set A, the Turing jump of A is a set of natural numbers encoding a solution to the halting problem for oracle Turing machines running with oracle A. The Turing jump of any set is always of higher Turing degree than the original set, and a theorem of Friedberg shows that any set that computes the halting problem can be obtained as the Turing jump of another set. Post's theorem establishes a close relationship between the Turing jump operation and the arithmetical hierarchy, which is a classification of certain subsets of the natural numbers based on their definability in arithmetic.

Much recent research on Turing degrees has focused on the overall structure of the set of Turing degrees and the set of Turing degrees containing computably enumerable sets. A deep theorem of Shore and Slaman states that the function mapping a degree x to the degree of its Turing jump is definable in the partial order of the Turing degrees. A survey by Ambos-Spies and Fejer gives an overview of this research and its historical progression.

=== Other reducibilities ===
An ongoing area of research in computability theory studies reducibility relations other than Turing reducibility. Post introduced several strong reducibilities, so named because they imply truth-table reducibility. A Turing machine implementing a strong reducibility will compute a total function regardless of which oracle it is presented with. Weak reducibilities are those where a reduction process may not terminate for all oracles; Turing reducibility is one example. The strong reducibilities include:

One-one reducibility: A is one-one reducible (or 1-reducible) to B if there is a total computable injective function f such that each n is in A if and only if f(n) is in B.
Many-one reducibility: This is essentially one-one reducibility without the constraint that f be injective. A is many-one reducible (or m-reducible) to B if there is a total computable function f such that each n is in A if and only if f(n) is in B.
Truth-table reducibility: A is truth-table reducible to B if A is Turing reducible to B via an oracle Turing machine that computes a total function regardless of the oracle it is given.
Because of the compactness of Cantor space, this is equivalent to saying that the reduction presents a single list of questions (depending only on the input) to the oracle simultaneously, and then, having seen their answers, is able to produce an output without asking additional questions regardless of the oracle's answers to the initial queries. Many variants of truth-table reducibility have also been studied. Further reducibilities (positive, disjunctive, conjunctive, linear and their weak and bounded versions) are discussed in the article Reduction (computability theory). The major research on strong reducibilities has been to compare their theories, both for the class of all computably enumerable sets as well as for the class of all subsets of the natural numbers. Furthermore, the relations between the reducibilities have been studied. For example, it is known that every Turing degree is either a truth-table degree or is the union of infinitely many truth-table degrees. Reducibilities weaker than Turing reducibility (that is, reducibilities that are implied by Turing reducibility) have also been studied. The best known are arithmetical reducibility and hyperarithmetical reducibility. These reducibilities are closely connected to definability over the standard model of arithmetic. === Rice's theorem and the arithmetical hierarchy === Rice showed that for every nontrivial class C (which contains some but not all c.e. sets) the index set E = {e : the e-th c.e. set We is in C} has the property that either the halting problem or its complement is many-one reducible to E, that is, can be mapped using a many-one reduction to E (see Rice's theorem for more detail). But many of these index sets are even more complicated than the halting problem. These types of sets can be classified using the arithmetical hierarchy. For example, the index set FIN of the class of all finite sets is on the level Σ2, the index set REC of the class of all recursive sets is on the level Σ3, the index set COFIN of all cofinite sets is also on the level Σ3, and the index set COMP of the class of all Turing-complete sets is on the level Σ4. These hierarchy levels are defined inductively: Σn+1 contains exactly the sets which are computably enumerable relative to Σn, and Σ1 contains the computably enumerable sets. The index sets given here are even complete for their levels, that is, all the sets in these levels can be many-one reduced to the given index sets. === Reverse mathematics === The program of reverse mathematics asks which set-existence axioms are necessary to prove particular theorems of mathematics in subsystems of second-order arithmetic. This study was initiated by Harvey Friedman and was studied in detail by Stephen Simpson and others; in 1999, Simpson gave a detailed discussion of the program. The set-existence axioms in question correspond informally to axioms saying that the powerset of the natural numbers is closed under various reducibility notions. The weakest such axiom studied in reverse mathematics is recursive comprehension, which states that the powerset of the naturals is closed under Turing reducibility. === Numberings === A numbering is an enumeration of functions; it has two parameters, e and x, and outputs the value of the e-th function in the numbering on the input x. A numbering can be partial-computable although some of its members are total computable functions. Admissible numberings are those into which all others can be translated.
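The two-parameter shape of a numbering can be sketched as follows (a deliberately tiny toy, assuming a fixed finite list of total functions; a genuine numbering of all partial-computable functions would instead decode e as a program and simulate it):

```python
# A toy numbering nu(e, x): the value of the e-th function on input x.
FUNCS = [
    lambda x: 0,        # the zero function
    lambda x: x,        # the identity
    lambda x: x + 1,    # the successor
    lambda x: x * x,    # squaring
]

def nu(e: int, x: int) -> int:
    """Return the value of the e-th function in this (toy) numbering on x."""
    return FUNCS[e % len(FUNCS)](x)

print(nu(2, 41))  # index 2 is the successor function, so this prints 42
```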
A Friedberg numbering (named after its discoverer) is a one-one numbering of all partial-computable functions; it is necessarily not an admissible numbering. Later research dealt also with numberings of other classes, like classes of computably enumerable sets. Goncharov discovered, for example, a class of computably enumerable sets for which the numberings fall into exactly two classes with respect to computable isomorphisms. === The priority method === Post's problem was solved with a method called the priority method; a proof using this method is called a priority argument. This method is primarily used to construct computably enumerable sets with particular properties. To use this method, the desired properties of the set to be constructed are broken up into an infinite list of goals, known as requirements, so that satisfying all the requirements will cause the set constructed to have the desired properties. Each requirement is assigned a natural number representing its priority; 0 is the most important priority, 1 the second most important, and so on. The set is then constructed in stages, each stage attempting to satisfy one or more of the requirements by either adding numbers to the set or banning numbers from the set so that the final set will satisfy the requirement. It may happen that satisfying one requirement will cause another to become unsatisfied; the priority order is used to decide what to do in such an event. Priority arguments have been employed to solve many problems in computability theory, and have been classified into a hierarchy based on their complexity. Because complex priority arguments can be technical and difficult to follow, it has traditionally been considered desirable to prove results without priority arguments, or to see if results proved with priority arguments can also be proved without them. For example, Kummer published a paper on a proof for the existence of Friedberg numberings without using the priority method. === The lattice of computably enumerable sets === When Post defined the notion of a simple set as a c.e. set with an infinite complement not containing any infinite c.e. set, he started to study the structure of the computably enumerable sets under inclusion. This lattice became a well-studied structure. Computable sets can be defined in this structure by the basic result that a set is computable if and only if the set and its complement are both computably enumerable. Infinite c.e. sets always have infinite computable subsets; on the other hand, simple sets exist, but they never have a coinfinite computable superset (such a superset would yield an infinite computable subset of the complement). Post had already introduced hypersimple and hyperhypersimple sets; later, maximal sets were constructed, which are c.e. sets such that every c.e. superset is either a finite variant of the given maximal set or is co-finite. Post's original motivation in the study of this lattice was to find a structural notion such that every set which satisfies this property is neither in the Turing degree of the computable sets nor in the Turing degree of the halting problem. Post did not find such a property, and the solution to his problem applied priority methods instead; in 1991, Harrington and Soare eventually found such a property. === Automorphism problems === Another important question is the existence of automorphisms in computability-theoretic structures.
One of these structures is that of the computably enumerable sets under inclusion modulo finite difference; in this structure, A is below B if and only if the set difference B − A is finite. Maximal sets (as defined in the previous paragraph) have the property that they cannot be automorphic to non-maximal sets, that is, if there is an automorphism of the computably enumerable sets under the structure just mentioned, then every maximal set is mapped to another maximal set. In 1974, Soare showed that the converse also holds, that is, every two maximal sets are automorphic. So the maximal sets form an orbit, that is, every automorphism preserves maximality and any two maximal sets are transformed into each other by some automorphism. Harrington gave a further example of an automorphic property: that of the creative sets, the sets which are many-one equivalent to the halting problem. Besides the lattice of computably enumerable sets, automorphisms are also studied for the structure of the Turing degrees of all sets as well as for the structure of the Turing degrees of c.e. sets. In both cases, Cooper claims to have constructed nontrivial automorphisms which map some degrees to other degrees; this construction has, however, not been verified, and some colleagues believe that the construction contains errors and that the question of whether there is a nontrivial automorphism of the Turing degrees is still one of the main unsolved questions in this area. === Kolmogorov complexity === The field of Kolmogorov complexity and algorithmic randomness was developed during the 1960s and 1970s by Chaitin, Kolmogorov, Levin, Martin-Löf and Solomonoff (the names are given here in alphabetical order; much of the research was independent, and the unity of the concept of randomness was not understood at the time). The main idea is to consider a universal Turing machine U and to measure the complexity of a number (or string) x as the length of the shortest input p such that U(p) outputs x. This approach revolutionized earlier ways to determine when an infinite sequence (equivalently, the characteristic function of a subset of the natural numbers) is random or not by invoking a notion of randomness for finite objects. Kolmogorov complexity became not only a subject of independent study but is also applied to other subjects as a tool for obtaining proofs. There are still many open problems in this area. === Frequency computation === This branch of computability theory analyzed the following question: for fixed m and n with 0 < m < n, for which functions A is it possible to compute, for any n different inputs x1, x2, ..., xn, a tuple of n numbers y1, y2, ..., yn such that at least m of the equations A(xk) = yk are true? Such sets are known as (m, n)-recursive sets. The first major result in this branch of computability theory is Trakhtenbrot's result that a set is computable if it is (m, n)-recursive for some m, n with 2m > n. On the other hand, Jockusch's semirecursive sets (which were already known informally before Jockusch introduced them in 1968) are examples of sets which are (m, n)-recursive if and only if 2m < n + 1. There are uncountably many of these sets and also some computably enumerable but noncomputable sets of this type. Later, Degtev established a hierarchy of computably enumerable sets that are (1, n + 1)-recursive but not (1, n)-recursive.
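The defining condition of an (m, n)-computation can be written down directly as a checker (an illustrative sketch; the function names are ours, and A stands for the characteristic function of the set in question):

```python
def is_valid_mn_answer(A, xs, ys, m):
    """Check the (m, n) condition: at least m of the n guesses ys
    agree with the characteristic function A on the n inputs xs."""
    assert len(xs) == len(ys)
    return sum(1 for x, y in zip(xs, ys) if A(x) == y) >= m

# Toy example: guessing membership in the even numbers on 3 inputs.
# Getting 2 of the 3 answers right satisfies the (2, 3) condition.
even = lambda x: x % 2 == 0
print(is_valid_mn_answer(even, [1, 2, 3], [False, True, True], 2))  # True
```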
After a long phase of research by Russian scientists, this subject was repopularized in the West by Beigel's thesis on bounded queries, which linked frequency computation to the above-mentioned bounded reducibilities and other related notions. One of the major results was Kummer's Cardinality Theorem, which states that a set A is computable if and only if there is an n such that some algorithm enumerates, for each tuple of n different numbers, up to n many possible choices for the cardinality of this set of n numbers intersected with A; these choices must contain the true cardinality but leave out at least one false one. === Inductive inference === This is the computability-theoretic branch of learning theory. It is based on E. Mark Gold's model of learning in the limit from 1967 and has since developed more and more models of learning. The general scenario is the following: given a class S of computable functions, is there a learner (that is, a computable functional) which, for any input of the form (f(0), f(1), ..., f(n)), outputs a hypothesis? A learner M learns a function f if almost all hypotheses are the same index e of f with respect to a previously agreed-on acceptable numbering of all computable functions; M learns S if M learns every f in S. Basic results are that all computably enumerable classes of functions are learnable while the class REC of all computable functions is not learnable. Many related models have been considered; the learning of classes of computably enumerable sets from positive data is also a topic studied from Gold's pioneering paper in 1967 onwards. === Generalizations of Turing computability === Computability theory includes the study of generalized notions of this field such as arithmetic reducibility, hyperarithmetical reducibility and α-recursion theory, as described by Sacks in 1990. These generalized notions include reducibilities that cannot be executed by Turing machines but are nevertheless natural generalizations of Turing reducibility. These studies include approaches to investigate the analytical hierarchy, which differs from the arithmetical hierarchy by permitting quantification over sets of natural numbers in addition to quantification over individual numbers. These areas are linked to the theories of well-orderings and trees; for example, the set of all indices of computable (nonbinary) trees without infinite branches is complete for level $\Pi_1^1$ of the analytical hierarchy. Both Turing reducibility and hyperarithmetical reducibility are important in the field of effective descriptive set theory. The even more general notion of degrees of constructibility is studied in set theory. === Continuous computability theory === Computability theory for digital computation is well developed. Computability theory is less well developed for analog computation, which occurs in analog computers, analog signal processing, analog electronics, artificial neural networks and continuous-time control theory, modelled by differential equations and continuous dynamical systems. For example, models of computation such as the Blum–Shub–Smale machine model have formalized computation on the reals. == Relationships between definability, proof and computability == There are close relationships between the Turing degree of a set of natural numbers and the difficulty (in terms of the arithmetical hierarchy) of defining that set using a first-order formula. One such relationship is made precise by Post's theorem.
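For concreteness, the relationship given by Post's theorem can be stated as follows (a standard formulation, added here for illustration; the notation $\emptyset^{(n)}$ for the n-th Turing jump of the empty set is assumed):

```latex
\text{For every } n \geq 1:\quad
\emptyset^{(n)} \text{ is } \Sigma^0_n\text{-complete (under many-one reducibility), and}\quad
B \in \Sigma^0_{n+1} \iff B \text{ is computably enumerable in } \emptyset^{(n)}.
```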
A weaker relationship was demonstrated by Kurt Gödel in the proofs of his completeness theorem and incompleteness theorems. Gödel's proofs show that the set of logical consequences of an effective first-order theory is a computably enumerable set, and that if the theory is strong enough this set will be uncomputable. Similarly, Tarski's indefinability theorem can be interpreted both in terms of definability and in terms of computability. Computability theory is also linked to second-order arithmetic, a formal theory of natural numbers and sets of natural numbers. The fact that certain sets are computable or relatively computable often implies that these sets can be defined in weak subsystems of second-order arithmetic. The program of reverse mathematics uses these subsystems to measure the non-computability inherent in well known mathematical theorems. In 1999, Simpson discussed many aspects of second-order arithmetic and reverse mathematics. The field of proof theory includes the study of second-order arithmetic and Peano arithmetic, as well as formal theories of the natural numbers weaker than Peano arithmetic. One method of classifying the strength of these weak systems is by characterizing which computable functions the system can prove to be total. For example, in primitive recursive arithmetic any computable function that is provably total is actually primitive recursive, while Peano arithmetic proves that functions like the Ackermann function, which are not primitive recursive, are total. Not every total computable function is provably total in Peano arithmetic, however; an example of such a function is provided by Goodstein's theorem. == Name == The field of mathematical logic dealing with computability and its generalizations has been called "recursion theory" since its early days. Robert I. Soare, a prominent researcher in the field, has proposed that the field should be called "computability theory" instead. He argues that Turing's terminology using the word "computable" is more natural and more widely understood than the terminology using the word "recursive" introduced by Kleene. Many contemporary researchers have begun to use this alternate terminology. These researchers also use terminology such as partial computable function and computably enumerable (c.e.) set instead of partial recursive function and recursively enumerable (r.e.) set. Not all researchers have been convinced, however, as explained by Fortnow and Simpson. Some commentators argue that both the names recursion theory and computability theory fail to convey the fact that most of the objects studied in computability theory are not computable. In 1967, Rogers suggested that a key property of computability theory is that its results and structures should be invariant under computable bijections on the natural numbers (this suggestion draws on the ideas of the Erlangen program in geometry). The idea is that a computable bijection merely renames numbers in a set, rather than indicating any structure in the set, much as a rotation of the Euclidean plane does not change any geometric aspect of lines drawn on it. Since any two infinite computable sets are linked by a computable bijection, this proposal identifies all the infinite computable sets (the finite computable sets are viewed as trivial). According to Rogers, the sets of interest in computability theory are the noncomputable sets, partitioned into equivalence classes by computable bijections of the natural numbers.
== Professional organizations == The main professional organization for computability theory is the Association for Symbolic Logic, which holds several research conferences each year. The interdisciplinary research Association Computability in Europe (CiE) also organizes a series of annual conferences. == See also == Recursion (computer science) Computability logic Transcomputational problem == Notes == == References == == Further reading == Undergraduate level texts Cooper, S. Barry (2004). Computability Theory. Chapman & Hall/CRC. ISBN 1-58488-237-9. Matiyasevich, Yuri Vladimirovich (1993). Hilbert's Tenth Problem. MIT Press. ISBN 0-262-13295-8. Advanced texts Jain, Sanjay; Osherson, Daniel Nathan; Royer, James S.; Sharma, Arun (1999). Systems that learn: an introduction to learning theory (2nd ed.). Bradford Book / MIT Press. ISBN 0-262-10077-0. LCCN 98-34861. Lerman, Manuel (1983). Degrees of unsolvability. Perspectives in Mathematical Logic. Springer-Verlag. ISBN 3-540-12155-2. Nies, André (2009). Computability and Randomness. Oxford University Press. ISBN 978-0-19-923076-1. Odifreddi, Piergiorgio (1989). Classical Recursion Theory. North-Holland. ISBN 0-444-87295-7. Odifreddi, Piergiorgio (1999). Classical Recursion Theory. Vol. II. Elsevier. ISBN 0-444-50205-X. Survey papers and collections Enderton, Herbert Bruce (1977). "Elements of Recursion Theory". In Barwise, Jon (ed.). Handbook of Mathematical Logic. North-Holland. pp. 527–566. ISBN 0-7204-2285-X. Research papers and collections Burgin, Mark; Klinger, Allen (2004). "Experience, Generations, and Limits in Machine Learning". Theoretical Computer Science. 317 (1–3): 71–91. doi:10.1016/j.tcs.2003.12.005. Friedberg, Richard M. (1958). "Three theorems on recursive enumeration: I. Decomposition, II. Maximal Set, III. Enumeration without repetition". The Journal of Symbolic Logic. 23 (3): 309–316. doi:10.2307/2964290. JSTOR 2964290. S2CID 25834814. Gold, E. Mark (1967). "Language Identification in the Limit" (PDF). Information and Control. 10 (5): 447–474. doi:10.1016/s0019-9958(67)91165-5. [1] Jockusch, Carl Groos Jr. (1968). "Semirecursive sets and positive reducibility". Transactions of the American Mathematical Society. 137 (2): 420–436. doi:10.1090/S0002-9947-1968-0220595-7. JSTOR 1994957. Kleene, Stephen Cole; Post, Emil Leon (1954). "The upper semi-lattice of degrees of recursive unsolvability". Annals of Mathematics. Series 2. 59 (3): 379–407. doi:10.2307/1969708. JSTOR 1969708. Myhill, John R. Sr. (1956). "The lattice of recursively enumerable sets". The Journal of Symbolic Logic. 21: 215–220. doi:10.1017/S002248120008525X. S2CID 123260425. Post, Emil Leon (1947). "Recursive unsolvability of a problem of Thue". Journal of Symbolic Logic. 12 (1): 1–11. doi:10.2307/2267170. JSTOR 2267170. S2CID 30320278. Reprinted in Davis 1965. == External links == Association for Symbolic Logic homepage Computability in Europe homepage Archived 2011-02-17 at the Wayback Machine Webpage on Recursion Theory Course at Graduate Level with approximately 100 pages of lecture notes German language lecture notes on inductive inference
Wikipedia/Recursion_theory
In mathematics, dimension theory is the study, in terms of commutative algebra, of the notion of dimension of an algebraic variety (and by extension that of a scheme). The need for a theory for such an apparently simple notion results from the existence of many definitions of dimension that are equivalent only in the most regular cases (see Dimension of an algebraic variety). A large part of dimension theory consists in studying the conditions under which several dimensions are equal, and many important classes of commutative rings may be defined as the rings such that two dimensions are equal; for example, a regular ring is a commutative ring such that the homological dimension is equal to the Krull dimension. The theory is simpler for commutative rings that are finitely generated algebras over a field, which are also quotient rings of polynomial rings in a finite number of indeterminates over a field. In this case, which is the algebraic counterpart of the case of affine algebraic sets, most of the definitions of the dimension are equivalent. For general commutative rings, the lack of a geometric interpretation is an obstacle to the development of the theory; in particular, very little is known for non-noetherian rings. (Kaplansky's Commutative rings gives a good account of the non-noetherian case.) Throughout the article, $\dim$ denotes the Krull dimension of a ring and $\operatorname{ht}$ the height of a prime ideal (i.e., the Krull dimension of the localization at that prime ideal). Rings are assumed to be commutative except in the last section on dimensions of non-commutative rings. == Basic results == Let R be a noetherian ring or valuation ring. Then $\dim R[x] = \dim R + 1$. If R is noetherian, this follows from the fundamental theorem below (in particular, Krull's principal ideal theorem), but it is also a consequence of a more precise result: for any prime ideal $\mathfrak{p}$ in R, $\operatorname{ht}(\mathfrak{p}R[x]) = \operatorname{ht}(\mathfrak{p})$, and $\operatorname{ht}(\mathfrak{q}) = \operatorname{ht}(\mathfrak{p}) + 1$ for any prime ideal $\mathfrak{q} \supsetneq \mathfrak{p}R[x]$ in $R[x]$ that contracts to $\mathfrak{p}$. This can be shown within basic ring theory (cf. Kaplansky, Commutative rings). In addition, in each fiber of $\operatorname{Spec} R[x] \to \operatorname{Spec} R$, one cannot have a chain of prime ideals of length $\geq 2$. Since an artinian ring (e.g., a field) has dimension zero, by induction one gets the formula: for an artinian ring R, $\dim R[x_1, \dots, x_n] = n$. == Local rings == === Fundamental theorem === Let $(R, \mathfrak{m})$ be a noetherian local ring and I an $\mathfrak{m}$-primary ideal (i.e., it sits between some power of $\mathfrak{m}$ and $\mathfrak{m}$). Let $F(t)$ be the Poincaré series of the associated graded ring $\operatorname{gr}_I R = \bigoplus_0^\infty I^n/I^{n+1}$.
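As a concrete illustration of the last formula (a standard example, added here for illustration): for a field k, $\dim k[x_1, \dots, x_n] = n$, a maximal chain of prime ideals being

```latex
(0) \subsetneq (x_1) \subsetneq (x_1, x_2) \subsetneq \cdots \subsetneq (x_1, \dots, x_n).
```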
That is, $F(t) = \sum_0^\infty \ell(I^n/I^{n+1}) t^n$, where $\ell$ refers to the length of a module (over the artinian ring $(\operatorname{gr}_I R)_0 = R/I$). If $x_1, \dots, x_s$ generate I, then their images in $I/I^2$ have degree 1 and generate $\operatorname{gr}_I R$ as an $R/I$-algebra. By the Hilbert–Serre theorem, F is a rational function with exactly one pole, at $t = 1$, of order $d \leq s$. Since $(1-t)^{-d} = \sum_0^\infty \binom{d-1+j}{d-1} t^j$, we find that the coefficient of $t^n$ in $F(t) = (1-t)^d F(t) \cdot (1-t)^{-d}$ is of the form $\sum_0^N a_k \binom{d-1+n-k}{d-1} = (1-t)^d F(t) \big|_{t=1} \frac{n^{d-1}}{(d-1)!} + O(n^{d-2})$. That is to say, $\ell(I^n/I^{n+1})$ is a polynomial P in n of degree $d - 1$. P is called the Hilbert polynomial of $\operatorname{gr}_I R$. We set $d(R) = d$. We also set $\delta(R)$ to be the minimum number of elements of R that can generate an $\mathfrak{m}$-primary ideal of R. Our ambition is to prove the fundamental theorem: $\delta(R) = d(R) = \dim R$. Since we can take s to be $\delta(R)$, we already have $\delta(R) \geq d(R)$ from the above. Next we prove $d(R) \geq \dim R$ by induction on $d(R)$. Let $\mathfrak{p}_0 \subsetneq \cdots \subsetneq \mathfrak{p}_m$ be a chain of prime ideals in R. Let $D = R/\mathfrak{p}_0$ and x a nonzero nonunit element in D. Since x is not a zero-divisor, we have the exact sequence $0 \to D \xrightarrow{x} D \to D/xD \to 0$. The degree bound on the Hilbert–Samuel polynomial now implies that $d(D) > d(D/xD) \geq d(R/\mathfrak{p}_1)$. (This essentially follows from the Artin–Rees lemma; see Hilbert–Samuel function for the statement and the proof.) In $R/\mathfrak{p}_1$, the chain $\mathfrak{p}_i$ becomes a chain of length $m - 1$, and so, by the inductive hypothesis and again by the degree estimate, $m - 1 \leq \dim(R/\mathfrak{p}_1) \leq d(R/\mathfrak{p}_1) \leq d(D) - 1 \leq d(R) - 1$. The claim follows. It now remains to show $\dim R \geq \delta(R)$; more precisely, we shall show that if $d = \dim R$, then there are elements $x_1, \dots, x_d$ in $\mathfrak{m}$ such that, for each i, every prime ideal containing $(x_1, \dots, x_i)$ has height at least i. (Notice: $(x_1, \dots, x_d)$ is then $\mathfrak{m}$-primary.) The proof is omitted; it appears, for example, in Atiyah–MacDonald, but it can also be supplied directly: the idea is to use prime avoidance.
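A worked example of these invariants (a standard computation, added for illustration): take $R = k[[x, y]]$ and $I = \mathfrak{m} = (x, y)$, so that $\operatorname{gr}_{\mathfrak{m}} R \cong k[x, y]$. Then

```latex
\ell(\mathfrak{m}^n/\mathfrak{m}^{n+1}) = n + 1
\quad\Longrightarrow\quad
F(t) = \sum_{n \geq 0} (n + 1)\, t^n = (1 - t)^{-2},
```

a pole of order $d = 2$ at $t = 1$; the Hilbert polynomial is $P(n) = n + 1$, of degree $1 = d - 1$, and indeed $\delta(R) = d(R) = \dim R = 2$, the $\mathfrak{m}$-primary ideal $(x, y)$ being generated by two elements.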
=== Consequences of the fundamental theorem === Let $(R, \mathfrak{m})$ be a noetherian local ring and put $k = R/\mathfrak{m}$. Then $\dim R \leq \dim_k \mathfrak{m}/\mathfrak{m}^2$, since a basis of $\mathfrak{m}/\mathfrak{m}^2$ lifts to a generating set of $\mathfrak{m}$ by Nakayama's lemma. If the equality holds, then R is called a regular local ring. Moreover, $\dim \widehat{R} = \dim R$, since $\operatorname{gr} R = \operatorname{gr} \widehat{R}$. (Krull's principal ideal theorem) The height of the ideal generated by elements $x_1, \dots, x_s$ in a noetherian ring is at most s. Conversely, a prime ideal of height s is minimal over an ideal generated by s elements. (Proof: let $\mathfrak{p}$ be a prime ideal minimal over such an ideal. Then $s \geq \dim R_{\mathfrak{p}} = \operatorname{ht} \mathfrak{p}$. The converse was shown in the course of the proof of the fundamental theorem.) Next, for a local homomorphism $(A, \mathfrak{m}_A) \to (B, \mathfrak{m}_B)$ of noetherian local rings one has $\dim B \leq \dim A + \dim B/\mathfrak{m}_A B$, with equality when the going-down property holds (for example, when B is flat over A). Proof: let $x_1, \dots, x_n$ generate an $\mathfrak{m}_A$-primary ideal and $y_1, \dots, y_m$ be such that their images generate an $\mathfrak{m}_B/\mathfrak{m}_A B$-primary ideal. Then $\mathfrak{m}_B^s \subset (y_1, \dots, y_m) + \mathfrak{m}_A B$ for some s. Raising both sides to higher powers, we see that some power of $\mathfrak{m}_B$ is contained in $(y_1, \dots, y_m, x_1, \dots, x_n)$; i.e., the latter ideal is $\mathfrak{m}_B$-primary; thus, $m + n \geq \dim B$. The equality is a straightforward application of the going-down property. Q.E.D. Finally, for a noetherian ring R one has $\dim R[x] = \dim R + 1$. Proof: if $\mathfrak{p}_0 \subsetneq \mathfrak{p}_1 \subsetneq \cdots \subsetneq \mathfrak{p}_n$ is a chain of prime ideals in R, then $\mathfrak{p}_i R[x]$ is a chain of prime ideals in $R[x]$, while $\mathfrak{p}_n R[x]$ is not a maximal ideal. Thus, $\dim R + 1 \leq \dim R[x]$. For the reverse inequality, let $\mathfrak{m}$ be a maximal ideal of $R[x]$ and $\mathfrak{p} = R \cap \mathfrak{m}$. Clearly, $R[x]_{\mathfrak{m}} = R_{\mathfrak{p}}[x]_{\mathfrak{m}}$. Since $R[x]_{\mathfrak{m}}/\mathfrak{p}R_{\mathfrak{p}}R[x]_{\mathfrak{m}} = (R_{\mathfrak{p}}/\mathfrak{p}R_{\mathfrak{p}})[x]_{\mathfrak{m}}$ is then a localization of a principal ideal domain and has dimension at most one, we get $1 + \dim R \geq 1 + \dim R_{\mathfrak{p}} \geq \dim R[x]_{\mathfrak{m}}$ by the previous inequality. Since $\mathfrak{m}$ is arbitrary, it follows that $1 + \dim R \geq \dim R[x]$. Q.E.D. === Nagata's altitude formula === The formula states that, for a domain $R'$ finitely generated over a noetherian domain R and a prime ideal $\mathfrak{p}'$ of $R'$ lying over $\mathfrak{p} \subset R$, one has $\operatorname{ht} \mathfrak{p}' + \operatorname{tr.deg}_{\kappa(\mathfrak{p})} \kappa(\mathfrak{p}') \leq \operatorname{ht} \mathfrak{p} + \operatorname{tr.deg}_R R'$. Proof: first suppose $R'$ is a polynomial ring.
By induction on the number of variables, it is enough to consider the case $R' = R[x]$. Since R' is flat over R, $\dim R'_{\mathfrak{p}'} = \dim R_{\mathfrak{p}} + \dim \kappa(\mathfrak{p}) \otimes_R R'_{\mathfrak{p}'}$. By Noether's normalization lemma, the second term on the right side is $\dim \kappa(\mathfrak{p}) \otimes_R R' - \dim \kappa(\mathfrak{p}) \otimes_R R'/\mathfrak{p}' = 1 - \operatorname{tr.deg}_{\kappa(\mathfrak{p})} \kappa(\mathfrak{p}') = \operatorname{tr.deg}_R R' - \operatorname{tr.deg} \kappa(\mathfrak{p}')$. Next, suppose $R'$ is generated by a single element; thus, $R' = R[x]/I$. If I = 0, then we are already done. Suppose not. Then $R'$ is algebraic over R and so $\operatorname{tr.deg}_R R' = 0$. Since R is a subring of R', $I \cap R = 0$, and so $\operatorname{ht} I = \dim R[x]_I = \dim Q(R)[x]_I = 1 - \operatorname{tr.deg}_{Q(R)} \kappa(I) = 1$, since $\kappa(I) = Q(R')$ is algebraic over $Q(R)$. Let $\mathfrak{p}^{\prime c}$ denote the pre-image in $R[x]$ of $\mathfrak{p}'$. Then, as $\kappa(\mathfrak{p}^{\prime c}) = \kappa(\mathfrak{p}')$, by the polynomial case, $\operatorname{ht} \mathfrak{p}' = \operatorname{ht}(\mathfrak{p}^{\prime c}/I) \leq \operatorname{ht} \mathfrak{p}^{\prime c} - \operatorname{ht} I = \dim R_{\mathfrak{p}} - \operatorname{tr.deg}_{\kappa(\mathfrak{p})} \kappa(\mathfrak{p}')$. Here, note that the inequality is an equality if R' is catenary. Finally, working with a chain of prime ideals, it is straightforward to reduce the general case to the above case. Q.E.D. == Homological methods == === Regular rings === Let R be a noetherian ring. The projective dimension of a finite R-module M is the shortest length of a projective resolution of M (possibly infinite) and is denoted by $\operatorname{pd}_R M$. We set $\operatorname{gl.dim} R = \sup \{ \operatorname{pd}_R M \mid M \text{ is a finite module} \}$; it is called the global dimension of R. Assume R is local with residue field k. Then $\operatorname{gl.dim} R = \operatorname{pd}_R k$. Proof: we claim that, for any finite R-module M, $\operatorname{pd}_R M \leq n \Leftrightarrow \operatorname{Tor}_{n+1}^R(M, k) = 0$. By dimension shifting (cf. the proof of the theorem of Serre below), it is enough to prove this for $n = 0$. But then, by the local criterion for flatness, $\operatorname{Tor}_1^R(M, k) = 0 \Rightarrow M \text{ flat} \Rightarrow M \text{ free} \Rightarrow \operatorname{pd}_R(M) \leq 0$. Now, $\operatorname{gl.dim} R \leq n \Rightarrow \operatorname{pd}_R k \leq n \Rightarrow \operatorname{Tor}_{n+1}^R(-, k) = 0 \Rightarrow \operatorname{pd}_R(-) \leq n \Rightarrow \operatorname{gl.dim} R \leq n$, completing the proof. Q.E.D. Remark: the proof also shows that $\operatorname{pd}_R K = \operatorname{pd}_R M - 1$ if M is not free and K is the kernel of some surjection from a free module to M. The next lemma states that if $f \in \mathfrak{m}$ is a nonzerodivisor on both R and M, and $R_1 = R/fR$, then $\operatorname{pd}_{R_1}(M \otimes R_1) \leq \operatorname{pd}_R M$. Proof: if $\operatorname{pd}_R M = 0$, then M is R-free and thus $M \otimes R_1$ is $R_1$-free. Next suppose $\operatorname{pd}_R M > 0$. Then we have $\operatorname{pd}_R K = \operatorname{pd}_R M - 1$, as in the remark above. Thus, by induction, it is enough to consider the case $\operatorname{pd}_R M = 1$. Then there is a projective resolution $0 \to P_1 \to P_0 \to M \to 0$, which gives $\operatorname{Tor}_1^R(M, R_1) \to P_1 \otimes R_1 \to P_0 \otimes R_1 \to M \otimes R_1 \to 0$. But $\operatorname{Tor}_1^R(M, R_1) = {}_f M = \{ m \in M \mid fm = 0 \} = 0$. Hence, $\operatorname{pd}(M \otimes R_1)$ is at most 1. Q.E.D. (Theorem of Serre) R is regular if and only if $\operatorname{gl.dim} R < \infty$; in that case, $\operatorname{gl.dim} R = \dim R$. Proof: if R is regular, we can write $k = R/(f_1, \dots, f_n)$, with $f_1, \dots, f_n$ a regular system of parameters. An exact sequence $0 \to M \xrightarrow{f} M \to M_1 \to 0$ of finite modules, with f in the maximal ideal and $\operatorname{pd}_R M < \infty$, gives us, for $i \geq \operatorname{pd}_R M$: $0 = \operatorname{Tor}_{i+1}^R(M, k) \to \operatorname{Tor}_{i+1}^R(M_1, k) \to \operatorname{Tor}_i^R(M, k) \xrightarrow{f} \operatorname{Tor}_i^R(M, k)$. But f here is zero, since it kills k. Thus, $\operatorname{Tor}_{i+1}^R(M_1, k) \simeq \operatorname{Tor}_i^R(M, k)$, and consequently $\operatorname{pd}_R M_1 = 1 + \operatorname{pd}_R M$. Using this, we get $\operatorname{pd}_R k = 1 + \operatorname{pd}_R(R/(f_1, \dots, f_{n-1})) = \cdots = n$. The proof of the converse is by induction on $\dim R$. We begin with the inductive step. Set $R_1 = R/f_1 R$, with $f_1$ among a system of parameters. To show R is regular, it is enough to show $R_1$ is regular. But, since $\dim R_1 < \dim R$, by the inductive hypothesis and the preceding lemma with $M = \mathfrak{m}$, $\operatorname{gl.dim} R < \infty \Rightarrow \operatorname{gl.dim} R_1 = \operatorname{pd}_{R_1} k \leq \operatorname{pd}_{R_1} \mathfrak{m}/f_1 \mathfrak{m} < \infty \Rightarrow R_1 \text{ regular}$. The basic step remains. Suppose $\dim R = 0$. We claim $\operatorname{gl.dim} R = 0$ if it is finite. (This would imply that R is a semisimple local ring, i.e., a field.) If that is not the case, then there is some finite module M with $0 < \operatorname{pd}_R M < \infty$, and thus in fact we can find M with $\operatorname{pd}_R M = 1$. By Nakayama's lemma, there is a surjection $F \to M$ from a free module F to M whose kernel K is contained in $\mathfrak{m}F$. Since $\dim R = 0$, the maximal ideal $\mathfrak{m}$ is an associated prime of R; i.e., $\mathfrak{m} = \operatorname{ann}(s)$ for some nonzero s in R. Since $K \subset \mathfrak{m}F$, we have $sK = 0$. Since K is not zero and is free, this implies $s = 0$, which is absurd. Q.E.D. (Auslander–Buchsbaum) Every regular local ring is a unique factorization domain. Proof: let R be a regular local ring. Then $\operatorname{gr} R \simeq k[x_1, \dots, x_d]$, which is an integrally closed domain. It is a standard algebra exercise to show that this implies that R is an integrally closed domain. Now, we need to show that every divisorial ideal is principal, i.e., that the divisor class group of R vanishes. But, according to Bourbaki, Algèbre commutative, chapitre 7, §4, Corollary 2 to Proposition 16, a divisorial ideal is principal if it admits a finite free resolution, which is indeed the case by the theorem. Q.E.D. === Depth === Let R be a ring and M a module over it. A sequence of elements $x_1, \dots, x_n$ in R is called an M-regular sequence if $x_1$ is not a zero-divisor on M and $x_i$ is not a zero-divisor on $M/(x_1, \dots, x_{i-1})M$ for each $i = 2, \dots, n$. A priori, it is not obvious whether any permutation of a regular sequence is still regular (see the section below for some positive answers). Let R be a local noetherian ring with maximal ideal $\mathfrak{m}$ and put $k = R/\mathfrak{m}$. Then, by definition, the depth of a finite R-module M is the supremum of the lengths of all M-regular sequences in $\mathfrak{m}$. For example, we have $\operatorname{depth} M = 0$ $\Leftrightarrow$ $\mathfrak{m}$ consists of zero-divisors on M $\Leftrightarrow$ $\mathfrak{m}$ is associated with M. By induction, we find $\operatorname{depth} M \leq \dim R/\mathfrak{p}$ for any associated prime $\mathfrak{p}$ of M. In particular, $\operatorname{depth} M \leq \dim M$. If the equality holds for M = R, then R is called a Cohen–Macaulay ring. Example: a regular noetherian local ring is Cohen–Macaulay (since a regular system of parameters is an R-regular sequence). In general, a noetherian ring is called a Cohen–Macaulay ring if the localizations at all maximal ideals are Cohen–Macaulay.
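A standard example separating depth from dimension (included for illustration, not from the original text): let $R = k[x, y]/(x^2, xy)$, localized at $\mathfrak{m} = (x, y)$. Then

```latex
x \cdot x = x \cdot y = 0 \;\Rightarrow\; \mathfrak{m} = \operatorname{ann}(x) \;\Rightarrow\; \operatorname{depth} R = 0,
\qquad
\dim R = 1 \quad (\text{via } (x) \subsetneq (x, y), \text{ with } R/(x) \cong k[y]),
```

so $\operatorname{depth} R < \dim R$ and R is not Cohen–Macaulay.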
We note that a Cohen–Macaulay ring is universally catenary. This implies, for example, that a polynomial ring $k[x_1, \dots, x_d]$ is universally catenary, since it is regular and thus Cohen–Macaulay. (Ext characterization of depth) For a fixed finite R-module N whose support is $\{\mathfrak{m}\}$ (for example, $N = k$), $\operatorname{depth} M = \min \{ i \mid \operatorname{Ext}_R^i(N, M) \neq 0 \}$. Proof: we first prove by induction on n the following statement (⁎): for every R-module M and every M-regular sequence $x_1, \dots, x_n$ in $\mathfrak{m}$, $\operatorname{Ext}_R^n(N, M) \simeq \operatorname{Hom}_R(N, M/(x_1, \dots, x_n)M)$. The basic step n = 0 is trivial. Next, by the inductive hypothesis, $\operatorname{Ext}_R^{n-1}(N, M) \simeq \operatorname{Hom}_R(N, M/(x_1, \dots, x_{n-1})M)$. But the latter is zero, since the annihilator of N contains some power of $x_n$. Thus, from the exact sequence $0 \to M \xrightarrow{x_1} M \to M_1 \to 0$ and the fact that $x_1$ kills N, using the inductive hypothesis again, we get $\operatorname{Ext}_R^n(N, M) \simeq \operatorname{Ext}_R^{n-1}(N, M/x_1 M) \simeq \operatorname{Hom}_R(N, M/(x_1, \dots, x_n)M)$, proving (⁎). Now, if $n < \operatorname{depth} M$, then we can find an M-regular sequence of length more than n, and so by (⁎) we see that $\operatorname{Ext}_R^n(N, M) = 0$. It remains to show that $\operatorname{Ext}_R^n(N, M) \neq 0$ if $n = \operatorname{depth} M$. By (⁎) we can assume n = 0. Then $\mathfrak{m}$ is associated with M; thus it is in the support of M. On the other hand, $\mathfrak{m} \in \operatorname{Supp}(N)$. It follows by linear algebra that there is a nonzero homomorphism from N to M modulo $\mathfrak{m}$; hence, one from N to M by Nakayama's lemma. Q.E.D. The Auslander–Buchsbaum formula relates depth and projective dimension: if M is a finite module with $\operatorname{pd}_R M < \infty$, then $\operatorname{pd}_R M + \operatorname{depth} M = \operatorname{depth} R$. Proof: we argue by induction on $\operatorname{pd}_R M$, the basic case (i.e., M free) being trivial. By Nakayama's lemma, we have an exact sequence $0 \to K \xrightarrow{f} F \to M \to 0$, where F is free and the image of f is contained in $\mathfrak{m}F$. Since $\operatorname{pd}_R K = \operatorname{pd}_R M - 1$, what we need to show is $\operatorname{depth} K = \operatorname{depth} M + 1$. Since the image of f lies in $\mathfrak{m}F$, f induces the zero map on $\operatorname{Ext}_R^*(k, -)$, and the long exact sequence yields, for any i: $\operatorname{Ext}_R^i(k, F) \to \operatorname{Ext}_R^i(k, M) \to \operatorname{Ext}_R^{i+1}(k, K) \to 0$. Note that the left-most term is zero if $i < \operatorname{depth} R$. If $i < \operatorname{depth} K - 1$, then, since $\operatorname{depth} K \leq \operatorname{depth} R$ by the inductive hypothesis, we see that $\operatorname{Ext}_R^i(k, M) = 0$. If $i = \operatorname{depth} K - 1$, then $\operatorname{Ext}_R^{i+1}(k, K) \neq 0$, and it must be that $\operatorname{Ext}_R^i(k, M) \neq 0$. Q.E.D. As a matter of notation, for any R-module M, we let $\Gamma_{\mathfrak{m}}(M) = \{ s \in M \mid \operatorname{supp}(s) \subset \{\mathfrak{m}\} \} = \{ s \in M \mid \mathfrak{m}^j s = 0 \text{ for some } j \}$. One sees without difficulty that $\Gamma_{\mathfrak{m}}$ is a left-exact functor, and we then let $H_{\mathfrak{m}}^j = R^j \Gamma_{\mathfrak{m}}$ be its j-th right derived functor, called the local cohomology of R. Since $\Gamma_{\mathfrak{m}}(M) = \varinjlim \operatorname{Hom}_R(R/\mathfrak{m}^j, M)$, via abstract nonsense, $H_{\mathfrak{m}}^i(M) = \varinjlim \operatorname{Ext}_R^i(R/\mathfrak{m}^j, M)$. This observation proves the first part of the theorem below. Proof: 1. is already noted (except to show the nonvanishing at the degree equal to the depth of M; use induction to see this) and 3. is a general fact by abstract nonsense. 2. is a consequence of an explicit computation of a local cohomology by means of Koszul complexes (see below). $\square$ === Koszul complex === Let R be a ring and x an element in it. We form the chain complex K(x) given by $K(x)_i = R$ for i = 0, 1 and $K(x)_i = 0$ for any other i, with the differential $d: K(x)_1 \to K(x)_0, \; r \mapsto xr$. For any R-module M, we then get the complex $K(x, M) = K(x) \otimes_R M$ with the differential $d \otimes 1$, and we let $\operatorname{H}_*(x, M) = \operatorname{H}_*(K(x, M))$ be its homology. Note: $\operatorname{H}_0(x, M) = M/xM$ and $\operatorname{H}_1(x, M) = {}_x M = \{ m \in M \mid xm = 0 \}$. More generally, given a finite sequence $x_1, \dots, x_n$ of elements in a ring R, we form the tensor product of complexes $K(x_1, \dots, x_n) = K(x_1) \otimes \cdots \otimes K(x_n)$ and let $\operatorname{H}_*(x_1, \dots, x_n, M) = \operatorname{H}_*(K(x_1, \dots, x_n, M))$ be its homology. As before, $\operatorname{H}_0(\underline{x}, M) = M/(x_1, \dots, x_n)M$ and $\operatorname{H}_n(\underline{x}, M) = \operatorname{Ann}_M((x_1, \dots, x_n))$. We now have the homological characterization of a regular sequence. A Koszul complex is a powerful computational tool. For instance, it follows from the theorem and the corollary that $\operatorname{H}_{\mathfrak{m}}^i(M) \simeq \varinjlim \operatorname{H}^i(K(x_1^j, \dots, x_n^j; M))$. (Here, one uses the self-duality of a Koszul complex; see Proposition 17.15 of Eisenbud, Commutative Algebra with a View Toward Algebraic Geometry.)
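A minimal worked example of these homology groups (standard, added for illustration): take $R = M = k[x]/(x^2)$ and the single element x. Then

```latex
\operatorname{H}_0(x, R) = R/xR \cong k, \qquad
\operatorname{H}_1(x, R) = {}_{x}R = \{ r \in R \mid xr = 0 \} = (x) \neq 0,
```

so the nonvanishing of $\operatorname{H}_1$ detects that x is a zero-divisor on R, i.e., that the one-element sequence x is not regular.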
Another instance would be the following. Remark: the theorem can be used to give a second quick proof of Serre's theorem, that R is regular if and only if it has finite global dimension. Indeed, by the above theorem, $\operatorname{Tor}_s^R(k, k) \neq 0$ and thus $\operatorname{gl.dim} R \geq s$. On the other hand, as $\operatorname{gl.dim} R = \operatorname{pd}_R k$, the Auslander–Buchsbaum formula gives $\operatorname{gl.dim} R = \dim R$. Hence, $\dim R \leq s \leq \operatorname{gl.dim} R = \dim R$. We next use Koszul homology to define and study complete intersection rings. Let R be a noetherian local ring. By definition, the first deviation of R is the vector space dimension $\epsilon_1(R) = \dim_k \operatorname{H}_1(\underline{x})$, where $\underline{x} = (x_1, \dots, x_d)$ is a system of parameters. By definition, R is a complete intersection ring if $\dim R + \epsilon_1(R)$ is the dimension of the tangent space. (See Hartshorne for a geometric meaning.) === Injective dimension and Tor dimensions === Let R be a ring. The injective dimension of an R-module M, denoted by $\operatorname{id}_R M$, is defined just like the projective dimension: it is the minimal length of an injective resolution of M. Let $\operatorname{Mod}_R$ be the category of R-modules. The global dimension can also be computed with injective dimensions: $\operatorname{gl.dim} R = \sup \{ \operatorname{id}_R M \mid M \in \operatorname{Mod}_R \}$. Proof: suppose $\operatorname{gl.dim} R \leq n$. Let M be an R-module and consider a resolution $0 \to M \to I_0 \xrightarrow{\phi_0} I_1 \to \cdots \to I_{n-1} \xrightarrow{\phi_{n-1}} N \to 0$, where the $I_i$ are injective modules. For any ideal I, $\operatorname{Ext}_R^1(R/I, N) \simeq \operatorname{Ext}_R^2(R/I, \ker(\phi_{n-1})) \simeq \cdots \simeq \operatorname{Ext}_R^{n+1}(R/I, M)$, which is zero, since $\operatorname{Ext}_R^{n+1}(R/I, -)$ can be computed via a projective resolution of $R/I$ of length at most n. Thus, by Baer's criterion, N is injective. We conclude that $\sup \{ \operatorname{id}_R M \mid M \} \leq n$. Essentially by reversing the arrows, one can also prove the implication in the other way. Q.E.D. The theorem suggests that we consider a sort of dual of the global dimension: $\operatorname{w.gl.dim} R = \inf \{ n \mid \operatorname{Tor}_i^R(M, N) = 0, \; i > n, \; M, N \in \operatorname{Mod}_R \}$. It was originally called the weak global dimension of R, but today it is more commonly called the Tor dimension of R. Remark: for any ring R, $\operatorname{w.gl.dim} R \leq \operatorname{gl.dim} R$. == Dimensions of non-commutative rings == Let A be a graded algebra over a field k. If V is a finite-dimensional generating subspace of A, then we let $f(n) = \dim_k V^n$ and put $\operatorname{gk}(A) = \limsup_{n \to \infty} \frac{\log f(n)}{\log n}$. This is called the Gelfand–Kirillov dimension of A. It is easy to show that $\operatorname{gk}(A)$ is independent of the choice of V. Given a graded right (or left) module M over A, one may similarly define the Gelfand–Kirillov dimension $\operatorname{gk}(M)$ of M. Example: if A is finite-dimensional, then gk(A) = 0. If A is an affine ring, then gk(A) equals the Krull dimension of A. Example: if $A_n = k[x_1, \dots, x_n, \partial_1, \dots, \partial_n]$ is the n-th Weyl algebra, then $\operatorname{gk}(A_n) = 2n$. == See also == Multiplicity theory Bass number Perfect complex amplitude == Notes == == References == Bruns, Winfried; Herzog, Jürgen (1993), Cohen-Macaulay rings, Cambridge Studies in Advanced Mathematics, vol. 39, Cambridge University Press, ISBN 978-0-521-41068-7, MR 1251956 Part II of Eisenbud, David (1995), Commutative algebra. With a view toward algebraic geometry, Graduate Texts in Mathematics, vol. 150, New York: Springer-Verlag, ISBN 0-387-94268-8, MR 1322960. Chapter 10 of Atiyah, Michael Francis; Macdonald, I.G. (1969), Introduction to Commutative Algebra, Westview Press, ISBN 978-0-201-40751-8. Kaplansky, Irving, Commutative rings, Allyn and Bacon, 1970. Matsumura, H. (1987). Commutative Ring Theory. Cambridge Studies in Advanced Mathematics. Vol. 8. Translated by M. Reid. Cambridge University Press. doi:10.1017/CBO9781139171762. ISBN 978-0-521-36764-6. Serre, Jean-Pierre (1975), Algèbre locale. Multiplicités, Cours au Collège de France, 1957–1958, rédigé par Pierre Gabriel. Troisième édition, 1975. Lecture Notes in Mathematics (in French), vol. 11, Berlin, New York: Springer-Verlag Weibel, Charles A. (1995). An Introduction to Homological Algebra. Cambridge University Press.
Wikipedia/Dimension_theory_(algebra)
In abstract algebra, a representation of an associative algebra is a module for that algebra. Here an associative algebra is a (not necessarily unital) ring. If the algebra is not unital, it may be made so in a standard way (see the adjoint functors page); there is no essential difference between modules for the resulting unital ring, in which the identity acts by the identity mapping, and representations of the algebra. == Examples == === Linear complex structure === One of the simplest non-trivial examples is a linear complex structure, which is a representation of the complex numbers C, thought of as an associative algebra over the real numbers R. This algebra is realized concretely as $\mathbb{C} = \mathbb{R}[x]/(x^2 + 1)$, which corresponds to $i^2 = -1$. Then a representation of C is a real vector space V, together with an action of C on V (a map $\mathbb{C} \to \mathrm{End}(V)$). Concretely, this is just an action of i, as this generates the algebra, and the operator representing i (the image of i in End(V)) is denoted J to avoid confusion with the identity matrix I. === Polynomial algebras === Another important basic class of examples consists of representations of polynomial algebras, the free commutative algebras; these form a central object of study in commutative algebra and its geometric counterpart, algebraic geometry. A representation of a polynomial algebra in k variables over the field K is concretely a K-vector space with k commuting operators, and is often denoted $K[T_1, \dots, T_k]$, meaning the representation of the abstract algebra $K[x_1, \dots, x_k]$ where $x_i \mapsto T_i$. A basic result about such representations is that, over an algebraically closed field, the representing matrices are simultaneously triangularisable. Even the case of representations of the polynomial algebra in a single variable is of interest; this is denoted by $K[T]$ and is used in understanding the structure of a single linear operator on a finite-dimensional vector space. Specifically, applying the structure theorem for finitely generated modules over a principal ideal domain to this algebra yields as corollaries the various canonical forms of matrices, such as Jordan canonical form. In some approaches to noncommutative geometry, the free noncommutative algebra (polynomials in non-commuting variables) plays a similar role, but the analysis is much more difficult. == Weights == Eigenvalues and eigenvectors can be generalized to algebra representations. The generalization of an eigenvalue of an algebra representation is, rather than a single scalar, a one-dimensional representation $\lambda\colon A \to R$ (i.e., an algebra homomorphism from the algebra to its underlying ring: a linear functional that is also multiplicative). This is known as a weight, and the analogs of an eigenvector and eigenspace are called weight vector and weight space. The case of the eigenvalue of a single operator corresponds to the algebra $R[T]$, and a map of algebras $R[T] \to R$ is determined by which scalar it maps the generator T to. A weight vector for an algebra representation is a vector such that any element of the algebra maps this vector to a multiple of itself, forming a one-dimensional submodule (subrepresentation), as sketched below.
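A finite-dimensional sketch of this definition (illustrative only; the matrices are ours): for the polynomial algebra acting through two commuting matrices, a weight vector is a common eigenvector, and its weight records the eigenvalue under each generator.

```python
import numpy as np

# Two commuting operators T1, T2 generate a representation of k[x1, x2].
T1 = np.diag([2.0, 3.0])
T2 = np.diag([5.0, 7.0])

m = np.array([1.0, 0.0])      # first standard basis vector
print(T1 @ m, T2 @ m)         # [2. 0.] [5. 0.]  -> weight (2, 5)
# m spans a one-dimensional submodule: every p(T1, T2) maps m to p(2, 5) * m.
```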
Because the pairing A × M → M {\displaystyle A\times M\to M} is bilinear, the multiple in question depends linearly and multiplicatively on the algebra element; that is, it is an algebra map A → R, namely the weight. In symbols, a weight vector is a vector m ∈ M {\displaystyle m\in M} such that a m = λ ( a ) m {\displaystyle am=\lambda (a)m} for all elements a ∈ A , {\displaystyle a\in A,} for some linear functional λ {\displaystyle \lambda } – note that on the left, multiplication is the algebra action, while on the right, multiplication is scalar multiplication. Because a weight is a map to a commutative ring, the map factors through the abelianization of the algebra A {\displaystyle {\mathcal {A}}} – equivalently, it vanishes on the derived algebra. In terms of matrices, if v {\displaystyle v} is a common eigenvector of operators T {\displaystyle T} and U {\displaystyle U} , then T U v = U T v {\displaystyle TUv=UTv} (because in both cases it is just multiplication by scalars), so common eigenvectors of an algebra must be in the set on which the algebra acts commutatively (which is annihilated by the derived algebra). Thus of central interest are the free commutative algebras, namely the polynomial algebras. In this particularly simple and important case of the polynomial algebra F [ T 1 , … , T k ] {\displaystyle \mathbf {F} [T_{1},\dots ,T_{k}]} in a set of commuting matrices, a weight vector of this algebra is a simultaneous eigenvector of the matrices, while a weight of this algebra is simply a k {\displaystyle k} -tuple of scalars λ = ( λ 1 , … , λ k ) {\displaystyle \lambda =(\lambda _{1},\dots ,\lambda _{k})} corresponding to the eigenvalue of each matrix, and hence geometrically to a point in k {\displaystyle k} -space. These weights – in particular their geometry – are of central importance in understanding the representation theory of Lie algebras, specifically the finite-dimensional representations of semisimple Lie algebras. As an application of this geometry, given an algebra that is a quotient of a polynomial algebra on k {\displaystyle k} generators, it corresponds geometrically to an algebraic variety in k {\displaystyle k} -dimensional space, and the weight must fall on the variety – i.e., it satisfies the defining equations for the variety. This generalizes the fact that eigenvalues satisfy the characteristic polynomial of a matrix in one variable. == See also ==
Representation theory
Intertwiner
Representation theory of Hopf algebras
Lie algebra representation
Schur’s lemma
Jacobson density theorem
Double commutant theorem
== Notes == == References ==
Wikipedia/Representation_of_an_algebra
In commutative and homological algebra, depth is an important invariant of rings and modules. Although depth can be defined more generally, the most common case considered is the case of modules over a commutative Noetherian local ring. In this case, the depth of a module is related to its projective dimension by the Auslander–Buchsbaum formula. A more elementary property of depth is the inequality d e p t h ( M ) ≤ dim ⁡ ( M ) , {\displaystyle \mathrm {depth} (M)\leq \dim(M),} where dim ⁡ M {\displaystyle \dim M} denotes the Krull dimension of the module M {\displaystyle M} . Depth is used to define classes of rings and modules with good properties, for example, Cohen–Macaulay rings and modules, for which equality holds. == Definition == Let R {\displaystyle R} be a commutative ring, I {\displaystyle I} an ideal of R {\displaystyle R} and M {\displaystyle M} a finitely generated R {\displaystyle R} -module with the property that I M {\displaystyle IM} is properly contained in M {\displaystyle M} . (That is, some elements of M {\displaystyle M} are not in I M {\displaystyle IM} .) Then the I {\displaystyle I} -depth of M {\displaystyle M} , also commonly called the grade of M {\displaystyle M} , is defined as d e p t h I ( M ) = min { i : Ext i ⁡ ( R / I , M ) ≠ 0 } . {\displaystyle \mathrm {depth} _{I}(M)=\min\{i:\operatorname {Ext} ^{i}(R/I,M)\neq 0\}.} By definition, the depth of a local ring R {\displaystyle R} with a maximal ideal m {\displaystyle {\mathfrak {m}}} is its m {\displaystyle {\mathfrak {m}}} -depth as a module over itself. If R {\displaystyle R} is a Cohen–Macaulay local ring, then the depth of R {\displaystyle R} is equal to the dimension of R {\displaystyle R} . By a theorem of David Rees, the depth can also be characterized using the notion of a regular sequence. === Theorem (Rees) === Suppose that R {\displaystyle R} is a commutative Noetherian local ring with the maximal ideal m {\displaystyle {\mathfrak {m}}} and M {\displaystyle M} is a finitely generated R {\displaystyle R} -module. Then all maximal regular sequences x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} for M {\displaystyle M} , where each x i {\displaystyle x_{i}} belongs to m {\displaystyle {\mathfrak {m}}} , have the same length n {\displaystyle n} equal to the m {\displaystyle {\mathfrak {m}}} -depth of M {\displaystyle M} . == Depth and projective dimension == The projective dimension and the depth of a module over a commutative Noetherian local ring are complementary to each other. This is the content of the Auslander–Buchsbaum formula, which is not only of fundamental theoretical importance, but also provides an effective way to compute the depth of a module. Suppose that R {\displaystyle R} is a commutative Noetherian local ring with the maximal ideal m {\displaystyle {\mathfrak {m}}} and M {\displaystyle M} is a finitely generated R {\displaystyle R} -module. If the projective dimension of M {\displaystyle M} is finite, then the Auslander–Buchsbaum formula states p d R ( M ) + d e p t h ( M ) = d e p t h ( R ) . {\displaystyle \mathrm {pd} _{R}(M)+\mathrm {depth} (M)=\mathrm {depth} (R).} == Depth zero rings == A commutative Noetherian local ring R {\displaystyle R} has depth zero if and only if its maximal ideal m {\displaystyle {\mathfrak {m}}} is an associated prime, or, equivalently, when there is a nonzero element x {\displaystyle x} of R {\displaystyle R} such that x m = 0 {\displaystyle x{\mathfrak {m}}=0} (that is, x {\displaystyle x} annihilates m {\displaystyle {\mathfrak {m}}} ).
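A minimal added example of this depth-zero condition: take R = k[x]/(x^{2}) with maximal ideal \mathfrak{m} = (x). The class of x is nonzero, yet

x \cdot \mathfrak{m} = (x^{2}) = 0,

so \mathrm{depth}(R) = 0. Here \dim(R) = 0 as well, so this particular ring is still Cohen–Macaulay; the example below shows how depth zero can instead fall strictly below the dimension.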
In positive dimension, having depth zero means, essentially, that the closed point is an embedded component. For example, the ring k [ x , y ] / ( x 2 , x y ) {\displaystyle k[x,y]/(x^{2},xy)} (where k {\displaystyle k} is a field), which represents a line ( x = 0 {\displaystyle x=0} ) with an embedded double point at the origin, has depth zero at the origin, but dimension one: this gives an example of a ring which is not Cohen–Macaulay. == References == Eisenbud, David (1995), Commutative algebra with a view toward algebraic geometry, Graduate Texts in Mathematics, vol. 150, Berlin, New York: Springer-Verlag, ISBN 978-0-387-94269-8, MR 1322960 Bruns, Winfried; Herzog, Jürgen (1993), Cohen–Macaulay rings, Cambridge Studies in Advanced Mathematics, vol. 39, Cambridge: Cambridge University Press, xii+403 pp., ISBN 0-521-41068-1
Wikipedia/Depth_(ring_theory)
In abstract algebra, Morita equivalence is a relationship defined between rings that preserves many ring-theoretic properties. More precisely, two rings R, S are Morita equivalent (denoted by R ≈ S {\displaystyle R\approx S} ) if their categories of modules are additively equivalent (denoted by R M ≈ S M {\displaystyle {}_{R}M\approx {}_{S}M} ). It is named after Japanese mathematician Kiiti Morita who defined equivalence and a similar notion of duality in 1958. == Motivation == Rings are commonly studied in terms of their modules, as modules can be viewed as representations of rings. Every ring R has a natural R-module structure on itself where the module action is defined as the multiplication in the ring, so the approach via modules is more general and gives useful information. Because of this, one often studies a ring by studying the category of modules over that ring. Morita equivalence takes this viewpoint to a natural conclusion by defining rings to be Morita equivalent if their module categories are equivalent. This notion is of interest only when dealing with noncommutative rings, since it can be shown that two commutative rings are Morita equivalent if and only if they are isomorphic. == Definition == Two rings R and S (associative, with 1) are said to be (Morita) equivalent if there is an equivalence of the category of (left) modules over R, R-Mod, and the category of (left) modules over S, S-Mod. It can be shown that the left module categories R-Mod and S-Mod are equivalent if and only if the right module categories Mod-R and Mod-S are equivalent. Further it can be shown that any functor from R-Mod to S-Mod that yields an equivalence is automatically additive. == Examples == Any two isomorphic rings are Morita equivalent. The ring of n-by-n matrices with elements in R, denoted Mn R, is Morita-equivalent to R for any integer n > 0. Notice that this generalizes the classification of simple Artinian rings given by Artin–Wedderburn theory. To see the equivalence, notice that if X is a left R-module then Xn is an Mn(R)-module where the module structure is given by matrix multiplication on the left of column vectors from X. This allows the definition of a functor from the category of left R-modules to the category of left Mn(R)-modules. The inverse functor is defined by realizing that for any Mn(R)-module there is a left R-module X such that the Mn(R)-module is obtained from X as described above. == Criteria for equivalence == Equivalences can be characterized as follows: if F : R-Mod → {\displaystyle \to } S-Mod and G : S-Mod → {\displaystyle \to } R-Mod are additive (covariant) functors, then F and G are an equivalence if and only if there is a balanced (S,R)-bimodule P such that SP and PR are finitely generated projective generators and there are natural isomorphisms of the functors F ⁡ ( − ) ≅ P ⊗ R − {\displaystyle \operatorname {F} (-)\cong P\otimes _{R}-} , and of the functors G ⁡ ( − ) ≅ Hom ⁡ ( S P , − ) . {\displaystyle \operatorname {G} (-)\cong \operatorname {Hom} (_{S}P,-).} Finitely generated projective generators are also sometimes called progenerators for their module category. For every right-exact functor F from the category of left R-modules to the category of left S-modules that commutes with direct sums, a theorem of homological algebra shows that there is a (S,R)-bimodule E such that the functor F ⁡ ( − ) {\displaystyle \operatorname {F} (-)} is naturally isomorphic to the functor E ⊗ R − {\displaystyle E\otimes _{R}-} . 
Since equivalences are by necessity exact and commute with direct sums, this implies that R and S are Morita equivalent if and only if there are bimodules RMS and SNR such that M ⊗ S N ≅ R {\displaystyle M\otimes _{S}N\cong R} as (R,R)-bimodules and N ⊗ R M ≅ S {\displaystyle N\otimes _{R}M\cong S} as (S,S)-bimodules. Moreover, N and M are related via an (S,R)-bimodule isomorphism: N ≅ Hom ⁡ ( M S , S S ) {\displaystyle N\cong \operatorname {Hom} (M_{S},S_{S})} . More concretely, two rings R and S are Morita equivalent if and only if S ≅ End ⁡ ( P R ) {\displaystyle S\cong \operatorname {End} (P_{R})} for a progenerator module PR, which is the case if and only if S ≅ e M n ( R ) e {\displaystyle S\cong e\mathbb {M} _{n}(R)e} (isomorphism of rings) for some positive integer n and full idempotent e in the matrix ring Mn R. It is known that if R is Morita equivalent to S, then the ring Z(R) is isomorphic to the ring Z(S), where Z(-) denotes the center of the ring, and furthermore R/J(R) is Morita equivalent to S/J(S), where J(-) denotes the Jacobson radical. While isomorphic rings are Morita equivalent, Morita equivalent rings can be nonisomorphic. An easy example is that a division ring D is Morita equivalent to all of its matrix rings Mn D, but cannot be isomorphic when n > 1. In the special case of commutative rings, Morita equivalent rings are actually isomorphic. This follows immediately from the comment above, for if R is Morita equivalent to S then R = Z ⁡ ( R ) ≅ Z ⁡ ( S ) = S {\displaystyle R=\operatorname {Z} (R)\cong \operatorname {Z} (S)=S} . == Properties preserved by equivalence == Many properties are preserved by the equivalence functor for the objects in the module category. Generally speaking, any property of modules defined purely in terms of modules and their homomorphisms (and not to their underlying elements or ring) is a categorical property which will be preserved by the equivalence functor. For example, if F(-) is the equivalence functor from R-Mod to S-Mod, then the R module M has any of the following properties if and only if the S module F(M) does: injective, projective, flat, faithful, simple, semisimple, finitely generated, finitely presented, Artinian, and Noetherian. Examples of properties not necessarily preserved include being free, and being cyclic. Many ring-theoretic properties are stated in terms of their modules, and so these properties are preserved between Morita equivalent rings. Properties shared between equivalent rings are called Morita invariant properties. For example, a ring R is semisimple if and only if all of its modules are semisimple, and since semisimple modules are preserved under Morita equivalence, an equivalent ring S must also have all of its modules semisimple, and therefore be a semisimple ring itself. Sometimes it is not immediately obvious why a property should be preserved. For example, using one standard definition of von Neumann regular ring (for all a in R, there exists x in R such that a = axa) it is not clear that an equivalent ring should also be von Neumann regular. However another formulation is: a ring is von Neumann regular if and only if all of its modules are flat. Since flatness is preserved across Morita equivalence, it is now clear that von Neumann regularity is Morita invariant. 
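To make the full-idempotent criterion above concrete, here is a standard added check: inside the matrix ring \mathbb{M}_{n}(R), the matrix unit e = e_{11} (the matrix with 1 in the (1,1) entry and 0 elsewhere) satisfies

e^{2} = e, \qquad \mathbb{M}_{n}(R)\, e\, \mathbb{M}_{n}(R) = \mathbb{M}_{n}(R), \qquad e\, \mathbb{M}_{n}(R)\, e \cong R,

so e is a full idempotent, and the criterion recovers the Morita equivalence between R and its matrix rings described in the examples above.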
The following properties are Morita invariant:
simple, semisimple
von Neumann regular
right (or left) Noetherian, right (or left) Artinian
right (or left) self-injective
quasi-Frobenius
prime, right (or left) primitive, semiprime, semiprimitive
right (or left) (semi-)hereditary
right (or left) nonsingular
right (or left) coherent
semiprimary, right (or left) perfect, semiperfect
semilocal
Examples of properties which are not Morita invariant include commutative, local, reduced, domain, right (or left) Goldie, Frobenius, invariant basis number, and Dedekind finite. There are at least two other tests for determining whether or not a ring property P {\displaystyle {\mathcal {P}}} is Morita invariant. An element e in a ring R is a full idempotent when e2 = e and ReR = R.
P {\displaystyle {\mathcal {P}}} is Morita invariant if and only if whenever a ring R satisfies P {\displaystyle {\mathcal {P}}} , then so does eRe for every full idempotent e and so does every matrix ring Mn R for every positive integer n; or
P {\displaystyle {\mathcal {P}}} is Morita invariant if and only if: for any ring R and full idempotent e in R, R satisfies P {\displaystyle {\mathcal {P}}} if and only if the ring eRe satisfies P {\displaystyle {\mathcal {P}}} .
== Further directions == Dual to the theory of equivalences is the theory of dualities between the module categories, where the functors used are contravariant rather than covariant. This theory, though similar in form, has significant differences because there is no duality between the categories of modules for any rings, although dualities may exist for subcategories. In other words, because infinite-dimensional modules are not generally reflexive, the theory of dualities applies more easily to finitely generated algebras over Noetherian rings. Perhaps not surprisingly, the criterion above has an analogue for dualities, where the natural isomorphism is given in terms of the hom functor rather than the tensor functor. Morita equivalence can also be defined in more structured situations, such as for symplectic groupoids and C*-algebras. In the case of C*-algebras, a stronger type of equivalence, called strong Morita equivalence, is needed to obtain results useful in applications, because of the additional structure of C*-algebras (coming from the involutive *-operation) and also because C*-algebras do not necessarily have an identity element. == Significance in K-theory == If two rings are Morita equivalent, there is an induced equivalence of the respective categories of projective modules since the Morita equivalences will preserve exact sequences (and hence projective modules). Since the algebraic K-theory of a ring is defined (in Quillen's approach) in terms of the homotopy groups of (roughly) the classifying space of the nerve of the (small) category of finitely generated projective modules over the ring, Morita equivalent rings must have isomorphic K-groups. == Notes == == Citations == == References == == Further reading == Reiner, I. (2003). Maximal Orders. London Mathematical Society Monographs. New Series. Vol. 28. Oxford University Press. pp. 154–169. ISBN 0-19-852673-3. Zbl 1024.16008.
Wikipedia/Morita_theory
In commutative algebra, the Krull dimension of a commutative ring R, named after Wolfgang Krull, is the supremum of the lengths of all chains of prime ideals. The Krull dimension need not be finite even for a Noetherian ring. More generally, the Krull dimension can be defined for modules over possibly non-commutative rings as the deviation of the poset of submodules. The Krull dimension was introduced to provide an algebraic definition of the dimension of an algebraic variety: the dimension of the affine variety defined by an ideal I in a polynomial ring R is the Krull dimension of R/I. A field k has Krull dimension 0; more generally, k[x1, ..., xn] has Krull dimension n. A principal ideal domain that is not a field has Krull dimension 1. A local ring has Krull dimension 0 if and only if every element of its maximal ideal is nilpotent. There are several other ways that have been used to define the dimension of a ring. Most of them coincide with the Krull dimension for Noetherian rings, but can differ for non-Noetherian rings. == Explanation == We say that a chain of prime ideals of the form p 0 ⊊ p 1 ⊊ … ⊊ p n {\displaystyle {\mathfrak {p}}_{0}\subsetneq {\mathfrak {p}}_{1}\subsetneq \ldots \subsetneq {\mathfrak {p}}_{n}} has length n. That is, the length is the number of strict inclusions, not the number of primes; these differ by 1. We define the Krull dimension of R {\displaystyle R} to be the supremum of the lengths of all chains of prime ideals in R {\displaystyle R} . Given a prime ideal p {\displaystyle {\mathfrak {p}}} in R, we define the height of p {\displaystyle {\mathfrak {p}}} , written ht ⁡ ( p ) {\displaystyle \operatorname {ht} ({\mathfrak {p}})} , to be the supremum of the lengths of all chains of prime ideals contained in p {\displaystyle {\mathfrak {p}}} , meaning that p 0 ⊊ p 1 ⊊ … ⊊ p n = p {\displaystyle {\mathfrak {p}}_{0}\subsetneq {\mathfrak {p}}_{1}\subsetneq \ldots \subsetneq {\mathfrak {p}}_{n}={\mathfrak {p}}} . In other words, the height of p {\displaystyle {\mathfrak {p}}} is the Krull dimension of the localization of R at p {\displaystyle {\mathfrak {p}}} . A prime ideal has height zero if and only if it is a minimal prime ideal. The Krull dimension of a ring is the supremum of the heights of all maximal ideals, or those of all prime ideals. The height is also sometimes called the codimension, rank, or altitude of a prime ideal. In a Noetherian ring, every prime ideal has finite height. Nonetheless, Nagata gave an example of a Noetherian ring of infinite Krull dimension. A ring is called catenary if any inclusion p ⊂ q {\displaystyle {\mathfrak {p}}\subset {\mathfrak {q}}} of prime ideals can be extended to a maximal chain of prime ideals between p {\displaystyle {\mathfrak {p}}} and q {\displaystyle {\mathfrak {q}}} , and any two maximal chains between p {\displaystyle {\mathfrak {p}}} and q {\displaystyle {\mathfrak {q}}} have the same length. A ring is called universally catenary if any finitely generated algebra over it is catenary. Nagata gave an example of a Noetherian ring which is not catenary. In a Noetherian ring, a prime ideal has height at most n if and only if it is a minimal prime ideal over an ideal generated by n elements (Krull's height theorem and its converse). It implies that the descending chain condition holds for prime ideals, in such a way that the lengths of the chains descending from a prime ideal are bounded by the number of generators of the prime.
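An added example to fix ideas: in the polynomial ring k[x, y, z] over a field k, the chain of prime ideals

(0) \subsetneq (x) \subsetneq (x, y) \subsetneq (x, y, z)

has length 3; since k[x, y, z] has Krull dimension 3 (see the examples below), no longer chain exists. The truncated chain ending at (x, y) shows that the prime (x, y) has height at least 2, and by Krull's height theorem exactly 2, since it is generated by two elements.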
More generally, the height of an ideal I is the infimum of the heights of all prime ideals containing I. In the language of algebraic geometry, this is the codimension of the subvariety of Spec( R {\displaystyle R} ) corresponding to I. == Schemes == It follows readily from the definition of the spectrum of a ring Spec(R), the space of prime ideals of R equipped with the Zariski topology, that the Krull dimension of R is equal to the dimension of its spectrum as a topological space, meaning the supremum of the lengths of all chains of irreducible closed subsets. This follows immediately from the Galois connection between ideals of R and closed subsets of Spec(R) and the observation that, by the definition of Spec(R), each prime ideal p {\displaystyle {\mathfrak {p}}} of R corresponds to a generic point of the closed subset associated to p {\displaystyle {\mathfrak {p}}} by the Galois connection. == Examples == The dimension of a polynomial ring over a field k[x1, ..., xn] is the number of variables n. In the language of algebraic geometry, this says that the affine space of dimension n over a field has dimension n, as expected. In general, if R is a Noetherian ring of dimension n, then the dimension of R[x] is n + 1. If the Noetherian hypothesis is dropped, then R[x] can have dimension anywhere between n + 1 and 2n + 1. For example, the ideal p = ( y 2 − x , y ) ⊂ C [ x , y ] {\displaystyle {\mathfrak {p}}=(y^{2}-x,y)\subset \mathbb {C} [x,y]} has height 2 since we can form the maximal ascending chain of prime ideals ( 0 ) = p 0 ⊊ ( y 2 − x ) = p 1 ⊊ ( y 2 − x , y ) = p 2 = p {\displaystyle (0)={\mathfrak {p}}_{0}\subsetneq (y^{2}-x)={\mathfrak {p}}_{1}\subsetneq (y^{2}-x,y)={\mathfrak {p}}_{2}={\mathfrak {p}}} . Given an irreducible polynomial f ∈ C [ x , y , z ] {\displaystyle f\in \mathbb {C} [x,y,z]} , the ideal I = ( f 3 ) {\displaystyle I=(f^{3})} is not prime (since f ⋅ f 2 ∈ I {\displaystyle f\cdot f^{2}\in I} , but neither of the factors is), but we can easily compute the height: the smallest prime ideal containing I {\displaystyle I} is just ( f ) {\displaystyle (f)} , which has height 1 by Krull's principal ideal theorem, so the height of I {\displaystyle I} is 1. The ring of integers Z has dimension 1. More generally, any principal ideal domain that is not a field has dimension 1. An integral domain is a field if and only if its Krull dimension is zero. Dedekind domains that are not fields (for example, discrete valuation rings) have dimension one. The Krull dimension of the zero ring is typically defined to be either − ∞ {\displaystyle -\infty } or − 1 {\displaystyle -1} . The zero ring is the only ring with a negative dimension. A ring is Artinian if and only if it is Noetherian and its Krull dimension is ≤0. An integral extension of a ring has the same dimension as the ring does. Let R be an algebra over a field k that is an integral domain. Then the Krull dimension of R is less than or equal to the transcendence degree of the field of fractions of R over k. The equality holds if R is finitely generated as an algebra (for instance by the Noether normalization lemma). Let R be a Noetherian ring, I an ideal and gr I ⁡ ( R ) = ⨁ k = 0 ∞ I k / I k + 1 {\displaystyle \operatorname {gr} _{I}(R)=\bigoplus _{k=0}^{\infty }I^{k}/I^{k+1}} be the associated graded ring (geometers call it the ring of the normal cone of I). Then dim ⁡ gr I ⁡ ( R ) {\displaystyle \operatorname {dim} \operatorname {gr} _{I}(R)} is the supremum of the heights of maximal ideals of R containing I.
A commutative Noetherian ring of Krull dimension zero is a direct product of a finite number (possibly one) of local rings of Krull dimension zero. A Noetherian local ring is called a Cohen–Macaulay ring if its dimension is equal to its depth. A regular local ring is an example of such a ring. A Noetherian integral domain is a unique factorization domain if and only if every height 1 prime ideal is principal. For a commutative Noetherian ring the three following conditions are equivalent: being a reduced ring of Krull dimension zero, being a field or a direct product of fields, being von Neumann regular. == Of a module == If R is a commutative ring, and M is an R-module, we define the Krull dimension of M to be the Krull dimension of the quotient of R making M a faithful module. That is, we define it by the formula: dim R ⁡ M := dim ⁡ ( R / Ann R ⁡ ( M ) ) {\displaystyle \dim _{R}M:=\dim(R/{\operatorname {Ann} _{R}(M)})} where AnnR(M), the annihilator, is the kernel of the natural map R → EndR(M) of R into the ring of R-linear endomorphisms of M. In the language of schemes, finitely generated modules are interpreted as coherent sheaves, or generalized finite rank vector bundles. == For non-commutative rings == The Krull dimension of a module over a possibly non-commutative ring is defined as the deviation of the poset of submodules ordered by inclusion. For commutative Noetherian rings, this is the same as the definition using chains of prime ideals. The two definitions can be different for commutative rings which are not Noetherian. == See also ==
Analytic spread
Dimension theory (algebra)
Gelfand–Kirillov dimension
Hilbert function
Homological conjectures in commutative algebra
Krull's principal ideal theorem
== Notes == == Bibliography == Irving Kaplansky, Commutative rings (revised ed.), University of Chicago Press, 1974, ISBN 0-226-42454-5. Page 32. L.A. Bokhut'; I.V. L'vov; V.K. Kharchenko (1991). "I. Noncommutative rings". In Kostrikin, A.I.; Shafarevich, I.R. (eds.). Algebra II. Encyclopaedia of Mathematical Sciences. Vol. 18. Springer-Verlag. ISBN 3-540-18177-6. Sect.4.7. Eisenbud, David (1995), Commutative algebra with a view toward algebraic geometry, Graduate Texts in Mathematics, vol. 150, Berlin, New York: Springer-Verlag, ISBN 978-0-387-94268-1, MR 1322960 Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics, vol. 52, New York: Springer-Verlag, ISBN 978-0-387-90244-9, MR 0463157 Matsumura, Hideyuki (1989), Commutative Ring Theory, Cambridge Studies in Advanced Mathematics (2nd ed.), Cambridge University Press, ISBN 978-0-521-36764-6 Serre, Jean-Pierre (2000). Local Algebra. Springer Monographs in Mathematics (in German). doi:10.1007/978-3-662-04203-8. ISBN 978-3-662-04203-8. OCLC 864077388.
Wikipedia/Height_(ring_theory)
In mathematics, particularly topology, an atlas is a concept used to describe a manifold. An atlas consists of individual charts that, roughly speaking, describe individual regions of the manifold. In general, the notion of atlas underlies the formal definition of a manifold and related structures such as vector bundles and other fiber bundles. == Charts == The definition of an atlas depends on the notion of a chart. A chart for a topological space M is a homeomorphism φ {\displaystyle \varphi } from an open subset U of M to an open subset of a Euclidean space. The chart is traditionally recorded as the ordered pair ( U , φ ) {\displaystyle (U,\varphi )} . When a coordinate system is chosen in the Euclidean space, this defines coordinates on U {\displaystyle U} : the coordinates of a point P {\displaystyle P} of U {\displaystyle U} are defined as the coordinates of φ ( P ) . {\displaystyle \varphi (P).} The pair formed by a chart and such a coordinate system is called a local coordinate system, coordinate chart, coordinate patch, coordinate map, or local frame. == Formal definition of atlas == An atlas for a topological space M {\displaystyle M} is an indexed family { ( U α , φ α ) : α ∈ I } {\displaystyle \{(U_{\alpha },\varphi _{\alpha }):\alpha \in I\}} of charts on M {\displaystyle M} which covers M {\displaystyle M} (that is, ⋃ α ∈ I U α = M {\textstyle \bigcup _{\alpha \in I}U_{\alpha }=M} ). If for some fixed n, the image of each chart is an open subset of n-dimensional Euclidean space, then M {\displaystyle M} is said to be an n-dimensional manifold. The plural of atlas is atlases, although some authors use atlantes. An atlas ( U i , φ i ) i ∈ I {\displaystyle \left(U_{i},\varphi _{i}\right)_{i\in I}} on an n {\displaystyle n} -dimensional manifold M {\displaystyle M} is called an adequate atlas if the following conditions hold: The image of each chart is either R n {\displaystyle \mathbb {R} ^{n}} or R + n {\displaystyle \mathbb {R} _{+}^{n}} , where R + n {\displaystyle \mathbb {R} _{+}^{n}} is the closed half-space, ( U i ) i ∈ I {\displaystyle \left(U_{i}\right)_{i\in I}} is a locally finite open cover of M {\displaystyle M} , and M = ⋃ i ∈ I φ i − 1 ( B 1 ) {\textstyle M=\bigcup _{i\in I}\varphi _{i}^{-1}\left(B_{1}\right)} , where B 1 {\displaystyle B_{1}} is the open ball of radius 1 centered at the origin. Every second-countable manifold admits an adequate atlas. Moreover, if V = ( V j ) j ∈ J {\displaystyle {\mathcal {V}}=\left(V_{j}\right)_{j\in J}} is an open covering of the second-countable manifold M {\displaystyle M} , then there is an adequate atlas ( U i , φ i ) i ∈ I {\displaystyle \left(U_{i},\varphi _{i}\right)_{i\in I}} on M {\displaystyle M} , such that ( U i ) i ∈ I {\displaystyle \left(U_{i}\right)_{i\in I}} is a refinement of V {\displaystyle {\mathcal {V}}} . == Transition maps == A transition map provides a way of comparing two charts of an atlas. To make this comparison, we consider the composition of one chart with the inverse of the other. This composition is not well-defined unless we restrict both charts to the intersection of their domains of definition. (For example, if we have a chart of Europe and a chart of Russia, then we can compare these two charts on their overlap, namely the European part of Russia.) 
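As a small added illustration before the formal statement: the unit circle S^{1} \subset \mathbb{R}^{2} is covered by the two charts

U_{1} = S^{1} \setminus \{(-1, 0)\}, \quad \varphi_{1}(\cos\theta, \sin\theta) = \theta \in (-\pi, \pi); \qquad U_{2} = S^{1} \setminus \{(1, 0)\}, \quad \varphi_{2}(\cos\theta, \sin\theta) = \theta \in (0, 2\pi),

and on their overlap the comparison \varphi_{2} \circ \varphi_{1}^{-1} sends \theta to \theta on the upper arc (0, \pi) and to \theta + 2\pi on the lower arc (-\pi, 0).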
To be more precise, suppose that ( U α , φ α ) {\displaystyle (U_{\alpha },\varphi _{\alpha })} and ( U β , φ β ) {\displaystyle (U_{\beta },\varphi _{\beta })} are two charts for a manifold M such that U α ∩ U β {\displaystyle U_{\alpha }\cap U_{\beta }} is non-empty. The transition map τ α , β : φ α ( U α ∩ U β ) → φ β ( U α ∩ U β ) {\displaystyle \tau _{\alpha ,\beta }:\varphi _{\alpha }(U_{\alpha }\cap U_{\beta })\to \varphi _{\beta }(U_{\alpha }\cap U_{\beta })} is the map defined by τ α , β = φ β ∘ φ α − 1 . {\displaystyle \tau _{\alpha ,\beta }=\varphi _{\beta }\circ \varphi _{\alpha }^{-1}.} Note that since φ α {\displaystyle \varphi _{\alpha }} and φ β {\displaystyle \varphi _{\beta }} are both homeomorphisms, the transition map τ α , β {\displaystyle \tau _{\alpha ,\beta }} is also a homeomorphism. == More structure == One often desires more structure on a manifold than simply the topological structure. For example, if one would like an unambiguous notion of differentiation of functions on a manifold, then it is necessary to construct an atlas whose transition functions are differentiable. Such a manifold is called differentiable. Given a differentiable manifold, one can unambiguously define the notion of tangent vectors and then directional derivatives. If each transition function is a smooth map, then the atlas is called a smooth atlas, and the manifold itself is called smooth. Alternatively, one could require that the transition maps have only k continuous derivatives in which case the atlas is said to be C k {\displaystyle C^{k}} . Very generally, if each transition function belongs to a pseudogroup G {\displaystyle {\mathcal {G}}} of homeomorphisms of Euclidean space, then the atlas is called a G {\displaystyle {\mathcal {G}}} -atlas. If the transition maps between charts of an atlas preserve a local trivialization, then the atlas defines the structure of a fibre bundle. == See also == Smooth atlas Smooth frame == References == == External links == Atlas by Rowland, Todd
Wikipedia/Chart_(topology)
In computer programming, a function (also procedure, method, subroutine, routine, or subprogram) is a callable unit of software logic that has a well-defined interface and behavior and can be invoked multiple times. Callable units provide a powerful programming tool. The primary purpose is to allow for the decomposition of a large and/or complicated problem into chunks that have relatively low cognitive load and to assign the chunks meaningful names (unless they are anonymous). Judicious application can reduce the cost of developing and maintaining software, while increasing its quality and reliability. Callable units are present at multiple levels of abstraction in the programming environment. For example, a programmer may write a function in source code that is compiled to machine code that implements similar semantics. There is a callable unit in the source code and an associated one in the machine code, but they are different kinds of callable units – with different implications and features. == Terminology == Some programming languages, such as COBOL and BASIC, make a distinction between functions that return a value (typically called "functions") and those that do not (typically called "subprogram", "subroutine", or "procedure"). Other programming languages, such as C, C++, and Rust, only use the term "function" irrespective of whether they return a value or not. Some object-oriented languages, such as Java and C#, refer to functions inside classes as "methods". == History == The idea of a callable unit was initially conceived by John Mauchly and Kathleen Antonelli during their work on ENIAC and recorded in a January 1947 Harvard symposium on "Preparation of Problems for EDVAC-type Machines." Maurice Wilkes, David Wheeler, and Stanley Gill are generally credited with the formal invention of this concept, which they termed a closed sub-routine, contrasted with an open subroutine or macro. However, Alan Turing had discussed subroutines in a paper of 1945 on design proposals for the NPL ACE, going so far as to invent the concept of a return address stack. The idea of a subroutine was worked out after computing machines had already existed for some time. The arithmetic and conditional jump instructions were planned ahead of time and have changed relatively little, but the special instructions used for procedure calls have changed greatly over the years. The earliest computers and microprocessors, such as the Manchester Baby and the RCA 1802, did not have a single subroutine call instruction. Subroutines could be implemented, but they required programmers to use the call sequence—a series of instructions—at each call site. Subroutines were implemented in Konrad Zuse's Z4 in 1945. In 1945, Alan M. Turing used the terms "bury" and "unbury" as a means of calling and returning from subroutines. In January 1947 John Mauchly presented general notes at 'A Symposium of Large Scale Digital Calculating Machinery' under the joint sponsorship of Harvard University and the Bureau of Ordnance, United States Navy. Here he discusses serial and parallel operation suggesting ...the structure of the machine need not be complicated one bit. 
It is possible, since all the logical characteristics essential to this procedure are available, to evolve a coding instruction for placing the subroutines in the memory at places known to the machine, and in such a way that they may easily be called into use.In other words, one can designate subroutine A as division and subroutine B as complex multiplication and subroutine C as the evaluation of a standard error of a sequence of numbers, and so on through the list of subroutines needed for a particular problem. ... All these subroutines will then be stored in the machine, and all one needs to do is make a brief reference to them by number, as they are indicated in the coding. Kay McNulty had worked closely with John Mauchly on the ENIAC team and developed an idea for subroutines for the ENIAC computer she was programming during World War II. She and the other ENIAC programmers used the subroutines to help calculate missile trajectories. Goldstine and von Neumann wrote a paper dated 16 August 1948 discussing the use of subroutines. Some very early computers and microprocessors, such as the IBM 1620, the Intel 4004 and Intel 8008, and the PIC microcontrollers, have a single-instruction subroutine call that uses a dedicated hardware stack to store return addresses—such hardware supports only a few levels of subroutine nesting, but can support recursive subroutines. Machines before the mid-1960s—such as the UNIVAC I, the PDP-1, and the IBM 1130—typically use a calling convention which saved the instruction counter in the first memory location of the called subroutine. This allows arbitrarily deep levels of subroutine nesting but does not support recursive subroutines. The IBM System/360 had a subroutine call instruction that placed the saved instruction counter value into a general-purpose register; this can be used to support arbitrarily deep subroutine nesting and recursive subroutines. The Burroughs B5000 (1961) is one of the first computers to store subroutine return data on a stack. The DEC PDP-6 (1964) is one of the first accumulator-based machines to have a subroutine call instruction that saved the return address in a stack addressed by an accumulator or index register. The later PDP-10 (1966), PDP-11 (1970) and VAX-11 (1976) lines followed suit; this feature also supports both arbitrarily deep subroutine nesting and recursive subroutines. === Language support === In the very early assemblers, subroutine support was limited. Subroutines were not explicitly separated from each other or from the main program, and indeed the source code of a subroutine could be interspersed with that of other subprograms. Some assemblers would offer predefined macros to generate the call and return sequences. By the 1960s, assemblers usually had much more sophisticated support for both inline and separately assembled subroutines that could be linked together. One of the first programming languages to support user-written subroutines and functions was FORTRAN II. The IBM FORTRAN II compiler was released in 1958. ALGOL 58 and other early programming languages also supported procedural programming. === Libraries === Even with this cumbersome approach, subroutines proved very useful. They allowed the use of the same code in many different programs. Memory was a very scarce resource on early computers, and subroutines allowed significant savings in the size of programs. Many early computers loaded the program instructions into memory from a punched paper tape. 
Each subroutine could then be provided by a separate piece of tape, loaded or spliced before or after the main program (or "mainline"); and the same subroutine tape could then be used by many different programs. A similar approach was used in computers that loaded program instructions from punched cards. The name subroutine library originally meant a library, in the literal sense, which kept indexed collections of tapes or decks of cards for collective use. === Return by indirect jump === To remove the need for self-modifying code, computer designers eventually provided an indirect jump instruction, whose operand, instead of being the return address itself, was the location of a variable or processor register containing the return address. On those computers, instead of modifying the function's return jump, the calling program would store the return address in a variable so that when the function completed, it would execute an indirect jump that would direct execution to the location given by the predefined variable. === Jump to subroutine === Another advance was the jump to subroutine instruction, which combined the saving of the return address with the calling jump, thereby minimizing overhead significantly. In the IBM System/360, for example, the branch instructions BAL or BALR, designed for procedure calling, would save the return address in a processor register specified in the instruction, by convention register 14. To return, the subroutine had only to execute an indirect branch instruction (BR) through that register. If the subroutine needed that register for some other purpose (such as calling another subroutine), it would save the register's contents to a private memory location or a register stack. In systems such as the HP 2100, the JSB instruction would perform a similar task, except that the return address was stored in the memory location that was the target of the branch. Execution of the procedure would actually begin at the next memory location. In the HP 2100 assembly language, one would write, for example, the following (a sketch reconstructed to match the description below) to call a subroutine called MYSUB from the main program:
       ...
       JSB MYSUB    (Calls subroutine MYSUB.)
 BB    ...          (Will return here after MYSUB is done.)
The subroutine would be coded as
 MYSUB NOP          (Storage for MYSUB's return address.)
 AA    ...          (Start of MYSUB's main body.)
       ...
       JMP MYSUB,I  (Returns to the calling program.)
The JSB instruction placed the address of the NEXT instruction (namely, BB) into the location specified as its operand (namely, MYSUB), and then branched to the NEXT location after that (namely, AA = MYSUB + 1). The subroutine could then return to the main program by executing the indirect jump JMP MYSUB, I which branched to the location stored at location MYSUB. Compilers for Fortran and other languages could easily make use of these instructions when available. This approach supported multiple levels of calls; however, since the return address, parameters, and return values of a subroutine were assigned fixed memory locations, it did not allow for recursive calls. Incidentally, a similar method was used by Lotus 1-2-3, in the early 1980s, to discover the recalculation dependencies in a spreadsheet. Namely, a location was reserved in each cell to store the return address. Since circular references are not allowed for natural recalculation order, this allows a tree walk without reserving space for a stack in memory, which was very limited on small computers such as the IBM PC. === Call stack === Most modern implementations of a function call use a call stack, a special case of the stack data structure, to implement function calls and returns.
Each procedure call creates a new entry, called a stack frame, at the top of the stack; when the procedure returns, its stack frame is deleted from the stack, and its space may be used for other procedure calls. Each stack frame contains the private data of the corresponding call, which typically includes the procedure's parameters and internal variables, and the return address. The call sequence can be implemented by a sequence of ordinary instructions (an approach still used in reduced instruction set computing (RISC) and very long instruction word (VLIW) architectures), but many traditional machines designed since the late 1960s have included special instructions for that purpose. The call stack is usually implemented as a contiguous area of memory. It is an arbitrary design choice whether the bottom of the stack is the lowest or highest address within this area, so that the stack may grow forwards or backwards in memory; however, many architectures chose the latter. Some designs, notably some Forth implementations, used two separate stacks, one mainly for control information (like return addresses and loop counters) and the other for data. The former was, or worked like, a call stack and was only indirectly accessible to the programmer through other language constructs while the latter was more directly accessible. When stack-based procedure calls were first introduced, an important motivation was to save precious memory. With this scheme, the compiler does not have to reserve separate space in memory for the private data (parameters, return address, and local variables) of each procedure. At any moment, the stack contains only the private data of the calls that are currently active (namely, which have been called but haven't returned yet). Because of the ways in which programs were usually assembled from libraries, it was (and still is) not uncommon to find programs that include thousands of functions, of which only a handful are active at any given moment. For such programs, the call stack mechanism could save significant amounts of memory. Indeed, the call stack mechanism can be viewed as the earliest and simplest method for automatic memory management. However, another advantage of the call stack method is that it allows recursive function calls, since each nested call to the same procedure gets a separate instance of its private data. In a multi-threaded environment, there is generally more than one stack. An environment that fully supports coroutines or lazy evaluation may use data structures other than stacks to store their activation records. ==== Delayed stacking ==== One disadvantage of the call stack mechanism is the increased cost of a procedure call and its matching return. The extra cost includes incrementing and decrementing the stack pointer (and, in some architectures, checking for stack overflow), and accessing the local variables and parameters by frame-relative addresses, instead of absolute addresses. The cost may be realized in increased execution time, or increased processor complexity, or both. This overhead is most obvious and objectionable in leaf procedures or leaf functions, which return without making any procedure calls themselves. To reduce that overhead, many modern compilers try to delay the use of a call stack until it is really needed. For example, the call of a procedure P may store the return address and parameters of the called procedure in certain processor registers, and transfer control to the procedure's body by a simple jump. 
If the procedure P returns without making any other call, the call stack is not used at all. If P needs to call another procedure Q, it will then use the call stack to save the contents of any registers (such as the return address) that will be needed after Q returns. == Features == In general, a callable unit is a list of instructions that, starting at the first instruction, executes sequentially except as directed via its internal logic. It can be invoked (called) many times during the execution of a program. Execution continues at the next instruction after the call instruction when it returns control. == Implementations == The features of implementations of callable units evolved over time and vary by context. This section describes features of the various common implementations. === General characteristics === Most modern programming languages provide features to define and call functions, including syntax to:
Delimit the implementation of a function from the rest of the program
Assign an identifier, name, to a function
Define formal parameters with a name and data type for each
Assign a data type to the return value, if any
Specify a return value in the function body
Call a function
Provide actual parameters that correspond to a called function's formal parameters
Return control to the caller at the point of call
Consume the return value in the caller
Dispose of the values returned by a call
Provide a private naming scope for variables
Identify variables outside the function that are accessible within it
Propagate an exceptional condition out of a function and to handle it in the calling context
Package functions into a container such as module, library, object, or class
=== Naming === Some languages, such as Pascal, Fortran, Ada and many dialects of BASIC, use a different name for a callable unit that returns a value (function or subprogram) vs. one that does not (subroutine or procedure). Other languages, such as C, C++, C# and Lisp, use only one name for a callable unit, function. The C-family languages use the keyword void to indicate no return value. === Call syntax === If declared to return a value, a call can be embedded in an expression in order to consume the return value. For example, a square root callable unit might be called like y = sqrt(x). A callable unit that does not return a value is called as a stand-alone statement like print("hello"). This syntax can also be used for a callable unit that returns a value, but the return value will be ignored. Some older languages require a keyword for calls that do not consume a return value, like CALL print("hello"). === Parameters === Most implementations, especially in modern languages, support parameters which the callable declares as formal parameters. A caller passes actual parameters, a.k.a. arguments, to match. Different programming languages provide different conventions for passing arguments. === Return value === In some languages, such as BASIC, a callable has different syntax (i.e. keyword) for a callable that returns a value vs. one that does not. In other languages, the syntax is the same regardless. In some of these languages an extra keyword is used to declare no return value; for example void in C, C++ and C#. In some languages, such as Python, the difference is whether the body contains a return statement with a value, and a particular callable may return with or without a value based on control flow.
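A minimal Python sketch of that last point (an added example; the function name find_index is invented for illustration):
def find_index(items, target):
    # Scan for target; this path returns a value when it is found.
    for i, value in enumerate(items):
        if value == target:
            return i
    # Falling off the end returns None implicitly: no value on this path.

print(find_index(["a", "b", "c"], "b"))  # prints 1
print(find_index(["a", "b", "c"], "z"))  # prints None
The same def defines both behaviors; which one occurs is decided at run time by control flow.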
=== Side effects === In many contexts, a callable may have side effect behavior such as modifying passed or global data, reading from or writing to a peripheral device, accessing a file, halting the program or the machine, or temporarily pausing program execution. Side effects are considered undesirable by Robert C. Martin, who is known for promoting design principles. Martin argues that side effects can result in temporal coupling or order dependencies. In strictly functional programming languages such as Haskell, a function can have no side effects, which means it cannot change the state of the program. Functions always return the same result for the same input. Such languages typically only support functions that return a value, since there is no value in a function that has neither return value nor side effect. === Local variables === Most contexts support local variables – memory owned by a callable to hold intermediate values. These variables are typically stored in the call's activation record on the call stack along with other information such as the return address. === Nested call – recursion === If supported by the language, a callable may call itself, causing its execution to suspend while another nested execution of the same callable executes. Recursion is a useful means to simplify some complex algorithms and break down complex problems. Recursive languages provide a new copy of local variables on each call. If the programmer desires the recursive callable to use the same variables instead of using locals, they typically declare them in a shared context such as static or global. Languages going back to ALGOL, PL/I and C, and modern languages, almost invariably use a call stack, usually supported by the instruction sets to provide an activation record for each call. That way, a nested call can modify its local variables without affecting any of the suspended calls' variables. Recursion allows direct implementation of functionality defined by mathematical induction and recursive divide and conquer algorithms. Here is an example of a recursive function in C/C++ to find Fibonacci numbers (a conventional version, reconstructed here):
unsigned int fibonacci(unsigned int n) {
    if (n <= 1)
        return n;  /* base cases: fib(0) = 0, fib(1) = 1 */
    return fibonacci(n - 1) + fibonacci(n - 2);  /* two nested recursive calls */
}
Early languages like Fortran did not initially support recursion because only one set of variables and one return address were allocated for each callable. Early computer instruction sets made storing return addresses and variables on a stack difficult. Machines with index registers or general-purpose registers, e.g., CDC 6000 series, PDP-6, GE 635, System/360, UNIVAC 1100 series, could use one of those registers as a stack pointer. === Nested scope === Some languages, e.g., Ada, Pascal, PL/I, Python, support declaring and defining a function inside, e.g., a function body, such that the name of the inner is only visible within the body of the outer. === Reentrancy === If a callable can be executed properly even when another execution of the same callable is already in progress, that callable is said to be reentrant. A reentrant callable is also useful in multi-threaded situations since multiple threads can call the same callable without fear of interfering with each other. In the IBM CICS transaction processing system, quasi-reentrant was a slightly less restrictive, but similar, requirement for application programs that were shared by many threads. === Overloading === Some languages support overloading – allowing multiple callables with the same name in the same scope, but operating on different types of input. Consider the square root function applied to real number, complex number and matrix input.
The algorithm for each type of input is different, and the return value may have a different type. By writing three separate callables with the same name, i.e. sqrt, the resulting code may be easier to write and to maintain, since each one has a name that is relatively easy to understand and to remember, instead of longer and more complicated names like sqrt_real, sqrt_complex, sqrt_matrix. Overloading is supported in many languages that support strong typing. Often the compiler selects the overload to call based on the type of the input arguments, or it fails if the input arguments do not select an overload. Older and weakly-typed languages generally do not support overloading. Here is an example of overloading in C++, two functions Area that accept different types (one plausible pair, reconstructed here for illustration):
// Computes the area of a rectangle from its height and width.
double Area(double h, double w) { return h * w; }
// Computes the area of a circle from its radius.
double Area(double r) { return r * r * 3.14159; }
PL/I has the GENERIC attribute to define a generic name for a set of entry references called with different types of arguments. Example: DECLARE gen_name GENERIC( name WHEN(FIXED BINARY), flame WHEN(FLOAT), pathname OTHERWISE); Multiple argument definitions may be specified for each entry. A call to "gen_name" will result in a call to "name" when the argument is FIXED BINARY, "flame" when FLOAT, etc. If the argument matches none of the choices "pathname" will be called. === Closure === A closure is a callable plus values of some of its variables captured from the environment in which it was created. Closures were a notable feature of the Lisp programming language, introduced by John McCarthy. Depending on the implementation, closures can serve as a mechanism for side-effects. === Exception reporting === Besides its happy path behavior, a callable may need to inform the caller about an exceptional condition that occurred during its execution. Most modern languages support exceptions which allows for exceptional control flow that pops the call stack until an exception handler is found to handle the condition. Languages that do not support exceptions can use the return value to indicate success or failure of a call. Another approach is to use a well-known location like a global variable for success indication. A callable writes the value and the caller reads it after a call. In the IBM System/360, where a return code was expected from a subroutine, the return value was often designed to be a multiple of 4—so that it could be used as a direct branch table index into a branch table often located immediately after the call instruction to avoid extra conditional tests, further improving efficiency. In the System/360 assembly language, one would write, for example: === Call overhead === A call has runtime overhead, which may include but is not limited to:
Allocating and reclaiming call stack storage
Saving and restoring processor registers
Copying input variables
Copying values after the call into the caller's context
Automatic testing of the return code
Handling of exceptions
Dispatching such as for a virtual method in an object-oriented language
Various techniques are employed to minimize the runtime cost of calls. ==== Compiler optimization ==== Some optimizations for minimizing call overhead may seem straightforward, but cannot be used if the callable has side effects. For example, in the expression (f(x)-1)/(f(x)+1), the function f cannot be called only once with its value used two times since the two calls may return different results.
Moreover, in the few languages which define the order of evaluation of the division operator's operands, the value of x must be fetched again before the second call, since the first call may have changed it. Determining whether a callable has a side effect is difficult – indeed, undecidable by virtue of Rice's theorem. So, while this optimization is safe in a purely functional programming language, a compiler for a language not limited to functional programming typically assumes the worst case, that every callable may have side effects. ==== Inlining ==== Inlining eliminates calls for particular callables. The compiler replaces each call with the compiled code of the callable. Not only does this avoid the call overhead, but it also allows the compiler to optimize code of the caller more effectively by taking into account the context and arguments at that call. Inlining, however, usually increases the compiled code size, except when the callable is only called once or its body is very short, like one line. === Sharing === Callables can be defined within a program, or separately in a library that can be used by multiple programs. === Inter-operability === A compiler translates call and return statements into machine instructions according to a well-defined calling convention. For code compiled by the same or a compatible compiler, functions can be compiled separately from the programs that call them. The instruction sequences corresponding to call and return statements are called the procedure's prologue and epilogue. === Built-in functions === A built-in function, or builtin function, or intrinsic function, is a function for which the compiler generates code at compile time or provides in a way other than for other functions. A built-in function does not need to be defined like other functions since it is built in to the programming language. == Programming == === Trade-offs === ==== Advantages ==== Advantages of breaking a program into functions include:
Decomposing a complex programming task into simpler steps: this is one of the two main tools of structured programming, along with data structures
Reducing duplicate code within a program
Enabling reuse of code across multiple programs
Dividing a large programming task among various programmers or various stages of a project
Hiding implementation details from users of the function
Improving readability of code by replacing a block of code with a function call where a descriptive function name serves to describe the block of code. This makes the calling code concise and readable even if the function is not meant to be reused.
Improving traceability (i.e. most languages offer ways to obtain the call trace which includes the names of the involved functions and perhaps even more information such as file names and line numbers); by not decomposing the code into functions, debugging would be severely impaired
==== Disadvantages ==== Compared to using in-line code, invoking a function imposes some computational overhead in the call mechanism. A function typically requires standard housekeeping code – both at the entry to, and exit from, the function (function prologue and epilogue – usually saving general purpose registers and return address as a minimum). === Conventions === Many programming conventions have been developed regarding callables. With respect to naming, many developers name a callable with a phrase starting with a verb when it does a certain task, with an adjective when it makes an inquiry, and with a noun when it is used to substitute variables.
Some programmers suggest that a callable should perform exactly one task, and if it performs more than one task, it should be split up into multiple callables. They argue that callables are key components in software maintenance, and their roles in the program must remain distinct. Proponents of modular programming advocate that each callable should have minimal dependency on the rest of the codebase. For example, the use of global variables is generally deemed unwise, because it adds coupling between all callables that use the global variables. If such coupling is not necessary, they advise refactoring callables to accept passed parameters instead. == Examples == === Early BASIC === Early BASIC variants require each line to have a unique number (a line number) that orders the lines for execution; they provide no separation of the code that is callable, no mechanism for passing arguments or returning a value, and all variables are global. BASIC provides the command GOSUB, where sub is short for sub procedure, subprocedure or subroutine. Control jumps to the specified line number and then continues on the next line on return. In a typical example, code repeatedly asks the user to enter a number and reports the square root of the value, with lines 100-130 forming the callable that GOSUB reaches. === Small Basic === In Microsoft Small Basic, targeted to the student first learning how to program in a text-based language, a callable unit is called a subroutine. The Sub keyword denotes the start of a subroutine and is followed by a name identifier. Subsequent lines are the body, which ends with the EndSub keyword. A subroutine named SayHello, for example, can be called as SayHello(). === Visual Basic === In later versions of Visual Basic (VB), including the latest product line and VB6, the term procedure is used for the callable unit concept. The keyword Sub is used to return no value and Function to return a value. When used in the context of a class, a procedure is a method. Each parameter has a data type that can be specified, but if not, defaults to Object for later versions based on .NET and Variant for VB6. VB supports parameter passing conventions by value and by reference via the keywords ByVal and ByRef, respectively. Unless ByRef is specified, an argument is passed ByVal; therefore, ByVal is rarely explicitly specified. For a simple type like a number these conventions are relatively clear: passing ByRef allows the procedure to modify the passed variable, whereas passing ByVal does not. For an object, the semantics can confuse programmers, since an object is always treated as a reference. Passing an object ByVal copies the reference, not the state of the object. The called procedure can modify the state of the object via its methods, yet it cannot modify the object reference of the actual parameter. A Sub, say DoSomething, does not return a value and has to be called stand-alone, like DoSomething. A Function, say GiveMeFive returning the value 5, can be called as part of an expression, like y = x + GiveMeFive(). A procedure with a side-effect – say AddTwo, which modifies the variable passed by reference – could be called for variable v like AddTwo(v); given v is 5 before the call, it will be 7 after. === C and C++ === In C and C++, a callable unit is called a function. A function definition starts with the name of the type of value that it returns, or void to indicate that it does not return a value. This is followed by the function name, formal arguments in parentheses, and body lines in braces. In C++, a function declared in a class (as non-static) is called a member function or method.
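Rough C/C++ sketches of the functions discussed in the following paragraphs (the names follow the prose; the bodies are illustrative assumptions):

#include <cstdio>

void doSomething(void) {
    std::puts("doing something");  // performs its task; returns no value
}

int giveMeFive(void) {
    return 5;                      // always returns the integer value 5
}

void addTwo(int *value) {
    *value += 2;                   // side effect: modifies the caller's variable via its address
}

void addTwo(int &value) {          // C++ only: the parameter is passed by reference
    value += 2;
}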
A function outside of a class can be called a free function to distinguish it from a member function. Sketched above, doSomething does not return a value and is always called stand-alone, like doSomething(). giveMeFive returns the integer value 5; the call can be stand-alone or in an expression like y = x + giveMeFive(). addTwo has a side-effect – it sets the value passed by address to the input value plus 2. It could be called for variable v as addTwo(&v), where the ampersand (&) tells the compiler to pass the address of a variable. Given v is 5 before the call, it will be 7 after. The final variant requires C++ – it would not compile as C. It has the same behavior as the preceding example, but passes the actual parameter by reference rather than passing its address. A call such as addTwo(v) does not include an ampersand, since the compiler handles passing by reference without syntax in the call. === PL/I === In PL/I a called procedure may be passed a descriptor providing information about the argument, such as string lengths and array bounds. This allows the procedure to be more general and eliminates the need for the programmer to pass such information. By default PL/I passes arguments by reference. A (trivial) function to change the sign of each element of a two-dimensional array might look like: change_sign: procedure(array); declare array(*,*) float; array = -array; end change_sign; This could be called with various arrays as follows: /* first array bounds from -5 to +10 and 3 to 9 */ declare array1 (-5:10, 3:9) float; /* second array bounds from 1 to 16 and 1 to 16 */ declare array2 (16,16) float; call change_sign(array1); call change_sign(array2); === Python === In Python, the keyword def denotes the start of a function definition. The statements of the function body follow, indented, on subsequent lines, and end at the line that is indented the same as the first line, or at end of file. In a typical two-function example, the first function returns greeting text that includes the name passed by the caller, and the second function calls the first; the second, called like greet_martin(), writes "Welcome Martin" to the console. === Prolog === In the procedural interpretation of logic programs, logical implications behave as goal-reduction procedures. A rule (or clause) of the form: A :- B which has the logical reading: A if B behaves as a procedure that reduces goals that unify with A to subgoals that are instances of B. Consider, for example, a Prolog program with facts and rules defining the parent_child relation in terms of mother_child and father_child. Notice that the motherhood function, X = mother(Y), is represented by a relation, as in a relational database. However, relations in Prolog function as callable units. For example, the procedure call ?- parent_child(X, charles) produces the output X = elizabeth. But the same procedure can be called with other input-output patterns; for example, ?- parent_child(elizabeth, Y) produces the output Y = charles. == See also ==
Asynchronous procedure call, a subprogram that is called after its parameters are set by other activities
Command–query separation (CQS)
Compound operation
Coroutines, subprograms that call each other as if both were the main programs
Evaluation strategy
Event handler, a subprogram that is called in response to an input event or interrupt
Function (mathematics)
Functional programming
Fused operation
Intrinsic function
Lambda function (computer programming), a function that is not bound to an identifier
Logic programming
Modular programming
Operator overloading
Protected procedure
Transclusion
== References ==
Wikipedia/Function_(programming)
In software engineering and computer science, abstraction is the process of generalizing concrete details, such as attributes, away from the study of objects and systems in order to focus attention on details of greater importance. Abstraction is a fundamental concept in computer science and software engineering, especially within the object-oriented programming paradigm. Examples of this include: the usage of abstract data types to separate usage from working representations of data within programs; the concept of functions or subroutines, which represent a specific way of implementing control flow; and the process of reorganizing common behavior from groups of non-abstract classes into abstract classes using inheritance and sub-classes, as seen in object-oriented programming languages. == Rationale == Computing mostly operates independently of the concrete world. The hardware implements a model of computation that is interchangeable with others. The software is structured in architectures to enable humans to create enormous systems by concentrating on a few issues at a time. These architectures are made of specific choices of abstractions. Greenspun's tenth rule is an aphorism on how such an architecture is both inevitable and complex. Language abstraction is a central form of abstraction in computing: new artificial languages are developed to express specific aspects of a system. Modeling languages help in planning. Computer languages can be processed with a computer. An example of this abstraction process is the generational development of programming languages from the first-generation programming language (machine language) to the second-generation programming language (assembly language) and the third-generation programming language (high-level programming language). Each stage can be used as a stepping stone for the next stage. The language abstraction continues, for example, in scripting languages and domain-specific languages. Within a programming language, some features let the programmer create new abstractions. These include subroutines, modules, polymorphism, and software components. Some other abstractions, such as software design patterns and architectural styles, remain invisible to a translator and operate only in the design of a system. Some abstractions try to limit the range of concepts a programmer needs to be aware of, by completely hiding the abstractions they are built on. The software engineer and writer Joel Spolsky has criticized these efforts by claiming that all abstractions are leaky – that they can never completely hide the details below; however, this does not negate the usefulness of abstraction. Some abstractions are designed to inter-operate with other abstractions – for example, a programming language may contain a foreign function interface for making calls to the lower-level language. == Abstraction features == === Programming languages === Different programming languages provide different types of abstraction, depending on the intended applications for the language. For example: In object-oriented programming languages such as C++, Object Pascal, or Java, the concept of abstraction has become a declarative statement – using the pure virtual function syntax virtual function(parameters) = 0; (in C++) or the reserved words (keywords) abstract and interface (in Java). After such a declaration, it is the responsibility of the programmer to implement a class to instantiate the object of the declaration; a minimal C++ sketch follows.
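A minimal C++ sketch of such a declaration (the class names Shape and Circle are illustrative assumptions):

#include <iostream>

// Abstract class: area() is declared but not implemented here.
class Shape {
public:
    virtual double area() const = 0;  // pure virtual: makes Shape abstract
    virtual ~Shape() = default;
};

// The programmer must implement a concrete class before objects can exist.
class Circle : public Shape {
public:
    explicit Circle(double r) : radius(r) {}
    double area() const override { return 3.141592653589793 * radius * radius; }
private:
    double radius;
};

int main() {
    Circle c(2.0);                    // Shape itself cannot be instantiated
    std::cout << c.area() << '\n';
}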
Functional programming languages commonly exhibit abstractions related to functions, such as lambda abstractions (making a term into a function of some variable) and higher-order functions (whose parameters are functions). Modern members of the Lisp programming language family such as Clojure, Scheme and Common Lisp support macro systems to allow syntactic abstraction. Other programming languages such as Scala also have macros, or very similar metaprogramming features (for example, Haskell has Template Haskell, and OCaml has MetaOCaml). These can allow programs to omit boilerplate code, abstract away tedious function call sequences, implement new control flow structures, and implement domain-specific languages (DSLs), which allow domain-specific concepts to be expressed in concise and elegant ways. All of these, when used correctly, improve both the programmer's efficiency and the clarity of the source code by making the intended purpose more explicit. A consequence of syntactic abstraction is also that any Lisp dialect, and almost any programming language, can, in principle, be implemented in any modern Lisp with significantly reduced (but still non-trivial in most cases) effort when compared to "more traditional" programming languages such as Python, C or Java. === Specification methods === Analysts have developed various methods to formally specify software systems. Some known methods include: abstract-model based methods (VDM, Z); algebraic techniques (Larch, CLEAR, OBJ, ACT ONE, CASL); process-based techniques (LOTOS, SDL, Estelle); trace-based techniques (SPECIAL, TAM); and knowledge-based techniques (Refine, Gist). === Specification languages === Specification languages generally rely on abstractions of one kind or another, since specifications are typically defined earlier in a project (and at a more abstract level) than an eventual implementation. The Unified Modeling Language (UML) specification language, for example, allows the definition of abstract classes, which in a waterfall project remain abstract during the architecture and specification phases of the project. == Control abstraction == Programming languages offer control abstraction as one of the main purposes of their use. Computers understand operations at a very low level, such as moving some bits from one location of the memory to another location and producing the sum of two sequences of bits. Programming languages allow this to be done at a higher level. For example, consider this statement written in a Pascal-like fashion: a := (1 + 2) * 5 To a human, this seems a fairly simple and obvious calculation ("one plus two is three, times five is fifteen"). However, the low-level steps necessary to carry out this evaluation, return the value "15", and then assign that value to the variable "a", are actually quite subtle and complex. The values need to be converted to binary representation (often a much more complicated task than one would think) and the calculations decomposed (by the compiler or interpreter) into assembly instructions (again, which are much less intuitive to the programmer: operations such as shifting a binary register left, or adding the binary complement of the contents of one register to another, are simply not how humans think about the abstract arithmetical operations of addition or multiplication).
Finally, assigning the resulting value of "15" to the variable labeled "a", so that "a" can be used later, involves additional 'behind-the-scenes' steps of looking up a variable's label and the resultant location in physical or virtual memory, storing the binary representation of "15" to that memory location, etc. Without control abstraction, a programmer would need to specify all the register/binary-level steps each time they simply wanted to add or multiply a couple of numbers and assign the result to a variable. Such duplication of effort has two serious negative consequences: it forces the programmer to constantly repeat fairly common tasks every time a similar operation is needed, and it forces the programmer to program for the particular hardware and instruction set. === Structured programming === Structured programming involves the splitting of complex program tasks into smaller pieces with clear flow-control and interfaces between components, with a reduction in complexity and in the potential for side-effects. In a simple program, this may aim to ensure that loops have single or obvious exit points and (where possible) to have single exit points from functions and procedures. In a larger system, it may involve breaking down complex tasks into many different modules. Consider a system which handles payroll on ships and at shore offices: The uppermost level may feature a menu of typical end-user operations. Within that could be standalone executables or libraries for tasks such as signing on and off employees or printing checks. Within each of those standalone components there could be many different source files, each containing the program code to handle a part of the problem, with only selected interfaces available to other parts of the program. A sign-on program could have source files for each data entry screen and the database interface (which may itself be a standalone third-party library or a statically linked set of library routines). Either the database or the payroll application also has to initiate the process of exchanging data between ship and shore, and that data transfer task will often contain many other components. These layers produce the effect of isolating the implementation details of one component and its assorted internal methods from the others. Object-oriented programming embraces and extends this concept. == Data abstraction == Data abstraction enforces a clear separation between the abstract properties of a data type and the concrete details of its implementation. The abstract properties are those that are visible to client code that makes use of the data type – the interface to the data type – while the concrete implementation is kept entirely private, and indeed can change, for example to incorporate efficiency improvements over time. The idea is that such changes are not supposed to have any impact on client code, since they involve no difference in the abstract behaviour. For example, one could define an abstract data type called lookup table, which uniquely associates keys with values, and in which values may be retrieved by specifying their corresponding keys. Such a lookup table may be implemented in various ways: as a hash table, a binary search tree, or even a simple linear list of (key:value) pairs. As far as client code is concerned, the abstract properties of the type are the same in each case (see the sketch below). Of course, this all relies on getting the details of the interface right in the first place, since any changes there can have major impacts on client code.
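A minimal C++ sketch of such a lookup table (the interface and class names are illustrative assumptions):

#include <map>
#include <optional>
#include <string>

// Abstract interface: clients see only these operations, not the representation.
class LookupTable {
public:
    virtual void set(const std::string& key, int value) = 0;
    virtual std::optional<int> get(const std::string& key) const = 0;
    virtual ~LookupTable() = default;
};

// One possible implementation; it could be swapped for a hash table or a
// linear list without any change to client code.
class TreeTable : public LookupTable {
public:
    void set(const std::string& key, int value) override { data[key] = value; }
    std::optional<int> get(const std::string& key) const override {
        auto it = data.find(key);
        if (it == data.end()) return std::nullopt;
        return it->second;
    }
private:
    std::map<std::string, int> data;  // concrete detail hidden from clients
};

int main() {
    TreeTable t;
    t.set("answer", 42);
    LookupTable& table = t;  // client code depends only on the interface
    return table.get("answer").value_or(0) == 42 ? 0 : 1;
}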
As one way to look at this: the interface forms a contract on agreed behaviour between the data type and client code; anything not spelled out in the contract is subject to change without notice. == Manual data abstraction == While much of data abstraction occurs through computer science and automation, there are times when this process is done manually and without programming intervention. One way this can be understood is through data abstraction within the process of conducting a systematic review of the literature. In this methodology, data is abstracted by one or several abstractors when conducting a meta-analysis, with errors reduced through dual data abstraction followed by independent checking, known as adjudication. == Abstraction in object oriented programming == In object-oriented programming theory, abstraction involves the facility to define objects that represent abstract "actors" that can perform work, report on and change their state, and "communicate" with other objects in the system. The term encapsulation refers to the hiding of state details, but extending the concept of data type from earlier programming languages – to associate behavior most strongly with the data, and to standardize the way that different data types interact – is the beginning of abstraction. When abstraction proceeds into the operations defined, enabling objects of different types to be substituted, it is called polymorphism. When it proceeds in the opposite direction, inside the types or classes, structuring them to simplify a complex set of relationships, it is called delegation or inheritance. Various object-oriented programming languages offer similar facilities for abstraction, all to support a general strategy of polymorphism in object-oriented programming, which includes the substitution of one data type for another in the same or similar role. Although not as generally supported, a configuration or image or package may predetermine a great many of these bindings at compile time, link time, or load time. This would leave only a minimum of such bindings to change at run-time. Common Lisp Object System or Self, for example, feature less of a class-instance distinction and more use of delegation for polymorphism. Individual objects and functions are abstracted more flexibly to better fit with a shared functional heritage from Lisp. C++ exemplifies another extreme: it relies heavily on templates and overloading and other static bindings at compile-time, which in turn has certain flexibility problems. Although these examples offer alternate strategies for achieving the same abstraction, they do not fundamentally alter the need to support abstract nouns in code – all programming relies on an ability to abstract verbs as functions, nouns as data structures, and either as processes. Consider for example a sample Java fragment to represent some common farm "animals" to a level of abstraction suitable to model simple aspects of their hunger and feeding. It defines an Animal class to represent both the state of the animal and its functions; with such a definition, one can create objects of type Animal and call their methods. In this example, the class Animal is an abstraction used in place of an actual animal, and LivingThing is a further abstraction (in this case a generalisation) of Animal.
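A rough C++ analogue of such a fragment (member names and bodies beyond Animal and LivingThing are illustrative assumptions):

#include <iostream>
#include <string>

class LivingThing {
public:
    virtual ~LivingThing() = default;
};

// Animal models simple aspects of hunger and feeding.
class Animal : public LivingThing {
public:
    explicit Animal(std::string name) : name(std::move(name)) {}
    bool isHungry() const { return hungry; }   // report state
    void feed() { hungry = false; }            // change state
    void tick() { hungry = true; }             // time passes, hunger returns
    const std::string& getName() const { return name; }
private:
    std::string name;
    bool hungry = true;
};

int main() {
    Animal cow("Bessie");                      // create an object and call its methods
    if (cow.isHungry()) cow.feed();
    std::cout << cow.getName() << (cow.isHungry() ? " is" : " is not") << " hungry\n";
}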
If one requires a more differentiated hierarchy of animals – to differentiate, say, those who provide milk from those who provide nothing except meat at the end of their lives – that is an intermediary level of abstraction, probably DairyAnimal (cows, goats), which would eat foods suitable to giving good milk, and MeatAnimal (pigs, steers), which would eat foods to give the best meat quality. Such an abstraction could remove the need for the application coder to specify the type of food, so they could concentrate instead on the feeding schedule. The two classes could be related using inheritance or stand alone, and the programmer could define varying degrees of polymorphism between the two types. These facilities tend to vary drastically between languages, but in general each can achieve anything that is possible with any of the others. A great many operation overloads, data type by data type, can have the same effect at compile-time as any degree of inheritance or other means to achieve polymorphism. The class notation is simply a coder's convenience. === Object-oriented design === Decisions regarding what to abstract and what to keep under the control of the coder become the major concern of object-oriented design and domain analysis – actually determining the relevant relationships in the real world is the concern of object-oriented analysis or legacy analysis. In general, to determine appropriate abstraction, one must make many small decisions about scope (domain analysis), determine what other systems one must cooperate with (legacy analysis), then perform a detailed object-oriented analysis which is expressed within project time and budget constraints as an object-oriented design. In our simple example, the domain is the barnyard; the live pigs and cows and their eating habits are the legacy constraints; the detailed analysis is that coders must have the flexibility to feed the animals what is available, and thus there is no reason to code the type of food into the class itself; and the design is a single simple Animal class of which pigs and cows are instances with the same functions. A decision to differentiate DairyAnimal would change the detailed analysis, but the domain and legacy analysis would be unchanged – thus it is entirely under the control of the programmer, and it is called an abstraction in object-oriented programming, as distinct from abstraction in domain or legacy analysis. == Considerations == When discussing formal semantics of programming languages, formal methods or abstract interpretation, abstraction refers to the act of considering a less detailed, but safe, definition of the observed program behaviors. For instance, one may observe only the final result of program executions instead of considering all the intermediate steps of executions. Abstraction is defined with respect to a concrete (more precise) model of execution. Abstraction may be exact or faithful with respect to a property if one can answer a question about the property equally well on the concrete or abstract model. For instance, if one wishes to know the value modulo n of the result of evaluating a mathematical expression involving only integers and the operations +, −, ×, then one need only perform all operations modulo n (a familiar form of this abstraction is casting out nines); a short sketch appears below. Abstractions, however, though not necessarily exact, should be sound. That is, it should be possible to get sound answers from them – even though the abstraction may simply yield a result of undecidability.
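A short C++ sketch of the modulo-n abstraction being exact for +, −, × (the particular numbers are chosen here for illustration):

#include <iostream>

int main() {
    const long long n = 9;
    long long a = 1234, b = 5678, c = 91011;
    // Evaluate exactly, then reduce modulo n.
    long long exact = (a * b + c) % n;
    // Reduce after every operation instead: the abstraction gives the same answer.
    long long abstracted = ((a % n) * (b % n) % n + c % n) % n;
    std::cout << exact << " == " << abstracted << '\n';  // both print 2 (casting out nines)
}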
For instance, students in a class may be abstracted by their minimal and maximal ages; if one asks whether a certain person belongs to that class, one may simply compare that person's age with the minimal and maximal ages; if the age lies outside the range, one may safely answer that the person does not belong to the class; otherwise, one may only answer "I don't know". The level of abstraction included in a programming language can influence its overall usability. The Cognitive dimensions framework includes the concept of abstraction gradient in a formalism. This framework allows the designer of a programming language to study the trade-offs between abstraction and other characteristics of the design, and how changes in abstraction influence the language usability. Abstractions can prove useful when dealing with computer programs, because non-trivial properties of computer programs are essentially undecidable (see Rice's theorem). As a consequence, automatic methods for deriving information on the behavior of computer programs have to drop either termination (on some occasions, they may fail, crash or never yield a result), soundness (they may provide false information), or precision (they may answer "I don't know" to some questions). Abstraction is the core concept of abstract interpretation. Model checking generally takes place on abstract versions of the studied systems. == Levels of abstraction == Computer science commonly presents levels (or, less commonly, layers) of abstraction, wherein each level represents a different model of the same information and processes, but with varying amounts of detail. Each level uses a system of expression involving a unique set of objects and compositions that apply only to a particular domain. Each relatively abstract, "higher" level builds on a relatively concrete, "lower" level, which tends to provide an increasingly "granular" representation. For example, gates build on electronic circuits, binary on gates, machine language on binary, programming language on machine language, applications and operating systems on programming languages. Each level is embodied, but not determined, by the level beneath it, making it a language of description that is somewhat self-contained. === Database systems === Since many users of database systems lack in-depth familiarity with computer data-structures, database developers often hide complexity through the following levels:
Physical level – The lowest level of abstraction describes how a system actually stores data. The physical level describes complex low-level data structures in detail.
Logical level – The next higher level of abstraction describes what data the database stores, and what relationships exist among those data. The logical level thus describes an entire database in terms of a small number of relatively simple structures. Although implementation of the simple structures at the logical level may involve complex physical-level structures, the user of the logical level does not need to be aware of this complexity. This is referred to as physical data independence. Database administrators, who must decide what information to keep in a database, use the logical level of abstraction.
View level – The highest level of abstraction describes only part of the entire database. Even though the logical level uses simpler structures, complexity remains because of the variety of information stored in a large database.
Many users of a database system do not need all this information; instead, they need to access only a part of the database. The view level of abstraction exists to simplify their interaction with the system. The system may provide many views for the same database. === Layered architecture === The ability to provide a design at different levels of abstraction can: simplify the design considerably; enable different role players to work effectively at various levels of abstraction; and support the portability of software artifacts (ideally model-based). Systems design and business process design can both use this. Some design processes specifically generate designs that contain various levels of abstraction. Layered architecture partitions the concerns of the application into stacked groups (layers). It is a technique used in designing computer software, hardware, and communications in which system or network components are isolated in layers so that changes can be made in one layer without affecting the others. == See also ==
Abstraction principle (computer programming)
Abstraction inversion for an anti-pattern of one danger in abstraction
Abstract data type for an abstract description of a set of data
Algorithm for an abstract description of a computational procedure
Bracket abstraction for making a term into a function of a variable
Data modeling for structuring data independent of the processes that use it
Encapsulation for abstractions that hide implementation details
Greenspun's Tenth Rule for an aphorism about an (the?) optimum point in the space of abstractions
Higher-order function for abstraction where functions produce or consume other functions
Lambda abstraction for making a term into a function of some variable
List of abstractions (computer science)
Refinement for the opposite of abstraction in computing
Integer (computer science)
Heuristic (computer science)
== References == == Further reading == == External links == SimArch example of layered architecture for distributed simulation systems.
Wikipedia/Abstraction_(computer_science)
In mathematics, a function from a set X to a set Y assigns to each element of X exactly one element of Y. The set X is called the domain of the function and the set Y is called the codomain of the function. Functions were originally the idealization of how a varying quantity depends on another quantity. For example, the position of a planet is a function of time. Historically, the concept was elaborated with the infinitesimal calculus at the end of the 17th century, and, until the 19th century, the functions that were considered were differentiable (that is, they had a high degree of regularity). The concept of a function was formalized at the end of the 19th century in terms of set theory, and this greatly increased the possible applications of the concept. A function is often denoted by a letter such as f, g or h. The value of a function f at an element x of its domain (that is, the element of the codomain that is associated with x) is denoted by f(x); for example, the value of f at x = 4 is denoted by f(4). Commonly, a specific function is defined by means of an expression depending on x, such as f ( x ) = x 2 + 1 ; {\displaystyle f(x)=x^{2}+1;} in this case, some computation, called function evaluation, may be needed for deducing the value of the function at a particular value; for example, if f ( x ) = x 2 + 1 , {\displaystyle f(x)=x^{2}+1,} then f ( 4 ) = 4 2 + 1 = 17. {\displaystyle f(4)=4^{2}+1=17.} Given its domain and its codomain, a function is uniquely represented by the set of all pairs (x, f (x)), called the graph of the function, a popular means of illustrating the function. When the domain and the codomain are sets of real numbers, each such pair may be thought of as the Cartesian coordinates of a point in the plane. Functions are widely used in science, engineering, and in most fields of mathematics. It has been said that functions are "the central objects of investigation" in most fields of mathematics. The concept of a function has evolved significantly over centuries, from its informal origins in ancient mathematics to its formalization in the 19th century. See History of the function concept for details. == Definition == A function f from a set X to a set Y is an assignment of one element of Y to each element of X. The set X is called the domain of the function and the set Y is called the codomain of the function. If the element y in Y is assigned to x in X by the function f, one says that f maps x to y, and this is commonly written y = f ( x ) . {\displaystyle y=f(x).} In this notation, x is the argument or variable of the function. A specific element x of X is a value of the variable, and the corresponding element of Y is the value of the function at x, or the image of x under the function. The image of a function, sometimes called its range, is the set of the images of all elements in the domain. A function f, its domain X, and its codomain Y are often specified by the notation f : X → Y . {\displaystyle f:X\to Y.} One may write x ↦ y {\displaystyle x\mapsto y} instead of y = f ( x ) {\displaystyle y=f(x)} , where the symbol ↦ {\displaystyle \mapsto } (read 'maps to') is used to specify where a particular element x in the domain is mapped to by f. This allows the definition of a function without naming. For example, the square function is the function x ↦ x 2 . {\displaystyle x\mapsto x^{2}.} The domain and codomain are not always explicitly given when a function is defined. 
In particular, it is common that one might only know, without some (possibly difficult) computation, that the domain of a specific function is contained in a larger set. For example, if f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } is a real function, the determination of the domain of the function x ↦ 1 / f ( x ) {\displaystyle x\mapsto 1/f(x)} requires knowing the zeros of f. This is one of the reasons for which, in mathematical analysis, "a function from X to Y " may refer to a function having a proper subset of X as a domain. For example, a "function from the reals to the reals" may refer to a real-valued function of a real variable whose domain is a proper subset of the real numbers, typically a subset that contains a non-empty open interval. Such a function is then called a partial function. A function f on a set S means a function from the domain S, without specifying a codomain. However, some authors use it as shorthand for saying that the function is f : S → S. === Formal definition === The above definition of a function is essentially that of the founders of calculus, Leibniz, Newton and Euler. However, it cannot be formalized, since there is no mathematical definition of an "assignment". It is only at the end of the 19th century that the first formal definition of a function could be provided, in terms of set theory. This set-theoretic definition is based on the fact that a function establishes a relation between the elements of the domain and some (possibly all) elements of the codomain. Mathematically, a binary relation between two sets X and Y is a subset of the set of all ordered pairs ( x , y ) {\displaystyle (x,y)} such that x ∈ X {\displaystyle x\in X} and y ∈ Y . {\displaystyle y\in Y.} The set of all these pairs is called the Cartesian product of X and Y and denoted X × Y . {\displaystyle X\times Y.} Thus, the above definition may be formalized as follows. A function with domain X and codomain Y is a binary relation R between X and Y that satisfies the two following conditions: For every x {\displaystyle x} in X {\displaystyle X} there exists y {\displaystyle y} in Y {\displaystyle Y} such that ( x , y ) ∈ R . {\displaystyle (x,y)\in R.} If ( x , y ) ∈ R {\displaystyle (x,y)\in R} and ( x , z ) ∈ R , {\displaystyle (x,z)\in R,} then y = z . {\displaystyle y=z.} This definition may be rewritten more formally, without referring explicitly to the concept of a relation, but using more notation (including set-builder notation): A function is formed by three sets, the domain X , {\displaystyle X,} the codomain Y , {\displaystyle Y,} and the graph R {\displaystyle R} that satisfy the three following conditions. R ⊆ { ( x , y ) ∣ x ∈ X , y ∈ Y } {\displaystyle R\subseteq \{(x,y)\mid x\in X,y\in Y\}} ∀ x ∈ X , ∃ y ∈ Y , ( x , y ) ∈ R {\displaystyle \forall x\in X,\exists y\in Y,\left(x,y\right)\in R\qquad } ( x , y ) ∈ R ∧ ( x , z ) ∈ R ⟹ y = z {\displaystyle (x,y)\in R\land (x,z)\in R\implies y=z\qquad } === Partial functions === Partial functions are defined similarly to ordinary functions, with the "total" condition removed. That is, a partial function from X to Y is a binary relation R between X and Y such that, for every x ∈ X , {\displaystyle x\in X,} there is at most one y in Y such that ( x , y ) ∈ R . {\displaystyle (x,y)\in R.} Using functional notation, this means that, given x ∈ X , {\displaystyle x\in X,} either f ( x ) {\displaystyle f(x)} is in Y, or it is undefined. 
The set of the elements of X such that f ( x ) {\displaystyle f(x)} is defined and belongs to Y is called the domain of definition of the function. A partial function from X to Y is thus an ordinary function that has as its domain a subset of X called the domain of definition of the function. If the domain of definition equals X, one often says that the partial function is a total function. In several areas of mathematics, the term "function" refers to partial functions rather than to ordinary (total) functions. This is typically the case when functions may be specified in a way that makes it difficult or even impossible to determine their domain. In calculus, a real-valued function of a real variable or real function is a partial function from the set R {\displaystyle \mathbb {R} } of the real numbers to itself. Given a real function f : x ↦ f ( x ) {\displaystyle f:x\mapsto f(x)} its multiplicative inverse x ↦ 1 / f ( x ) {\displaystyle x\mapsto 1/f(x)} is also a real function. The determination of the domain of definition of a multiplicative inverse of a (partial) function amounts to computing the zeros of the function, the values where the function is defined but not its multiplicative inverse. Similarly, a function of a complex variable is generally a partial function whose domain of definition is a subset of the complex numbers C {\displaystyle \mathbb {C} } . The difficulty of determining the domain of definition of a complex function is illustrated by the multiplicative inverse of the Riemann zeta function: the determination of the domain of definition of the function z ↦ 1 / ζ ( z ) {\displaystyle z\mapsto 1/\zeta (z)} is more or less equivalent to the proof or disproof of one of the major open problems in mathematics, the Riemann hypothesis. In computability theory, a general recursive function is a partial function from the integers to the integers whose values can be computed by an algorithm (roughly speaking). The domain of definition of such a function is the set of inputs for which the algorithm does not run forever. A fundamental theorem of computability theory is that there cannot exist an algorithm that takes an arbitrary general recursive function as input and tests whether 0 belongs to its domain of definition (see Halting problem). === Multivariate functions === A multivariate function, multivariable function, or function of several variables is a function that depends on several arguments. Such functions are commonly encountered. For example, the position of a car on a road is a function of the time travelled and its average speed. Formally, a function of n variables is a function whose domain is a set of n-tuples. For example, multiplication of integers is a function of two variables, or bivariate function, whose domain is the set of all ordered pairs (2-tuples) of integers, and whose codomain is the set of integers. The same is true for every binary operation. The graph of a bivariate function over a two-dimensional real domain may be interpreted as defining a parametric surface, as used in, e.g., bivariate interpolation. Commonly, an n-tuple is denoted enclosed between parentheses, such as in ( 1 , 2 , … , n ) . {\displaystyle (1,2,\ldots ,n).} When using functional notation, one usually omits the parentheses surrounding tuples, writing f ( x 1 , … , x n ) {\displaystyle f(x_{1},\ldots ,x_{n})} instead of f ( ( x 1 , … , x n ) ) .
{\displaystyle f((x_{1},\ldots ,x_{n})).} Given n sets X 1 , … , X n , {\displaystyle X_{1},\ldots ,X_{n},} the set of all n-tuples ( x 1 , … , x n ) {\displaystyle (x_{1},\ldots ,x_{n})} such that x 1 ∈ X 1 , … , x n ∈ X n {\displaystyle x_{1}\in X_{1},\ldots ,x_{n}\in X_{n}} is called the Cartesian product of X 1 , … , X n , {\displaystyle X_{1},\ldots ,X_{n},} and denoted X 1 × ⋯ × X n . {\displaystyle X_{1}\times \cdots \times X_{n}.} Therefore, a multivariate function is a function that has a Cartesian product or a proper subset of a Cartesian product as a domain. f : U → Y , {\displaystyle f:U\to Y,} where the domain U has the form U ⊆ X 1 × ⋯ × X n . {\displaystyle U\subseteq X_{1}\times \cdots \times X_{n}.} If all the X i {\displaystyle X_{i}} are equal to the set R {\displaystyle \mathbb {R} } of the real numbers or to the set C {\displaystyle \mathbb {C} } of the complex numbers, one talks respectively of a function of several real variables or of a function of several complex variables. == Notation == There are various standard ways for denoting functions. The most commonly used notation is functional notation, which is the first notation described below. === Functional notation === The functional notation requires that a name is given to the function, which, in the case of an unspecified function, is often the letter f. Then, the application of the function to an argument is denoted by its name followed by its argument (or, in the case of a multivariate function, its arguments) enclosed between parentheses, such as in f ( x ) , sin ⁡ ( 3 ) , or f ( x 2 + 1 ) . {\displaystyle f(x),\quad \sin(3),\quad {\text{or}}\quad f(x^{2}+1).} The argument between the parentheses may be a variable, often x, that represents an arbitrary element of the domain of the function, a specific element of the domain (3 in the above example), or an expression that can be evaluated to an element of the domain ( x 2 + 1 {\displaystyle x^{2}+1} in the above example). The use of an unspecified variable between parentheses is useful for defining a function explicitly, such as in "let f ( x ) = sin ⁡ ( x 2 + 1 ) {\displaystyle f(x)=\sin(x^{2}+1)} ". When the symbol denoting the function consists of several characters and no ambiguity may arise, the parentheses of functional notation might be omitted. For example, it is common to write sin x instead of sin(x). Functional notation was first used by Leonhard Euler in 1734. Some widely used functions are represented by a symbol consisting of several letters (usually two or three, generally an abbreviation of their name). In this case, a roman type is customarily used instead, such as "sin" for the sine function, in contrast to italic font for single-letter symbols. The functional notation is often used colloquially for referring to a function and simultaneously naming its argument, such as in "let f ( x ) {\displaystyle f(x)} be a function". This is an abuse of notation that is useful for a simpler formulation. === Arrow notation === Arrow notation defines the rule of a function inline, without requiring a name to be given to the function. It uses the ↦ arrow symbol, pronounced "maps to". For example, x ↦ x + 1 {\displaystyle x\mapsto x+1} is the function which takes a real number as input and outputs that number plus 1. Again, a domain and codomain of R {\displaystyle \mathbb {R} } is implied. The domain and codomain can also be explicitly stated, for example: sqr : Z → Z x ↦ x 2 .
{\displaystyle {\begin{aligned}\operatorname {sqr} \colon \mathbb {Z} &\to \mathbb {Z} \\x&\mapsto x^{2}.\end{aligned}}} This defines a function sqr from the integers to the integers that returns the square of its input. As a common application of the arrow notation, suppose f : X × X → Y ; ( x , t ) ↦ f ( x , t ) {\displaystyle f:X\times X\to Y;\;(x,t)\mapsto f(x,t)} is a function in two variables, and we want to refer to a partially applied function X → Y {\displaystyle X\to Y} produced by fixing the second argument to the value t0 without introducing a new function name. The map in question could be denoted x ↦ f ( x , t 0 ) {\displaystyle x\mapsto f(x,t_{0})} using the arrow notation. The expression x ↦ f ( x , t 0 ) {\displaystyle x\mapsto f(x,t_{0})} (read: "the map taking x to f of x comma t nought") represents this new function with just one argument, whereas the expression f(x0, t0) refers to the value of the function f at the point (x0, t0). === Index notation === Index notation may be used instead of functional notation. That is, instead of writing f (x), one writes f x . {\displaystyle f_{x}.} This is typically the case for functions whose domain is the set of the natural numbers. Such a function is called a sequence, and, in this case the element f n {\displaystyle f_{n}} is called the nth element of the sequence. The index notation can also be used for distinguishing some variables called parameters from the "true variables". In fact, parameters are specific variables that are considered as being fixed during the study of a problem. For example, the map x ↦ f ( x , t ) {\displaystyle x\mapsto f(x,t)} (see above) would be denoted f t {\displaystyle f_{t}} using index notation, if we define the collection of maps f t {\displaystyle f_{t}} by the formula f t ( x ) = f ( x , t ) {\displaystyle f_{t}(x)=f(x,t)} for all x , t ∈ X {\displaystyle x,t\in X} . === Dot notation === In the notation x ↦ f ( x ) , {\displaystyle x\mapsto f(x),} the symbol x does not represent any value; it is simply a placeholder, meaning that, if x is replaced by any value on the left of the arrow, it should be replaced by the same value on the right of the arrow. Therefore, x may be replaced by any symbol, often an interpunct " ⋅ ". This may be useful for distinguishing the function f (⋅) from its value f (x) at x. For example, a ( ⋅ ) 2 {\displaystyle a(\cdot )^{2}} may stand for the function x ↦ a x 2 {\displaystyle x\mapsto ax^{2}} , and ∫ a ( ⋅ ) f ( u ) d u {\textstyle \int _{a}^{\,(\cdot )}f(u)\,du} may stand for a function defined by an integral with variable upper bound: x ↦ ∫ a x f ( u ) d u {\textstyle x\mapsto \int _{a}^{x}f(u)\,du} . === Specialized notations === There are other, specialized notations for functions in sub-disciplines of mathematics. For example, in linear algebra and functional analysis, linear forms and the vectors they act upon are denoted using a dual pair to show the underlying duality. This is similar to the use of bra–ket notation in quantum mechanics. In logic and the theory of computation, the function notation of lambda calculus is used to explicitly express the basic notions of function abstraction and application. In category theory and homological algebra, networks of functions are described in terms of how they and their compositions commute with each other using commutative diagrams that extend and generalize the arrow notation for functions described above. 
=== Functions of more than one variable === In some cases the argument of a function may be an ordered pair of elements taken from some set or sets. For example, a function f can be defined as mapping any pair of real numbers ( x , y ) {\displaystyle (x,y)} to the sum of their squares, x 2 + y 2 {\displaystyle x^{2}+y^{2}} . Such a function is commonly written as f ( x , y ) = x 2 + y 2 {\displaystyle f(x,y)=x^{2}+y^{2}} and referred to as "a function of two variables". Likewise one can have a function of three or more variables, with notations such as f ( w , x , y ) {\displaystyle f(w,x,y)} , f ( w , x , y , z ) {\displaystyle f(w,x,y,z)} . == Other terms == A function may also be called a map or a mapping, but some authors make a distinction between the term "map" and "function". For example, the term "map" is often reserved for a "function" with some sort of special structure (e.g. maps of manifolds). In particular map may be used in place of homomorphism for the sake of succinctness (e.g., linear map or map from G to H instead of group homomorphism from G to H). Some authors reserve the word mapping for the case where the structure of the codomain belongs explicitly to the definition of the function. Some authors, such as Serge Lang, use "function" only to refer to maps for which the codomain is a subset of the real or complex numbers, and use the term mapping for more general functions. In the theory of dynamical systems, a map denotes an evolution function used to create discrete dynamical systems. See also Poincaré map. Whichever definition of map is used, related terms like domain, codomain, injective, continuous have the same meaning as for a function. == Specifying a function == Given a function f {\displaystyle f} , by definition, to each element x {\displaystyle x} of the domain of the function f {\displaystyle f} , there is a unique element associated to it, the value f ( x ) {\displaystyle f(x)} of f {\displaystyle f} at x {\displaystyle x} . There are several ways to specify or describe how x {\displaystyle x} is related to f ( x ) {\displaystyle f(x)} , both explicitly and implicitly. Sometimes, a theorem or an axiom asserts the existence of a function having some properties, without describing it more precisely. Often, the specification or description is referred to as the definition of the function f {\displaystyle f} . === By listing function values === On a finite set a function may be defined by listing the elements of the codomain that are associated to the elements of the domain. For example, if A = { 1 , 2 , 3 } {\displaystyle A=\{1,2,3\}} , then one can define a function f : A → R {\displaystyle f:A\to \mathbb {R} } by f ( 1 ) = 2 , f ( 2 ) = 3 , f ( 3 ) = 4. {\displaystyle f(1)=2,f(2)=3,f(3)=4.} === By a formula === Functions are often defined by an expression that describes a combination of arithmetic operations and previously defined functions; such a formula allows computing the value of the function from the value of any element of the domain. For example, in the above example, f {\displaystyle f} can be defined by the formula f ( n ) = n + 1 {\displaystyle f(n)=n+1} , for n ∈ { 1 , 2 , 3 } {\displaystyle n\in \{1,2,3\}} . When a function is defined this way, the determination of its domain is sometimes difficult. 
If the formula that defines the function contains divisions, the values of the variable for which a denominator is zero must be excluded from the domain; thus, for a complicated function, the determination of the domain passes through the computation of the zeros of auxiliary functions. Similarly, if square roots occur in the definition of a function from R {\displaystyle \mathbb {R} } to R , {\displaystyle \mathbb {R} ,} the domain is included in the set of the values of the variable for which the arguments of the square roots are nonnegative. For example, f ( x ) = 1 + x 2 {\displaystyle f(x)={\sqrt {1+x^{2}}}} defines a function f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } whose domain is R , {\displaystyle \mathbb {R} ,} because 1 + x 2 {\displaystyle 1+x^{2}} is always positive if x is a real number. On the other hand, f ( x ) = 1 − x 2 {\displaystyle f(x)={\sqrt {1-x^{2}}}} defines a function from the reals to the reals whose domain is reduced to the interval [−1, 1]. (In old texts, such a domain was called the domain of definition of the function.) Functions can be classified by the nature of formulas that define them: A quadratic function is a function that may be written f ( x ) = a x 2 + b x + c , {\displaystyle f(x)=ax^{2}+bx+c,} where a, b, c are constants. More generally, a polynomial function is a function that can be defined by a formula involving only additions, subtractions, multiplications, and exponentiation to nonnegative integer powers. For example, f ( x ) = x 3 − 3 x − 1 {\displaystyle f(x)=x^{3}-3x-1} and f ( x ) = ( x − 1 ) ( x 3 + 1 ) + 2 x 2 − 1 {\displaystyle f(x)=(x-1)(x^{3}+1)+2x^{2}-1} are polynomial functions of x {\displaystyle x} . A rational function is the same, with divisions also allowed, such as f ( x ) = x − 1 x + 1 , {\displaystyle f(x)={\frac {x-1}{x+1}},} and f ( x ) = 1 x + 1 + 3 x − 2 x − 1 . {\displaystyle f(x)={\frac {1}{x+1}}+{\frac {3}{x}}-{\frac {2}{x-1}}.} An algebraic function is the same, with nth roots and roots of polynomials also allowed. An elementary function is the same, with logarithms and exponential functions allowed. === Inverse and implicit functions === A function f : X → Y , {\displaystyle f:X\to Y,} with domain X and codomain Y, is bijective, if for every y in Y, there is one and only one element x in X such that y = f(x). In this case, the inverse function of f is the function f − 1 : Y → X {\displaystyle f^{-1}:Y\to X} that maps y ∈ Y {\displaystyle y\in Y} to the element x ∈ X {\displaystyle x\in X} such that y = f(x). For example, the natural logarithm is a bijective function from the positive real numbers to the real numbers. It thus has an inverse, called the exponential function, that maps the real numbers onto the positive numbers. If a function f : X → Y {\displaystyle f:X\to Y} is not bijective, it may occur that one can select subsets E ⊆ X {\displaystyle E\subseteq X} and F ⊆ Y {\displaystyle F\subseteq Y} such that the restriction of f to E is a bijection from E to F, and has thus an inverse. The inverse trigonometric functions are defined this way. For example, the cosine function induces, by restriction, a bijection from the interval [0, π] onto the interval [−1, 1], and its inverse function, called arccosine, maps [−1, 1] onto [0, π]. The other inverse trigonometric functions are defined similarly. 
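As a small worked illustration of an inverse function (the particular f is chosen here for exposition, not taken from the article):

\[
f:\mathbb{R}\to\mathbb{R},\qquad f(x)=2x+1,\qquad f^{-1}(y)=\frac{y-1}{2},
\]
\[
f^{-1}(f(x))=\frac{(2x+1)-1}{2}=x,\qquad f(f^{-1}(y))=2\cdot\frac{y-1}{2}+1=y.
\]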
More generally, given a binary relation R between two sets X and Y, let E be a subset of X such that, for every x ∈ E , {\displaystyle x\in E,} there is some y ∈ Y {\displaystyle y\in Y} such that x R y. If one has a criterion allowing selecting such a y for every x ∈ E , {\displaystyle x\in E,} this defines a function f : E → Y , {\displaystyle f:E\to Y,} called an implicit function, because it is implicitly defined by the relation R. For example, the equation of the unit circle x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} defines a relation on real numbers. If −1 < x < 1 there are two possible values of y, one positive and one negative. For x = ± 1, these two values become both equal to 0. Otherwise, there is no possible value of y. This means that the equation defines two implicit functions with domain [−1, 1] and respective codomains [0, +∞) and (−∞, 0]. In this example, the equation can be solved in y, giving y = ± 1 − x 2 , {\displaystyle y=\pm {\sqrt {1-x^{2}}},} but, in more complicated examples, this is impossible. For example, the relation y 5 + y + x = 0 {\displaystyle y^{5}+y+x=0} defines y as an implicit function of x, called the Bring radical, which has R {\displaystyle \mathbb {R} } as domain and range. The Bring radical cannot be expressed in terms of the four arithmetic operations and nth roots. The implicit function theorem provides mild differentiability conditions for existence and uniqueness of an implicit function in the neighborhood of a point. === Using differential calculus === Many functions can be defined as the antiderivative of another function. This is the case of the natural logarithm, which is the antiderivative of 1/x that is 0 for x = 1. Another common example is the error function. More generally, many functions, including most special functions, can be defined as solutions of differential equations. The simplest example is probably the exponential function, which can be defined as the unique function that is equal to its derivative and takes the value 1 for x = 0. Power series can be used to define functions on the domain in which they converge. For example, the exponential function is given by e x = ∑ n = 0 ∞ x n n ! {\textstyle e^{x}=\sum _{n=0}^{\infty }{x^{n} \over n!}} . However, as the coefficients of a series are quite arbitrary, a function that is the sum of a convergent series is generally defined otherwise, and the sequence of the coefficients is the result of some computation based on another definition. Then, the power series can be used to enlarge the domain of the function. Typically, if a function for a real variable is the sum of its Taylor series in some interval, this power series allows immediately enlarging the domain to a subset of the complex numbers, the disc of convergence of the series. Then analytic continuation allows enlarging further the domain for including almost the whole complex plane. This process is the method that is generally used for defining the logarithm, the exponential and the trigonometric functions of a complex number. === By recurrence === Functions whose domain are the nonnegative integers, known as sequences, are sometimes defined by recurrence relations. The factorial function on the nonnegative integers ( n ↦ n ! {\displaystyle n\mapsto n!} ) is a basic example, as it can be defined by the recurrence relation n ! = n ( n − 1 ) ! for n > 0 , {\displaystyle n!=n(n-1)!\quad {\text{for}}\quad n>0,} and the initial condition 0 ! = 1. 
== Representing a function == A graph is commonly used to give an intuitive picture of a function. As an example of how a graph helps to understand a function, it is easy to see from its graph whether a function is increasing or decreasing. Some functions may also be represented by bar charts. === Graphs and plots === Given a function f : X → Y , {\displaystyle f:X\to Y,} its graph is, formally, the set G = { ( x , f ( x ) ) ∣ x ∈ X } . {\displaystyle G=\{(x,f(x))\mid x\in X\}.} In the frequent case where X and Y are subsets of the real numbers (or may be identified with such subsets, e.g. intervals), an element ( x , y ) ∈ G {\displaystyle (x,y)\in G} may be identified with a point having coordinates x, y in a 2-dimensional coordinate system, e.g. the Cartesian plane. Parts of this may create a plot that represents (parts of) the function. The use of plots is so ubiquitous that they too are called the graph of the function. Graphic representations of functions are also possible in other coordinate systems. For example, the graph of the square function x ↦ x 2 , {\displaystyle x\mapsto x^{2},} consisting of all points with coordinates ( x , x 2 ) {\displaystyle (x,x^{2})} for x ∈ R , {\displaystyle x\in \mathbb {R} ,} yields, when depicted in Cartesian coordinates, the well-known parabola. If the same quadratic function x ↦ x 2 , {\displaystyle x\mapsto x^{2},} with the same formal graph, consisting of pairs of numbers, is plotted instead in polar coordinates ( r , θ ) = ( x , x 2 ) , {\displaystyle (r,\theta )=(x,x^{2}),} the plot obtained is Fermat's spiral. === Tables === A function can be represented as a table of values. If the domain of a function is finite, then the function can be completely specified in this way. For example, the multiplication function f : { 1 , … , 5 } 2 → R {\displaystyle f:\{1,\ldots ,5\}^{2}\to \mathbb {R} } defined as f ( x , y ) = x y {\displaystyle f(x,y)=xy} can be represented by the familiar multiplication table. On the other hand, if a function's domain is continuous, a table can give the values of the function at specific values of the domain. If an intermediate value is needed, interpolation can be used to estimate the value of the function. For example, a portion of a table for the sine function might give its values rounded to 6 decimal places. Before the advent of handheld calculators and personal computers, such tables were often compiled and published for functions such as logarithms and trigonometric functions. === Bar chart === A bar chart can represent a function whose domain is a finite set, the natural numbers, or the integers. In this case, an element x of the domain is represented by an interval of the x-axis, and the corresponding value of the function, f(x), is represented by a rectangle whose base is the interval corresponding to x and whose height is f(x) (possibly negative, in which case the bar extends below the x-axis). == General properties == This section describes general properties of functions, that are independent of specific properties of the domain and the codomain. === Standard functions === There are a number of standard functions that occur frequently: For every set X, there is a unique function, called the empty function, or empty map, from the empty set to X. The graph of an empty function is the empty set. The existence of empty functions is needed both for the coherency of the theory and for avoiding exceptions concerning the empty set in many statements.
Under the usual set-theoretic definition of a function as an ordered triplet (or equivalent ones), there is exactly one empty function for each set, thus the empty function ∅ → X {\displaystyle \varnothing \to X} is not equal to ∅ → Y {\displaystyle \varnothing \to Y} if and only if X ≠ Y {\displaystyle X\neq Y} , although their graphs are both the empty set. For every set X and every singleton set {s}, there is a unique function from X to {s}, which maps every element of X to s. This is a surjection (see below) unless X is the empty set. Given a function f : X → Y , {\displaystyle f:X\to Y,} the canonical surjection of f onto its image f ( X ) = { f ( x ) ∣ x ∈ X } {\displaystyle f(X)=\{f(x)\mid x\in X\}} is the function from X to f(X) that maps x to f(x). For every subset A of a set X, the inclusion map of A into X is the injective (see below) function that maps every element of A to itself. The identity function on a set X, often denoted by idX, is the inclusion of X into itself. === Function composition === Given two functions f : X → Y {\displaystyle f:X\to Y} and g : Y → Z {\displaystyle g:Y\to Z} such that the domain of g is the codomain of f, their composition is the function g ∘ f : X → Z {\displaystyle g\circ f:X\rightarrow Z} defined by ( g ∘ f ) ( x ) = g ( f ( x ) ) . {\displaystyle (g\circ f)(x)=g(f(x)).} That is, the value of g ∘ f {\displaystyle g\circ f} is obtained by first applying f to x to obtain y = f(x) and then applying g to the result y to obtain g(y) = g(f(x)). In this notation, the function that is applied first is always written on the right. The composition g ∘ f {\displaystyle g\circ f} is an operation on functions that is defined only if the codomain of the first function is the domain of the second one. Even when both g ∘ f {\displaystyle g\circ f} and f ∘ g {\displaystyle f\circ g} satisfy these conditions, the composition is not necessarily commutative, that is, the functions g ∘ f {\displaystyle g\circ f} and f ∘ g {\displaystyle f\circ g} need not be equal, but may deliver different values for the same argument. For example, let f(x) = x2 and g(x) = x + 1, then g ( f ( x ) ) = x 2 + 1 {\displaystyle g(f(x))=x^{2}+1} and f ( g ( x ) ) = ( x + 1 ) 2 {\displaystyle f(g(x))=(x+1)^{2}} agree just for x = 0. {\displaystyle x=0.} The function composition is associative in the sense that, if one of ( h ∘ g ) ∘ f {\displaystyle (h\circ g)\circ f} and h ∘ ( g ∘ f ) {\displaystyle h\circ (g\circ f)} is defined, then the other is also defined, and they are equal, that is, ( h ∘ g ) ∘ f = h ∘ ( g ∘ f ) . {\displaystyle (h\circ g)\circ f=h\circ (g\circ f).} Therefore, it is usual to just write h ∘ g ∘ f . {\displaystyle h\circ g\circ f.} The identity functions id X {\displaystyle \operatorname {id} _{X}} and id Y {\displaystyle \operatorname {id} _{Y}} are respectively a right identity and a left identity for functions from X to Y. That is, if f is a function with domain X, and codomain Y, one has f ∘ id X = id Y ∘ f = f . {\displaystyle f\circ \operatorname {id} _{X}=\operatorname {id} _{Y}\circ f=f.} === Image and preimage === Let f : X → Y . {\displaystyle f:X\to Y.} The image under f of an element x of the domain X is f(x). If A is any subset of X, then the image of A under f, denoted f(A), is the subset of the codomain Y consisting of all images of elements of A, that is, f ( A ) = { f ( x ) ∣ x ∈ A } . {\displaystyle f(A)=\{f(x)\mid x\in A\}.} The image of f is the image of the whole domain, that is, f(X). 
It is also called the range of f, although the term range may also refer to the codomain. On the other hand, the inverse image or preimage under f of an element y of the codomain Y is the set of all elements of the domain X whose images under f equal y. In symbols, the preimage of y is denoted by f − 1 ( y ) {\displaystyle f^{-1}(y)} and is given by the equation f − 1 ( y ) = { x ∈ X ∣ f ( x ) = y } . {\displaystyle f^{-1}(y)=\{x\in X\mid f(x)=y\}.} Likewise, the preimage of a subset B of the codomain Y is the set of the preimages of the elements of B, that is, it is the subset of the domain X consisting of all elements of X whose images belong to B. It is denoted by f − 1 ( B ) {\displaystyle f^{-1}(B)} and is given by the equation f − 1 ( B ) = { x ∈ X ∣ f ( x ) ∈ B } . {\displaystyle f^{-1}(B)=\{x\in X\mid f(x)\in B\}.} For example, the preimage of { 4 , 9 } {\displaystyle \{4,9\}} under the square function is the set { − 3 , − 2 , 2 , 3 } {\displaystyle \{-3,-2,2,3\}} . By definition of a function, the image of an element x of the domain is always a single element of the codomain. However, the preimage f − 1 ( y ) {\displaystyle f^{-1}(y)} of an element y of the codomain may be empty or contain any number of elements. For example, if f is the function from the integers to themselves that maps every integer to 0, then f − 1 ( 0 ) = Z {\displaystyle f^{-1}(0)=\mathbb {Z} } . If f : X → Y {\displaystyle f:X\to Y} is a function, A and B are subsets of X, and C and D are subsets of Y, then one has the following properties: A ⊆ B ⟹ f ( A ) ⊆ f ( B ) {\displaystyle A\subseteq B\Longrightarrow f(A)\subseteq f(B)} C ⊆ D ⟹ f − 1 ( C ) ⊆ f − 1 ( D ) {\displaystyle C\subseteq D\Longrightarrow f^{-1}(C)\subseteq f^{-1}(D)} A ⊆ f − 1 ( f ( A ) ) {\displaystyle A\subseteq f^{-1}(f(A))} C ⊇ f ( f − 1 ( C ) ) {\displaystyle C\supseteq f(f^{-1}(C))} f ( f − 1 ( f ( A ) ) ) = f ( A ) {\displaystyle f(f^{-1}(f(A)))=f(A)} f − 1 ( f ( f − 1 ( C ) ) ) = f − 1 ( C ) {\displaystyle f^{-1}(f(f^{-1}(C)))=f^{-1}(C)} The preimage by f of an element y of the codomain is sometimes called, in some contexts, the fiber of y under f. If a function f has an inverse (see below), this inverse is denoted f − 1 . {\displaystyle f^{-1}.} In this case f − 1 ( C ) {\displaystyle f^{-1}(C)} may denote either the image by f − 1 {\displaystyle f^{-1}} or the preimage by f of C. This is not a problem, as these sets are equal. The notation f ( A ) {\displaystyle f(A)} and f − 1 ( C ) {\displaystyle f^{-1}(C)} may be ambiguous in the case of sets that contain some subsets as elements, such as { x , { x } } . {\displaystyle \{x,\{x\}\}.} In this case, some care may be needed, for example, by using square brackets f [ A ] , f − 1 [ C ] {\displaystyle f[A],f^{-1}[C]} for images and preimages of subsets and ordinary parentheses for images and preimages of elements. === Injective, surjective and bijective functions === Let f : X → Y {\displaystyle f:X\to Y} be a function. The function f is injective (or one-to-one, or is an injection) if f(a) ≠ f(b) for every two different elements a and b of X. Equivalently, f is injective if and only if, for every y ∈ Y , {\displaystyle y\in Y,} the preimage f − 1 ( y ) {\displaystyle f^{-1}(y)} contains at most one element. An empty function is always injective. If X is not the empty set, then f is injective if and only if there exists a function g : Y → X {\displaystyle g:Y\to X} such that g ∘ f = id X , {\displaystyle g\circ f=\operatorname {id} _{X},} that is, if f has a left inverse. 
Proof: If f is injective, then to define g, one chooses an element x 0 {\displaystyle x_{0}} in X (which exists as X is supposed to be nonempty), and one defines g by g ( y ) = x {\displaystyle g(y)=x} if y = f ( x ) {\displaystyle y=f(x)} and g ( y ) = x 0 {\displaystyle g(y)=x_{0}} if y ∉ f ( X ) . {\displaystyle y\not \in f(X).} Conversely, if g ∘ f = id X , {\displaystyle g\circ f=\operatorname {id} _{X},} and y = f ( x ) , {\displaystyle y=f(x),} then x = g ( y ) , {\displaystyle x=g(y),} and thus f − 1 ( y ) = { x } . {\displaystyle f^{-1}(y)=\{x\}.} The function f is surjective (or onto, or is a surjection) if its range f ( X ) {\displaystyle f(X)} equals its codomain Y {\displaystyle Y} , that is, if, for each element y {\displaystyle y} of the codomain, there exists some element x {\displaystyle x} of the domain such that f ( x ) = y {\displaystyle f(x)=y} (in other words, the preimage f − 1 ( y ) {\displaystyle f^{-1}(y)} of every y ∈ Y {\displaystyle y\in Y} is nonempty). If, as usual in modern mathematics, the axiom of choice is assumed, then f is surjective if and only if there exists a function g : Y → X {\displaystyle g:Y\to X} such that f ∘ g = id Y , {\displaystyle f\circ g=\operatorname {id} _{Y},} that is, if f has a right inverse. The axiom of choice is needed, because, if f is surjective, one defines g by g ( y ) = x , {\displaystyle g(y)=x,} where x {\displaystyle x} is an arbitrarily chosen element of f − 1 ( y ) . {\displaystyle f^{-1}(y).} The function f is bijective (or is a bijection or a one-to-one correspondence) if it is both injective and surjective. That is, f is bijective if, for every y ∈ Y , {\displaystyle y\in Y,} the preimage f − 1 ( y ) {\displaystyle f^{-1}(y)} contains exactly one element. The function f is bijective if and only if it admits an inverse function, that is, a function g : Y → X {\displaystyle g:Y\to X} such that g ∘ f = id X {\displaystyle g\circ f=\operatorname {id} _{X}} and f ∘ g = id Y . {\displaystyle f\circ g=\operatorname {id} _{Y}.} (Contrary to the case of surjections, this does not require the axiom of choice; the proof is straightforward). Every function f : X → Y {\displaystyle f:X\to Y} may be factorized as the composition i ∘ s {\displaystyle i\circ s} of a surjection followed by an injection, where s is the canonical surjection of X onto f(X) and i is the canonical injection of f(X) into Y. This is the canonical factorization of f. "One-to-one" and "onto" are terms that were more common in the older English language literature; "injective", "surjective", and "bijective" were originally coined as French words in the second quarter of the 20th century by the Bourbaki group and imported into English. As a word of caution, "a one-to-one function" is one that is injective, while a "one-to-one correspondence" refers to a bijective function. Also, the statement "f maps X onto Y" differs from "f maps X into Y", in that the former implies that f is surjective, while the latter makes no assertion about the nature of f. In a complicated argument, the one-letter difference can easily be missed. Due to the confusing nature of this older terminology, these terms have declined in popularity relative to the Bourbakian terms, which also have the advantage of being more symmetrical.
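Over finite sets, the notions from the last few subsections (composition, image and preimage, injectivity and surjectivity, and the left inverse built in the proof above) can all be checked by direct enumeration. A minimal Python sketch, with helper names of my own choosing:

def compose(g, f):
    # g ∘ f : x -> g(f(x)); the function applied first is written on the right.
    return lambda x: g(f(x))

def image(f, A):
    # f(A) = { f(x) : x in A }.
    return {f(x) for x in A}

def preimage(f, B, domain):
    # f^{-1}(B) = { x in domain : f(x) in B }.
    return {x for x in domain if f(x) in B}

def is_injective(f, X):
    return len({f(x) for x in X}) == len(X)

def is_surjective(f, X, Y):
    return image(f, X) == set(Y)

# Non-commutativity of composition, with f(x) = x^2 and g(x) = x + 1:
f, g = (lambda x: x * x), (lambda x: x + 1)
print(compose(g, f)(2), compose(f, g)(2))       # 5 and 9

# Preimage of {4, 9} under the square function on {-5, ..., 5}:
dom = set(range(-5, 6))
print(preimage(f, {4, 9}, dom))                 # {-3, -2, 2, 3}

# A left inverse for an injective function, as in the proof above:
X = {0, 1, 2}
assert is_injective(f, X)
print(is_surjective(f, X, {0, 1, 4}))           # True: the codomain is the image
x0 = next(iter(X))                              # the arbitrary default element x_0
g_left = lambda y, table={f(x): x for x in X}: table.get(y, x0)
assert all(g_left(f(x)) == x for x in X)        # g_left ∘ f = id on X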
=== Restriction and extension === If f : X → Y {\displaystyle f:X\to Y} is a function and S is a subset of X, then the restriction of f {\displaystyle f} to S, denoted f | S {\displaystyle f|_{S}} , is the function from S to Y defined by f | S ( x ) = f ( x ) {\displaystyle f|_{S}(x)=f(x)} for all x in S. Restrictions can be used to define partial inverse functions: if there is a subset S of the domain of a function f {\displaystyle f} such that f | S {\displaystyle f|_{S}} is injective, then the canonical surjection of f | S {\displaystyle f|_{S}} onto its image f | S ( S ) = f ( S ) {\displaystyle f|_{S}(S)=f(S)} is a bijection, and thus has an inverse function from f ( S ) {\displaystyle f(S)} to S. One application is the definition of inverse trigonometric functions. For example, the cosine function is injective when restricted to the interval [0, π]. The image of this restriction is the interval [−1, 1], and thus the restriction has an inverse function from [−1, 1] to [0, π], which is called arccosine and is denoted arccos. Function restriction may also be used for "gluing" functions together. Let X = ⋃ i ∈ I U i {\textstyle X=\bigcup _{i\in I}U_{i}} be the decomposition of X as a union of subsets, and suppose that a function f i : U i → Y {\displaystyle f_{i}:U_{i}\to Y} is defined on each U i {\displaystyle U_{i}} such that for each pair i , j {\displaystyle i,j} of indices, the restrictions of f i {\displaystyle f_{i}} and f j {\displaystyle f_{j}} to U i ∩ U j {\displaystyle U_{i}\cap U_{j}} are equal. Then this defines a unique function f : X → Y {\displaystyle f:X\to Y} such that f | U i = f i {\displaystyle f|_{U_{i}}=f_{i}} for all i. This is the way that functions on manifolds are defined. An extension of a function f is a function g such that f is a restriction of g. A typical use of this concept is the process of analytic continuation, that allows extending functions whose domain is a small part of the complex plane to functions whose domain is almost the whole complex plane. Here is another classical example of a function extension that is encountered when studying homographies of the real line. A homography is a function h ( x ) = a x + b c x + d {\displaystyle h(x)={\frac {ax+b}{cx+d}}} such that ad − bc ≠ 0. Its domain is the set of all real numbers different from − d / c , {\displaystyle -d/c,} and its image is the set of all real numbers different from a / c . {\displaystyle a/c.} If one extends the real line to the projectively extended real line by including ∞, one may extend h to a bijection from the extended real line to itself by setting h ( ∞ ) = a / c {\displaystyle h(\infty )=a/c} and h ( − d / c ) = ∞ {\displaystyle h(-d/c)=\infty } . == In calculus == The idea of function, starting in the 17th century, was fundamental to the new infinitesimal calculus. At that time, only real-valued functions of a real variable were considered, and all functions were assumed to be smooth. But the definition was soon extended to functions of several variables and to functions of a complex variable. In the second half of the 19th century, the mathematically rigorous definition of a function was introduced, and functions with arbitrary domains and codomains were defined. Functions are now used throughout all areas of mathematics. In introductory calculus, when the word function is used without qualification, it means a real-valued function of a single real variable. 
The more general definition of a function is usually introduced to second- or third-year college students in STEM majors, and in their senior year they are introduced to calculus in a larger, more rigorous setting in courses such as real analysis and complex analysis. === Real function === A real function is a real-valued function of a real variable, that is, a function whose codomain is the field of real numbers and whose domain is a set of real numbers that contains an interval. In this section, these functions are simply called functions. The functions that are most commonly considered in mathematics and its applications have some regularity, that is, they are continuous, differentiable, and even analytic. This regularity ensures that these functions can be visualized by their graphs. In this section, all functions are differentiable in some interval. Functions enjoy pointwise operations, that is, if f and g are functions, their sum, difference and product are functions defined by ( f + g ) ( x ) = f ( x ) + g ( x ) ( f − g ) ( x ) = f ( x ) − g ( x ) ( f ⋅ g ) ( x ) = f ( x ) ⋅ g ( x ) . {\displaystyle {\begin{aligned}(f+g)(x)&=f(x)+g(x)\\(f-g)(x)&=f(x)-g(x)\\(f\cdot g)(x)&=f(x)\cdot g(x)\\\end{aligned}}.} The domains of the resulting functions are the intersection of the domains of f and g. The quotient of two functions is defined similarly by f g ( x ) = f ( x ) g ( x ) , {\displaystyle {\frac {f}{g}}(x)={\frac {f(x)}{g(x)}},} but the domain of the resulting function is obtained by removing the zeros of g from the intersection of the domains of f and g. The polynomial functions are defined by polynomials, and their domain is the whole set of real numbers. They include constant functions, linear functions and quadratic functions. Rational functions are quotients of two polynomial functions, and their domain is the real numbers with a finite number of them removed to avoid division by zero. The simplest rational function is the function x ↦ 1 x , {\displaystyle x\mapsto {\frac {1}{x}},} whose graph is a hyperbola, and whose domain is the whole real line except for 0. The derivative of a real differentiable function is a real function. An antiderivative of a continuous real function is a real function that has the original function as a derivative. For example, the function x ↦ 1 x {\textstyle x\mapsto {\frac {1}{x}}} is continuous, and even differentiable, on the positive real numbers. Thus one antiderivative, which takes the value zero for x = 1, is a differentiable function called the natural logarithm. A real function f is monotonic in an interval if the sign of f ( x ) − f ( y ) x − y {\displaystyle {\frac {f(x)-f(y)}{x-y}}} does not depend on the choice of x and y in the interval. If the function is differentiable in the interval, it is monotonic if the sign of the derivative is constant in the interval. If a real function f is monotonic in an interval I, it has an inverse function, which is a real function with domain f(I) and image I. This is how the inverse trigonometric functions are defined, by restricting the trigonometric functions to intervals on which they are monotonic. Another example: the natural logarithm is monotonic on the positive real numbers, and its image is the whole real line; therefore it has an inverse function that is a bijection between the real numbers and the positive real numbers. This inverse is the exponential function.
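A small sketch of the pointwise operations and the domain bookkeeping described above, representing a partial real function as a pair (formula, domain predicate); this representation is my own choice for the illustration, not a standard library feature.

import math

# A partial real function as a pair: (callable, domain predicate).
log = (math.log, lambda x: x > 0)
recip = (lambda x: 1 / x, lambda x: x != 0)

def add(f, g):
    # Pointwise sum; the domain is the intersection of the two domains.
    (ff, fd), (gg, gd) = f, g
    return (lambda x: ff(x) + gg(x), lambda x: fd(x) and gd(x))

def quotient(f, g):
    # Pointwise quotient; the zeros of g are removed from the intersection.
    (ff, fd), (gg, gd) = f, g
    return (lambda x: ff(x) / gg(x), lambda x: fd(x) and gd(x) and gg(x) != 0)

h, h_dom = add(log, recip)
print(h_dom(2.0), h(2.0))     # True, log(2) + 0.5
print(h_dom(-1.0))            # False: -1 is outside the domain of log

q, q_dom = quotient(log, recip)
print(q_dom(2.0), q(2.0))     # True, log(2) / (1/2) = 2 log(2)

# The natural logarithm and the exponential function are mutually inverse:
for x in (0.5, 1.0, 7.3):
    assert math.isclose(math.exp(math.log(x)), x)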
Many other real functions are defined either by the implicit function theorem (the inverse function is a particular instance) or as solutions of differential equations. For example, the sine and the cosine functions are the solutions of the linear differential equation y ″ + y = 0 {\displaystyle y''+y=0} such that sin 0 = 0, cos 0 = 1, sin′(0) = 1, cos′(0) = 0. {\displaystyle \sin 0=0,\quad \cos 0=1,\quad \sin '(0)=1,\quad \cos '(0)=0.} === Vector-valued function === When the elements of the codomain of a function are vectors, the function is said to be a vector-valued function. These functions are particularly useful in applications, for example modeling physical properties. For example, the function that associates to each point of a fluid its velocity vector is a vector-valued function. Some vector-valued functions are defined on a subset of R n {\displaystyle \mathbb {R} ^{n}} or other spaces that share geometric or topological properties of R n {\displaystyle \mathbb {R} ^{n}} , such as manifolds. These vector-valued functions are given the name vector fields. == Function space == In mathematical analysis, and more specifically in functional analysis, a function space is a set of scalar-valued or vector-valued functions, which share a specific property and form a topological vector space. For example, the real smooth functions with a compact support (that is, they are zero outside some compact set) form a function space that is at the basis of the theory of distributions. Function spaces play a fundamental role in advanced mathematical analysis, by allowing the use of their algebraic and topological properties for studying properties of functions. For example, all theorems of existence and uniqueness of solutions of ordinary or partial differential equations result from the study of function spaces. == Multi-valued functions == Several methods for specifying functions of real or complex variables start from a local definition of the function at a point or on a neighbourhood of a point, and then extend the function by continuity to a much larger domain. Frequently, for a starting point x 0 , {\displaystyle x_{0},} there are several possible starting values for the function. For example, in defining the square root as the inverse function of the square function, for any positive real number x 0 , {\displaystyle x_{0},} there are two choices for the value of the square root, one of which is positive and denoted x 0 , {\displaystyle {\sqrt {x_{0}}},} and another which is negative and denoted − x 0 . {\displaystyle -{\sqrt {x_{0}}}.} These choices define two continuous functions, both having the nonnegative real numbers as a domain, and having either the nonnegative or the nonpositive real numbers as images. When looking at the graphs of these functions, one can see that, together, they form a single smooth curve. It is therefore often useful to consider these two square root functions as a single function that has two values for positive x, one value for 0 and no value for negative x. In the preceding example, one choice, the positive square root, is more natural than the other. This is not the case in general. For example, consider the implicit function that maps y to a root x of x 3 − 3 x − y = 0 {\displaystyle x^{3}-3x-y=0} . For y = 0, one may choose either 0 , 3 , or − 3 {\displaystyle 0,{\sqrt {3}},{\text{ or }}-{\sqrt {3}}} for x.
By the implicit function theorem, each choice defines a function; for the first one, the (maximal) domain is the interval [−2, 2] and the image is [−1, 1]; for the second one, the domain is [−2, ∞) and the image is [1, ∞); for the last one, the domain is (−∞, 2] and the image is (−∞, −1]. As the three graphs together form a smooth curve, and there is no reason for preferring one choice, these three functions are often considered as a single multi-valued function of y that has three values for −2 < y < 2, and only one value for y ≤ −2 and for y ≥ 2. The usefulness of the concept of multi-valued functions is clearer when considering complex functions, typically analytic functions. The domain to which a complex function may be extended by analytic continuation generally consists of almost the whole complex plane. However, when extending the domain through two different paths, one often gets different values. For example, when extending the domain of the square root function along a path of complex numbers with positive imaginary parts, one gets i for the square root of −1; while, when extending through complex numbers with negative imaginary parts, one gets −i. There are generally two ways of solving the problem. One may define a function that is not continuous along some curve, called a branch cut. Such a function is called the principal value of the function. The other way is to consider that one has a multi-valued function, which is analytic everywhere except for isolated singularities, but whose value may "jump" if one follows a closed loop around a singularity. This jump is called the monodromy. == In the foundations of mathematics == The definition of a function that is given in this article requires the concept of set, since the domain and the codomain of a function must be sets. This is not a problem in usual mathematics, as it is generally not difficult to consider only functions whose domain and codomain are sets, which are well defined, even if the domain is not explicitly defined. However, it is sometimes useful to consider more general functions. For example, the singleton set may be considered as a function x ↦ { x } . {\displaystyle x\mapsto \{x\}.} Its domain would include all sets, and therefore would not be a set. In usual mathematics, one avoids this kind of problem by specifying a domain, which means that one has many singleton functions. However, when establishing foundations of mathematics, one may have to use functions whose domain, codomain or both are not specified, and some authors, often logicians, give precise definitions for these weakly specified functions. These generalized functions may be critical in the development of a formalization of the foundations of mathematics. For example, Von Neumann–Bernays–Gödel set theory is an extension of set theory in which the collection of all sets is a class. This theory includes the replacement axiom, which may be stated as: If X is a set and F is a function, then F[X] is a set. In alternative formulations of the foundations of mathematics using type theory rather than set theory, functions are taken as primitive notions rather than defined from other kinds of object. They are the inhabitants of function types, and may be constructed using expressions in the lambda calculus. == In computer science == In computer programming, a function is, in general, a subroutine which implements the abstract concept of function. That is, it is a program unit that produces an output for each input.
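As a tiny illustration of this (my own, in Python): a subroutine implementing the mathematical function that sends n to the n-th Fibonacci number computes one output for each input and does nothing else.

def fib(n: int) -> int:
    # The function n -> F_n, computed iteratively; no side effects.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib(n) for n in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]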
Functional programming is the programming paradigm consisting of building programs by using only subroutines that behave like mathematical functions, meaning that they have no side effects and depend only on their arguments: they are referentially transparent. For example, if_then_else is a function that takes three (nullary) functions as arguments, and, depending on the value of the first argument (true or false), returns the value of either the second or the third argument (a small sketch in Python is given at the end of this section). An important advantage of functional programming is that it makes program proofs easier, as it is based on a well-founded theory, the lambda calculus (see below). However, side effects are generally necessary for practical programs, which perform input/output. There is a class of purely functional languages, such as Haskell, which encapsulate the possibility of side effects in the type of a function. Others, such as the ML family, simply allow side effects. In many programming languages, every subroutine is called a function, even when there is no output but only side effects, and when the functionality consists simply of modifying some data in the computer memory. Outside the context of programming languages, "function" has the usual mathematical meaning in computer science. In this area, a property of major interest is the computability of a function. To give a precise meaning to this concept, and to the related concept of algorithm, several models of computation have been introduced, the oldest being general recursive functions, lambda calculus, and the Turing machine. The fundamental theorem of computability theory is that these three models of computation define the same set of computable functions, and that all the other models of computation that have ever been proposed define the same set of computable functions or a smaller one. The Church–Turing thesis is the claim that every philosophically acceptable definition of a computable function also defines the same functions. General recursive functions are partial functions from integers to integers that can be defined from constant functions, successor, and projection functions via the operators composition, primitive recursion, and minimization. Although defined only for functions from integers to integers, they can model any computable function as a consequence of the following properties: a computation is the manipulation of finite sequences of symbols (digits of numbers, formulas, etc.); every sequence of symbols may be coded as a sequence of bits; and a bit sequence can be interpreted as the binary representation of an integer. Lambda calculus is a theory that defines computable functions without using set theory, and is the theoretical background of functional programming. It consists of terms that are either variables, function definitions (𝜆-terms), or applications of functions to terms. Terms are manipulated by interpreting the axioms of the calculus (the α-equivalence, the β-reduction, and the η-conversion) as rewriting rules, which can be used for computation. In its original form, lambda calculus does not include the concepts of domain and codomain of a function. Roughly speaking, they have been introduced in the theory under the name of type in typed lambda calculus. Most kinds of typed lambda calculi can define fewer functions than untyped lambda calculus.
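Here is the promised sketch of if_then_else in Python (an illustration of the idea, not a standard library facility): the branches are passed as nullary functions (thunks), so only the selected branch is ever evaluated.

def if_then_else(cond, then_branch, else_branch):
    # All three arguments are nullary functions; exactly one branch runs.
    return then_branch() if cond() else else_branch()

x = 5
result = if_then_else(
    lambda: x > 0,
    lambda: "positive",
    lambda: "non-positive",
)
print(result)   # "positive"; the else branch was never evaluated

Passing thunks keeps the construct a pure function of its three arguments: nothing is computed except the selected branch, so no side effect of an unselected branch can ever occur.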
== External links == The Wolfram Functions – website giving formulae and visualizations of many mathematical functions NIST Digital Library of Mathematical Functions
Wikipedia/Functional_notation
Algebraic analysis is an area of mathematics that deals with systems of linear partial differential equations by using sheaf theory and complex analysis to study properties and generalizations of functions such as hyperfunctions and microfunctions. Semantically, it is the application of algebraic operations on analytic quantities. As a research programme, it was started by the Japanese mathematician Mikio Sato in 1959. This can be seen as an algebraic geometrization of analysis. According to Schapira, parts of Sato's work can be regarded as a manifestation of Grothendieck's style of mathematics within the realm of classical analysis. It derives its meaning from the fact that the differential operator is right-invertible in several function spaces. It helps in the simplification of the proofs due to an algebraic description of the problem considered. == Microfunction == Let M be a real-analytic manifold of dimension n, and let X be its complexification. The sheaf of microlocal functions on M is given as H n ( μ M ( O X ) ⊗ o r M / X ) {\displaystyle {\mathcal {H}}^{n}(\mu _{M}({\mathcal {O}}_{X})\otimes {\mathcal {or}}_{M/X})} where μ M {\displaystyle \mu _{M}} denotes the microlocalization functor, and o r M / X {\displaystyle {\mathcal {or}}_{M/X}} is the relative orientation sheaf. A microfunction can be used to define a Sato hyperfunction. By definition, the sheaf of Sato's hyperfunctions on M is the restriction of the sheaf of microfunctions to M, in parallel to the fact that the sheaf of real-analytic functions on M is the restriction of the sheaf of holomorphic functions on X to M. == See also == Hyperfunction D-module Microlocal analysis Generalized function Edge-of-the-wedge theorem FBI transform Localization of a ring Vanishing cycle Gauss–Manin connection Differential algebra Perverse sheaf Mikio Sato Masaki Kashiwara Lars Hörmander == Further reading == Masaki Kashiwara and Algebraic Analysis Foundations of algebraic analysis book review
Wikipedia/Microfunction
Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers. It is helpful in many branches of mathematics, including algebraic geometry, number theory, analytic combinatorics, and applied mathematics, as well as in physics, including the branches of hydrodynamics, thermodynamics, quantum mechanics, and twistor theory. By extension, use of complex analysis also has applications in engineering fields such as nuclear, aerospace, mechanical and electrical engineering. As a differentiable function of a complex variable is equal to the sum function given by its Taylor series (that is, it is analytic), complex analysis is particularly concerned with analytic functions of a complex variable, that is, holomorphic functions. The concept can be extended to functions of several complex variables. Complex analysis is contrasted with real analysis, which deals with the study of real numbers and functions of a real variable. == History == Complex analysis is one of the classical branches in mathematics, with roots in the 18th century and just prior. Important mathematicians associated with complex numbers include Euler, Gauss, Riemann, Cauchy, Weierstrass, and many more in the 20th century. Complex analysis, in particular the theory of conformal mappings, has many physical applications and is also used throughout analytic number theory. In modern times, it has become very popular through a new boost from complex dynamics and the pictures of fractals produced by iterating holomorphic functions. Another important application of complex analysis is in string theory, which examines conformal invariants in quantum field theory. == Complex functions == A complex function is a function from complex numbers to complex numbers. In other words, it is a function that has a (not necessarily proper) subset of the complex numbers as a domain and the complex numbers as a codomain. Complex functions are generally assumed to have a domain that contains a nonempty open subset of the complex plane. For any complex function, the values z {\displaystyle z} from the domain and their images f ( z ) {\displaystyle f(z)} in the range may be separated into real and imaginary parts: z = x + i y and f ( z ) = f ( x + i y ) = u ( x , y ) + i v ( x , y ) , {\displaystyle z=x+iy\quad {\text{ and }}\quad f(z)=f(x+iy)=u(x,y)+iv(x,y),} where x , y , u ( x , y ) , v ( x , y ) {\displaystyle x,y,u(x,y),v(x,y)} are all real-valued. In other words, a complex function f : C → C {\displaystyle f:\mathbb {C} \to \mathbb {C} } may be decomposed into u : R 2 → R {\displaystyle u:\mathbb {R} ^{2}\to \mathbb {R} \quad } and v : R 2 → R , {\displaystyle \quad v:\mathbb {R} ^{2}\to \mathbb {R} ,} i.e., into two real-valued functions ( u {\displaystyle u} , v {\displaystyle v} ) of two real variables ( x {\displaystyle x} , y {\displaystyle y} ). Similarly, any complex-valued function f on an arbitrary set X can be considered as (is isomorphic to, and therefore, in that sense, identified with) an ordered pair of two real-valued functions: (Re f, Im f) or, alternatively, as a vector-valued function from X into R 2 . {\displaystyle \mathbb {R} ^{2}.} Some properties of complex-valued functions (such as continuity) are nothing more than the corresponding properties of vector-valued functions of two real variables.
Other concepts of complex analysis, such as differentiability, are direct generalizations of the similar concepts for real functions, but may have very different properties. In particular, every differentiable complex function is analytic (see next section), and two differentiable functions that are equal in a neighborhood of a point are equal on the intersection of their domain (if the domains are connected). The latter property is the basis of the principle of analytic continuation which allows extending every real analytic function in a unique way for getting a complex analytic function whose domain is the whole complex plane with a finite number of curve arcs removed. Many basic and special complex functions are defined in this way, including the complex exponential function, complex logarithm functions, and trigonometric functions. == Holomorphic functions == Complex functions that are differentiable at every point of an open subset Ω {\displaystyle \Omega } of the complex plane are said to be holomorphic on Ω {\displaystyle \Omega } . In the context of complex analysis, the derivative of f {\displaystyle f} at z 0 {\displaystyle z_{0}} is defined to be f ′ ( z 0 ) = lim z → z 0 f ( z ) − f ( z 0 ) z − z 0 . {\displaystyle f'(z_{0})=\lim _{z\to z_{0}}{\frac {f(z)-f(z_{0})}{z-z_{0}}}.} Superficially, this definition is formally analogous to that of the derivative of a real function. However, complex derivatives and differentiable functions behave in significantly different ways compared to their real counterparts. In particular, for this limit to exist, the value of the difference quotient must approach the same complex number, regardless of the manner in which we approach z 0 {\displaystyle z_{0}} in the complex plane. Consequently, complex differentiability has much stronger implications than real differentiability. For instance, holomorphic functions are infinitely differentiable, whereas the existence of the nth derivative need not imply the existence of the (n + 1)th derivative for real functions. Furthermore, all holomorphic functions satisfy the stronger condition of analyticity, meaning that the function is, at every point in its domain, locally given by a convergent power series. In essence, this means that functions holomorphic on Ω {\displaystyle \Omega } can be approximated arbitrarily well by polynomials in some neighborhood of every point in Ω {\displaystyle \Omega } . This stands in sharp contrast to differentiable real functions; there are infinitely differentiable real functions that are nowhere analytic; see Non-analytic smooth function § A smooth function which is nowhere real analytic. Most elementary functions, including the exponential function, the trigonometric functions, and all polynomial functions, extended appropriately to complex arguments as functions C → C {\displaystyle \mathbb {C} \to \mathbb {C} } , are holomorphic over the entire complex plane, making them entire functions, while rational functions p / q {\displaystyle p/q} , where p and q are polynomials, are holomorphic on domains that exclude points where q is zero. Such functions that are holomorphic everywhere except a set of isolated points are known as meromorphic functions. On the other hand, the functions z ↦ ℜ ( z ) {\displaystyle z\mapsto \Re (z)} , z ↦ | z | {\displaystyle z\mapsto |z|} , and z ↦ z ¯ {\displaystyle z\mapsto {\bar {z}}} are not holomorphic anywhere on the complex plane, as can be shown by their failure to satisfy the Cauchy–Riemann conditions (see below). 
An important property of holomorphic functions is the relationship between the partial derivatives of their real and imaginary components, known as the Cauchy–Riemann conditions. If f : C → C {\displaystyle f:\mathbb {C} \to \mathbb {C} } , defined by f ( z ) = f ( x + i y ) = u ( x , y ) + i v ( x , y ) {\displaystyle f(z)=f(x+iy)=u(x,y)+iv(x,y)} , where x , y , u ( x , y ) , v ( x , y ) ∈ R {\displaystyle x,y,u(x,y),v(x,y)\in \mathbb {R} } , is holomorphic on a region Ω {\displaystyle \Omega } , then for all z 0 ∈ Ω {\displaystyle z_{0}\in \Omega } , ∂ f ∂ z ¯ ( z 0 ) = 0 , where ∂ ∂ z ¯ := 1 2 ( ∂ ∂ x + i ∂ ∂ y ) . {\displaystyle {\frac {\partial f}{\partial {\bar {z}}}}(z_{0})=0,\ {\text{where }}{\frac {\partial }{\partial {\bar {z}}}}\mathrel {:=} {\frac {1}{2}}\left({\frac {\partial }{\partial x}}+i{\frac {\partial }{\partial y}}\right).} In terms of the real and imaginary parts of the function, u and v, this is equivalent to the pair of equations u x = v y {\displaystyle u_{x}=v_{y}} and u y = − v x {\displaystyle u_{y}=-v_{x}} , where the subscripts indicate partial differentiation. However, the Cauchy–Riemann conditions do not characterize holomorphic functions, without additional continuity conditions (see Looman–Menchoff theorem). Holomorphic functions exhibit some remarkable features. For instance, Picard's theorem asserts that the range of an entire function can take only three possible forms: C {\displaystyle \mathbb {C} } , C ∖ { z 0 } {\displaystyle \mathbb {C} \setminus \{z_{0}\}} , or { z 0 } {\displaystyle \{z_{0}\}} for some z 0 ∈ C {\displaystyle z_{0}\in \mathbb {C} } . In other words, if two distinct complex numbers z {\displaystyle z} and w {\displaystyle w} are not in the range of an entire function f {\displaystyle f} , then f {\displaystyle f} is a constant function. Moreover, a holomorphic function on a connected open set is determined by its restriction to any nonempty open subset. == Conformal map == == Major results == One of the central tools in complex analysis is the line integral. The line integral around a closed path of a function that is holomorphic everywhere inside the area bounded by the closed path is always zero, as is stated by the Cauchy integral theorem. The values of such a holomorphic function inside a disk can be computed by a path integral on the disk's boundary (as shown in Cauchy's integral formula). Path integrals in the complex plane are often used to determine complicated real integrals, and here the theory of residues among others is applicable (see methods of contour integration). A "pole" (or isolated singularity) of a function is a point where the function's value becomes unbounded, or "blows up". If a function has such a pole, then one can compute the function's residue there, which can be used to compute path integrals involving the function; this is the content of the powerful residue theorem. The remarkable behavior of holomorphic functions near essential singularities is described by Picard's theorem. Functions that have only poles but no essential singularities are called meromorphic. Laurent series are the complex-valued equivalent to Taylor series, but can be used to study the behavior of functions near singularities through infinite sums of more well understood functions, such as polynomials. A bounded function that is holomorphic in the entire complex plane must be constant; this is Liouville's theorem. 
It can be used to provide a natural and short proof for the fundamental theorem of algebra which states that the field of complex numbers is algebraically closed. If a function is holomorphic throughout a connected domain then its values are fully determined by its values on any smaller subdomain. The function on the larger domain is said to be analytically continued from its values on the smaller domain. This allows the extension of the definition of functions, such as the Riemann zeta function, which are initially defined in terms of infinite sums that converge only on limited domains to almost the entire complex plane. Sometimes, as in the case of the natural logarithm, it is impossible to analytically continue a holomorphic function to a non-simply connected domain in the complex plane but it is possible to extend it to a holomorphic function on a closely related surface known as a Riemann surface. All this refers to complex analysis in one variable. There is also a very rich theory of complex analysis in more than one complex dimension in which the analytic properties such as power series expansion carry over whereas most of the geometric properties of holomorphic functions in one complex dimension (such as conformality) do not carry over. The Riemann mapping theorem about the conformal relationship of certain domains in the complex plane, which may be the most important result in the one-dimensional theory, fails dramatically in higher dimensions. A major application of certain complex spaces is in quantum mechanics as wave functions. == See also == Complex geometry Hypercomplex analysis Vector calculus List of complex analysis topics Monodromy theorem Riemann–Roch theorem Runge's theorem == References == == Sources == Ablowitz, M. J. & A. S. Fokas, Complex Variables: Introduction and Applications (Cambridge, 2003). Ahlfors, L., Complex Analysis (McGraw-Hill, 1953). Cartan, H., Théorie élémentaire des fonctions analytiques d'une ou plusieurs variables complexes. (Hermann, 1961). English translation, Elementary Theory of Analytic Functions of One or Several Complex Variables. (Addison-Wesley, 1963). Carathéodory, C., Funktionentheorie. (Birkhäuser, 1950). English translation, Theory of Functions of a Complex Variable (Chelsea, 1954). [2 volumes.] Carrier, G. F., M. Krook, & C. E. Pearson, Functions of a Complex Variable: Theory and Technique. (McGraw-Hill, 1966). Conway, J. B., Functions of One Complex Variable. (Springer, 1973). Fisher, S., Complex Variables. (Wadsworth & Brooks/Cole, 1990). Forsyth, A., Theory of Functions of a Complex Variable (Cambridge, 1893). Freitag, E. & R. Busam, Funktionentheorie. (Springer, 1995). English translation, Complex Analysis. (Springer, 2005). Goursat, E., Cours d'analyse mathématique, tome 2. (Gauthier-Villars, 1905). English translation, A course of mathematical analysis, vol. 2, part 1: Functions of a complex variable. (Ginn, 1916). Henrici, P., Applied and Computational Complex Analysis (Wiley). [Three volumes: 1974, 1977, 1986.] Kreyszig, E., Advanced Engineering Mathematics. (Wiley, 1962). Lavrentyev, M. & B. Shabat, Методы теории функций комплексного переменного. (Methods of the Theory of Functions of a Complex Variable). (1951, in Russian). Markushevich, A. I., Theory of Functions of a Complex Variable, (Prentice-Hall, 1965). [Three volumes.] Marsden & Hoffman, Basic Complex Analysis. (Freeman, 1973). Needham, T., Visual Complex Analysis. (Oxford, 1997). http://usf.usfca.edu/vca/ Remmert, R., Theory of Complex Functions. (Springer, 1990). 
Rudin, W., Real and Complex Analysis. (McGraw-Hill, 1966). Shaw, W. T., Complex Analysis with Mathematica (Cambridge, 2006). Stein, E. & R. Shakarchi, Complex Analysis. (Princeton, 2003). Sveshnikov, A. G. & A. N. Tikhonov, Теория функций комплексной переменной. (Nauka, 1967). English translation, The Theory Of Functions Of A Complex Variable (MIR, 1978). Titchmarsh, E. C., The Theory of Functions. (Oxford, 1932). Wegert, E., Visual Complex Functions. (Birkhäuser, 2012). Whittaker, E. T. & G. N. Watson, A Course of Modern Analysis. (Cambridge, 1902). 3rd ed. (1920) == External links == Wolfram Research's MathWorld Complex Analysis Page
Wikipedia/Functions_of_a_complex_variable
In the mathematical field of real analysis, a simple function is a real (or complex)-valued function over a subset of the real line, similar to a step function. Simple functions are sufficiently "nice" that using them makes mathematical reasoning, theory, and proof easier. For example, simple functions attain only a finite number of values. Some authors also require simple functions to be measurable, as used in practice. A basic example of a simple function is the floor function over the half-open interval [1, 9), whose only values are {1, 2, 3, 4, 5, 6, 7, 8}. A more advanced example is the Dirichlet function over the real line, which takes the value 1 if x is rational and 0 otherwise. (Thus the "simple" of "simple function" has a technical meaning somewhat at odds with common language.) All step functions are simple. Simple functions are used as a first stage in the development of theories of integration, such as the Lebesgue integral, because it is easy to define integration for a simple function and also it is straightforward to approximate more general functions by sequences of simple functions. == Definition == Formally, a simple function is a finite linear combination of indicator functions of measurable sets. More precisely, let (X, Σ) be a measurable space. Let A1, ..., An ∈ Σ be a sequence of disjoint measurable sets, and let a1, ..., an be a sequence of real or complex numbers. A simple function is a function f : X → C {\displaystyle f\colon X\to \mathbb {C} } of the form f ( x ) = ∑ k = 1 n a k 1 A k ( x ) , {\displaystyle f(x)=\sum _{k=1}^{n}a_{k}{\mathbf {1} }_{A_{k}}(x),} where 1 A {\displaystyle {\mathbf {1} }_{A}} is the indicator function of the set A. == Properties of simple functions == The sum, difference, and product of two simple functions are again simple functions, and multiplication by a constant keeps a simple function simple; hence it follows that the collection of all simple functions on a given measurable space forms a commutative algebra over C {\displaystyle \mathbb {C} } . == Integration of simple functions == If a measure μ {\displaystyle \mu } is defined on the space ( X , Σ ) {\displaystyle (X,\Sigma )} , the integral of a simple function f : X → R {\displaystyle f\colon X\to \mathbb {R} } with respect to μ {\displaystyle \mu } is defined to be ∫ X f d μ = ∑ k = 1 n a k μ ( A k ) , {\displaystyle \int _{X}fd\mu =\sum _{k=1}^{n}a_{k}\mu (A_{k}),} if all summands are finite. == Relation to Lebesgue integration == The above integral of simple functions can be extended to a more general class of functions, which is how the Lebesgue integral is defined. This extension is based on the following fact. Theorem. Any non-negative measurable function f : X → R + {\displaystyle f\colon X\to \mathbb {R} ^{+}} is the pointwise limit of a monotonic increasing sequence of non-negative simple functions. It is implied in the statement that the sigma-algebra in the co-domain R + {\displaystyle \mathbb {R} ^{+}} is the restriction of the Borel σ-algebra B ( R ) {\displaystyle {\mathfrak {B}}(\mathbb {R} )} to R + {\displaystyle \mathbb {R} ^{+}} . The proof proceeds as follows. Let f {\displaystyle f} be a non-negative measurable function defined over the measure space ( X , Σ , μ ) {\displaystyle (X,\Sigma ,\mu )} . For each n ∈ N {\displaystyle n\in \mathbb {N} } , subdivide the co-domain of f {\displaystyle f} into 2 2 n + 1 {\displaystyle 2^{2n}+1} intervals, 2 2 n {\displaystyle 2^{2n}} of which have length 2 − n {\displaystyle 2^{-n}} .
That is, for each n {\displaystyle n} , define I n , k = [ k − 1 2 n , k 2 n ) {\displaystyle I_{n,k}=\left[{\frac {k-1}{2^{n}}},{\frac {k}{2^{n}}}\right)} for k = 1 , 2 , … , 2 2 n {\displaystyle k=1,2,\ldots ,2^{2n}} , and I n , 2 2 n + 1 = [ 2 n , ∞ ) {\displaystyle I_{n,2^{2n}+1}=[2^{n},\infty )} , which are disjoint and cover the non-negative real line ( R + ⊆ ∪ k I n , k , ∀ n ∈ N {\displaystyle \mathbb {R} ^{+}\subseteq \cup _{k}I_{n,k},\forall n\in \mathbb {N} } ). Now define the sets A n , k = f − 1 ( I n , k ) {\displaystyle A_{n,k}=f^{-1}(I_{n,k})\,} for k = 1 , 2 , … , 2 2 n + 1 , {\displaystyle k=1,2,\ldots ,2^{2n}+1,} which are measurable ( A n , k ∈ Σ {\displaystyle A_{n,k}\in \Sigma } ) because f {\displaystyle f} is assumed to be measurable. Then the increasing sequence of simple functions f n = ∑ k = 1 2 2 n + 1 k − 1 2 n 1 A n , k {\displaystyle f_{n}=\sum _{k=1}^{2^{2n}+1}{\frac {k-1}{2^{n}}}{\mathbf {1} }_{A_{n,k}}} converges pointwise to f {\displaystyle f} as n → ∞ {\displaystyle n\to \infty } . Note that, when f {\displaystyle f} is bounded, the convergence is uniform. == See also == Bochner measurable function == References == J. F. C. Kingman, S. J. Taylor. Introduction to Measure and Probability, 1966, Cambridge. S. Lang. Real and Functional Analysis, 1993, Springer-Verlag. W. Rudin. Real and Complex Analysis, 1987, McGraw-Hill. H. L. Royden. Real Analysis, 1968, Collier Macmillan.
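To make the construction in the proof above concrete, here is a small Python sketch (the function names are my own). Each approximant f_n rounds f(x) down to the dyadic grid of step 2^−n and caps values at 2^n, exactly as in the definition of I_{n,k}; the printed values increase monotonically toward f(x).

def simple_approx(f, n: int):
    # Return the simple function f_n = sum over k of (k-1)/2^n * 1_{A_{n,k}}.
    def fn(x):
        y = f(x)
        if y >= 2 ** n:
            return float(2 ** n)       # the unbounded top interval [2^n, inf)
        k_minus_1 = int(y * 2 ** n)    # index of the dyadic interval holding y
        return k_minus_1 / 2 ** n
    return fn

f = lambda x: x * x                    # a non-negative measurable function
for n in (1, 2, 4, 8):
    fn = simple_approx(f, n)
    print(n, fn(1.3))                  # 1.5, 1.5, 1.6875, 1.6875, ... -> 1.69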
Wikipedia/Simple_function
In mathematics, the inverse function of a function f (also called the inverse of f) is a function that undoes the operation of f. The inverse of f exists if and only if f is bijective, and if it exists, is denoted by f − 1 . {\displaystyle f^{-1}.} For a function f : X → Y {\displaystyle f\colon X\to Y} , its inverse f − 1 : Y → X {\displaystyle f^{-1}\colon Y\to X} admits an explicit description: it sends each element y ∈ Y {\displaystyle y\in Y} to the unique element x ∈ X {\displaystyle x\in X} such that f(x) = y. As an example, consider the real-valued function of a real variable given by f(x) = 5x − 7. One can think of f as the function which multiplies its input by 5 then subtracts 7 from the result. To undo this, one adds 7 to the input, then divides the result by 5. Therefore, the inverse of f is the function f − 1 : R → R {\displaystyle f^{-1}\colon \mathbb {R} \to \mathbb {R} } defined by f − 1 ( y ) = y + 7 5 . {\displaystyle f^{-1}(y)={\frac {y+7}{5}}.} == Definitions == Let f be a function whose domain is the set X, and whose codomain is the set Y. Then f is invertible if there exists a function g from Y to X such that g ( f ( x ) ) = x {\displaystyle g(f(x))=x} for all x ∈ X {\displaystyle x\in X} and f ( g ( y ) ) = y {\displaystyle f(g(y))=y} for all y ∈ Y {\displaystyle y\in Y} . If f is invertible, then there is exactly one function g satisfying this property. The function g is called the inverse of f, and is usually denoted as f −1, a notation introduced by John Frederick William Herschel in 1813. The function f is invertible if and only if it is bijective. This is because the condition g ( f ( x ) ) = x {\displaystyle g(f(x))=x} for all x ∈ X {\displaystyle x\in X} implies that f is injective, and the condition f ( g ( y ) ) = y {\displaystyle f(g(y))=y} for all y ∈ Y {\displaystyle y\in Y} implies that f is surjective. The inverse function f −1 to f can be explicitly described as the function f − 1 ( y ) = ( the unique element x ∈ X such that f ( x ) = y ) {\displaystyle f^{-1}(y)=({\text{the unique element }}x\in X{\text{ such that }}f(x)=y)} . === Inverses and composition === Recall that if f is an invertible function with domain X and codomain Y, then f − 1 ( f ( x ) ) = x {\displaystyle f^{-1}\left(f(x)\right)=x} , for every x ∈ X {\displaystyle x\in X} and f ( f − 1 ( y ) ) = y {\displaystyle f\left(f^{-1}(y)\right)=y} for every y ∈ Y {\displaystyle y\in Y} . Using the composition of functions, this statement can be rewritten to the following equations between functions: f − 1 ∘ f = id X {\displaystyle f^{-1}\circ f=\operatorname {id} _{X}} and f ∘ f − 1 = id Y , {\displaystyle f\circ f^{-1}=\operatorname {id} _{Y},} where idX is the identity function on the set X; that is, the function that leaves its argument unchanged. In category theory, this statement is used as the definition of an inverse morphism. Considering function composition helps to understand the notation f −1. Repeatedly composing a function f: X→X with itself is called iteration. If f is applied n times, starting with the value x, then this is written as f n(x); so f 2(x) = f (f (x)), etc. Since f −1(f (x)) = x, composing f −1 and f n yields f n−1, "undoing" the effect of one application of f. === Notation === While the notation f −1(x) might be misunderstood, (f(x))−1 certainly denotes the multiplicative inverse of f(x) and has nothing to do with the inverse function of f. 
The notation f ⟨ − 1 ⟩ {\displaystyle f^{\langle -1\rangle }} might be used for the inverse function to avoid ambiguity with the multiplicative inverse. In keeping with the general notation, some English authors use expressions like sin−1(x) to denote the inverse of the sine function applied to x (actually a partial inverse; see below). Other authors feel that this may be confused with the notation for the multiplicative inverse of sin (x), which can be denoted as (sin (x))−1. To avoid any confusion, an inverse trigonometric function is often indicated by the prefix "arc" (for Latin arcus). For instance, the inverse of the sine function is typically called the arcsine function, written as arcsin(x). Similarly, the inverse of a hyperbolic function is indicated by the prefix "ar" (for Latin ārea). For instance, the inverse of the hyperbolic sine function is typically written as arsinh(x). The expressions like sin−1(x) can still be useful to distinguish the multivalued inverse from the partial inverse: sin − 1 ⁡ ( x ) = { ( − 1 ) n arcsin ⁡ ( x ) + π n : n ∈ Z } {\displaystyle \sin ^{-1}(x)=\{(-1)^{n}\arcsin(x)+\pi n:n\in \mathbb {Z} \}} . Other inverse special functions are sometimes prefixed with the prefix "inv", if the ambiguity of the f −1 notation should be avoided. == Examples == === Squaring and square root functions === The function f: R → [0,∞) given by f(x) = x2 is not injective because ( − x ) 2 = x 2 {\displaystyle (-x)^{2}=x^{2}} for all x ∈ R {\displaystyle x\in \mathbb {R} } . Therefore, f is not invertible. If the domain of the function is restricted to the nonnegative reals, that is, we take the function f : [ 0 , ∞ ) → [ 0 , ∞ ) ; x ↦ x 2 {\displaystyle f\colon [0,\infty )\to [0,\infty );\ x\mapsto x^{2}} with the same rule as before, then the function is bijective and so, invertible. The inverse function here is called the (positive) square root function and is denoted by x ↦ x {\displaystyle x\mapsto {\sqrt {x}}} . === Standard inverse functions === The following table shows several standard functions and their inverses: === Formula for the inverse === Many functions given by algebraic formulas possess a formula for their inverse. This is because the inverse f − 1 {\displaystyle f^{-1}} of an invertible function f : R → R {\displaystyle f\colon \mathbb {R} \to \mathbb {R} } has an explicit description as f − 1 ( y ) = ( the unique element x ∈ R such that f ( x ) = y ) {\displaystyle f^{-1}(y)=({\text{the unique element }}x\in \mathbb {R} {\text{ such that }}f(x)=y)} . This allows one to easily determine inverses of many functions that are given by algebraic formulas. For example, if f is the function f ( x ) = ( 2 x + 8 ) 3 {\displaystyle f(x)=(2x+8)^{3}} then to determine f − 1 ( y ) {\displaystyle f^{-1}(y)} for a real number y, one must find the unique real number x such that (2x + 8)3 = y. This equation can be solved: y = ( 2 x + 8 ) 3 y 3 = 2 x + 8 y 3 − 8 = 2 x y 3 − 8 2 = x . {\displaystyle {\begin{aligned}y&=(2x+8)^{3}\\{\sqrt[{3}]{y}}&=2x+8\\{\sqrt[{3}]{y}}-8&=2x\\{\dfrac {{\sqrt[{3}]{y}}-8}{2}}&=x.\end{aligned}}} Thus the inverse function f −1 is given by the formula f − 1 ( y ) = y 3 − 8 2 . {\displaystyle f^{-1}(y)={\frac {{\sqrt[{3}]{y}}-8}{2}}.} Sometimes, the inverse of a function cannot be expressed by a closed-form formula. For example, if f is the function f ( x ) = x − sin ⁡ x , {\displaystyle f(x)=x-\sin x,} then f is a bijection, and therefore possesses an inverse function f −1. 
The formula for this inverse has an expression as an infinite sum: f − 1 ( y ) = ∑ n = 1 ∞ y n / 3 n ! lim θ → 0 ( d n − 1 d θ n − 1 ( θ θ − sin ⁡ ( θ ) 3 ) n ) . {\displaystyle f^{-1}(y)=\sum _{n=1}^{\infty }{\frac {y^{n/3}}{n!}}\lim _{\theta \to 0}\left({\frac {\mathrm {d} ^{\,n-1}}{\mathrm {d} \theta ^{\,n-1}}}\left({\frac {\theta }{\sqrt[{3}]{\theta -\sin(\theta )}}}\right)^{n}\right).} == Properties == Since a function is a special type of binary relation, many of the properties of an inverse function correspond to properties of converse relations. === Uniqueness === If an inverse function exists for a given function f, then it is unique. This follows since the inverse function must be the converse relation, which is completely determined by f. === Symmetry === There is a symmetry between a function and its inverse. Specifically, if f is an invertible function with domain X and codomain Y, then its inverse f −1 has domain Y and image X, and the inverse of f −1 is the original function f. In symbols, for functions f:X → Y and f−1:Y → X, f − 1 ∘ f = id X {\displaystyle f^{-1}\circ f=\operatorname {id} _{X}} and f ∘ f − 1 = id Y . {\displaystyle f\circ f^{-1}=\operatorname {id} _{Y}.} This statement is a consequence of the implication that for f to be invertible it must be bijective. The involutory nature of the inverse can be concisely expressed by ( f − 1 ) − 1 = f . {\displaystyle \left(f^{-1}\right)^{-1}=f.} The inverse of a composition of functions is given by ( g ∘ f ) − 1 = f − 1 ∘ g − 1 . {\displaystyle (g\circ f)^{-1}=f^{-1}\circ g^{-1}.} Notice that the order of g and f has been reversed; to undo f followed by g, we must first undo g, and then undo f. For example, let f(x) = 3x and let g(x) = x + 5. Then the composition g ∘ f is the function that first multiplies by three and then adds five, ( g ∘ f ) ( x ) = 3 x + 5. {\displaystyle (g\circ f)(x)=3x+5.} To reverse this process, we must first subtract five, and then divide by three, ( g ∘ f ) − 1 ( x ) = 1 3 ( x − 5 ) . {\displaystyle (g\circ f)^{-1}(x)={\tfrac {1}{3}}(x-5).} This is the composition (f −1 ∘ g −1)(x). === Self-inverses === If X is a set, then the identity function on X is its own inverse: id X − 1 = id X . {\displaystyle {\operatorname {id} _{X}}^{-1}=\operatorname {id} _{X}.} More generally, a function f : X → X is equal to its own inverse if and only if the composition f ∘ f is equal to idX. Such a function is called an involution. === Graph of the inverse === If f is invertible, then the graph of the function y = f − 1 ( x ) {\displaystyle y=f^{-1}(x)} is the same as the graph of the equation x = f ( y ) . {\displaystyle x=f(y).} This is identical to the equation y = f(x) that defines the graph of f, except that the roles of x and y have been reversed. Thus the graph of f −1 can be obtained from the graph of f by switching the positions of the x and y axes. This is equivalent to reflecting the graph across the line y = x. === Inverses and derivatives === By the inverse function theorem, a continuous function of a single variable f : A → R {\displaystyle f\colon A\to \mathbb {R} } (where A ⊆ R {\displaystyle A\subseteq \mathbb {R} } ) is invertible on its range (image) if and only if it is either strictly increasing or decreasing (with no local maxima or minima). For example, the function f ( x ) = x 3 + x {\displaystyle f(x)=x^{3}+x} is invertible, since the derivative f′(x) = 3x2 + 1 is always positive.
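Because f(x) = x3 + x is strictly increasing, its inverse can be evaluated numerically even though no elementary closed-form formula is available. The Python sketch below uses a generic bisection search, an illustrative choice rather than a method prescribed by the article, and assumes the target value lies between f(lo) and f(hi):

```python
def f(x):
    # Strictly increasing, since f'(x) = 3x^2 + 1 > 0 for all x.
    return x**3 + x

def f_inv(y, lo=-1e6, hi=1e6, tol=1e-12):
    # Bisection: shrink [lo, hi] while it brackets the solution of f(x) = y.
    # Assumes f(lo) <= y <= f(hi), which holds here for any y of moderate size.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = f_inv(10.0)
print(x, f(x))  # x is approximately 2, since f(2) = 2**3 + 2 = 10
```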
If the function f is differentiable on an interval I and f′(x) ≠ 0 for each x ∈ I, then the inverse f −1 is differentiable on f(I). If y = f(x), the derivative of the inverse is given by the inverse function theorem, ( f − 1 ) ′ ( y ) = 1 f ′ ( x ) . {\displaystyle \left(f^{-1}\right)^{\prime }(y)={\frac {1}{f'\left(x\right)}}.} Using Leibniz's notation the formula above can be written as d x d y = 1 d y / d x . {\displaystyle {\frac {dx}{dy}}={\frac {1}{dy/dx}}.} This result follows from the chain rule (see the article on inverse functions and differentiation). The inverse function theorem can be generalized to functions of several variables. Specifically, a continuously differentiable multivariable function f : Rn → Rn is invertible in a neighborhood of a point p as long as the Jacobian matrix of f at p is invertible. In this case, the Jacobian of f −1 at f(p) is the matrix inverse of the Jacobian of f at p. == Real-world examples == Let f be the function that converts a temperature in degrees Celsius to a temperature in degrees Fahrenheit, F = f ( C ) = 9 5 C + 32 ; {\displaystyle F=f(C)={\tfrac {9}{5}}C+32;} then its inverse function converts degrees Fahrenheit to degrees Celsius, C = f − 1 ( F ) = 5 9 ( F − 32 ) , {\displaystyle C=f^{-1}(F)={\tfrac {5}{9}}(F-32),} since f − 1 ( f ( C ) ) = f − 1 ( 9 5 C + 32 ) = 5 9 ( ( 9 5 C + 32 ) − 32 ) = C , for every value of C , and f ( f − 1 ( F ) ) = f ( 5 9 ( F − 32 ) ) = 9 5 ( 5 9 ( F − 32 ) ) + 32 = F , for every value of F . {\displaystyle {\begin{aligned}f^{-1}(f(C))={}&f^{-1}\left({\tfrac {9}{5}}C+32\right)={\tfrac {5}{9}}\left(({\tfrac {9}{5}}C+32)-32\right)=C,\\&{\text{for every value of }}C,{\text{ and }}\\[6pt]f\left(f^{-1}(F)\right)={}&f\left({\tfrac {5}{9}}(F-32)\right)={\tfrac {9}{5}}\left({\tfrac {5}{9}}(F-32)\right)+32=F,\\&{\text{for every value of }}F.\end{aligned}}} Suppose f assigns each child in a family its birth year. An inverse function would output which child was born in a given year. However, if the family has children born in the same year (for instance, twins or triplets, etc.) then the output cannot be known when the input is the common birth year. Likewise, if a year is given in which no child was born, then no child can be named. But if each child was born in a separate year, and if we restrict attention to the three years in which a child was born, then we do have an inverse function. For example, f ( Allan ) = 2005 , f ( Brad ) = 2007 , f ( Cary ) = 2001 f − 1 ( 2005 ) = Allan , f − 1 ( 2007 ) = Brad , f − 1 ( 2001 ) = Cary {\displaystyle {\begin{aligned}f({\text{Allan}})&=2005,\quad &f({\text{Brad}})&=2007,\quad &f({\text{Cary}})&=2001\\f^{-1}(2005)&={\text{Allan}},\quad &f^{-1}(2007)&={\text{Brad}},\quad &f^{-1}(2001)&={\text{Cary}}\end{aligned}}} Let R be the function that leads to an x percentage rise of some quantity, and F be the function producing an x percentage fall. Applied to $100 with x = 10%, a 10% rise gives $110, but a 10% fall from $110 gives $99: applying the first function followed by the second does not restore the original value of $100, demonstrating that, despite appearances, these two functions are not inverses of each other. The formula to calculate the pH of a solution is pH = −log10[H+]. In many cases we need to find the concentration of acid from a pH measurement; for this, the inverse function [H+] = 10−pH is used. == Generalizations == === Partial inverses === Even if a function f is not one-to-one, it may be possible to define a partial inverse of f by restricting the domain.
For example, the function f ( x ) = x 2 {\displaystyle f(x)=x^{2}} is not one-to-one, since x2 = (−x)2. However, the function becomes one-to-one if we restrict to the domain x ≥ 0, in which case f − 1 ( y ) = y . {\displaystyle f^{-1}(y)={\sqrt {y}}.} (If we instead restrict to the domain x ≤ 0, then the inverse is the negative of the square root of y.) === Full inverses === Alternatively, there is no need to restrict the domain if we are content with the inverse being a multivalued function: f − 1 ( y ) = ± y . {\displaystyle f^{-1}(y)=\pm {\sqrt {y}}.} Sometimes, this multivalued inverse is called the full inverse of f, and the portions (such as √x and −√x) are called branches. The most important branch of a multivalued function (e.g. the positive square root) is called the principal branch, and its value at y is called the principal value of f −1(y). For a continuous function on the real line, one branch is required between each pair of local extrema. For example, the inverse of a cubic function with a local maximum and a local minimum has three branches. === Trigonometric inverses === The above considerations are particularly important for defining the inverses of trigonometric functions. For example, the sine function is not one-to-one, since sin ⁡ ( x + 2 π ) = sin ⁡ ( x ) {\displaystyle \sin(x+2\pi )=\sin(x)} for every real x (and more generally sin(x + 2πn) = sin(x) for every integer n). However, the sine is one-to-one on the interval [−π/2, π/2], and the corresponding partial inverse is called the arcsine. This is considered the principal branch of the inverse sine, so the principal value of the inverse sine is always between −π/2 and π/2. The principal branch of each inverse trigonometric function is conventionally chosen as follows: arcsin has range [−π/2, π/2]; arccos has range [0, π]; arctan has range (−π/2, π/2); arccot has range (0, π); arcsec has range [0, π] with π/2 excluded; and arccsc has range [−π/2, π/2] with 0 excluded. === Left and right inverses === Function composition on the left and on the right need not coincide. In general, the conditions "There exists g such that g(f(x))=x" and "There exists g such that f(g(x))=x" imply different properties of f. For example, let f: R → [0, ∞) denote the squaring map, such that f(x) = x2 for all x in R, and let g: [0, ∞) → R denote the square root map, such that g(x) = √x for all x ≥ 0. Then f(g(x)) = x for all x in [0, ∞); that is, g is a right inverse to f. However, g is not a left inverse to f, since, e.g., g(f(−1)) = 1 ≠ −1. ==== Left inverses ==== If f: X → Y, a left inverse for f (or retraction of f ) is a function g: Y → X such that composing f with g from the left gives the identity function g ∘ f = id X ⁡ . {\displaystyle g\circ f=\operatorname {id} _{X}{\text{.}}} That is, the function g satisfies the rule If f(x)=y, then g(y)=x. The function g must equal the inverse of f on the image of f, but may take any values for elements of Y not in the image. A function f with nonempty domain is injective if and only if it has a left inverse. An elementary proof runs as follows: If g is a left inverse of f, and f(x) = f(y), then x = g(f(x)) = g(f(y)) = y. If f: X → Y is injective with X nonempty, construct a left inverse g: Y → X as follows: for all y ∈ Y, if y is in the image of f, then there exists x ∈ X such that f(x) = y. Let g(y) = x; this x is unique because f is injective. Otherwise, let g(y) be an arbitrary element of X. For all x ∈ X, f(x) is in the image of f. By construction, g(f(x)) = x, the condition for a left inverse. In classical mathematics, every injective function f with a nonempty domain necessarily has a left inverse; however, this may fail in constructive mathematics.
For instance, a left inverse of the inclusion {0,1} → R of the two-element set in the reals violates indecomposability by giving a retraction of the real line to the set {0,1}. ==== Right inverses ==== A right inverse for f (or section of f ) is a function h: Y → X such that f ∘ h = id Y . {\displaystyle f\circ h=\operatorname {id} _{Y}.} That is, the function h satisfies the rule If h ( y ) = x {\displaystyle \displaystyle h(y)=x} , then f ( x ) = y . {\displaystyle \displaystyle f(x)=y.} Thus, h(y) may be any of the elements of X that map to y under f. A function f has a right inverse if and only if it is surjective (though constructing such an inverse in general requires the axiom of choice). If h is the right inverse of f, then f is surjective. For all y ∈ Y {\displaystyle y\in Y} , there is x = h ( y ) {\displaystyle x=h(y)} such that f ( x ) = f ( h ( y ) ) = y {\displaystyle f(x)=f(h(y))=y} . If f is surjective, f has a right inverse h, which can be constructed as follows: for all y ∈ Y {\displaystyle y\in Y} , there is at least one x ∈ X {\displaystyle x\in X} such that f ( x ) = y {\displaystyle f(x)=y} (because f is surjective), so we choose one to be the value of h(y). ==== Two-sided inverses ==== An inverse that is both a left and right inverse (a two-sided inverse), if it exists, must be unique. In fact, if a function has a left inverse and a right inverse, they are both the same two-sided inverse, so it can be called the inverse. If g {\displaystyle g} is a left inverse and h {\displaystyle h} a right inverse of f {\displaystyle f} , for all y ∈ Y {\displaystyle y\in Y} , g ( y ) = g ( f ( h ( y ) ) ) = h ( y ) {\displaystyle g(y)=g(f(h(y)))=h(y)} . A function has a two-sided inverse if and only if it is bijective. A bijective function f is injective, so it has a left inverse (if f is the empty function, f : ∅ → ∅ {\displaystyle f\colon \varnothing \to \varnothing } is its own left inverse). f is surjective, so it has a right inverse. By the above, the left and right inverse are the same. If f has a two-sided inverse g, then g is a left inverse and right inverse of f, so f is injective and surjective. === Preimages === If f: X → Y is any function (not necessarily invertible), the preimage (or inverse image) of an element y ∈ Y is defined to be the set of all elements of X that map to y: f − 1 ( y ) = { x ∈ X : f ( x ) = y } . {\displaystyle f^{-1}(y)=\left\{x\in X:f(x)=y\right\}.} The preimage of y can be thought of as the image of y under the (multivalued) full inverse of the function f. The notion can be generalized to subsets of the range. Specifically, if S is any subset of Y, the preimage of S, denoted by f − 1 ( S ) {\displaystyle f^{-1}(S)} , is the set of all elements of X that map to S: f − 1 ( S ) = { x ∈ X : f ( x ) ∈ S } . {\displaystyle f^{-1}(S)=\left\{x\in X:f(x)\in S\right\}.} For example, take the function f: R → R; x ↦ x2. This function is not invertible as it is not bijective, but preimages may be defined for subsets of the codomain, e.g. f − 1 ( { 1 , 4 , 9 , 16 } ) = { − 4 , − 3 , − 2 , − 1 , 1 , 2 , 3 , 4 } {\displaystyle f^{-1}(\left\{1,4,9,16\right\})=\left\{-4,-3,-2,-1,1,2,3,4\right\}} . The original notion and its generalization are related by the identity f − 1 ( y ) = f − 1 ( { y } ) . {\displaystyle f^{-1}(y)=f^{-1}(\{y\}).} The preimage of a single element y ∈ Y – a singleton set {y} – is sometimes called the fiber of y. When Y is the set of real numbers, it is common to refer to f −1({y}) as a level set.
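Preimages are computable directly from the definition whenever the domain can be enumerated. The following Python sketch checks the example above on a finite stand-in for R (the reals themselves cannot be enumerated, so the integer range here is an assumption of the illustration):

```python
def preimage(f, domain, S):
    # f^{-1}(S): every x in the domain whose image lands in S.
    # Defined for any function, invertible or not.
    return {x for x in domain if f(x) in S}

domain = range(-10, 11)  # finite stand-in for the real line
print(sorted(preimage(lambda x: x * x, domain, {1, 4, 9, 16})))
# [-4, -3, -2, -1, 1, 2, 3, 4]
```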
== See also == Lagrange inversion theorem, gives the Taylor series expansion of the inverse function of an analytic function Integral of inverse functions Inverse Fourier transform Reversible computing
Wikipedia/Right_inverse_function
In computer science, purely functional programming usually designates a programming paradigm—a style of building the structure and elements of computer programs—that treats all computation as the evaluation of mathematical functions. Program state and mutable objects are usually modeled with temporal logic, as explicit variables that represent the program state at each step of a program execution: a variable state is passed as an input parameter of a state-transforming function, which returns the updated state as part of its return value. This style handles state changes without losing the referential transparency of the program expressions. Purely functional programming consists of ensuring that functions, inside the functional paradigm, will only depend on their arguments, regardless of any global or local state. A pure functional subroutine only has visibility of changes of state represented by state variables included in its scope. == Difference between pure and impure functional programming == The exact difference between pure and impure functional programming is a matter of controversy. Sabry's proposed definition of purity is that all common evaluation strategies (call-by-name, call-by-value, and call-by-need) produce the same result, ignoring strategies that error or diverge. A program is usually said to be functional when it uses some concepts of functional programming, such as first-class functions and higher-order functions. However, a first-class function need not be purely functional, as it may use techniques from the imperative paradigm, such as arrays or input/output methods that use mutable cells, which update their state as side effects. In fact, the earliest programming languages cited as being functional, IPL and Lisp, are both "impure" functional languages by Sabry's definition. == Properties of purely functional programming == === Strict versus non-strict evaluation === Every evaluation strategy that terminates on a purely functional program returns the same result. In particular, it ensures that the programmer does not have to consider in which order programs are evaluated, since eager evaluation will return the same result as lazy evaluation. However, it is still possible that an eager evaluation may not terminate while the lazy evaluation of the same program halts. An advantage of this is that lazy evaluation can be implemented much more easily; as all expressions will return the same result at any moment (regardless of program state), their evaluation can be delayed as much as necessary. === Parallel computing === In a purely functional language, the only dependencies between computations are data dependencies, and computations are deterministic. Therefore, to program in parallel, the programmer need only specify the pieces that should be computed in parallel, and the runtime can handle all other details such as distributing tasks to processors, managing synchronization and communication, and collecting garbage in parallel. This style of programming avoids common issues such as race conditions and deadlocks, but gives the programmer less control than an imperative language. To ensure a speedup, the granularity of tasks must be carefully chosen to be neither too big nor too small. In theory, it is possible to use runtime profiling and compile-time analysis to judge whether introducing parallelism will speed up the program, and thus automatically parallelize purely functional programs. In practice, this has not been terribly successful, and fully automatic parallelization is not practical.
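The state-passing discipline described in the introduction can be sketched even in a language that is not purely functional; the Python fragment below is only an illustration of the idea, with an impure counter shown for contrast:

```python
# Impure: the result depends on hidden mutable state, not just the arguments.
counter = 0
def next_id_impure():
    global counter
    counter += 1
    return counter

# Pure, in state-passing style: the state is an explicit argument and the
# updated state is returned as part of the result.
def next_id_pure(state):
    new_state = state + 1
    return new_state, new_state  # (value, updated state)

s0 = 0
v1, s1 = next_id_pure(s0)
v2, s2 = next_id_pure(s1)
print(v1, v2)            # 1 2
print(next_id_pure(s0))  # (1, 1) -- same input always gives the same output
```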
=== Data structures === Purely functional data structures are persistent. Persistency is required for functional programming; without it, the same computation could return different results. Functional programming may use persistent non-purely functional data structures, while those data structures may not be used in purely functional programs. Purely functional data structures are often represented in a different way than their imperative counterparts. For example, the array, with constant-time access and update, is a basic component of most imperative languages, and many imperative data structures, such as the hash table and binary heap, are based on arrays. Arrays can be replaced by maps or random access lists, which admit purely functional implementations, but have logarithmic access and update times. Therefore, purely functional data structures can be used in languages which are non-functional, but they may not be the most efficient tool available, especially if persistency is not required. In general, conversion of an imperative program to a purely functional one also requires ensuring that the formerly-mutable structures are now explicitly returned from functions that update them, a program structure called store-passing style. == Purely functional language == A purely functional language is a language which only admits purely functional programming. Purely functional programs can however be written in languages which are not purely functional.
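As an illustration of the persistency discussed above, the following Python sketch models an immutable linked list: prepending allocates one new cell and shares the old list as its tail, so earlier versions remain intact. The names are hypothetical, chosen for the example:

```python
from typing import NamedTuple, Optional

class Cons(NamedTuple):
    head: int
    tail: Optional["Cons"]

def cons(x, xs):
    # Prepend in O(1); xs is shared, never copied or mutated.
    return Cons(x, xs)

xs = cons(2, cons(3, None))  # the list [2, 3]
ys = cons(1, xs)             # the list [1, 2, 3]
zs = cons(0, xs)             # the list [0, 2, 3]

# Structural sharing: both new lists reuse xs, and xs itself is unchanged,
# so every version of the data structure persists.
assert ys.tail is xs and zs.tail is xs
```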
Wikipedia/Purely_functional_programming
In mathematics, especially measure theory, a set function is a function whose domain is a family of subsets of some given set and that (usually) takes its values in the extended real number line R ∪ { ± ∞ } , {\displaystyle \mathbb {R} \cup \{\pm \infty \},} which consists of the real numbers R {\displaystyle \mathbb {R} } and ± ∞ . {\displaystyle \pm \infty .} A set function generally aims to measure subsets in some way. Measures are typical examples of "measuring" set functions. Therefore, the term "set function" is often used to avoid confusion between the mathematical meaning of "measure" and its common language meaning. == Definitions == If F {\displaystyle {\mathcal {F}}} is a family of sets over Ω {\displaystyle \Omega } (meaning that F ⊆ ℘ ( Ω ) {\displaystyle {\mathcal {F}}\subseteq \wp (\Omega )} where ℘ ( Ω ) {\displaystyle \wp (\Omega )} denotes the powerset) then a set function on F {\displaystyle {\mathcal {F}}} is a function μ {\displaystyle \mu } with domain F {\displaystyle {\mathcal {F}}} and codomain [ − ∞ , ∞ ] {\displaystyle [-\infty ,\infty ]} or, sometimes, the codomain is instead some vector space, as with vector measures, complex measures, and projection-valued measures. The domain of a set function may have any number of properties; the commonly encountered properties and categories of families are discussed below. In general, it is typically assumed that μ ( E ) + μ ( F ) {\displaystyle \mu (E)+\mu (F)} is always well-defined for all E , F ∈ F , {\displaystyle E,F\in {\mathcal {F}},} or equivalently, that μ {\displaystyle \mu } does not take on both − ∞ {\displaystyle -\infty } and + ∞ {\displaystyle +\infty } as values. This article will henceforth assume this; although alternatively, all definitions below could instead be qualified by statements such as "whenever the sum/series is defined". This is sometimes done with subtraction, such as with the following result, which holds whenever μ {\displaystyle \mu } is finitely additive: Set difference formula: μ ( F ) − μ ( E ) = μ ( F ∖ E ) whenever μ ( F ) − μ ( E ) {\displaystyle \mu (F)-\mu (E)=\mu (F\setminus E){\text{ whenever }}\mu (F)-\mu (E)} is defined with E , F ∈ F {\displaystyle E,F\in {\mathcal {F}}} satisfying E ⊆ F {\displaystyle E\subseteq F} and F ∖ E ∈ F . {\displaystyle F\setminus E\in {\mathcal {F}}.} Null sets A set F ∈ F {\displaystyle F\in {\mathcal {F}}} is called a null set (with respect to μ {\displaystyle \mu } ) or simply null if μ ( F ) = 0. {\displaystyle \mu (F)=0.} Whenever μ {\displaystyle \mu } is not identically equal to either − ∞ {\displaystyle -\infty } or + ∞ {\displaystyle +\infty } then it is typically also assumed that: null empty set: μ ( ∅ ) = 0 {\displaystyle \mu (\varnothing )=0} if ∅ ∈ F . {\displaystyle \varnothing \in {\mathcal {F}}.} Variation and mass The total variation of a set S {\displaystyle S} is | μ | ( S ) = def sup { | μ ( F ) | : F ∈ F and F ⊆ S } {\displaystyle |\mu |(S)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\sup\{|\mu (F)|:F\in {\mathcal {F}}{\text{ and }}F\subseteq S\}} where | ⋅ | {\displaystyle |\,\cdot \,|} denotes the absolute value (or more generally, it denotes the norm or seminorm if μ {\displaystyle \mu } is vector-valued in a (semi)normed space).
Assuming that ∪ F = def ⋃ F ∈ F F ∈ F , {\displaystyle \cup {\mathcal {F}}~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\textstyle \bigcup \limits _{F\in {\mathcal {F}}}F\in {\mathcal {F}},} then | μ | ( ∪ F ) {\displaystyle |\mu |\left(\cup {\mathcal {F}}\right)} is called the total variation of μ {\displaystyle \mu } and μ ( ∪ F ) {\displaystyle \mu \left(\cup {\mathcal {F}}\right)} is called the mass of μ . {\displaystyle \mu .} A set function is called finite if for every F ∈ F , {\displaystyle F\in {\mathcal {F}},} the value μ ( F ) {\displaystyle \mu (F)} is finite (which by definition means that μ ( F ) ≠ ∞ {\displaystyle \mu (F)\neq \infty } and μ ( F ) ≠ − ∞ {\displaystyle \mu (F)\neq -\infty } ; an infinite value is one that is equal to ∞ {\displaystyle \infty } or − ∞ {\displaystyle -\infty } ). Every finite set function must have a finite mass. === Common properties of set functions === A set function μ {\displaystyle \mu } on F {\displaystyle {\mathcal {F}}} is said to be non-negative if it is valued in [ 0 , ∞ ] . {\displaystyle [0,\infty ].} finitely additive if ∑ i = 1 n μ ( F i ) = μ ( ⋃ i = 1 n F i ) {\displaystyle \textstyle \sum \limits _{i=1}^{n}\mu \left(F_{i}\right)=\mu \left(\textstyle \bigcup \limits _{i=1}^{n}F_{i}\right)} for all pairwise disjoint finite sequences F 1 , … , F n ∈ F {\displaystyle F_{1},\ldots ,F_{n}\in {\mathcal {F}}} such that ⋃ i = 1 n F i ∈ F . {\displaystyle \textstyle \bigcup \limits _{i=1}^{n}F_{i}\in {\mathcal {F}}.} If F {\displaystyle {\mathcal {F}}} is closed under binary unions then μ {\displaystyle \mu } is finitely additive if and only if μ ( E ∪ F ) = μ ( E ) + μ ( F ) {\displaystyle \mu (E\cup F)=\mu (E)+\mu (F)} for all disjoint pairs E , F ∈ F . {\displaystyle E,F\in {\mathcal {F}}.} If μ {\displaystyle \mu } is finitely additive and if ∅ ∈ F {\displaystyle \varnothing \in {\mathcal {F}}} then taking E := F := ∅ {\displaystyle E:=F:=\varnothing } shows that μ ( ∅ ) = μ ( ∅ ) + μ ( ∅ ) {\displaystyle \mu (\varnothing )=\mu (\varnothing )+\mu (\varnothing )} which is only possible if μ ( ∅ ) = 0 {\displaystyle \mu (\varnothing )=0} or μ ( ∅ ) = ± ∞ , {\displaystyle \mu (\varnothing )=\pm \infty ,} where in the latter case, μ ( E ) = μ ( E ∪ ∅ ) = μ ( E ) + μ ( ∅ ) = μ ( E ) + ( ± ∞ ) = ± ∞ {\displaystyle \mu (E)=\mu (E\cup \varnothing )=\mu (E)+\mu (\varnothing )=\mu (E)+(\pm \infty )=\pm \infty } for every E ∈ F {\displaystyle E\in {\mathcal {F}}} (so only the case μ ( ∅ ) = 0 {\displaystyle \mu (\varnothing )=0} is useful). countably additive or σ-additive if in addition to being finitely additive, for all pairwise disjoint sequences F 1 , F 2 , … {\displaystyle F_{1},F_{2},\ldots \,} in F {\displaystyle {\mathcal {F}}} such that ⋃ i = 1 ∞ F i ∈ F , {\displaystyle \textstyle \bigcup \limits _{i=1}^{\infty }F_{i}\in {\mathcal {F}},} all of the following hold: ∑ i = 1 ∞ μ ( F i ) = μ ( ⋃ i = 1 ∞ F i ) {\displaystyle \textstyle \sum \limits _{i=1}^{\infty }\mu \left(F_{i}\right)=\mu \left(\textstyle \bigcup \limits _{i=1}^{\infty }F_{i}\right)} The series on the left hand side is defined in the usual way as the limit ∑ i = 1 ∞ μ ( F i ) = def lim n → ∞ μ ( F 1 ) + ⋯ + μ ( F n ) . 
{\displaystyle \textstyle \sum \limits _{i=1}^{\infty }\mu \left(F_{i}\right)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~{\displaystyle \lim _{n\to \infty }}\mu \left(F_{1}\right)+\cdots +\mu \left(F_{n}\right).} As a consequence, if ρ : N → N {\displaystyle \rho :\mathbb {N} \to \mathbb {N} } is any permutation/bijection then ∑ i = 1 ∞ μ ( F i ) = ∑ i = 1 ∞ μ ( F ρ ( i ) ) ; {\displaystyle \textstyle \sum \limits _{i=1}^{\infty }\mu \left(F_{i}\right)=\textstyle \sum \limits _{i=1}^{\infty }\mu \left(F_{\rho (i)}\right);} this is because ⋃ i = 1 ∞ F i = ⋃ i = 1 ∞ F ρ ( i ) {\displaystyle \textstyle \bigcup \limits _{i=1}^{\infty }F_{i}=\textstyle \bigcup \limits _{i=1}^{\infty }F_{\rho (i)}} and applying this condition twice guarantees that both ∑ i = 1 ∞ μ ( F i ) = μ ( ⋃ i = 1 ∞ F i ) {\displaystyle \textstyle \sum \limits _{i=1}^{\infty }\mu \left(F_{i}\right)=\mu \left(\textstyle \bigcup \limits _{i=1}^{\infty }F_{i}\right)} and μ ( ⋃ i = 1 ∞ F ρ ( i ) ) = ∑ i = 1 ∞ μ ( F ρ ( i ) ) {\displaystyle \mu \left(\textstyle \bigcup \limits _{i=1}^{\infty }F_{\rho (i)}\right)=\textstyle \sum \limits _{i=1}^{\infty }\mu \left(F_{\rho (i)}\right)} hold. By definition, a convergent series with this property is said to be unconditionally convergent. Stated in plain English, this means that rearranging/relabeling the sets F 1 , F 2 , … {\displaystyle F_{1},F_{2},\ldots } to the new order F ρ ( 1 ) , F ρ ( 2 ) , … {\displaystyle F_{\rho (1)},F_{\rho (2)},\ldots } does not affect the sum of their measures. This is desirable since just as the union F = def ⋃ i ∈ N F i {\displaystyle F~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\textstyle \bigcup \limits _{i\in \mathbb {N} }F_{i}} does not depend on the order of these sets, the same should be true of the sums μ ( F ) = μ ( F 1 ) + μ ( F 2 ) + ⋯ {\displaystyle \mu (F)=\mu \left(F_{1}\right)+\mu \left(F_{2}\right)+\cdots } and μ ( F ) = μ ( F ρ ( 1 ) ) + μ ( F ρ ( 2 ) ) + ⋯ . {\displaystyle \mu (F)=\mu \left(F_{\rho (1)}\right)+\mu \left(F_{\rho (2)}\right)+\cdots \,.} if μ ( ⋃ i = 1 ∞ F i ) {\displaystyle \mu \left(\textstyle \bigcup \limits _{i=1}^{\infty }F_{i}\right)} is not infinite then this series ∑ i = 1 ∞ μ ( F i ) {\displaystyle \textstyle \sum \limits _{i=1}^{\infty }\mu \left(F_{i}\right)} must also converge absolutely, which by definition means that ∑ i = 1 ∞ | μ ( F i ) | {\displaystyle \textstyle \sum \limits _{i=1}^{\infty }\left|\mu \left(F_{i}\right)\right|} must be finite. This is automatically true if μ {\displaystyle \mu } is non-negative (or even just valued in the extended real numbers). As with any convergent series of real numbers, by the Riemann series theorem, the series ∑ i = 1 ∞ μ ( F i ) = lim N → ∞ μ ( F 1 ) + μ ( F 2 ) + ⋯ + μ ( F N ) {\displaystyle \textstyle \sum \limits _{i=1}^{\infty }\mu \left(F_{i}\right)={\displaystyle \lim _{N\to \infty }}\mu \left(F_{1}\right)+\mu \left(F_{2}\right)+\cdots +\mu \left(F_{N}\right)} converges absolutely if and only if its sum does not depend on the order of its terms (a property known as unconditional convergence). Since unconditional convergence is guaranteed by the condition above, this condition is automatically true if μ {\displaystyle \mu } is valued in [ − ∞ , ∞ ] .
{\displaystyle [-\infty ,\infty ].} if μ ( ⋃ i = 1 ∞ F i ) = ∑ i = 1 ∞ μ ( F i ) {\displaystyle \mu \left(\textstyle \bigcup \limits _{i=1}^{\infty }F_{i}\right)=\textstyle \sum \limits _{i=1}^{\infty }\mu \left(F_{i}\right)} is infinite then it is also required that the value of at least one of the series ∑ μ ( F i ) > 0 i ∈ N μ ( F i ) and ∑ μ ( F i ) < 0 i ∈ N μ ( F i ) {\displaystyle \textstyle \sum \limits _{\stackrel {i\in \mathbb {N} }{\mu \left(F_{i}\right)>0}}\mu \left(F_{i}\right)\;{\text{ and }}\;\textstyle \sum \limits _{\stackrel {i\in \mathbb {N} }{\mu \left(F_{i}\right)<0}}\mu \left(F_{i}\right)\;} be finite (so that the sum of their values is well-defined). This is automatically true if μ {\displaystyle \mu } is non-negative. a pre-measure if it is non-negative, countably additive (including finitely additive), and has a null empty set. a measure if it is a pre-measure whose domain is a σ-algebra. That is to say, a measure is a non-negative countably additive set function on a σ-algebra that has a null empty set. a probability measure if it is a measure that has a mass of 1. {\displaystyle 1.} an outer measure if it is non-negative, countably subadditive, has a null empty set, and has the power set ℘ ( Ω ) {\displaystyle \wp (\Omega )} as its domain. Outer measures appear in Carathéodory's extension theorem and they are often restricted to Carathéodory measurable subsets. a signed measure if it is countably additive, has a null empty set, and μ {\displaystyle \mu } does not take on both − ∞ {\displaystyle -\infty } and + ∞ {\displaystyle +\infty } as values. complete if every subset of every null set is null; explicitly, this means: whenever F ∈ F satisfies μ ( F ) = 0 {\displaystyle F\in {\mathcal {F}}{\text{ satisfies }}\mu (F)=0} and N ⊆ F {\displaystyle N\subseteq F} is any subset of F {\displaystyle F} then N ∈ F {\displaystyle N\in {\mathcal {F}}} and μ ( N ) = 0. {\displaystyle \mu (N)=0.} Unlike many other properties, completeness places requirements on the set domain ⁡ μ = F {\displaystyle \operatorname {domain} \mu ={\mathcal {F}}} (and not just on μ {\displaystyle \mu } 's values). 𝜎-finite if there exists a sequence F 1 , F 2 , F 3 , … {\displaystyle F_{1},F_{2},F_{3},\ldots \,} in F {\displaystyle {\mathcal {F}}} such that μ ( F i ) {\displaystyle \mu \left(F_{i}\right)} is finite for every index i , {\displaystyle i,} and also ⋃ n = 1 ∞ F n = ⋃ F ∈ F F . {\displaystyle \textstyle \bigcup \limits _{n=1}^{\infty }F_{n}=\textstyle \bigcup \limits _{F\in {\mathcal {F}}}F.} decomposable if there exists a subfamily P ⊆ F {\displaystyle {\mathcal {P}}\subseteq {\mathcal {F}}} of pairwise disjoint sets such that μ ( P ) {\displaystyle \mu (P)} is finite for every P ∈ P {\displaystyle P\in {\mathcal {P}}} and also ⋃ P ∈ P P = ⋃ F ∈ F F {\displaystyle \textstyle \bigcup \limits _{P\in {\mathcal {P}}}\,P=\textstyle \bigcup \limits _{F\in {\mathcal {F}}}F} (where F = domain ⁡ μ {\displaystyle {\mathcal {F}}=\operatorname {domain} \mu } ). Every 𝜎-finite set function is decomposable although not conversely. For example, the counting measure on R {\displaystyle \mathbb {R} } (whose domain is ℘ ( R ) {\displaystyle \wp (\mathbb {R} )} ) is decomposable but not 𝜎-finite. a vector measure if it is a countably additive set function μ : F → X {\displaystyle \mu :{\mathcal {F}}\to X} valued in a topological vector space X {\displaystyle X} (such as a normed space) whose domain is a σ-algebra.
If μ {\displaystyle \mu } is valued in a normed space ( X , ‖ ⋅ ‖ ) {\displaystyle (X,\|\cdot \|)} then it is countably additive if and only if for any pairwise disjoint sequence F 1 , F 2 , … {\displaystyle F_{1},F_{2},\ldots \,} in F , {\displaystyle {\mathcal {F}},} lim n → ∞ ‖ μ ( F 1 ) + ⋯ + μ ( F n ) − μ ( ⋃ i = 1 ∞ F i ) ‖ = 0. {\displaystyle \lim _{n\to \infty }\left\|\mu \left(F_{1}\right)+\cdots +\mu \left(F_{n}\right)-\mu \left(\textstyle \bigcup \limits _{i=1}^{\infty }F_{i}\right)\right\|=0.} If μ {\displaystyle \mu } is finitely additive and valued in a Banach space then it is countably additive if and only if for any pairwise disjoint sequence F 1 , F 2 , … {\displaystyle F_{1},F_{2},\ldots \,} in F , {\displaystyle {\mathcal {F}},} lim n → ∞ ‖ μ ( F n ∪ F n + 1 ∪ F n + 2 ∪ ⋯ ) ‖ = 0. {\displaystyle \lim _{n\to \infty }\left\|\mu \left(F_{n}\cup F_{n+1}\cup F_{n+2}\cup \cdots \right)\right\|=0.} a complex measure if it is a countably additive complex-valued set function μ : F → C {\displaystyle \mu :{\mathcal {F}}\to \mathbb {C} } whose domain is a σ-algebra. By definition, a complex measure never takes ± ∞ {\displaystyle \pm \infty } as a value and so has a null empty set. a random measure if it is a measure-valued random element. Arbitrary sums As described in this article's section on generalized series, for any family ( r i ) i ∈ I {\displaystyle \left(r_{i}\right)_{i\in I}} of real numbers indexed by an arbitrary indexing set I , {\displaystyle I,} it is possible to define their sum ∑ i ∈ I r i {\displaystyle \textstyle \sum \limits _{i\in I}r_{i}} as the limit of the net of finite partial sums F ∈ FiniteSubsets ⁡ ( I ) ↦ ∑ i ∈ F r i {\displaystyle F\in \operatorname {FiniteSubsets} (I)\mapsto \textstyle \sum \limits _{i\in F}r_{i}} where the domain FiniteSubsets ⁡ ( I ) {\displaystyle \operatorname {FiniteSubsets} (I)} is directed by ⊆ . {\displaystyle \,\subseteq .\,} Whenever this net converges then its limit is denoted by the symbols ∑ i ∈ I r i {\displaystyle \textstyle \sum \limits _{i\in I}r_{i}} while if this net instead diverges to ± ∞ {\displaystyle \pm \infty } then this may be indicated by writing ∑ i ∈ I r i = ± ∞ . {\displaystyle \textstyle \sum \limits _{i\in I}r_{i}=\pm \infty .} Any sum over the empty set is defined to be zero; that is, if I = ∅ {\displaystyle I=\varnothing } then ∑ i ∈ ∅ r i = 0 {\displaystyle \textstyle \sum \limits _{i\in \varnothing }r_{i}=0} by definition. For example, if z i = 0 {\displaystyle z_{i}=0} for every i ∈ I {\displaystyle i\in I} then ∑ i ∈ I z i = 0. {\displaystyle \textstyle \sum \limits _{i\in I}z_{i}=0.} And it can be shown that ∑ i ∈ I r i = ∑ r i = 0 i ∈ I , r i + ∑ r i ≠ 0 i ∈ I , r i = 0 + ∑ r i ≠ 0 i ∈ I , r i = ∑ r i ≠ 0 i ∈ I , r i . {\displaystyle \textstyle \sum \limits _{i\in I}r_{i}=\textstyle \sum \limits _{\stackrel {i\in I,}{r_{i}=0}}r_{i}+\textstyle \sum \limits _{\stackrel {i\in I,}{r_{i}\neq 0}}r_{i}=0+\textstyle \sum \limits _{\stackrel {i\in I,}{r_{i}\neq 0}}r_{i}=\textstyle \sum \limits _{\stackrel {i\in I,}{r_{i}\neq 0}}r_{i}.} If I = N {\displaystyle I=\mathbb {N} } then the generalized series ∑ i ∈ I r i {\displaystyle \textstyle \sum \limits _{i\in I}r_{i}} converges in R {\displaystyle \mathbb {R} } if and only if ∑ i = 1 ∞ r i {\displaystyle \textstyle \sum \limits _{i=1}^{\infty }r_{i}} converges unconditionally (or equivalently, converges absolutely) in the usual sense. 
If a generalized series ∑ i ∈ I r i {\displaystyle \textstyle \sum \limits _{i\in I}r_{i}} converges in R {\displaystyle \mathbb {R} } then both ∑ r i > 0 i ∈ I r i {\displaystyle \textstyle \sum \limits _{\stackrel {i\in I}{r_{i}>0}}r_{i}} and ∑ r i < 0 i ∈ I r i {\displaystyle \textstyle \sum \limits _{\stackrel {i\in I}{r_{i}<0}}r_{i}} also converge to elements of R {\displaystyle \mathbb {R} } and the set { i ∈ I : r i ≠ 0 } {\displaystyle \left\{i\in I:r_{i}\neq 0\right\}} is necessarily countable (that is, either finite or countably infinite); this remains true if R {\displaystyle \mathbb {R} } is replaced with any normed space. It follows that in order for a generalized series ∑ i ∈ I r i {\displaystyle \textstyle \sum \limits _{i\in I}r_{i}} to converge in R {\displaystyle \mathbb {R} } or C , {\displaystyle \mathbb {C} ,} it is necessary that all but at most countably many r i {\displaystyle r_{i}} will be equal to 0 , {\displaystyle 0,} which means that ∑ i ∈ I r i = ∑ r i ≠ 0 i ∈ I r i {\displaystyle \textstyle \sum \limits _{i\in I}r_{i}~=~\textstyle \sum \limits _{\stackrel {i\in I}{r_{i}\neq 0}}r_{i}} is a sum of at most countably many non-zero terms. Said differently, if { i ∈ I : r i ≠ 0 } {\displaystyle \left\{i\in I:r_{i}\neq 0\right\}} is uncountable then the generalized series ∑ i ∈ I r i {\displaystyle \textstyle \sum \limits _{i\in I}r_{i}} does not converge. In summary, due to the nature of the real numbers and its topology, every generalized series of real numbers (indexed by an arbitrary set) that converges can be reduced to an ordinary absolutely convergent series of countably many real numbers. So in the context of measure theory, there is little benefit gained by considering uncountably many sets and generalized series. In particular, this is why the definition of "countably additive" is rarely extended from countably many sets F 1 , F 2 , … {\displaystyle F_{1},F_{2},\ldots \,} in F {\displaystyle {\mathcal {F}}} (and the usual countable series ∑ i = 1 ∞ μ ( F i ) {\displaystyle \textstyle \sum \limits _{i=1}^{\infty }\mu \left(F_{i}\right)} ) to arbitrarily many sets ( F i ) i ∈ I {\displaystyle \left(F_{i}\right)_{i\in I}} (and the generalized series ∑ i ∈ I μ ( F i ) {\displaystyle \textstyle \sum \limits _{i\in I}\mu \left(F_{i}\right)} ). === Inner measures, outer measures, and other properties === A set function μ {\displaystyle \mu } is said to be/satisfies monotone if μ ( E ) ≤ μ ( F ) {\displaystyle \mu (E)\leq \mu (F)} whenever E , F ∈ F {\displaystyle E,F\in {\mathcal {F}}} satisfy E ⊆ F . {\displaystyle E\subseteq F.} modular if it satisfies the following condition, known as modularity: μ ( E ∪ F ) + μ ( E ∩ F ) = μ ( E ) + μ ( F ) {\displaystyle \mu (E\cup F)+\mu (E\cap F)=\mu (E)+\mu (F)} for all E , F ∈ F {\displaystyle E,F\in {\mathcal {F}}} such that E ∪ F , E ∩ F ∈ F . {\displaystyle E\cup F,E\cap F\in {\mathcal {F}}.} Every finitely additive function on a field of sets is modular. In geometry, a set function valued in some abelian semigroup that possess this property is known as a valuation. This geometric definition of "valuation" should not be confused with the stronger non-equivalent measure theoretic definition of "valuation" that is given below. submodular if μ ( E ∪ F ) + μ ( E ∩ F ) ≤ μ ( E ) + μ ( F ) {\displaystyle \mu (E\cup F)+\mu (E\cap F)\leq \mu (E)+\mu (F)} for all E , F ∈ F {\displaystyle E,F\in {\mathcal {F}}} such that E ∪ F , E ∩ F ∈ F . 
{\displaystyle E\cup F,E\cap F\in {\mathcal {F}}.} finitely subadditive if | μ ( F ) | ≤ ∑ i = 1 n | μ ( F i ) | {\displaystyle |\mu (F)|\leq \textstyle \sum \limits _{i=1}^{n}\left|\mu \left(F_{i}\right)\right|} for all finite sequences F , F 1 , … , F n ∈ F {\displaystyle F,F_{1},\ldots ,F_{n}\in {\mathcal {F}}} that satisfy F ⊆ ⋃ i = 1 n F i . {\displaystyle F\;\subseteq \;\textstyle \bigcup \limits _{i=1}^{n}F_{i}.} countably subadditive or σ-subadditive if | μ ( F ) | ≤ ∑ i = 1 ∞ | μ ( F i ) | {\displaystyle |\mu (F)|\leq \textstyle \sum \limits _{i=1}^{\infty }\left|\mu \left(F_{i}\right)\right|} for all sequences F , F 1 , F 2 , F 3 , … {\displaystyle F,F_{1},F_{2},F_{3},\ldots \,} in F {\displaystyle {\mathcal {F}}} that satisfy F ⊆ ⋃ i = 1 ∞ F i . {\displaystyle F\;\subseteq \;\textstyle \bigcup \limits _{i=1}^{\infty }F_{i}.} If F {\displaystyle {\mathcal {F}}} is closed under finite unions then this condition holds if and only if | μ ( F ∪ G ) | ≤ | μ ( F ) | + | μ ( G ) | {\displaystyle |\mu (F\cup G)|\leq |\mu (F)|+|\mu (G)|} for all F , G ∈ F . {\displaystyle F,G\in {\mathcal {F}}.} If μ {\displaystyle \mu } is non-negative then the absolute values may be removed. If μ {\displaystyle \mu } is a measure then this condition holds if and only if μ ( ⋃ i = 1 ∞ F i ) ≤ ∑ i = 1 ∞ μ ( F i ) {\displaystyle \mu \left(\textstyle \bigcup \limits _{i=1}^{\infty }F_{i}\right)\leq \textstyle \sum \limits _{i=1}^{\infty }\mu \left(F_{i}\right)} for all F 1 , F 2 , F 3 , … {\displaystyle F_{1},F_{2},F_{3},\ldots \,} in F . {\displaystyle {\mathcal {F}}.} If μ {\displaystyle \mu } is a probability measure then this inequality is Boole's inequality. If μ {\displaystyle \mu } is countably subadditive and ∅ ∈ F {\displaystyle \varnothing \in {\mathcal {F}}} with μ ( ∅ ) = 0 {\displaystyle \mu (\varnothing )=0} then μ {\displaystyle \mu } is finitely subadditive. superadditive if μ ( E ) + μ ( F ) ≤ μ ( E ∪ F ) {\displaystyle \mu (E)+\mu (F)\leq \mu (E\cup F)} whenever E , F ∈ F {\displaystyle E,F\in {\mathcal {F}}} are disjoint with E ∪ F ∈ F . {\displaystyle E\cup F\in {\mathcal {F}}.} continuous from above if lim i → ∞ μ ( F i ) = μ ( ⋂ i = 1 ∞ F i ) {\displaystyle \lim _{i\to \infty }\mu \left(F_{i}\right)=\mu \left(\textstyle \bigcap \limits _{i=1}^{\infty }F_{i}\right)} for all non-increasing sequences of sets F 1 ⊇ F 2 ⊇ F 3 ⋯ {\displaystyle F_{1}\supseteq F_{2}\supseteq F_{3}\cdots \,} in F {\displaystyle {\mathcal {F}}} such that ⋂ i = 1 ∞ F i ∈ F {\displaystyle \textstyle \bigcap \limits _{i=1}^{\infty }F_{i}\in {\mathcal {F}}} with μ ( ⋂ i = 1 ∞ F i ) {\displaystyle \mu \left(\textstyle \bigcap \limits _{i=1}^{\infty }F_{i}\right)} and all μ ( F i ) {\displaystyle \mu \left(F_{i}\right)} finite. Lebesgue measure λ {\displaystyle \lambda } is continuous from above but it would not be if the assumption that all μ ( F i ) {\displaystyle \mu \left(F_{i}\right)} are eventually finite was omitted from the definition, as this example shows: For every integer i , {\displaystyle i,} let F i {\displaystyle F_{i}} be the open interval ( i , ∞ ) {\displaystyle (i,\infty )} so that lim i → ∞ λ ( F i ) = lim i → ∞ ∞ = ∞ ≠ 0 = λ ( ∅ ) = λ ( ⋂ i = 1 ∞ F i ) {\displaystyle \lim _{i\to \infty }\lambda \left(F_{i}\right)=\lim _{i\to \infty }\infty =\infty \neq 0=\lambda (\varnothing )=\lambda \left(\textstyle \bigcap \limits _{i=1}^{\infty }F_{i}\right)} where ⋂ i = 1 ∞ F i = ∅ .
{\displaystyle \textstyle \bigcap \limits _{i=1}^{\infty }F_{i}=\varnothing .} continuous from below if lim i → ∞ μ ( F i ) = μ ( ⋃ i = 1 ∞ F i ) {\displaystyle \lim _{i\to \infty }\mu \left(F_{i}\right)=\mu \left(\textstyle \bigcup \limits _{i=1}^{\infty }F_{i}\right)} for all non-decreasing sequences of sets F 1 ⊆ F 2 ⊆ F 3 ⋯ {\displaystyle F_{1}\subseteq F_{2}\subseteq F_{3}\cdots \,} in F {\displaystyle {\mathcal {F}}} such that ⋃ i = 1 ∞ F i ∈ F . {\displaystyle \textstyle \bigcup \limits _{i=1}^{\infty }F_{i}\in {\mathcal {F}}.} infinity is approached from below if whenever F ∈ F {\displaystyle F\in {\mathcal {F}}} satisfies μ ( F ) = ∞ {\displaystyle \mu (F)=\infty } then for every real r > 0 , {\displaystyle r>0,} there exists some F r ∈ F {\displaystyle F_{r}\in {\mathcal {F}}} such that F r ⊆ F {\displaystyle F_{r}\subseteq F} and r ≤ μ ( F r ) < ∞ . {\displaystyle r\leq \mu \left(F_{r}\right)<\infty .} an outer measure if μ {\displaystyle \mu } is non-negative, countably subadditive, has a null empty set, and has the power set ℘ ( Ω ) {\displaystyle \wp (\Omega )} as its domain. an inner measure if μ {\displaystyle \mu } is non-negative, superadditive, continuous from above, has a null empty set, has the power set ℘ ( Ω ) {\displaystyle \wp (\Omega )} as its domain, and + ∞ {\displaystyle +\infty } is approached from below. atomic if every measurable set of positive measure contains an atom. If a binary operation + {\displaystyle \,+\,} is defined, then a set function μ {\displaystyle \mu } is said to be translation invariant if μ ( ω + F ) = μ ( F ) {\displaystyle \mu (\omega +F)=\mu (F)} for all ω ∈ Ω {\displaystyle \omega \in \Omega } and F ∈ F {\displaystyle F\in {\mathcal {F}}} such that ω + F ∈ F . {\displaystyle \omega +F\in {\mathcal {F}}.} === Topology related definitions === If τ {\displaystyle \tau } is a topology on Ω {\displaystyle \Omega } then a set function μ {\displaystyle \mu } is said to be: a Borel measure if it is a measure defined on the σ-algebra of all Borel sets, which is the smallest σ-algebra containing all open subsets (that is, containing τ {\displaystyle \tau } ). a Baire measure if it is a measure defined on the σ-algebra of all Baire sets. locally finite if for every point ω ∈ Ω {\displaystyle \omega \in \Omega } there exists some neighborhood U ∈ F ∩ τ {\displaystyle U\in {\mathcal {F}}\cap \tau } of this point such that μ ( U ) {\displaystyle \mu (U)} is finite. If μ {\displaystyle \mu } is finitely additive, monotone, and locally finite then μ ( K ) {\displaystyle \mu (K)} is necessarily finite for every compact measurable subset K . {\displaystyle K.} τ {\displaystyle \tau } -additive if μ ( ⋃ D ) = sup D ∈ D μ ( D ) {\displaystyle \mu \left({\textstyle \bigcup }\,{\mathcal {D}}\right)=\sup _{D\in {\mathcal {D}}}\mu (D)} whenever D ⊆ τ ∩ F {\displaystyle {\mathcal {D}}\subseteq \tau \cap {\mathcal {F}}} is directed with respect to ⊆ {\displaystyle \,\subseteq \,} and satisfies ⋃ D = def ⋃ D ∈ D D ∈ F . {\displaystyle {\textstyle \bigcup }\,{\mathcal {D}}~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\textstyle \bigcup \limits _{D\in {\mathcal {D}}}D\in {\mathcal {F}}.} D {\displaystyle {\mathcal {D}}} is directed with respect to ⊆ {\displaystyle \,\subseteq \,} if and only if it is not empty and for all A , B ∈ D {\displaystyle A,B\in {\mathcal {D}}} there exists some C ∈ D {\displaystyle C\in {\mathcal {D}}} such that A ⊆ C {\displaystyle A\subseteq C} and B ⊆ C .
{\displaystyle B\subseteq C.} inner regular or tight if for every F ∈ F , {\displaystyle F\in {\mathcal {F}},} μ ( F ) = sup { μ ( K ) : F ⊇ K with K ∈ F a compact subset of ( Ω , τ ) } . {\displaystyle \mu (F)=\sup\{\mu (K):F\supseteq K{\text{ with }}K\in {\mathcal {F}}{\text{ a compact subset of }}(\Omega ,\tau )\}.} outer regular if for every F ∈ F , {\displaystyle F\in {\mathcal {F}},} μ ( F ) = inf { μ ( U ) : F ⊆ U and U ∈ F ∩ τ } . {\displaystyle \mu (F)=\inf\{\mu (U):F\subseteq U{\text{ and }}U\in {\mathcal {F}}\cap \tau \}.} regular if it is both inner regular and outer regular. a Borel regular measure if it is a Borel measure that is also regular. a Radon measure if it is a regular and locally finite measure. strictly positive if every non-empty open subset has (strictly) positive measure. a valuation if it is non-negative, monotone, modular, has a null empty set, and has domain τ . {\displaystyle \tau .} === Relationships between set functions === If μ {\displaystyle \mu } and ν {\displaystyle \nu } are two set functions over Ω , {\displaystyle \Omega ,} then: μ {\displaystyle \mu } is said to be absolutely continuous with respect to ν {\displaystyle \nu } or dominated by ν {\displaystyle \nu } , written μ ≪ ν , {\displaystyle \mu \ll \nu ,} if for every set F {\displaystyle F} that belongs to the domain of both μ {\displaystyle \mu } and ν , {\displaystyle \nu ,} if ν ( F ) = 0 {\displaystyle \nu (F)=0} then μ ( F ) = 0. {\displaystyle \mu (F)=0.} If μ {\displaystyle \mu } and ν {\displaystyle \nu } are σ {\displaystyle \sigma } -finite measures on the same measurable space and if μ ≪ ν , {\displaystyle \mu \ll \nu ,} then the Radon–Nikodym derivative d μ d ν {\displaystyle {\frac {d\mu }{d\nu }}} exists and for every measurable F , {\displaystyle F,} μ ( F ) = ∫ F d μ d ν d ν . {\displaystyle \mu (F)=\int _{F}{\frac {d\mu }{d\nu }}d\nu .} μ {\displaystyle \mu } and ν {\displaystyle \nu } are called equivalent if each one is absolutely continuous with respect to the other. μ {\displaystyle \mu } is called a supporting measure of a measure ν {\displaystyle \nu } if μ {\displaystyle \mu } is σ {\displaystyle \sigma } -finite and they are equivalent. μ {\displaystyle \mu } and ν {\displaystyle \nu } are singular, written μ ⊥ ν , {\displaystyle \mu \perp \nu ,} if there exist disjoint sets M {\displaystyle M} and N {\displaystyle N} in the domains of μ {\displaystyle \mu } and ν {\displaystyle \nu } such that M ∪ N = Ω , {\displaystyle M\cup N=\Omega ,} μ ( F ) = 0 {\displaystyle \mu (F)=0} for all F ⊆ M {\displaystyle F\subseteq M} in the domain of μ , {\displaystyle \mu ,} and ν ( F ) = 0 {\displaystyle \nu (F)=0} for all F ⊆ N {\displaystyle F\subseteq N} in the domain of ν . {\displaystyle \nu .} == Examples == Examples of set functions include: The function d ( A ) = lim n → ∞ | A ∩ { 1 , … , n } | n , {\displaystyle d(A)=\lim _{n\to \infty }{\frac {|A\cap \{1,\ldots ,n\}|}{n}},} assigning densities to sufficiently well-behaved subsets A ⊆ { 1 , 2 , 3 , … } , {\displaystyle A\subseteq \{1,2,3,\ldots \},} is a set function. A probability measure assigns a probability to each set in a σ-algebra. Specifically, the probability of the empty set is zero and the probability of the sample space is 1 , {\displaystyle 1,} with other sets given probabilities between 0 {\displaystyle 0} and 1. {\displaystyle 1.} A possibility measure assigns a number between zero and one to each set in the powerset of some given set. See possibility theory. A random set is a set-valued random variable. 
See the article random compact set. The Jordan measure on R n {\displaystyle \mathbb {R} ^{n}} is a set function defined on the set of all Jordan measurable subsets of R n ; {\displaystyle \mathbb {R} ^{n};} it sends a Jordan measurable set to its Jordan measure. === Lebesgue measure === The Lebesgue measure on R {\displaystyle \mathbb {R} } is a set function that assigns a non-negative real number to every set of real numbers that belongs to the Lebesgue σ {\displaystyle \sigma } -algebra. Its definition begins with the set Intervals ⁡ ( R ) {\displaystyle \operatorname {Intervals} (\mathbb {R} )} of all intervals of real numbers, which is a semialgebra on R . {\displaystyle \mathbb {R} .} The function that assigns to every interval I {\displaystyle I} its length ⁡ ( I ) {\displaystyle \operatorname {length} (I)} is a finitely additive set function (explicitly, if I {\displaystyle I} has endpoints a ≤ b {\displaystyle a\leq b} then length ⁡ ( I ) = b − a {\displaystyle \operatorname {length} (I)=b-a} ). This set function can be extended to the Lebesgue outer measure on R , {\displaystyle \mathbb {R} ,} which is the translation-invariant set function λ ∗ : ℘ ( R ) → [ 0 , ∞ ] {\displaystyle \lambda ^{\!*\!}:\wp (\mathbb {R} )\to [0,\infty ]} that sends a subset E ⊆ R {\displaystyle E\subseteq \mathbb {R} } to the infimum λ ∗ ( E ) = inf { ∑ k = 1 ∞ length ⁡ ( I k ) : ( I k ) k ∈ N is a sequence of open intervals with E ⊆ ⋃ k = 1 ∞ I k } . {\displaystyle \lambda ^{\!*\!}(E)=\inf \left\{\sum _{k=1}^{\infty }\operatorname {length} (I_{k}):{(I_{k})_{k\in \mathbb {N} }}{\text{ is a sequence of open intervals with }}E\subseteq \bigcup _{k=1}^{\infty }I_{k}\right\}.} Lebesgue outer measure is not countably additive (and so is not a measure) although its restriction to the 𝜎-algebra of all subsets M ⊆ R {\displaystyle M\subseteq \mathbb {R} } that satisfy the Carathéodory criterion: λ ∗ ( S ) = λ ∗ ( S ∩ M ) + λ ∗ ( S ∩ M c ) for every S ⊆ R {\displaystyle \lambda ^{\!*\!}(S)=\lambda ^{\!*\!}(S\cap M)+\lambda ^{\!*\!}(S\cap M^{c})\quad {\text{ for every }}S\subseteq \mathbb {R} } is a measure called Lebesgue measure. Vitali sets are examples of non-measurable sets of real numbers. ==== Infinite-dimensional space ==== As detailed in the article on infinite-dimensional Lebesgue measure, the only locally finite and translation-invariant Borel measure on an infinite-dimensional separable normed space is the trivial measure. However, it is possible to define Gaussian measures on infinite-dimensional topological vector spaces. The structure theorem for Gaussian measures shows that the abstract Wiener space construction is essentially the only way to obtain a strictly positive Gaussian measure on a separable Banach space. === Finitely additive translation-invariant set functions === The only translation-invariant measure on Ω = R {\displaystyle \Omega =\mathbb {R} } with domain ℘ ( R ) {\displaystyle \wp (\mathbb {R} )} that is finite on every compact subset of R {\displaystyle \mathbb {R} } is the trivial set function ℘ ( R ) → [ 0 , ∞ ] {\displaystyle \wp (\mathbb {R} )\to [0,\infty ]} that is identically equal to 0 {\displaystyle 0} (that is, it sends every S ⊆ R {\displaystyle S\subseteq \mathbb {R} } to 0 {\displaystyle 0} ). However, if countable additivity is weakened to finite additivity then a non-trivial set function with these properties does exist and moreover, some are even valued in [ 0 , 1 ] .
{\displaystyle [0,1].} In fact, such non-trivial set functions will exist even if R {\displaystyle \mathbb {R} } is replaced by any other abelian group G . {\displaystyle G.} == Extending set functions == === Extending from semialgebras to algebras === Suppose that μ {\displaystyle \mu } is a set function on a semialgebra F {\displaystyle {\mathcal {F}}} over Ω {\displaystyle \Omega } and let algebra ⁡ ( F ) := { F 1 ⊔ ⋯ ⊔ F n : n ∈ N and F 1 , … , F n ∈ F are pairwise disjoint } , {\displaystyle \operatorname {algebra} ({\mathcal {F}}):=\left\{F_{1}\sqcup \cdots \sqcup F_{n}:n\in \mathbb {N} {\text{ and }}F_{1},\ldots ,F_{n}\in {\mathcal {F}}{\text{ are pairwise disjoint }}\right\},} which is the algebra on Ω {\displaystyle \Omega } generated by F . {\displaystyle {\mathcal {F}}.} The archetypal example of a semialgebra that is not also an algebra is the family S d := { ∅ } ∪ { ( a 1 , b 1 ] × ⋯ × ( a d , b d ] : − ∞ ≤ a i < b i ≤ ∞ for all i = 1 , … , d } {\displaystyle {\mathcal {S}}_{d}:=\{\varnothing \}\cup \left\{\left(a_{1},b_{1}\right]\times \cdots \times \left(a_{d},b_{d}\right]~:~-\infty \leq a_{i}<b_{i}\leq \infty {\text{ for all }}i=1,\ldots ,d\right\}} on Ω := R d {\displaystyle \Omega :=\mathbb {R} ^{d}} where ( a , b ] := { x ∈ R : a < x ≤ b } {\displaystyle (a,b]:=\{x\in \mathbb {R} :a<x\leq b\}} for all − ∞ ≤ a < b ≤ ∞ . {\displaystyle -\infty \leq a<b\leq \infty .} Importantly, the two non-strict inequalities ≤ {\displaystyle \,\leq \,} in − ∞ ≤ a i < b i ≤ ∞ {\displaystyle -\infty \leq a_{i}<b_{i}\leq \infty } cannot be replaced with strict inequalities < {\displaystyle \,<\,} since semialgebras must contain the whole underlying set R d ; {\displaystyle \mathbb {R} ^{d};} that is, R d ∈ S d {\displaystyle \mathbb {R} ^{d}\in {\mathcal {S}}_{d}} is a requirement of semialgebras (as is ∅ ∈ S d {\displaystyle \varnothing \in {\mathcal {S}}_{d}} ). If μ {\displaystyle \mu } is finitely additive then it has a unique extension to a set function μ ¯ {\displaystyle {\overline {\mu }}} on algebra ⁡ ( F ) {\displaystyle \operatorname {algebra} ({\mathcal {F}})} defined by sending F 1 ⊔ ⋯ ⊔ F n ∈ algebra ⁡ ( F ) {\displaystyle F_{1}\sqcup \cdots \sqcup F_{n}\in \operatorname {algebra} ({\mathcal {F}})} (where ⊔ {\displaystyle \,\sqcup \,} indicates that these F i ∈ F {\displaystyle F_{i}\in {\mathcal {F}}} are pairwise disjoint) to: μ ¯ ( F 1 ⊔ ⋯ ⊔ F n ) := μ ( F 1 ) + ⋯ + μ ( F n ) . {\displaystyle {\overline {\mu }}\left(F_{1}\sqcup \cdots \sqcup F_{n}\right):=\mu \left(F_{1}\right)+\cdots +\mu \left(F_{n}\right).} This extension μ ¯ {\displaystyle {\overline {\mu }}} will also be finitely additive: for any pairwise disjoint A 1 , … , A n ∈ algebra ⁡ ( F ) , {\displaystyle A_{1},\ldots ,A_{n}\in \operatorname {algebra} ({\mathcal {F}}),} μ ¯ ( A 1 ∪ ⋯ ∪ A n ) = μ ¯ ( A 1 ) + ⋯ + μ ¯ ( A n ) . {\displaystyle {\overline {\mu }}\left(A_{1}\cup \cdots \cup A_{n}\right)={\overline {\mu }}\left(A_{1}\right)+\cdots +{\overline {\mu }}\left(A_{n}\right).} If in addition μ {\displaystyle \mu } is extended real-valued and monotone (which, in particular, will be the case if μ {\displaystyle \mu } is non-negative) then μ ¯ {\displaystyle {\overline {\mu }}} will be monotone and finitely subadditive: for any A , A 1 , … , A n ∈ algebra ⁡ ( F ) {\displaystyle A,A_{1},\ldots ,A_{n}\in \operatorname {algebra} ({\mathcal {F}})} such that A ⊆ A 1 ∪ ⋯ ∪ A n , {\displaystyle A\subseteq A_{1}\cup \cdots \cup A_{n},} μ ¯ ( A ) ≤ μ ¯ ( A 1 ) + ⋯ + μ ¯ ( A n ) .
{\displaystyle {\overline {\mu }}\left(A\right)\leq {\overline {\mu }}\left(A_{1}\right)+\cdots +{\overline {\mu }}\left(A_{n}\right).} === Extending from rings to σ-algebras === If μ : F → [ 0 , ∞ ] {\displaystyle \mu :{\mathcal {F}}\to [0,\infty ]} is a pre-measure on a ring of sets (such as an algebra of sets) F {\displaystyle {\mathcal {F}}} over Ω {\displaystyle \Omega } then μ {\displaystyle \mu } has an extension to a measure μ ¯ : σ ( F ) → [ 0 , ∞ ] {\displaystyle {\overline {\mu }}:\sigma ({\mathcal {F}})\to [0,\infty ]} on the σ-algebra σ ( F ) {\displaystyle \sigma ({\mathcal {F}})} generated by F . {\displaystyle {\mathcal {F}}.} If μ {\displaystyle \mu } is σ-finite then this extension is unique. To define this extension, first extend μ {\displaystyle \mu } to an outer measure μ ∗ {\displaystyle \mu ^{*}} on 2 Ω = ℘ ( Ω ) {\displaystyle 2^{\Omega }=\wp (\Omega )} by μ ∗ ( T ) = inf { ∑ n μ ( S n ) : T ⊆ ∪ n S n with S 1 , S 2 , … ∈ F } {\displaystyle \mu ^{*}(T)=\inf \left\{\sum _{n}\mu \left(S_{n}\right):T\subseteq \cup _{n}S_{n}{\text{ with }}S_{1},S_{2},\ldots \in {\mathcal {F}}\right\}} and then restrict it to the set F M {\displaystyle {\mathcal {F}}_{M}} of μ ∗ {\displaystyle \mu ^{*}} -measurable sets (that is, Carathéodory-measurable sets), which is the set of all M ⊆ Ω {\displaystyle M\subseteq \Omega } such that μ ∗ ( S ) = μ ∗ ( S ∩ M ) + μ ∗ ( S ∩ M c ) for every subset S ⊆ Ω . {\displaystyle \mu ^{*}(S)=\mu ^{*}(S\cap M)+\mu ^{*}(S\cap M^{\mathrm {c} })\quad {\text{ for every subset }}S\subseteq \Omega .} It is a σ {\displaystyle \sigma } -algebra, and μ ∗ {\displaystyle \mu ^{*}} is σ-additive on it, by the Carathéodory lemma. === Restricting outer measures === If μ ∗ : ℘ ( Ω ) → [ 0 , ∞ ] {\displaystyle \mu ^{*}:\wp (\Omega )\to [0,\infty ]} is an outer measure on a set Ω , {\displaystyle \Omega ,} where (by definition) the domain is necessarily the power set ℘ ( Ω ) {\displaystyle \wp (\Omega )} of Ω , {\displaystyle \Omega ,} then a subset M ⊆ Ω {\displaystyle M\subseteq \Omega } is called μ ∗ {\displaystyle \mu ^{*}} –measurable or Carathéodory-measurable if it satisfies the following Carathéodory criterion: μ ∗ ( S ) = μ ∗ ( S ∩ M ) + μ ∗ ( S ∩ M c ) for every subset S ⊆ Ω , {\displaystyle \mu ^{*}(S)=\mu ^{*}(S\cap M)+\mu ^{*}(S\cap M^{\mathrm {c} })\quad {\text{ for every subset }}S\subseteq \Omega ,} where M c := Ω ∖ M {\displaystyle M^{\mathrm {c} }:=\Omega \setminus M} is the complement of M . {\displaystyle M.} The family of all μ ∗ {\displaystyle \mu ^{*}} –measurable subsets is a σ-algebra and the restriction of the outer measure μ ∗ {\displaystyle \mu ^{*}} to this family is a measure. 
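To make the extension μ̄(F1 ⊔ ⋯ ⊔ Fn) := μ(F1) + ⋯ + μ(Fn) concrete, here is a minimal Python sketch (the names Interval, length, and mu_bar are invented for this illustration, not a standard API). It represents a member of the algebra generated by the semialgebra of half-open intervals (a, b] as a finite list of pairwise disjoint intervals, and extends the length set function additively:

from typing import List, Tuple

Interval = Tuple[float, float]  # half-open interval (a, b] with a <= b

def length(iv: Interval) -> float:
    # The finitely additive set function on the semialgebra of intervals.
    a, b = iv
    return b - a

def mu_bar(pieces: List[Interval]) -> float:
    # The unique finitely additive extension to the generated algebra:
    # sum the values of the set function over the pairwise disjoint pieces.
    return sum(length(iv) for iv in pieces)

# (0, 1] ⊔ (2, 2.5] belongs to the generated algebra; its measure is 1.5.
print(mu_bar([(0.0, 1.0), (2.0, 2.5)]))

The caller is responsible for supplying pairwise disjoint pieces, mirroring the hypothesis in the definition above.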
== See also == Absolute continuity (measure theory) – Form of continuity for functions Boolean ring – Algebraic structure in mathematics Cylinder set measure – Way to generate a measure over product spaces Field of sets – Algebraic concept in measure theory, also referred to as an algebra of sets Hadwiger's theorem – Theorem in integral geometry Hahn decomposition theorem – Measurability theorem Invariant measure – Concept in mathematics Lebesgue's decomposition theorem Positive and negative sets Radon–Nikodym theorem – Expressing a measure as an integral of another Riesz–Markov–Kakutani representation theorem – Statement about linear functionals and measures Ring of sets – Family closed under unions and relative complements σ-algebra – Algebraic structure of set algebra Vitali–Hahn–Saks theorem == Notes == == Proofs == == References == Durrett, Richard (2019). Probability: Theory and Examples (PDF). Cambridge Series in Statistical and Probabilistic Mathematics. Vol. 49 (5th ed.). Cambridge New York, NY: Cambridge University Press. ISBN 978-1-108-47368-2. OCLC 1100115281. Retrieved November 5, 2020. Kolmogorov, Andrey; Fomin, Sergei V. (2012) [1957]. Elements of the Theory of Functions and Functional Analysis. Dover Books on Mathematics. New York: Dover Books. ISBN 978-1-61427-304-2. OCLC 912495626. Kolmogorov, A. N.; Fomin, S. V. (1975). Introductory Real Analysis. Dover. ISBN 0-486-61226-0. Royden, Halsey; Fitzpatrick, Patrick (15 January 2010). Real Analysis (4th ed.). Boston: Prentice Hall. ISBN 978-0-13-143747-0. OCLC 456836719. Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (2nd ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277. == Further reading == Sobolev, V.I. (2001) [1994], "Set function", Encyclopedia of Mathematics, EMS Press Regular set function at Encyclopedia of Mathematics
Wikipedia/Set_function
Curve fitting is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points, possibly subject to constraints. Curve fitting can involve either interpolation, where an exact fit to the data is required, or smoothing, in which a "smooth" function is constructed that approximately fits the data. A related topic is regression analysis, which focuses more on questions of statistical inference such as how much uncertainty is present in a curve that is fitted to data observed with random errors. Fitted curves can be used as an aid for data visualization, to infer values of a function where no data are available, and to summarize the relationships among two or more variables. Extrapolation refers to the use of a fitted curve beyond the range of the observed data, and is subject to a degree of uncertainty since it may reflect the method used to construct the curve as much as it reflects the observed data. For linear-algebraic analysis of data, "fitting" usually means trying to find the curve that minimizes the vertical (y-axis) displacement of a point from the curve (e.g., ordinary least squares). However, for graphical and image applications, geometric fitting seeks to provide the best visual fit; which usually means trying to minimize the orthogonal distance to the curve (e.g., total least squares), or to otherwise include both axes of displacement of a point from the curve. Geometric fits are not popular because they usually require non-linear and/or iterative calculations, although they have the advantage of a more aesthetic and geometrically accurate result. == Algebraic fitting of functions to data points == Most commonly, one fits a function of the form y=f(x). === Fitting lines and polynomial functions to data points === The first degree polynomial equation y = a x + b {\displaystyle y=ax+b\;} is a line with slope a. A line will connect any two points, so a first degree polynomial equation is an exact fit through any two points with distinct x coordinates. If the order of the equation is increased to a second degree polynomial, the following results: y = a x 2 + b x + c . {\displaystyle y=ax^{2}+bx+c\;.} This will exactly fit a simple curve to three points. If the order of the equation is increased to a third degree polynomial, the following is obtained: y = a x 3 + b x 2 + c x + d . {\displaystyle y=ax^{3}+bx^{2}+cx+d\;.} This will exactly fit four points. A more general statement would be to say it will exactly fit four constraints. Each constraint can be a point, angle, or curvature (which is the reciprocal of the radius of an osculating circle). Angle and curvature constraints are most often added to the ends of a curve, and in such cases are called end conditions. Identical end conditions are frequently used to ensure a smooth transition between polynomial curves contained within a single spline. Higher-order constraints, such as "the change in the rate of curvature", could also be added. This, for example, would be useful in highway cloverleaf design to understand the rate of change of the forces applied to a car (see jerk), as it follows the cloverleaf, and to set reasonable speed limits, accordingly. The first degree polynomial equation could also be an exact fit for a single point and an angle while the third degree polynomial equation could also be an exact fit for two points, an angle constraint, and a curvature constraint. Many other combinations of constraints are possible for these and for higher order polynomial equations. 
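As a quick illustration of exact fitting, the following Python sketch (the three sample points are invented for the example) recovers the unique second degree polynomial through three points with distinct x coordinates:

import numpy as np

# Three points with distinct x-coordinates determine a unique parabola.
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 2.0, 5.0])

coeffs = np.polyfit(x, y, deg=2)  # degree 2 through 3 points: an exact fit
print(coeffs)                     # ~[1. 0. 1.], i.e. y = x**2 + 1
print(np.polyval(coeffs, x))      # reproduces y up to rounding error

With deg equal to the number of points minus one, the underlying Vandermonde system is square, so the least-squares solution is in fact the exact interpolant.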
If there are more than n + 1 constraints (n being the degree of the polynomial), the polynomial curve can still be run through those constraints. An exact fit to all constraints is not certain (but might happen, for example, in the case of a first degree polynomial exactly fitting three collinear points). In general, however, some method is then needed to evaluate each approximation. The least squares method is one way to compare the deviations. There are several reasons to settle for an approximate fit when it is possible to simply increase the degree of the polynomial equation and get an exact match: Even if an exact match exists, it does not necessarily follow that it can be readily discovered. Depending on the algorithm used there may be a divergent case, where the exact fit cannot be calculated, or it might take too much computer time to find the solution. This situation might require an approximate solution. The effect of averaging out questionable data points in a sample, rather than distorting the curve to fit them exactly, may be desirable. Runge's phenomenon: high order polynomials can be highly oscillatory. If a curve runs through two points A and B, it would be expected that the curve would run somewhat near the midpoint of A and B, as well. This may not happen with high-order polynomial curves; they may even have values that are very large in positive or negative magnitude. With low-order polynomials, the curve is more likely to fall near the midpoint (it's even guaranteed to exactly run through the midpoint on a first degree polynomial). Low-order polynomials tend to be smooth and high order polynomial curves tend to be "lumpy". To define this more precisely, the maximum number of inflection points possible in a polynomial curve is n-2, where n is the order of the polynomial equation. An inflection point is a location on the curve where it switches from a positive radius of curvature to a negative one. We can also say this is where it transitions from "holding water" to "shedding water". Note that it is only "possible" that high order polynomials will be lumpy; they could also be smooth, but there is no guarantee of this, unlike with low order polynomial curves. A fifteenth degree polynomial could have, at most, thirteen inflection points, but could also have eleven, or nine or any odd number down to one. (Polynomials with even numbered degree could have any even number of inflection points from n - 2 down to zero.) The degree of the polynomial curve being higher than needed for an exact fit is undesirable for all the reasons listed previously for high order polynomials, but also leads to a case where there are an infinite number of solutions. For example, a first degree polynomial (a line) constrained by only a single point, instead of the usual two, would give an infinite number of solutions. This brings up the problem of how to compare and choose just one solution, which can be a problem for both software and humans. Because of this, it is usually best to choose as low a degree as possible for an exact match on all constraints, and perhaps an even lower degree, if an approximate fit is acceptable. === Fitting other functions to data points === Other types of curves, such as trigonometric functions (such as sine and cosine), may also be used, in certain cases. In spectroscopy, data may be fitted with Gaussian, Lorentzian, Voigt and related functions. In biology, ecology, demography, epidemiology, and many other disciplines, the growth of a population, the spread of infectious disease, etc. 
can be fitted using the logistic function. In agriculture, the inverted logistic sigmoid function (S-curve) is used to describe the relation between crop yield and growth factors. The blue figure was made by a sigmoid regression of data measured in farm lands. It can be seen that initially, i.e. at low soil salinity, the crop yield reduces slowly at increasing soil salinity, while thereafter the decrease progresses faster. == Geometric fitting of plane curves to data points == If a function of the form y = f ( x ) {\displaystyle y=f(x)} cannot be postulated, one can still try to fit a plane curve. Other types of curves, such as conic sections (circular, elliptical, parabolic, and hyperbolic arcs) or trigonometric functions (such as sine and cosine), may also be used, in certain cases. For example, trajectories of objects under the influence of gravity follow a parabolic path, when air resistance is ignored. Hence, matching trajectory data points to a parabolic curve would make sense. Tides follow sinusoidal patterns, hence tidal data points should be matched to a sine wave, or the sum of two sine waves of different periods, if the effects of the Moon and Sun are both considered. For a parametric curve, it is effective to fit each of its coordinates as a separate function of arc length; assuming that data points can be ordered, the chord distance may be used. === Fitting a circle by geometric fit === Coope approaches the problem of finding the best visual fit of a circle to a set of 2D data points. The method elegantly transforms the ordinarily non-linear problem into a linear problem that can be solved without using iterative numerical methods, and is hence much faster than previous techniques. (A short code sketch of this linearization appears at the end of this article.) === Fitting an ellipse by geometric fit === The above technique is extended to general ellipses by adding a non-linear step, resulting in a method that is fast, yet finds visually pleasing ellipses of arbitrary orientation and displacement. == Fitting surfaces == Note that while this discussion was in terms of 2D curves, much of this logic also extends to 3D surfaces, each patch of which is defined by a net of curves in two parametric directions, typically called u and v. A surface may be composed of one or more surface patches in each direction. == Software == Many statistical packages such as R and numerical software such as gnuplot, GNU Scientific Library, Igor Pro, MLAB, Maple, MATLAB, TK Solver 6.0, Scilab, Mathematica, GNU Octave, and SciPy include commands for doing curve fitting in a variety of scenarios. There are also programs specifically written to do curve fitting; they can be found in the lists of statistical and numerical-analysis programs as well as in Category:Regression and curve fitting software. == See also == == References == == Further reading == N. Chernov (2010), Circular and linear regression: Fitting circles and lines by least squares, Chapman & Hall/CRC, Monographs on Statistics and Applied Probability, Volume 117 (256 pp.).
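As promised above, here is a minimal Python sketch in the spirit of Coope's linearization (an illustration of the change of variables, not a reproduction of the published algorithm; the synthetic test data and the name fit_circle are choices made for this example). Writing |p − c|² = r² as 2c·p + t = |p|² with t = r² − |c|² turns circle fitting into ordinary linear least squares:

import numpy as np

def fit_circle(points):
    # Solve the linear least-squares system for (cx, cy, t), then
    # recover the radius from t = r**2 - cx**2 - cy**2.
    pts = np.asarray(points, dtype=float)
    B = np.column_stack([2.0 * pts[:, 0], 2.0 * pts[:, 1], np.ones(len(pts))])
    d = (pts ** 2).sum(axis=1)
    (cx, cy, t), *_ = np.linalg.lstsq(B, d, rcond=None)
    return (cx, cy), np.sqrt(t + cx ** 2 + cy ** 2)

# Noisy samples from a circle of radius 2 centred at (1, -1).
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 50)
pts = np.column_stack([1 + 2 * np.cos(theta), -1 + 2 * np.sin(theta)])
pts += rng.normal(scale=0.05, size=pts.shape)
print(fit_circle(pts))  # centre ~(1, -1), radius ~2

No iteration is required, which is what makes this family of methods fast.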
Wikipedia/Function_fitting
In mathematics, a multivalued function, multiple-valued function, many-valued function, or multifunction, is a function that has two or more values in its range for at least one point in its domain. It is a set-valued function with additional properties depending on context; some authors do not distinguish between set-valued functions and multifunctions, but English Wikipedia currently does, having a separate article for each. A multivalued function f : X → Y between two sets is a subset Γ f ⊆ X × Y . {\displaystyle \Gamma _{f}\ \subseteq \ X\times Y.} Write f(x) for the set of those y ∈ Y with (x,y) ∈ Γf. If f is an ordinary function, it is a multivalued function by taking its graph Γ f = { ( x , f ( x ) ) : x ∈ X } . {\displaystyle \Gamma _{f}\ =\ \{(x,f(x))\ :\ x\in X\}.} Ordinary functions are called single-valued functions to distinguish them from multivalued ones. == Motivation == The term multivalued function originated in complex analysis, from analytic continuation. It often occurs that one knows the value of a complex analytic function f ( z ) {\displaystyle f(z)} in some neighbourhood of a point z = a {\displaystyle z=a} . This is the case for functions defined by the implicit function theorem or by a Taylor series around z = a {\displaystyle z=a} . In such a situation, one may extend the domain of the single-valued function f ( z ) {\displaystyle f(z)} along curves in the complex plane starting at a {\displaystyle a} . In doing so, one finds that the value of the extended function at a point z = b {\displaystyle z=b} depends on the chosen curve from a {\displaystyle a} to b {\displaystyle b} ; since none of the new values is more natural than the others, all of them are incorporated into a multivalued function. For example, let f ( z ) = z {\displaystyle f(z)={\sqrt {z}}\,} be the usual square root function on positive real numbers. One may extend its domain to a neighbourhood of z = 1 {\displaystyle z=1} in the complex plane, and then further along curves starting at z = 1 {\displaystyle z=1} , so that the values along a given curve vary continuously from 1 = 1 {\displaystyle {\sqrt {1}}=1} . Extending to negative real numbers, one gets two opposite values for the square root (for example, ±i for −1), depending on whether the domain has been extended through the upper or the lower half of the complex plane. This phenomenon is very frequent, occurring for nth roots, logarithms, and inverse trigonometric functions. To define a single-valued function from a complex multivalued function, one may distinguish one of the multiple values as the principal value, producing a single-valued function on the whole plane which is discontinuous along certain boundary curves. Alternatively, dealing with the multivalued function allows having something that is everywhere continuous, at the cost of possible value changes when one follows a closed path (monodromy). These problems are resolved in the theory of Riemann surfaces: to consider a multivalued function f ( z ) {\displaystyle f(z)} as an ordinary function without discarding any values, one multiplies the domain into a many-layered covering space, a manifold which is the Riemann surface associated to f ( z ) {\displaystyle f(z)} . == Inverses of functions == If f : X → Y is an ordinary function, then its inverse is the multivalued function Γ f − 1 ⊆ Y × X {\displaystyle \Gamma _{f^{-1}}\ \subseteq \ Y\times X} obtained from Γf by swapping coordinates: Γ f − 1 = { ( y , x ) : ( x , y ) ∈ Γ f } . {\displaystyle \Gamma _{f^{-1}}\ =\ \{(y,x)\ :\ (x,y)\in \Gamma _{f}\}.} 
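A multivalued function given this way, as a subset of X × Y, can be prototyped directly. The following Python sketch (a finite toy domain; the names graph and f_inv are invented for the illustration) builds the inverse of f(x) = x² by swapping the coordinates of its graph, so the inverse is multivalued exactly where f fails to be injective:

# Graph of the single-valued f(x) = x**2 on a finite domain X.
X = range(-3, 4)
graph = {(x, x * x) for x in X}

# The inverse multivalued function: swap the coordinates of the graph.
inverse = {(y, x) for (x, y) in graph}

def f_inv(y):
    # f_inv(y) is the *set* of all x with (y, x) in the inverse graph.
    return {x for (yy, x) in inverse if yy == y}

print(f_inv(4))  # {2, -2}: two values, so the inverse is multivalued
print(f_inv(0))  # {0}: zero has only the one square root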
When f is a differentiable function between manifolds, the inverse function theorem gives conditions for this to be single-valued locally in X. For example, the complex logarithm log(z) is the multivalued inverse of the exponential function ez : C → C×, with graph Γ log ⁡ ( z ) = { ( z , w ) : e w = z } ⊆ C × × C . {\displaystyle \Gamma _{\log(z)}\ =\ \{(z,w)\ :\ e^{w}=z\}\ \subseteq \ \mathbf {C} ^{\times }\times \mathbf {C} .} It is not single-valued: given a single w with e w = z , {\displaystyle e^{w}=z,} we have log ⁡ ( z ) = w + 2 π i Z . {\displaystyle \log(z)\ =\ w\ +\ 2\pi i\mathbf {Z} .} Given any holomorphic function on an open subset of the complex plane C, its analytic continuation is always a multivalued function. == Concrete examples == Every real number greater than zero has two real square roots, so that square root may be considered a multivalued function. For example, we may write 4 = ± 2 = { 2 , − 2 } {\displaystyle {\sqrt {4}}=\pm 2=\{2,-2\}} ; although zero has only one square root, 0 = { 0 } {\displaystyle {\sqrt {0}}=\{0\}} . Note that x {\displaystyle {\sqrt {x}}} usually denotes only the principal square root of x {\displaystyle x} . Each nonzero complex number has two square roots, three cube roots, and in general n nth roots. The only nth root of 0 is 0. The complex logarithm function is multiple-valued. The values assumed by log ⁡ ( a + b i ) {\displaystyle \log(a+bi)} for real numbers a {\displaystyle a} and b {\displaystyle b} are log ⁡ a 2 + b 2 + i arg ⁡ ( a + b i ) + 2 π n i {\displaystyle \log {\sqrt {a^{2}+b^{2}}}+i\arg(a+bi)+2\pi ni} for all integers n {\displaystyle n} . Inverse trigonometric functions are multiple-valued because trigonometric functions are periodic. We have tan ⁡ ( π 4 ) = tan ⁡ ( 5 π 4 ) = tan ⁡ ( − 3 π 4 ) = tan ⁡ ( ( 2 n + 1 ) π 4 ) = ⋯ = 1. {\displaystyle \tan \left({\tfrac {\pi }{4}}\right)=\tan \left({\tfrac {5\pi }{4}}\right)=\tan \left({\tfrac {-3\pi }{4}}\right)=\tan \left({\tfrac {(2n+1)\pi }{4}}\right)=\cdots =1.} As a consequence, arctan(1) is intuitively related to several values: π/4, 5π/4, −3π/4, and so on. We can treat arctan as a single-valued function by restricting the domain of tan x to −π/2 < x < π/2 – a domain over which tan x is monotonically increasing. Thus, the range of arctan(x) becomes −π/2 < y < π/2. These values from a restricted domain are called principal values. The antiderivative can be considered as a multivalued function. The antiderivative of a function is the set of functions whose derivative is that function. The constant of integration follows from the fact that the derivative of a constant function is 0. Inverse hyperbolic functions over the complex domain are multiple-valued because hyperbolic functions are periodic along the imaginary axis. Over the reals, they are single-valued, except for arcosh and arsech. These are all examples of multivalued functions that come about from non-injective functions. Since the original functions do not preserve all the information of their inputs, they are not reversible. Often, the restriction of a multivalued function is a partial inverse of the original function. == Branch points == Multivalued functions of a complex variable have branch points. For example, for the nth root and logarithm functions, 0 is a branch point; for the arctangent function, the imaginary units i and −i are branch points. Using the branch points, these functions may be redefined to be single-valued functions, by restricting the range. 
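The multivaluedness of the complex logarithm is easy to observe numerically. In the Python sketch below, the standard cmath.log returns the principal value (a restriction of the range, as just described), and every other branch w + 2πik exponentiates back to the same z:

import cmath

z = -1 + 0j
w = cmath.log(z)  # principal value: pi*1j
for k in range(-2, 3):
    wk = w + 2j * cmath.pi * k  # another value of the multivalued log(z)
    print(wk, cmath.exp(wk))    # exp(wk) ~ -1 for every k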
A suitable interval may be found through use of a branch cut, a kind of curve that connects pairs of branch points, thus reducing the multilayered Riemann surface of the function to a single layer. As in the case with real functions, the restricted range may be called the principal branch of the function. == Applications == In physics, multivalued functions play an increasingly important role. They form the mathematical basis for Dirac's magnetic monopoles, for the theory of defects in crystals and the resulting plasticity of materials, for vortices in superfluids and superconductors, and for phase transitions in these systems, for instance melting and quark confinement. They are the origin of gauge field structures in many branches of physics. == See also == Relation (mathematics) Function (mathematics) Binary relation Set-valued function == Further reading == H. Kleinert, Multivalued Fields in Condensed Matter, Electrodynamics, and Gravitation, World Scientific (Singapore, 2008) (also available online) H. Kleinert, Gauge Fields in Condensed Matter, Vol. I: Superflow and Vortex Lines, 1–742, Vol. II: Stresses and Defects, 743–1456, World Scientific, Singapore, 1989 (also available online: Vol. I and Vol. II) == References ==
Wikipedia/Multi-valued_function
Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers. It is helpful in many branches of mathematics, including algebraic geometry, number theory, analytic combinatorics, and applied mathematics, as well as in physics, including the branches of hydrodynamics, thermodynamics, quantum mechanics, and twistor theory. By extension, use of complex analysis also has applications in engineering fields such as nuclear, aerospace, mechanical and electrical engineering. As a differentiable function of a complex variable is equal to the sum function given by its Taylor series (that is, it is analytic), complex analysis is particularly concerned with analytic functions of a complex variable, that is, holomorphic functions. The concept can be extended to functions of several complex variables. Complex analysis is contrasted with real analysis, which deals with the study of real numbers and functions of a real variable. == History == Complex analysis is one of the classical branches in mathematics, with roots in the 18th century and just prior. Important mathematicians associated with complex numbers include Euler, Gauss, Riemann, Cauchy, Weierstrass, and many more in the 20th century. Complex analysis, in particular the theory of conformal mappings, has many physical applications and is also used throughout analytic number theory. In modern times, it has become very popular through a new boost from complex dynamics and the pictures of fractals produced by iterating holomorphic functions. Another important application of complex analysis is in string theory, which examines conformal invariants in quantum field theory. == Complex functions == A complex function is a function from complex numbers to complex numbers. In other words, it is a function that has a (not necessarily proper) subset of the complex numbers as a domain and the complex numbers as a codomain. Complex functions are generally assumed to have a domain that contains a nonempty open subset of the complex plane. For any complex function, the values z {\displaystyle z} from the domain and their images f ( z ) {\displaystyle f(z)} in the range may be separated into real and imaginary parts: z = x + i y and f ( z ) = f ( x + i y ) = u ( x , y ) + i v ( x , y ) , {\displaystyle z=x+iy\quad {\text{ and }}\quad f(z)=f(x+iy)=u(x,y)+iv(x,y),} where x , y , u ( x , y ) , v ( x , y ) {\displaystyle x,y,u(x,y),v(x,y)} are all real-valued. In other words, a complex function f : C → C {\displaystyle f:\mathbb {C} \to \mathbb {C} } may be decomposed into u : R 2 → R {\displaystyle u:\mathbb {R} ^{2}\to \mathbb {R} \quad } and v : R 2 → R , {\displaystyle \quad v:\mathbb {R} ^{2}\to \mathbb {R} ,} i.e., into two real-valued functions ( u {\displaystyle u} , v {\displaystyle v} ) of two real variables ( x {\displaystyle x} , y {\displaystyle y} ). Similarly, any complex-valued function f on an arbitrary set X can be considered as (that is, it is isomorphic to, and in that sense identified with) an ordered pair of two real-valued functions: (Re f, Im f) or, alternatively, as a vector-valued function from X into R 2 . {\displaystyle \mathbb {R} ^{2}.} Some properties of complex-valued functions (such as continuity) are nothing more than the corresponding properties of vector-valued functions of two real variables. 
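The decomposition f(x + iy) = u(x, y) + i v(x, y) is straightforward to check numerically. A small Python sketch for the example f(z) = z², for which u = x² − y² and v = 2xy (the example function and the test point are arbitrary choices made for this illustration):

def f(z):
    return z * z  # example complex function

x, y = 1.5, -0.5
w = f(complex(x, y))
u, v = w.real, w.imag
print(u, v)                    # 2.0 -1.5
print(x**2 - y**2, 2 * x * y)  # the same numbers: u and v in closed form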
Other concepts of complex analysis, such as differentiability, are direct generalizations of the similar concepts for real functions, but may have very different properties. In particular, every differentiable complex function is analytic (see next section), and two differentiable functions that are equal in a neighborhood of a point are equal on the intersection of their domain (if the domains are connected). The latter property is the basis of the principle of analytic continuation which allows extending every real analytic function in a unique way for getting a complex analytic function whose domain is the whole complex plane with a finite number of curve arcs removed. Many basic and special complex functions are defined in this way, including the complex exponential function, complex logarithm functions, and trigonometric functions. == Holomorphic functions == Complex functions that are differentiable at every point of an open subset Ω {\displaystyle \Omega } of the complex plane are said to be holomorphic on Ω {\displaystyle \Omega } . In the context of complex analysis, the derivative of f {\displaystyle f} at z 0 {\displaystyle z_{0}} is defined to be f ′ ( z 0 ) = lim z → z 0 f ( z ) − f ( z 0 ) z − z 0 . {\displaystyle f'(z_{0})=\lim _{z\to z_{0}}{\frac {f(z)-f(z_{0})}{z-z_{0}}}.} Superficially, this definition is formally analogous to that of the derivative of a real function. However, complex derivatives and differentiable functions behave in significantly different ways compared to their real counterparts. In particular, for this limit to exist, the value of the difference quotient must approach the same complex number, regardless of the manner in which we approach z 0 {\displaystyle z_{0}} in the complex plane. Consequently, complex differentiability has much stronger implications than real differentiability. For instance, holomorphic functions are infinitely differentiable, whereas the existence of the nth derivative need not imply the existence of the (n + 1)th derivative for real functions. Furthermore, all holomorphic functions satisfy the stronger condition of analyticity, meaning that the function is, at every point in its domain, locally given by a convergent power series. In essence, this means that functions holomorphic on Ω {\displaystyle \Omega } can be approximated arbitrarily well by polynomials in some neighborhood of every point in Ω {\displaystyle \Omega } . This stands in sharp contrast to differentiable real functions; there are infinitely differentiable real functions that are nowhere analytic; see Non-analytic smooth function § A smooth function which is nowhere real analytic. Most elementary functions, including the exponential function, the trigonometric functions, and all polynomial functions, extended appropriately to complex arguments as functions C → C {\displaystyle \mathbb {C} \to \mathbb {C} } , are holomorphic over the entire complex plane, making them entire functions, while rational functions p / q {\displaystyle p/q} , where p and q are polynomials, are holomorphic on domains that exclude points where q is zero. Such functions that are holomorphic everywhere except a set of isolated points are known as meromorphic functions. On the other hand, the functions z ↦ ℜ ( z ) {\displaystyle z\mapsto \Re (z)} , z ↦ | z | {\displaystyle z\mapsto |z|} , and z ↦ z ¯ {\displaystyle z\mapsto {\bar {z}}} are not holomorphic anywhere on the complex plane, as can be shown by their failure to satisfy the Cauchy–Riemann conditions (see below). 
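That failure is visible numerically: the difference quotient for z ↦ z̄ depends on the direction of approach (along the direction e^{iθ} it equals e^{−2iθ}), while for the holomorphic z ↦ z² it is close to 2z0 from every direction. A minimal Python sketch (the step size 1e-8 and the test point are arbitrary choices):

import cmath

def quotient(f, z0, h):
    return (f(z0 + h) - f(z0)) / h

z0 = 1 + 1j
for theta in (0.0, cmath.pi / 4, cmath.pi / 2):
    h = 1e-8 * cmath.exp(1j * theta)  # approach z0 along the angle theta
    print(quotient(lambda z: z.conjugate(), z0, h),  # varies: e^(-2i*theta)
          quotient(lambda z: z * z, z0, h))          # always ~ 2*z0 = 2+2j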
An important property of holomorphic functions is the relationship between the partial derivatives of their real and imaginary components, known as the Cauchy–Riemann conditions. If f : C → C {\displaystyle f:\mathbb {C} \to \mathbb {C} } , defined by f ( z ) = f ( x + i y ) = u ( x , y ) + i v ( x , y ) {\displaystyle f(z)=f(x+iy)=u(x,y)+iv(x,y)} , where x , y , u ( x , y ) , v ( x , y ) ∈ R {\displaystyle x,y,u(x,y),v(x,y)\in \mathbb {R} } , is holomorphic on a region Ω {\displaystyle \Omega } , then for all z 0 ∈ Ω {\displaystyle z_{0}\in \Omega } , ∂ f ∂ z ¯ ( z 0 ) = 0 , where ∂ ∂ z ¯ := 1 2 ( ∂ ∂ x + i ∂ ∂ y ) . {\displaystyle {\frac {\partial f}{\partial {\bar {z}}}}(z_{0})=0,\ {\text{where }}{\frac {\partial }{\partial {\bar {z}}}}\mathrel {:=} {\frac {1}{2}}\left({\frac {\partial }{\partial x}}+i{\frac {\partial }{\partial y}}\right).} In terms of the real and imaginary parts of the function, u and v, this is equivalent to the pair of equations u x = v y {\displaystyle u_{x}=v_{y}} and u y = − v x {\displaystyle u_{y}=-v_{x}} , where the subscripts indicate partial differentiation. However, the Cauchy–Riemann conditions do not characterize holomorphic functions, without additional continuity conditions (see Looman–Menchoff theorem). Holomorphic functions exhibit some remarkable features. For instance, Picard's theorem asserts that the range of an entire function can take only three possible forms: C {\displaystyle \mathbb {C} } , C ∖ { z 0 } {\displaystyle \mathbb {C} \setminus \{z_{0}\}} , or { z 0 } {\displaystyle \{z_{0}\}} for some z 0 ∈ C {\displaystyle z_{0}\in \mathbb {C} } . In other words, if two distinct complex numbers z {\displaystyle z} and w {\displaystyle w} are not in the range of an entire function f {\displaystyle f} , then f {\displaystyle f} is a constant function. Moreover, a holomorphic function on a connected open set is determined by its restriction to any nonempty open subset. == Conformal map == == Major results == One of the central tools in complex analysis is the line integral. The line integral around a closed path of a function that is holomorphic everywhere inside the area bounded by the closed path is always zero, as is stated by the Cauchy integral theorem. The values of such a holomorphic function inside a disk can be computed by a path integral on the disk's boundary (as shown in Cauchy's integral formula). Path integrals in the complex plane are often used to determine complicated real integrals, and here the theory of residues among others is applicable (see methods of contour integration). A "pole" (or isolated singularity) of a function is a point where the function's value becomes unbounded, or "blows up". If a function has such a pole, then one can compute the function's residue there, which can be used to compute path integrals involving the function; this is the content of the powerful residue theorem. The remarkable behavior of holomorphic functions near essential singularities is described by Picard's theorem. Functions that have only poles but no essential singularities are called meromorphic. Laurent series are the complex-valued equivalent to Taylor series, but can be used to study the behavior of functions near singularities through infinite sums of more well understood functions, such as polynomials. A bounded function that is holomorphic in the entire complex plane must be constant; this is Liouville's theorem. 
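The integral theorems above can be spot-checked with a crude numerical contour integral. The Python sketch below (a plain Riemann sum over the unit circle; the step count is an arbitrary accuracy knob) gives approximately 2πi for 1/z, which has a pole of residue 1 at the origin, and approximately 0 for z², which is holomorphic inside the contour:

import cmath

def unit_circle_integral(f, n=20000):
    # Parametrize z(t) = e^{it}, so dz = i e^{it} dt for t in [0, 2*pi).
    dt = 2 * cmath.pi / n
    total = 0j
    for k in range(n):
        z = cmath.exp(1j * k * dt)
        total += f(z) * 1j * z * dt
    return total

print(unit_circle_integral(lambda z: 1 / z))  # ~ 2*pi*1j (residue theorem)
print(unit_circle_integral(lambda z: z * z))  # ~ 0 (Cauchy integral theorem)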
Liouville's theorem can be used to provide a natural and short proof of the fundamental theorem of algebra, which states that the field of complex numbers is algebraically closed. If a function is holomorphic throughout a connected domain then its values are fully determined by its values on any smaller subdomain. The function on the larger domain is said to be analytically continued from its values on the smaller domain. This allows the extension of the definition of functions, such as the Riemann zeta function, which are initially defined in terms of infinite sums that converge only on limited domains, to almost the entire complex plane. Sometimes, as in the case of the natural logarithm, it is impossible to analytically continue a holomorphic function to a non-simply connected domain in the complex plane but it is possible to extend it to a holomorphic function on a closely related surface known as a Riemann surface. All this refers to complex analysis in one variable. There is also a very rich theory of complex analysis in more than one complex dimension in which the analytic properties such as power series expansion carry over whereas most of the geometric properties of holomorphic functions in one complex dimension (such as conformality) do not carry over. The Riemann mapping theorem about the conformal relationship of certain domains in the complex plane, which may be the most important result in the one-dimensional theory, fails dramatically in higher dimensions. A major application of certain complex spaces is in quantum mechanics as wave functions. == See also == Complex geometry Hypercomplex analysis Vector calculus List of complex analysis topics Monodromy theorem Riemann–Roch theorem Runge's theorem == References == == Sources == Ablowitz, M. J. & A. S. Fokas, Complex Variables: Introduction and Applications (Cambridge, 2003). Ahlfors, L., Complex Analysis (McGraw-Hill, 1953). Cartan, H., Théorie élémentaire des fonctions analytiques d'une ou plusieurs variables complexes. (Hermann, 1961). English translation, Elementary Theory of Analytic Functions of One or Several Complex Variables. (Addison-Wesley, 1963). Carathéodory, C., Funktionentheorie. (Birkhäuser, 1950). English translation, Theory of Functions of a Complex Variable (Chelsea, 1954). [2 volumes.] Carrier, G. F., M. Krook, & C. E. Pearson, Functions of a Complex Variable: Theory and Technique. (McGraw-Hill, 1966). Conway, J. B., Functions of One Complex Variable. (Springer, 1973). Fisher, S., Complex Variables. (Wadsworth & Brooks/Cole, 1990). Forsyth, A., Theory of Functions of a Complex Variable (Cambridge, 1893). Freitag, E. & R. Busam, Funktionentheorie. (Springer, 1995). English translation, Complex Analysis. (Springer, 2005). Goursat, E., Cours d'analyse mathématique, tome 2. (Gauthier-Villars, 1905). English translation, A course of mathematical analysis, vol. 2, part 1: Functions of a complex variable. (Ginn, 1916). Henrici, P., Applied and Computational Complex Analysis (Wiley). [Three volumes: 1974, 1977, 1986.] Kreyszig, E., Advanced Engineering Mathematics. (Wiley, 1962). Lavrentyev, M. & B. Shabat, Методы теории функций комплексного переменного. (Methods of the Theory of Functions of a Complex Variable). (1951, in Russian). Markushevich, A. I., Theory of Functions of a Complex Variable, (Prentice-Hall, 1965). [Three volumes.] Marsden & Hoffman, Basic Complex Analysis. (Freeman, 1973). Needham, T., Visual Complex Analysis. (Oxford, 1997). http://usf.usfca.edu/vca/ Remmert, R., Theory of Complex Functions. (Springer, 1990). 
Rudin, W., Real and Complex Analysis. (McGraw-Hill, 1966). Shaw, W. T., Complex Analysis with Mathematica (Cambridge, 2006). Stein, E. & R. Shakarchi, Complex Analysis. (Princeton, 2003). Sveshnikov, A. G. & A. N. Tikhonov, Теория функций комплексной переменной. (Nauka, 1967). English translation, The Theory Of Functions Of A Complex Variable (MIR, 1978). Titchmarsh, E. C., The Theory of Functions. (Oxford, 1932). Wegert, E., Visual Complex Functions. (Birkhäuser, 2012). Whittaker, E. T. & G. N. Watson, A Course of Modern Analysis. (Cambridge, 1902). 3rd ed. (1920) == External links == Wolfram Research's MathWorld Complex Analysis Page
Wikipedia/Complex_function
In mathematics, the error function (also called the Gauss error function), often denoted by erf, is a function e r f : C → C {\displaystyle \mathrm {erf} :\mathbb {C} \to \mathbb {C} } defined as: erf ⁡ z = 2 π ∫ 0 z e − t 2 d t . {\displaystyle \operatorname {erf} z={\frac {2}{\sqrt {\pi }}}\int _{0}^{z}e^{-t^{2}}\,\mathrm {d} t.} The integral here is a complex contour integral which is path-independent because exp ⁡ ( − t 2 ) {\displaystyle \exp(-t^{2})} is holomorphic on the whole complex plane C {\displaystyle \mathbb {C} } . In many applications, the function argument is a real number, in which case the function value is also real. In some old texts, the error function is defined without the factor of 2 π {\displaystyle {\frac {2}{\sqrt {\pi }}}} . This nonelementary integral is a sigmoid function that occurs often in probability, statistics, and partial differential equations. In statistics, for non-negative real values of x, the error function has the following interpretation: for a real random variable Y that is normally distributed with mean 0 and standard deviation 1 2 {\displaystyle {\frac {1}{\sqrt {2}}}} , erf x is the probability that Y falls in the range [−x, x]. Two closely related functions are the complementary error function e r f c : C → C , {\displaystyle \mathrm {erfc} :\mathbb {C} \to \mathbb {C} ,} defined as erfc ⁡ z = 1 − erf ⁡ z , {\displaystyle \operatorname {erfc} z=1-\operatorname {erf} z,} and the imaginary error function e r f i : C → C , {\displaystyle \mathrm {erfi} :\mathbb {C} \to \mathbb {C} ,} defined as erfi ⁡ z = − i erf ⁡ i z , {\displaystyle \operatorname {erfi} z=-i\operatorname {erf} iz,} where i is the imaginary unit. == Name == The name "error function" and its abbreviation erf were proposed by J. W. L. Glaisher in 1871 on account of its connection with "the theory of Probability, and notably the theory of Errors." The error function complement was also discussed by Glaisher in a separate publication in the same year. For the "law of facility" of errors whose density is given by f ( x ) = ( c π ) 1 / 2 e − c x 2 {\displaystyle f(x)=\left({\frac {c}{\pi }}\right)^{1/2}e^{-cx^{2}}} (the normal distribution), Glaisher calculates the probability of an error lying between p and q as: ( c π ) 1 2 ∫ p q e − c x 2 d x = 1 2 ( erf ⁡ ( q c ) − erf ⁡ ( p c ) ) . {\displaystyle \left({\frac {c}{\pi }}\right)^{\frac {1}{2}}\int _{p}^{q}e^{-cx^{2}}\,\mathrm {d} x={\tfrac {1}{2}}\left(\operatorname {erf} \left(q{\sqrt {c}}\right)-\operatorname {erf} \left(p{\sqrt {c}}\right)\right).} == Applications == When the results of a series of measurements are described by a normal distribution with standard deviation σ and expected value 0, then erf(a/(σ√2)) is the probability that the error of a single measurement lies between −a and +a, for positive a. This is useful, for example, in determining the bit error rate of a digital communication system. The error and complementary error functions occur, for example, in solutions of the heat equation when boundary conditions are given by the Heaviside step function. The error function and its approximations can be used to estimate results that hold with high probability or with low probability. 
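The probabilistic interpretation above is simple to spot-check. The Python sketch below (sample size and seed are arbitrary choices) compares a Monte Carlo estimate of the probability that a normal variable with mean 0 and standard deviation 1/√2 falls in [−x, x] against the library value of erf x:

import math
import random

random.seed(1)
x = 0.8
sigma = 1 / math.sqrt(2)
n = 10**6
hits = sum(abs(random.gauss(0.0, sigma)) <= x for _ in range(n))
print(hits / n)     # Monte Carlo estimate of P(-x <= Y <= x)
print(math.erf(x))  # ~0.742101, matching to Monte Carlo accuracy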
Given a random variable X ~ Norm[μ,σ] (a normal distribution with mean μ and standard deviation σ) and a constant L < μ, it can be shown via integration by substitution: Pr [ X ≤ L ] = 1 2 + 1 2 erf ⁡ L − μ 2 σ ≈ A exp ⁡ ( − B ( L − μ σ ) 2 ) {\displaystyle {\begin{aligned}\Pr[X\leq L]&={\frac {1}{2}}+{\frac {1}{2}}\operatorname {erf} {\frac {L-\mu }{{\sqrt {2}}\sigma }}\\&\approx A\exp \left(-B\left({\frac {L-\mu }{\sigma }}\right)^{2}\right)\end{aligned}}} where A and B are certain numeric constants. If L is sufficiently far from the mean, specifically μ − L ≥ σ√ln k, then: Pr [ X ≤ L ] ≤ A exp ⁡ ( − B ln ⁡ k ) = A k B {\displaystyle \Pr[X\leq L]\leq A\exp(-B\ln {k})={\frac {A}{k^{B}}}} so the probability goes to 0 as k → ∞. The probability for X being in the interval [La, Lb] can be derived as Pr [ L a ≤ X ≤ L b ] = ∫ L a L b 1 2 π σ exp ⁡ ( − ( x − μ ) 2 2 σ 2 ) d x = 1 2 ( erf ⁡ L b − μ 2 σ − erf ⁡ L a − μ 2 σ ) . {\displaystyle {\begin{aligned}\Pr[L_{a}\leq X\leq L_{b}]&=\int _{L_{a}}^{L_{b}}{\frac {1}{{\sqrt {2\pi }}\sigma }}\exp \left(-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}\right)\,\mathrm {d} x\\&={\frac {1}{2}}\left(\operatorname {erf} {\frac {L_{b}-\mu }{{\sqrt {2}}\sigma }}-\operatorname {erf} {\frac {L_{a}-\mu }{{\sqrt {2}}\sigma }}\right).\end{aligned}}} == Properties == The property erf (−z) = −erf z means that the error function is an odd function. This directly results from the fact that the integrand e−t2 is an even function (the antiderivative of an even function which is zero at the origin is an odd function and vice versa). Since the error function is an entire function which takes real numbers to real numbers, for any complex number z: erf ⁡ z ¯ = erf ⁡ z ¯ {\displaystyle \operatorname {erf} {\overline {z}}={\overline {\operatorname {erf} z}}} where z ¯ {\displaystyle {\overline {z}}} denotes the complex conjugate of z {\displaystyle z} . The integrand f = exp(−z2) and f = erf z are shown in the complex z-plane in the figures at right with domain coloring. The error function at +∞ is exactly 1 (see Gaussian integral). On the real axis, erf z approaches unity at z → +∞ and −1 at z → −∞. On the imaginary axis, it tends to ±i∞. === Taylor series === The error function is an entire function; it has no singularities (except that at infinity) and its Taylor expansion always converges. For x ≫ 1, however, cancellation of leading terms makes the Taylor expansion impractical. The defining integral cannot be evaluated in closed form in terms of elementary functions (see Liouville's theorem), but by expanding the integrand e−z2 into its Maclaurin series and integrating term by term, one obtains the error function's Maclaurin series as: erf ⁡ z = 2 π ∑ n = 0 ∞ ( − 1 ) n z 2 n + 1 n ! ( 2 n + 1 ) = 2 π ( z − z 3 3 + z 5 10 − z 7 42 + z 9 216 − ⋯ ) {\displaystyle {\begin{aligned}\operatorname {erf} z&={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }{\frac {(-1)^{n}z^{2n+1}}{n!(2n+1)}}\\[6pt]&={\frac {2}{\sqrt {\pi }}}\left(z-{\frac {z^{3}}{3}}+{\frac {z^{5}}{10}}-{\frac {z^{7}}{42}}+{\frac {z^{9}}{216}}-\cdots \right)\end{aligned}}} which holds for every complex number z. The denominator terms are sequence A007680 in the OEIS. 
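The Maclaurin series translates directly into code. A minimal Python sketch (30 terms is an arbitrary truncation, adequate for moderate |z|; for large real x the cancellation problem noted above makes this approach inaccurate):

import math

def erf_series(z, terms=30):
    # erf z = (2/sqrt(pi)) * sum_n (-1)^n z^(2n+1) / (n! (2n+1))
    s = sum((-1) ** n * z ** (2 * n + 1) / (math.factorial(n) * (2 * n + 1))
            for n in range(terms))
    return 2 / math.sqrt(math.pi) * s

print(erf_series(1.0))  # ~0.8427007929
print(math.erf(1.0))    # the library value agrees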
For iterative calculation of the above series, the following alternative formulation may be useful: erf ⁡ z = 2 π ∑ n = 0 ∞ ( z ∏ k = 1 n − ( 2 k − 1 ) z 2 k ( 2 k + 1 ) ) = 2 π ∑ n = 0 ∞ z 2 n + 1 ∏ k = 1 n − z 2 k {\displaystyle {\begin{aligned}\operatorname {erf} z&={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }\left(z\prod _{k=1}^{n}{\frac {-(2k-1)z^{2}}{k(2k+1)}}\right)\\[6pt]&={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }{\frac {z}{2n+1}}\prod _{k=1}^{n}{\frac {-z^{2}}{k}}\end{aligned}}} because ⁠−(2k − 1)z2/k(2k + 1)⁠ expresses the multiplier to turn the kth term into the (k + 1)th term (considering z as the first term). The imaginary error function has a very similar Maclaurin series, which is: erfi ⁡ z = 2 π ∑ n = 0 ∞ z 2 n + 1 n ! ( 2 n + 1 ) = 2 π ( z + z 3 3 + z 5 10 + z 7 42 + z 9 216 + ⋯ ) {\displaystyle {\begin{aligned}\operatorname {erfi} z&={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }{\frac {z^{2n+1}}{n!(2n+1)}}\\[6pt]&={\frac {2}{\sqrt {\pi }}}\left(z+{\frac {z^{3}}{3}}+{\frac {z^{5}}{10}}+{\frac {z^{7}}{42}}+{\frac {z^{9}}{216}}+\cdots \right)\end{aligned}}} which holds for every complex number z. === Derivative and integral === The derivative of the error function follows immediately from its definition: d d z erf ⁡ z = 2 π e − z 2 . {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} z}}\operatorname {erf} z={\frac {2}{\sqrt {\pi }}}e^{-z^{2}}.} From this, the derivative of the imaginary error function is also immediate: d d z erfi ⁡ z = 2 π e z 2 . {\displaystyle {\frac {d}{dz}}\operatorname {erfi} z={\frac {2}{\sqrt {\pi }}}e^{z^{2}}.} An antiderivative of the error function, obtainable by integration by parts, is z erf ⁡ z + e − z 2 π + C . {\displaystyle z\operatorname {erf} z+{\frac {e^{-z^{2}}}{\sqrt {\pi }}}+C.} An antiderivative of the imaginary error function, also obtainable by integration by parts, is z erfi ⁡ z − e z 2 π + C . {\displaystyle z\operatorname {erfi} z-{\frac {e^{z^{2}}}{\sqrt {\pi }}}+C.} Higher order derivatives are given by erf ( k ) ⁡ z = 2 ( − 1 ) k − 1 π H k − 1 ( z ) e − z 2 = 2 π d k − 1 d z k − 1 ( e − z 2 ) , k = 1 , 2 , … {\displaystyle \operatorname {erf} ^{(k)}z={\frac {2(-1)^{k-1}}{\sqrt {\pi }}}{\mathit {H}}_{k-1}(z)e^{-z^{2}}={\frac {2}{\sqrt {\pi }}}{\frac {\mathrm {d} ^{k-1}}{\mathrm {d} z^{k-1}}}\left(e^{-z^{2}}\right),\qquad k=1,2,\dots } where H are the physicists' Hermite polynomials. === Bürmann series === An expansion, which converges more rapidly for all real values of x than a Taylor expansion, is obtained by using Hans Heinrich Bürmann's theorem: erf ⁡ x = 2 π sgn ⁡ x ⋅ 1 − e − x 2 ( 1 − 1 12 ( 1 − e − x 2 ) − 7 480 ( 1 − e − x 2 ) 2 − 5 896 ( 1 − e − x 2 ) 3 − 787 276480 ( 1 − e − x 2 ) 4 − ⋯ ) = 2 π sgn ⁡ x ⋅ 1 − e − x 2 ( π 2 + ∑ k = 1 ∞ c k e − k x 2 ) . {\displaystyle {\begin{aligned}\operatorname {erf} x&={\frac {2}{\sqrt {\pi }}}\operatorname {sgn} x\cdot {\sqrt {1-e^{-x^{2}}}}\left(1-{\frac {1}{12}}\left(1-e^{-x^{2}}\right)-{\frac {7}{480}}\left(1-e^{-x^{2}}\right)^{2}-{\frac {5}{896}}\left(1-e^{-x^{2}}\right)^{3}-{\frac {787}{276480}}\left(1-e^{-x^{2}}\right)^{4}-\cdots \right)\\[10pt]&={\frac {2}{\sqrt {\pi }}}\operatorname {sgn} x\cdot {\sqrt {1-e^{-x^{2}}}}\left({\frac {\sqrt {\pi }}{2}}+\sum _{k=1}^{\infty }c_{k}e^{-kx^{2}}\right).\end{aligned}}} where sgn is the sign function. 
By keeping only the first two coefficients and choosing c1 = ⁠31/200⁠ and c2 = −⁠341/8000⁠, the resulting approximation shows its largest relative error at x = ±1.40587, where it is less than 0.0034361: erf ⁡ x ≈ 2 π sgn ⁡ x ⋅ 1 − e − x 2 ( π 2 + 31 200 e − x 2 − 341 8000 e − 2 x 2 ) . {\displaystyle \operatorname {erf} x\approx {\frac {2}{\sqrt {\pi }}}\operatorname {sgn} x\cdot {\sqrt {1-e^{-x^{2}}}}\left({\frac {\sqrt {\pi }}{2}}+{\frac {31}{200}}e^{-x^{2}}-{\frac {341}{8000}}e^{-2x^{2}}\right).} === Inverse functions === Given a complex number z, there is not a unique complex number w satisfying erf w = z, so a true inverse function would be multivalued. However, for −1 < x < 1, there is a unique real number denoted erf−1 x satisfying erf ⁡ ( erf − 1 ⁡ x ) = x . {\displaystyle \operatorname {erf} \left(\operatorname {erf} ^{-1}x\right)=x.} The inverse error function is usually defined with domain (−1,1), and it is restricted to this domain in many computer algebra systems. However, it can be extended to the disk |z| < 1 of the complex plane, using the Maclaurin series erf − 1 ⁡ z = ∑ k = 0 ∞ c k 2 k + 1 ( π 2 z ) 2 k + 1 , {\displaystyle \operatorname {erf} ^{-1}z=\sum _{k=0}^{\infty }{\frac {c_{k}}{2k+1}}\left({\frac {\sqrt {\pi }}{2}}z\right)^{2k+1},} where c0 = 1 and c k = ∑ m = 0 k − 1 c m c k − 1 − m ( m + 1 ) ( 2 m + 1 ) = { 1 , 1 , 7 6 , 127 90 , 4369 2520 , 34807 16200 , … } . {\displaystyle {\begin{aligned}c_{k}&=\sum _{m=0}^{k-1}{\frac {c_{m}c_{k-1-m}}{(m+1)(2m+1)}}\\[1ex]&=\left\{1,1,{\frac {7}{6}},{\frac {127}{90}},{\frac {4369}{2520}},{\frac {34807}{16200}},\ldots \right\}.\end{aligned}}} So we have the series expansion (common factors have been canceled from numerators and denominators): erf − 1 ⁡ z = π 2 ( z + π 12 z 3 + 7 π 2 480 z 5 + 127 π 3 40320 z 7 + 4369 π 4 5806080 z 9 + 34807 π 5 182476800 z 11 + ⋯ ) . {\displaystyle \operatorname {erf} ^{-1}z={\frac {\sqrt {\pi }}{2}}\left(z+{\frac {\pi }{12}}z^{3}+{\frac {7\pi ^{2}}{480}}z^{5}+{\frac {127\pi ^{3}}{40320}}z^{7}+{\frac {4369\pi ^{4}}{5806080}}z^{9}+{\frac {34807\pi ^{5}}{182476800}}z^{11}+\cdots \right).} (After cancellation the numerator and denominator values in OEIS: A092676 and OEIS: A092677 respectively; without cancellation the numerator terms are values in OEIS: A002067.) The error function's value at ±∞ is equal to ±1. For |z| < 1, we have erf(erf−1 z) = z. The inverse complementary error function is defined as erfc − 1 ⁡ ( 1 − z ) = erf − 1 ⁡ z . {\displaystyle \operatorname {erfc} ^{-1}(1-z)=\operatorname {erf} ^{-1}z.} For real x, there is a unique real number erfi−1 x satisfying erfi(erfi−1 x) = x. The inverse imaginary error function is defined as erfi−1 x. For any real x, Newton's method can be used to compute erfi−1 x, and for −1 ≤ x ≤ 1, the following Maclaurin series converges: erfi − 1 ⁡ z = ∑ k = 0 ∞ ( − 1 ) k c k 2 k + 1 ( π 2 z ) 2 k + 1 , {\displaystyle \operatorname {erfi} ^{-1}z=\sum _{k=0}^{\infty }{\frac {(-1)^{k}c_{k}}{2k+1}}\left({\frac {\sqrt {\pi }}{2}}z\right)^{2k+1},} where ck is defined as above. === Asymptotic expansion === A useful asymptotic expansion of the complementary error function (and therefore also of the error function) for large real x is erfc ⁡ x = e − x 2 x π ( 1 + ∑ n = 1 ∞ ( − 1 ) n 1 ⋅ 3 ⋅ 5 ⋯ ( 2 n − 1 ) ( 2 x 2 ) n ) = e − x 2 x π ∑ n = 0 ∞ ( − 1 ) n ( 2 n − 1 ) ! ! 
( 2 x 2 ) n , {\displaystyle {\begin{aligned}\operatorname {erfc} x&={\frac {e^{-x^{2}}}{x{\sqrt {\pi }}}}\left(1+\sum _{n=1}^{\infty }(-1)^{n}{\frac {1\cdot 3\cdot 5\cdots (2n-1)}{\left(2x^{2}\right)^{n}}}\right)\\[6pt]&={\frac {e^{-x^{2}}}{x{\sqrt {\pi }}}}\sum _{n=0}^{\infty }(-1)^{n}{\frac {(2n-1)!!}{\left(2x^{2}\right)^{n}}},\end{aligned}}} where (2n − 1)!! is the double factorial of (2n − 1), which is the product of all odd numbers up to (2n − 1). This series diverges for every finite x, and its meaning as asymptotic expansion is that for any integer N ≥ 1 one has erfc ⁡ x = e − x 2 x π ∑ n = 0 N − 1 ( − 1 ) n ( 2 n − 1 ) ! ! ( 2 x 2 ) n + R N ( x ) {\displaystyle \operatorname {erfc} x={\frac {e^{-x^{2}}}{x{\sqrt {\pi }}}}\sum _{n=0}^{N-1}(-1)^{n}{\frac {(2n-1)!!}{\left(2x^{2}\right)^{n}}}+R_{N}(x)} where the remainder is R N ( x ) := ( − 1 ) N ( 2 N − 1 ) ! ! π ⋅ 2 N − 1 ∫ x ∞ t − 2 N e − t 2 d t , {\displaystyle R_{N}(x):={\frac {(-1)^{N}\,(2N-1)!!}{{\sqrt {\pi }}\cdot 2^{N-1}}}\int _{x}^{\infty }t^{-2N}e^{-t^{2}}\,\mathrm {d} t,} which follows easily by induction, writing e − t 2 = − 1 2 t d d t e − t 2 {\displaystyle e^{-t^{2}}=-{\frac {1}{2t}}\,{\frac {\mathrm {d} }{\mathrm {d} t}}e^{-t^{2}}} and integrating by parts. The asymptotic behavior of the remainder term, in Landau notation, is R N ( x ) = O ( x − ( 1 + 2 N ) e − x 2 ) {\displaystyle R_{N}(x)=O\left(x^{-(1+2N)}e^{-x^{2}}\right)} as x → ∞. This can be found by R N ( x ) ∝ ∫ x ∞ t − 2 N e − t 2 d t = e − x 2 ∫ 0 ∞ ( t + x ) − 2 N e − t 2 − 2 t x d t ≤ e − x 2 ∫ 0 ∞ x − 2 N e − 2 t x d t ∝ x − ( 1 + 2 N ) e − x 2 . {\displaystyle R_{N}(x)\propto \int _{x}^{\infty }t^{-2N}e^{-t^{2}}\,\mathrm {d} t=e^{-x^{2}}\int _{0}^{\infty }(t+x)^{-2N}e^{-t^{2}-2tx}\,\mathrm {d} t\leq e^{-x^{2}}\int _{0}^{\infty }x^{-2N}e^{-2tx}\,\mathrm {d} t\propto x^{-(1+2N)}e^{-x^{2}}.} For large enough values of x, only the first few terms of this asymptotic expansion are needed to obtain a good approximation of erfc x (while for not too large values of x, the above Taylor expansion at 0 provides a very fast convergence). === Continued fraction expansion === A continued fraction expansion of the complementary error function was found by Laplace: erfc ⁡ z = z π e − z 2 1 z 2 + a 1 1 + a 2 z 2 + a 3 1 + ⋯ , a m = m 2 . {\displaystyle \operatorname {erfc} z={\frac {z}{\sqrt {\pi }}}e^{-z^{2}}{\cfrac {1}{z^{2}+{\cfrac {a_{1}}{1+{\cfrac {a_{2}}{z^{2}+{\cfrac {a_{3}}{1+\dotsb }}}}}}}},\qquad a_{m}={\frac {m}{2}}.} === Factorial series === The inverse factorial series: erfc ⁡ z = e − z 2 π z ∑ n = 0 ∞ ( − 1 ) n Q n ( z 2 + 1 ) n ¯ = e − z 2 π z [ 1 − 1 2 1 ( z 2 + 1 ) + 1 4 1 ( z 2 + 1 ) ( z 2 + 2 ) − ⋯ ] {\displaystyle {\begin{aligned}\operatorname {erfc} z&={\frac {e^{-z^{2}}}{{\sqrt {\pi }}\,z}}\sum _{n=0}^{\infty }{\frac {\left(-1\right)^{n}Q_{n}}{{\left(z^{2}+1\right)}^{\bar {n}}}}\\[1ex]&={\frac {e^{-z^{2}}}{{\sqrt {\pi }}\,z}}\left[1-{\frac {1}{2}}{\frac {1}{(z^{2}+1)}}+{\frac {1}{4}}{\frac {1}{\left(z^{2}+1\right)\left(z^{2}+2\right)}}-\cdots \right]\end{aligned}}} converges for Re(z2) > 0. 
Here Q n = def 1 Γ ( 1 2 ) ∫ 0 ∞ τ ( τ − 1 ) ⋯ ( τ − n + 1 ) τ − 1 2 e − τ d τ = ∑ k = 0 n ( 1 2 ) k ¯ s ( n , k ) , {\displaystyle {\begin{aligned}Q_{n}&{\overset {\text{def}}{{}={}}}{\frac {1}{\Gamma {\left({\frac {1}{2}}\right)}}}\int _{0}^{\infty }\tau (\tau -1)\cdots (\tau -n+1)\tau ^{-{\frac {1}{2}}}e^{-\tau }\,d\tau \\[1ex]&=\sum _{k=0}^{n}\left({\frac {1}{2}}\right)^{\bar {k}}s(n,k),\end{aligned}}} zn denotes the rising factorial, and s(n,k) denotes a signed Stirling number of the first kind. There also exists a representation by an infinite sum containing the double factorial: erf ⁡ z = 2 π ∑ n = 0 ∞ ( − 2 ) n ( 2 n − 1 ) ! ! ( 2 n + 1 ) ! z 2 n + 1 {\displaystyle \operatorname {erf} z={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }{\frac {(-2)^{n}(2n-1)!!}{(2n+1)!}}z^{2n+1}} == Bounds and Numerical approximations == === Approximation with elementary functions === Abramowitz and Stegun give several approximations of varying accuracy (equations 7.1.25–28). This allows one to choose the fastest approximation suitable for a given application. In order of increasing accuracy, they are: erf ⁡ x ≈ 1 − 1 ( 1 + a 1 x + a 2 x 2 + a 3 x 3 + a 4 x 4 ) 4 , x ≥ 0 {\displaystyle \operatorname {erf} x\approx 1-{\frac {1}{\left(1+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+a_{4}x^{4}\right)^{4}}},\qquad x\geq 0} (maximum error: 5×10−4) where a1 = 0.278393, a2 = 0.230389, a3 = 0.000972, a4 = 0.078108 erf ⁡ x ≈ 1 − ( a 1 t + a 2 t 2 + a 3 t 3 ) e − x 2 , t = 1 1 + p x , x ≥ 0 {\displaystyle \operatorname {erf} x\approx 1-\left(a_{1}t+a_{2}t^{2}+a_{3}t^{3}\right)e^{-x^{2}},\quad t={\frac {1}{1+px}},\qquad x\geq 0} (maximum error: 2.5×10−5) where p = 0.47047, a1 = 0.3480242, a2 = −0.0958798, a3 = 0.7478556 erf ⁡ x ≈ 1 − 1 ( 1 + a 1 x + a 2 x 2 + ⋯ + a 6 x 6 ) 16 , x ≥ 0 {\displaystyle \operatorname {erf} x\approx 1-{\frac {1}{\left(1+a_{1}x+a_{2}x^{2}+\cdots +a_{6}x^{6}\right)^{16}}},\qquad x\geq 0} (maximum error: 3×10−7) where a1 = 0.0705230784, a2 = 0.0422820123, a3 = 0.0092705272, a4 = 0.0001520143, a5 = 0.0002765672, a6 = 0.0000430638 erf ⁡ x ≈ 1 − ( a 1 t + a 2 t 2 + ⋯ + a 5 t 5 ) e − x 2 , t = 1 1 + p x {\displaystyle \operatorname {erf} x\approx 1-\left(a_{1}t+a_{2}t^{2}+\cdots +a_{5}t^{5}\right)e^{-x^{2}},\quad t={\frac {1}{1+px}}} (maximum error: 1.5×10−7) where p = 0.3275911, a1 = 0.254829592, a2 = −0.284496736, a3 = 1.421413741, a4 = −1.453152027, a5 = 1.061405429 All of these approximations are valid for x ≥ 0. To use these approximations for negative x, use the fact that erf x is an odd function, so erf x = −erf(−x). Exponential bounds and a pure exponential approximation for the complementary error function are given by erfc ⁡ x ≤ 1 2 e − 2 x 2 + 1 2 e − x 2 ≤ e − x 2 , x > 0 erfc ⁡ x ≈ 1 6 e − x 2 + 1 2 e − 4 3 x 2 , x > 0. {\displaystyle {\begin{aligned}\operatorname {erfc} x&\leq {\frac {1}{2}}e^{-2x^{2}}+{\frac {1}{2}}e^{-x^{2}}\leq e^{-x^{2}},&\quad x&>0\\[1.5ex]\operatorname {erfc} x&\approx {\frac {1}{6}}e^{-x^{2}}+{\frac {1}{2}}e^{-{\frac {4}{3}}x^{2}},&\quad x&>0.\end{aligned}}} The above have been generalized to sums of N exponentials with increasing accuracy in terms of N so that erfc x can be accurately approximated or bounded by 2Q̃(√2x), where Q ~ ( x ) = ∑ n = 1 N a n e − b n x 2 . {\displaystyle {\tilde {Q}}(x)=\sum _{n=1}^{N}a_{n}e^{-b_{n}x^{2}}.} In particular, there is a systematic methodology to solve the numerical coefficients {(an,bn)}Nn = 1 that yield a minimax approximation or bound for the closely related Q-function: Q(x) ≈ Q̃(x), Q(x) ≤ Q̃(x), or Q(x) ≥ Q̃(x) for x ≥ 0. 
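As a concrete illustration, the fourth Abramowitz and Stegun formula above (maximum error about 1.5×10−7) translates directly into code; a minimal Python sketch, extended to negative arguments by the oddness of erf:

import math

P = 0.3275911
A = [0.254829592, -0.284496736, 1.421413741, -1.453152027, 1.061405429]

def erf_as(x):
    # erf x ~ 1 - (a1*t + ... + a5*t**5) * exp(-x*x) with t = 1/(1 + P*x),
    # valid for x >= 0; use erf(-x) = -erf(x) for negative x.
    sign = -1.0 if x < 0 else 1.0
    x = abs(x)
    t = 1.0 / (1.0 + P * x)
    poly = sum(a * t ** (k + 1) for k, a in enumerate(A))
    return sign * (1.0 - poly * math.exp(-x * x))

for x in (0.5, 1.0, 2.0, -1.0):
    print(x, erf_as(x), math.erf(x))  # agreement to about 7 decimal places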
The coefficients {(an,bn)}Nn = 1 for many variations of the exponential approximations and bounds up to N = 25 have been released to open access as a comprehensive dataset. A tight approximation of the complementary error function for x ∈ [0,∞) is given by Karagiannidis & Lioumpas (2007) who showed for the appropriate choice of parameters {A,B} that erfc ⁡ x ≈ ( 1 − e − A x ) e − x 2 B π x . {\displaystyle \operatorname {erfc} x\approx {\frac {\left(1-e^{-Ax}\right)e^{-x^{2}}}{B{\sqrt {\pi }}x}}.} They determined {A,B} = {1.98,1.135}, which gave a good approximation for all x ≥ 0. Alternative coefficients are also available for tailoring accuracy for a specific application or transforming the expression into a tight bound. A single-term lower bound is erfc ⁡ x ≥ 2 e π β − 1 β e − β x 2 , x ≥ 0 , β > 1 , {\displaystyle \operatorname {erfc} x\geq {\sqrt {\frac {2e}{\pi }}}{\frac {\sqrt {\beta -1}}{\beta }}e^{-\beta x^{2}},\qquad x\geq 0,\quad \beta >1,} where the parameter β can be picked to minimize error on the desired interval of approximation. Another approximation is given by Sergei Winitzki using his "global Padé approximations":: 2–3  erf ⁡ x ≈ sgn ⁡ x ⋅ 1 − exp ⁡ ( − x 2 4 π + a x 2 1 + a x 2 ) {\displaystyle \operatorname {erf} x\approx \operatorname {sgn} x\cdot {\sqrt {1-\exp \left(-x^{2}{\frac {{\frac {4}{\pi }}+ax^{2}}{1+ax^{2}}}\right)}}} where a = 8 ( π − 3 ) 3 π ( 4 − π ) ≈ 0.140012. {\displaystyle a={\frac {8(\pi -3)}{3\pi (4-\pi )}}\approx 0.140012.} This is designed to be very accurate in a neighborhood of 0 and a neighborhood of infinity, and the relative error is less than 0.00035 for all real x. Using the alternate value a ≈ 0.147 reduces the maximum relative error to about 0.00013. This approximation can be inverted to obtain an approximation for the inverse error function: erf − 1 ⁡ x ≈ sgn ⁡ x ⋅ ( 2 π a + ln ⁡ ( 1 − x 2 ) 2 ) 2 − ln ⁡ ( 1 − x 2 ) a − ( 2 π a + ln ⁡ ( 1 − x 2 ) 2 ) . {\displaystyle \operatorname {erf} ^{-1}x\approx \operatorname {sgn} x\cdot {\sqrt {{\sqrt {\left({\frac {2}{\pi a}}+{\frac {\ln \left(1-x^{2}\right)}{2}}\right)^{2}-{\frac {\ln \left(1-x^{2}\right)}{a}}}}-\left({\frac {2}{\pi a}}+{\frac {\ln \left(1-x^{2}\right)}{2}}\right)}}.} An approximation with a maximal error of 1.2×10−7 for any real argument is: erf ⁡ x = { 1 − τ x ≥ 0 τ − 1 x < 0 {\displaystyle \operatorname {erf} x={\begin{cases}1-\tau &x\geq 0\\\tau -1&x<0\end{cases}}} with τ = t ⋅ exp ⁡ ( − x 2 − 1.26551223 + 1.00002368 t + 0.37409196 t 2 + 0.09678418 t 3 − 0.18628806 t 4 + 0.27886807 t 5 − 1.13520398 t 6 + 1.48851587 t 7 − 0.82215223 t 8 + 0.17087277 t 9 ) {\displaystyle {\begin{aligned}\tau &=t\cdot \exp \left(-x^{2}-1.26551223+1.00002368t+0.37409196t^{2}+0.09678418t^{3}-0.18628806t^{4}\right.\\&\left.\qquad \qquad \qquad +0.27886807t^{5}-1.13520398t^{6}+1.48851587t^{7}-0.82215223t^{8}+0.17087277t^{9}\right)\end{aligned}}} and t = 1 1 + 1 2 | x | . 
{\displaystyle t={\frac {1}{1+{\frac {1}{2}}|x|}}.} An approximation of erfc {\displaystyle \operatorname {erfc} } with a maximum relative error less than 2 − 53 {\displaystyle 2^{-53}} ( ≈ 1.1 × 10 − 16 ) {\displaystyle \left(\approx 1.1\times 10^{-16}\right)} in absolute value is: for x ≥ 0 {\displaystyle x\geq 0} , erfc ⁡ ( x ) = ( 0.56418958354775629 x + 2.06955023132914151 ) ( x 2 + 2.71078540045147805 x + 5.80755613130301624 x 2 + 3.47954057099518960 x + 12.06166887286239555 ) ( x 2 + 3.47469513777439592 x + 12.07402036406381411 x 2 + 3.72068443960225092 x + 8.44319781003968454 ) ( x 2 + 4.00561509202259545 x + 9.30596659485887898 x 2 + 3.90225704029924078 x + 6.36161630953880464 ) ( x 2 + 5.16722705817812584 x + 9.12661617673673262 x 2 + 4.03296893109262491 x + 5.13578530585681539 ) ( x 2 + 5.95908795446633271 x + 9.19435612886969243 x 2 + 4.11240942957450885 x + 4.48640329523408675 ) e − x 2 {\displaystyle {\begin{aligned}\operatorname {erfc} \left(x\right)&=\left({\frac {0.56418958354775629}{x+2.06955023132914151}}\right)\left({\frac {x^{2}+2.71078540045147805x+5.80755613130301624}{x^{2}+3.47954057099518960x+12.06166887286239555}}\right)\\&\left({\frac {x^{2}+3.47469513777439592x+12.07402036406381411}{x^{2}+3.72068443960225092x+8.44319781003968454}}\right)\left({\frac {x^{2}+4.00561509202259545x+9.30596659485887898}{x^{2}+3.90225704029924078x+6.36161630953880464}}\right)\\&\left({\frac {x^{2}+5.16722705817812584x+9.12661617673673262}{x^{2}+4.03296893109262491x+5.13578530585681539}}\right)\left({\frac {x^{2}+5.95908795446633271x+9.19435612886969243}{x^{2}+4.11240942957450885x+4.48640329523408675}}\right)e^{-x^{2}}\\\end{aligned}}} and for x < 0 {\displaystyle x<0} erfc ⁡ ( x ) = 2 − erfc ⁡ ( − x ) {\displaystyle \operatorname {erfc} \left(x\right)=2-\operatorname {erfc} \left(-x\right)} A simple approximation for real-valued arguments could be done through Hyperbolic functions: erf ⁡ ( x ) ≈ z ( x ) = tanh ⁡ ( 2 π ( x + 11 123 x 3 ) ) {\displaystyle \operatorname {erf} \left(x\right)\approx z(x)=\tanh \left({\frac {2}{\sqrt {\pi }}}\left(x+{\frac {11}{123}}x^{3}\right)\right)} which keeps the absolute difference | erf ⁡ ( x ) − z ( x ) | < 0.000358 , ∀ x {\displaystyle \left|\operatorname {erf} \left(x\right)-z(x)\right|<0.000358,\,\forall x} . Since the error function and the Gaussian Q-function are closely related through the identity erfc ⁡ ( x ) = 2 Q ( 2 x ) {\displaystyle \operatorname {erfc} (x)=2Q({\sqrt {2}}x)} or equivalently Q ( x ) = 1 2 erfc ⁡ ( x 2 ) {\displaystyle Q(x)={\frac {1}{2}}\operatorname {erfc} \left({\frac {x}{\sqrt {2}}}\right)} , bounds developed for the Q-function can be adapted to approximate the complementary error function. A pair of tight lower and upper bounds on the Gaussian Q-function for positive arguments x ∈ [ 0 , ∞ ) {\displaystyle x\in [0,\infty )} was introduced by Abreu (2012) based on a simple algebraic expression with only two exponential terms: Q ( x ) ≥ 1 12 e − x 2 + 1 2 π ( x + 1 ) e − x 2 / 2 , x ≥ 0 , {\displaystyle Q(x)\geq {\frac {1}{12}}e^{-x^{2}}+{\frac {1}{{\sqrt {2\pi }}(x+1)}}e^{-x^{2}/2},\qquad x\geq 0,} and Q ( x ) ≤ 1 50 e − x 2 + 1 2 ( x + 1 ) e − x 2 / 2 , x ≥ 0. 
{\displaystyle Q(x)\leq {\frac {1}{50}}e^{-x^{2}}+{\frac {1}{2(x+1)}}e^{-x^{2}/2},\qquad x\geq 0.} These bounds stem from a unified form Q B ( x ; a , b ) = exp ⁡ ( − x 2 ) a + exp ⁡ ( − x 2 / 2 ) b ( x + 1 ) , {\displaystyle Q_{\mathrm {B} }(x;a,b)={\frac {\exp(-x^{2})}{a}}+{\frac {\exp(-x^{2}/2)}{b(x+1)}},} where the parameters a {\displaystyle a} and b {\displaystyle b} are selected to ensure the bounding properties: for the lower bound, a L = 12 {\displaystyle a_{\mathrm {L} }=12} and b L = 2 π {\displaystyle b_{\mathrm {L} }={\sqrt {2\pi }}} , and for the upper bound, a U = 50 {\displaystyle a_{\mathrm {U} }=50} and b U = 2 {\displaystyle b_{\mathrm {U} }=2} . These expressions maintain simplicity and tightness, providing a practical trade-off between accuracy and ease of computation. They are particularly valuable in theoretical contexts, such as communication theory over fading channels, where both functions frequently appear. Additionally, the original Q-function bounds can be extended to Q n ( x ) {\displaystyle Q^{n}(x)} for positive integers n {\displaystyle n} via the binomial theorem, suggesting potential adaptability for powers of erfc ⁡ ( x ) {\displaystyle \operatorname {erfc} (x)} , though this is less commonly required in error function applications. === Table of values === == Related functions == === Complementary error function === The complementary error function, denoted erfc, is defined as erfc ⁡ x = 1 − erf ⁡ x = 2 π ∫ x ∞ e − t 2 d t = e − x 2 erfcx ⁡ x , {\displaystyle {\begin{aligned}\operatorname {erfc} x&=1-\operatorname {erf} x\\[5pt]&={\frac {2}{\sqrt {\pi }}}\int _{x}^{\infty }e^{-t^{2}}\,\mathrm {d} t\\[5pt]&=e^{-x^{2}}\operatorname {erfcx} x,\end{aligned}}} which also defines erfcx, the scaled complementary error function (which can be used instead of erfc to avoid arithmetic underflow). Another form of erfc x for x ≥ 0 is known as Craig's formula, after its discoverer: erfc ⁡ ( x ∣ x ≥ 0 ) = 2 π ∫ 0 π 2 exp ⁡ ( − x 2 sin 2 ⁡ θ ) d θ . {\displaystyle \operatorname {erfc} (x\mid x\geq 0)={\frac {2}{\pi }}\int _{0}^{\frac {\pi }{2}}\exp \left(-{\frac {x^{2}}{\sin ^{2}\theta }}\right)\,\mathrm {d} \theta .} This expression is valid only for positive values of x, but it can be used in conjunction with erfc x = 2 − erfc(−x) to obtain erfc(x) for negative values. This form is advantageous in that the range of integration is fixed and finite. An extension of this expression for the erfc of the sum of two non-negative variables is as follows: erfc ⁡ ( x + y ∣ x , y ≥ 0 ) = 2 π ∫ 0 π 2 exp ⁡ ( − x 2 sin 2 ⁡ θ − y 2 cos 2 ⁡ θ ) d θ . {\displaystyle \operatorname {erfc} (x+y\mid x,y\geq 0)={\frac {2}{\pi }}\int _{0}^{\frac {\pi }{2}}\exp \left(-{\frac {x^{2}}{\sin ^{2}\theta }}-{\frac {y^{2}}{\cos ^{2}\theta }}\right)\,\mathrm {d} \theta .} === Imaginary error function === The imaginary error function, denoted erfi, is defined as erfi ⁡ x = − i erf ⁡ i x = 2 π ∫ 0 x e t 2 d t = 2 π e x 2 D ( x ) , {\displaystyle {\begin{aligned}\operatorname {erfi} x&=-i\operatorname {erf} ix\\[5pt]&={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}e^{t^{2}}\,\mathrm {d} t\\[5pt]&={\frac {2}{\sqrt {\pi }}}e^{x^{2}}D(x),\end{aligned}}} where D(x) is the Dawson function (which can be used instead of erfi to avoid arithmetic overflow). Despite the name "imaginary error function", erfi x is real when x is real. 
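Because the range of integration in Craig's formula quoted above is fixed and finite, the formula is easy to check by direct numerical quadrature. A minimal Python sketch, assuming nothing beyond the standard library (the midpoint rule and the helper name erfc_craig are illustrative choices, not part of the formula):

```python
import math

def erfc_craig(x: float, n: int = 1000) -> float:
    """Approximate erfc(x) for x >= 0 via Craig's formula."""
    h = (math.pi / 2) / n
    total = 0.0
    for k in range(n):
        theta = (k + 0.5) * h                      # midpoint of subinterval k
        total += math.exp(-x * x / math.sin(theta) ** 2)
    return (2 / math.pi) * h * total

for x in (0.5, 1.0, 2.0):
    print(x, erfc_craig(x), math.erfc(x))          # agree to several digits
```

For negative arguments one would combine this with erfc x = 2 − erfc(−x), as noted above.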
When the error function is evaluated for arbitrary complex arguments z, the resulting complex error function is usually discussed in scaled form as the Faddeeva function: w ( z ) = e − z 2 erfc ⁡ ( − i z ) = erfcx ⁡ ( − i z ) . {\displaystyle w(z)=e^{-z^{2}}\operatorname {erfc} (-iz)=\operatorname {erfcx} (-iz).} === Cumulative distribution function === The error function is essentially identical to the standard normal cumulative distribution function, denoted Φ, also named norm(x) by some software languages, as they differ only by scaling and translation. Indeed, Φ ( x ) = 1 2 π ∫ − ∞ x e − t 2 2 d t = 1 2 ( 1 + erf ⁡ x 2 ) = 1 2 erfc ⁡ ( − x 2 ) {\displaystyle {\begin{aligned}\Phi (x)&={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{x}e^{\tfrac {-t^{2}}{2}}\,\mathrm {d} t\\[6pt]&={\frac {1}{2}}\left(1+\operatorname {erf} {\frac {x}{\sqrt {2}}}\right)\\[6pt]&={\frac {1}{2}}\operatorname {erfc} \left(-{\frac {x}{\sqrt {2}}}\right)\end{aligned}}} or rearranged for erf and erfc: erf ⁡ ( x ) = 2 Φ ( x 2 ) − 1 erfc ⁡ ( x ) = 2 Φ ( − x 2 ) = 2 ( 1 − Φ ( x 2 ) ) . {\displaystyle {\begin{aligned}\operatorname {erf} (x)&=2\Phi {\left(x{\sqrt {2}}\right)}-1\\[6pt]\operatorname {erfc} (x)&=2\Phi {\left(-x{\sqrt {2}}\right)}\\&=2\left(1-\Phi {\left(x{\sqrt {2}}\right)}\right).\end{aligned}}} Consequently, the error function is also closely related to the Q-function, which is the tail probability of the standard normal distribution. The Q-function can be expressed in terms of the error function as Q ( x ) = 1 2 − 1 2 erf ⁡ x 2 = 1 2 erfc ⁡ x 2 . {\displaystyle {\begin{aligned}Q(x)&={\frac {1}{2}}-{\frac {1}{2}}\operatorname {erf} {\frac {x}{\sqrt {2}}}\\&={\frac {1}{2}}\operatorname {erfc} {\frac {x}{\sqrt {2}}}.\end{aligned}}} The inverse of Φ is known as the normal quantile function, or probit function and may be expressed in terms of the inverse error function as probit ⁡ ( p ) = Φ − 1 ( p ) = 2 erf − 1 ⁡ ( 2 p − 1 ) = − 2 erfc − 1 ⁡ ( 2 p ) . {\displaystyle \operatorname {probit} (p)=\Phi ^{-1}(p)={\sqrt {2}}\operatorname {erf} ^{-1}(2p-1)=-{\sqrt {2}}\operatorname {erfc} ^{-1}(2p).} The standard normal cdf is used more often in probability and statistics, and the error function is used more often in other branches of mathematics. The error function is a special case of the Mittag-Leffler function, and can also be expressed as a confluent hypergeometric function (Kummer's function): erf ⁡ x = 2 x π M ( 1 2 , 3 2 , − x 2 ) . {\displaystyle \operatorname {erf} x={\frac {2x}{\sqrt {\pi }}}M\left({\tfrac {1}{2}},{\tfrac {3}{2}},-x^{2}\right).} It has a simple expression in terms of the Fresnel integral. In terms of the regularized gamma function P and the incomplete gamma function, erf ⁡ x = sgn ⁡ x ⋅ P ( 1 2 , x 2 ) = sgn ⁡ x π γ ( 1 2 , x 2 ) . {\displaystyle \operatorname {erf} x=\operatorname {sgn} x\cdot P\left({\tfrac {1}{2}},x^{2}\right)={\frac {\operatorname {sgn} x}{\sqrt {\pi }}}\gamma {\left({\tfrac {1}{2}},x^{2}\right)}.} sgn x is the sign function. 
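These identities are straightforward to verify numerically with the Python standard library; in the sketch below, NormalDist supplies Φ only for comparison, and the names are illustrative:

```python
import math
from statistics import NormalDist

phi = NormalDist().cdf                   # standard normal CDF, for reference

for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    # Phi(x) = (1/2) erfc(-x / sqrt(2))
    assert math.isclose(phi(x), 0.5 * math.erfc(-x / math.sqrt(2)), rel_tol=1e-12)

# Q(x) = (1/2) erfc(x / sqrt(2)) is the upper tail probability
def q(x: float) -> float:
    return 0.5 * math.erfc(x / math.sqrt(2))

print(q(1.0))   # ~0.158655, i.e. 1 - Phi(1)
```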
=== Iterated integrals of the complementary error function === The iterated integrals of the complementary error function are defined by i n erfc ⁡ z = ∫ z ∞ i n − 1 erfc ⁡ ζ d ζ i 0 erfc ⁡ z = erfc ⁡ z i 1 erfc ⁡ z = ierfc ⁡ z = 1 π e − z 2 − z erfc ⁡ z i 2 erfc ⁡ z = 1 4 ( erfc ⁡ z − 2 z ierfc ⁡ z ) {\displaystyle {\begin{aligned}i^{n}\!\operatorname {erfc} z&=\int _{z}^{\infty }i^{n-1}\!\operatorname {erfc} \zeta \,\mathrm {d} \zeta \\[6pt]i^{0}\!\operatorname {erfc} z&=\operatorname {erfc} z\\i^{1}\!\operatorname {erfc} z&=\operatorname {ierfc} z={\frac {1}{\sqrt {\pi }}}e^{-z^{2}}-z\operatorname {erfc} z\\i^{2}\!\operatorname {erfc} z&={\tfrac {1}{4}}\left(\operatorname {erfc} z-2z\operatorname {ierfc} z\right)\\\end{aligned}}} The general recurrence formula is 2 n ⋅ i n erfc ⁡ z = i n − 2 erfc ⁡ z − 2 z ⋅ i n − 1 erfc ⁡ z {\displaystyle 2n\cdot i^{n}\!\operatorname {erfc} z=i^{n-2}\!\operatorname {erfc} z-2z\cdot i^{n-1}\!\operatorname {erfc} z} They have the power series i n erfc ⁡ z = ∑ j = 0 ∞ ( − z ) j 2 n − j j ! Γ ( 1 + n − j 2 ) , {\displaystyle i^{n}\!\operatorname {erfc} z=\sum _{j=0}^{\infty }{\frac {(-z)^{j}}{2^{n-j}j!\,\Gamma \left(1+{\frac {n-j}{2}}\right)}},} from which follow the symmetry properties i 2 m erfc ⁡ ( − z ) = − i 2 m erfc ⁡ z + ∑ q = 0 m z 2 q 2 2 ( m − q ) − 1 ( 2 q ) ! ( m − q ) ! {\displaystyle i^{2m}\!\operatorname {erfc} (-z)=-i^{2m}\!\operatorname {erfc} z+\sum _{q=0}^{m}{\frac {z^{2q}}{2^{2(m-q)-1}(2q)!(m-q)!}}} and i 2 m + 1 erfc ⁡ ( − z ) = i 2 m + 1 erfc ⁡ z + ∑ q = 0 m z 2 q + 1 2 2 ( m − q ) − 1 ( 2 q + 1 ) ! ( m − q ) ! . {\displaystyle i^{2m+1}\!\operatorname {erfc} (-z)=i^{2m+1}\!\operatorname {erfc} z+\sum _{q=0}^{m}{\frac {z^{2q+1}}{2^{2(m-q)-1}(2q+1)!(m-q)!}}.} == Implementations == === As real function of a real argument === In POSIX-compliant operating systems, the header math.h shall declare and the mathematical library libm shall provide the functions erf and erfc (double precision) as well as their single precision and extended precision counterparts erff, erfl and erfcf, erfcl. The GNU Scientific Library provides erf, erfc, log(erf), and scaled error functions. === As complex function of a complex argument === libcerf, numeric C library for complex error functions, provides the complex functions cerf, cerfc, cerfcx and the real functions erfi, erfcx with approximately 13–14 digits precision, based on the Faddeeva function as implemented in the MIT Faddeeva Package == References == == Further reading == Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 7". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 297. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253. Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. (2007), "Section 6.2. Incomplete Gamma Function and Error Function", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8, archived from the original on 11 August 2011, retrieved 9 August 2011 Temme, Nico M. (2010), "Error Functions, Dawson's and Fresnel Integrals", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. 
(eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248. == External links == A Table of Integrals of the Error Functions
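As a worked illustration of the "Approximation with elementary functions" subsection above, the fourth Abramowitz and Stegun formula (maximum error 1.5×10−7) can be implemented directly from the quoted coefficients. A sketch assuming only the standard library; the helper name erf_as and the test grid are illustrative:

```python
import math

P = 0.3275911
A = (0.254829592, -0.284496736, 1.421413741, -1.453152027, 1.061405429)

def erf_as(x: float) -> float:
    """erf(x) via the quoted coefficients; odd symmetry extends it to x < 0."""
    sign = math.copysign(1.0, x)
    x = abs(x)
    t = 1.0 / (1.0 + P * x)
    poly = sum(a * t ** (k + 1) for k, a in enumerate(A))
    return sign * (1.0 - poly * math.exp(-x * x))

worst = max(abs(erf_as(k * 0.01) - math.erf(k * 0.01)) for k in range(-500, 501))
print(worst)   # stays below the stated bound of about 1.5e-7
```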
Wikipedia/Error_function
Special functions are particular mathematical functions that have more or less established names and notations due to their importance in mathematical analysis, functional analysis, geometry, physics, or other applications. The term is defined by consensus, and thus lacks a general formal definition, but the list of mathematical functions contains functions that are commonly accepted as special. == Tables of special functions == Many special functions appear as solutions of differential equations or integrals of elementary functions. Therefore, tables of integrals usually include descriptions of special functions, and tables of special functions include most important integrals; at least, the integral representation of special functions. Because symmetries of differential equations are essential to both physics and mathematics, the theory of special functions is closely related to the theory of Lie groups and Lie algebras, as well as certain topics in mathematical physics. Symbolic computation engines usually recognize the majority of special functions. === Notations used for special functions === Functions with established international notations are the sine ( sin {\displaystyle \sin } ), cosine ( cos {\displaystyle \cos } ), exponential function ( exp {\displaystyle \exp } ), and error function ( erf {\displaystyle \operatorname {erf} } or erfc {\displaystyle \operatorname {erfc} } ). Some special functions have several notations: The natural logarithm may be denoted ln {\displaystyle \ln } , log {\displaystyle \log } , log e {\displaystyle \log _{e}} , or Log {\displaystyle \operatorname {Log} } depending on the context. The tangent function may be denoted tan {\displaystyle \tan } , Tan {\displaystyle \operatorname {Tan} } , or tg {\displaystyle \operatorname {tg} } (used in several European languages). Arctangent may be denoted arctan {\displaystyle \arctan } , atan {\displaystyle \operatorname {atan} } , arctg {\displaystyle \operatorname {arctg} } , or tan − 1 {\displaystyle \tan ^{-1}} . The Bessel functions may be denoted J n ( x ) , {\displaystyle J_{n}(x),} besselj ⁡ ( n , x ) , {\displaystyle \operatorname {besselj} (n,x),} B e s s e l J [ n , x ] . {\displaystyle {\rm {BesselJ}}[n,x].} Subscripts are often used to indicate arguments, typically integers. In a few cases, the semicolon (;) or even backslash (\) is used as a separator for arguments. This may confuse the translation to algorithmic languages. Superscripts may indicate not only a power (exponent), but some other modification of the function. Examples (particularly with trigonometric and hyperbolic functions) include: cos 3 ⁡ ( x ) {\displaystyle \cos ^{3}(x)} usually means ( cos ⁡ ( x ) ) 3 {\displaystyle (\cos(x))^{3}} cos 2 ⁡ ( x ) {\displaystyle \cos ^{2}(x)} is typically ( cos ⁡ ( x ) ) 2 {\displaystyle (\cos(x))^{2}} , but never cos ⁡ ( cos ⁡ ( x ) ) {\displaystyle \cos(\cos(x))} cos − 1 ⁡ ( x ) {\displaystyle \cos ^{-1}(x)} usually means arccos ⁡ ( x ) {\displaystyle \arccos(x)} , not ( cos ⁡ ( x ) ) − 1 {\displaystyle (\cos(x))^{-1}} ; this may cause confusion, since the meaning of this superscript is inconsistent with the others. === Evaluation of special functions === Most special functions are considered as a function of a complex variable. They are analytic; the singularities and cuts are described; the differential and integral representations are known and the expansion to the Taylor series or asymptotic series are available. 
In addition, sometimes there exist relations with other special functions; a complicated special function can be expressed in terms of simpler functions. Various representations can be used for the evaluation; the simplest way to evaluate a function is to expand it into a Taylor series. However, such representation may converge slowly or not at all. In algorithmic languages, rational approximations are typically used, although they may behave badly in the case of complex argument(s). == History of special functions == === Classical theory === While trigonometry and exponential functions were systematized and unified by the eighteenth century, the search for a complete and unified theory of special functions has continued since the nineteenth century. The high point of special function theory in 1800–1900 was the theory of elliptic functions; treatises that were essentially complete, such as that of Tannery and Molk, expounded all the basic identities of the theory using techniques from analytic function theory (based on complex analysis). The end of the century also saw a very detailed discussion of spherical harmonics. === Changing and fixed motivations === While pure mathematicians sought a broad theory deriving as many as possible of the known special functions from a single principle, for a long time the special functions were the province of applied mathematics. Applications to the physical sciences and engineering determined the relative importance of functions. Before electronic computation, the importance of a special function was affirmed by the laborious computation of extended tables of values for ready look-up, as for the familiar logarithm tables. (Babbage's difference engine was an attempt to compute such tables.) For this purpose, the main techniques are: numerical analysis, the discovery of infinite series or other analytical expressions allowing rapid calculation; and reduction of as many functions as possible to the given function. More theoretical questions include: asymptotic analysis; analytic continuation and monodromy in the complex plane; and symmetry principles and other structural equations. === Twentieth century === The twentieth century saw several waves of interest in special function theory. The classic Whittaker and Watson (1902) textbook sought to unify the theory using complex analysis; the G. N. Watson tome A Treatise on the Theory of Bessel Functions pushed the techniques as far as possible for one important type, including asymptotic results. The later Bateman Manuscript Project, under the editorship of Arthur Erdélyi, attempted to be encyclopedic, and came around the time when electronic computation was coming to the fore and tabulation ceased to be the main issue. === Contemporary theories === The modern theory of orthogonal polynomials is of a definite but limited scope. Hypergeometric series, observed by Felix Klein to be important in astronomy and mathematical physics, became an intricate theory, requiring later conceptual arrangement. Lie group representations give an immediate generalization of spherical functions; from 1950 onwards substantial parts of classical theory were recast in terms of Lie groups. Further, work on algebraic combinatorics also revived interest in older parts of the theory. Conjectures of Ian G. Macdonald helped open up large and active new fields with a special function flavour. Difference equations have begun to take their place beside differential equations as a source of special functions. 
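The caveat under "Evaluation of special functions" above – that a Taylor expansion may be a poor evaluation strategy even when it converges – can be made concrete with the error function's Maclaurin series. A Python sketch (the function name, tolerance, and sample points are illustrative assumptions):

```python
import math

def erf_taylor(x: float, tol: float = 1e-17):
    """erf(x) = (2/sqrt(pi)) * sum_n (-1)^n x^(2n+1) / (n! (2n+1))."""
    term = x                      # (-1)^n x^(2n+1) / n!  for n = 0
    total = 0.0
    n = 0
    while abs(term) > tol * max(1.0, abs(total)):
        total += term / (2 * n + 1)
        n += 1
        term *= -x * x / n        # advance to the next term
    return 2 / math.sqrt(math.pi) * total, n

for x in (1.0, 3.0, 6.0):
    approx, terms = erf_taylor(x)
    print(f"x={x}: {terms} terms, error={approx - math.erf(x):.2e}")
```

At x = 6 the intermediate sums grow to roughly 10^13 before the alternating terms cancel, so most of the 15–16 significant digits of double precision are lost even though the series converges for every x; this is why practical implementations switch to rational approximations or asymptotic expansions away from the origin.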
== Special functions in number theory == In number theory, certain special functions have traditionally been studied, such as particular Dirichlet series and modular forms. Almost all aspects of special function theory are reflected there, as well as some new ones, such as came out of monstrous moonshine theory. == Special functions of matrix arguments == Analogues of several special functions have been defined on the space of positive definite matrices, among them the power function which goes back to Atle Selberg, the multivariate gamma function, and types of Bessel functions. The NIST Digital Library of Mathematical Functions has a section covering several special functions of matrix arguments. == Researchers == == See also == List of mathematical functions List of special functions and eponyms Elementary function == References == === Bibliography === Andrews, George E.; Askey, Richard; Roy, Ranjan (1999). Special functions. Encyclopedia of Mathematics and its Applications. Vol. 71. Cambridge University Press. ISBN 978-0-521-62321-6. MR 1688958. Terras, Audrey (2016). Harmonic analysis on symmetric spaces – Higher rank spaces, positive definite matrix space and generalizations (second ed.). Springer Nature. ISBN 978-1-4939-3406-5. MR 3496932. Whittaker, E. T.; Watson, G. N. (1996-09-13). A Course of Modern Analysis. Cambridge University Press. ISBN 978-0-521-58807-2. N. N. Levedev (Translated & Edited by Richard A. Sliverman): Special Functions & Their Applications, DOVER, ISBN 978-0-486-60624-8 (1972). # Originally published from Prentice-Hall Inc.(1965). Nico M. Temme: Special Functions: An Introduction to the Classical Functions of Mathematical Physics, Wiley-Interscience,ISBN 978-0-471-11313-1 (1996). Yury A. Brychkov: Handbook of Special Functions: Derivatives, Integrals, Series and Other Formulas, CRC Press, ISBN 978-1-58488-956-4 (2008). W. W. Bell: Special Functions : for Scientists and Engineers, Dover, ISBN 978-0-486-43521-3 (2004). === Numerical calculation method of function value === Shanjie Zhang and Jian-Ming Jin: Computation of Special Functions, Wiley-Interscience, ISBN 978-0-471-11963-0 (1996). William J. Thompson: Atlas for Computing Mathematical Functions: An Illustrated Guide for Practitioners; With Programs in C and Mathematica, Wiley-Interscience, ISBN 978-0-471-00260-4 (March, 1997). William J. Thompson: Atlas for Computing Mathematical Functions: An illustrated Guide for Practitioners; With Programs in Fortran 90 and Mathematica, Wiley-Interscience, ISBN 978-0-471-18171-2 (June, 1997). Amparo Gil, Javier Segura and Nico M. Temme: Numerical Methods for Special Functions, SIAM, ISBN 978-0-898716-34-4 (2007). == External links == National Institute of Standards and Technology, United States Department of Commerce. NIST Digital Library of Mathematical Functions. Archived from the original on December 13, 2018. Weisstein, Eric W. "Special Function". MathWorld. Online calculator, Online scientific calculator with over 100 functions (>=32 digits, many complex) (German language) Special functions at EqWorld: The World of Mathematical Equations Special functions and polynomials by Gerard 't Hooft and Stefan Nobbenhuis (April 8, 2013) Numerical Methods for Special Functions, by A. Gil, J. Segura, N.M. Temme (2007). R. Jagannathan, (P,Q)-Special Functions Specialfunctionswiki
Wikipedia/Special_function
In computer science and mathematical logic, a function type (or arrow type or exponential) is the type of a variable or parameter to which a function has or can be assigned, or an argument or result type of a higher-order function taking or returning a function. A function type depends on the type of the parameters and the result type of the function (it, or more accurately the unapplied type constructor · → ·, is a higher-kinded type). In theoretical settings and programming languages where functions are defined in curried form, such as the simply typed lambda calculus, a function type depends on exactly two types, the domain A and the range B. Here a function type is often denoted A → B, following mathematical convention, or BA, based on there existing exactly BA (exponentially many) set-theoretic functions mappings A to B in the category of sets. The class of such maps or functions is called the exponential object. The act of currying makes the function type adjoint to the product type; this is explored in detail in the article on currying. The function type can be considered to be a special case of the dependent product type, which among other properties, encompasses the idea of a polymorphic function. == Programming languages == The syntax used for function types in several programming languages can be summarized, including an example type signature for the higher-order function composition function: When looking at the example type signature of, for example C#, the type of the function compose is actually Func<Func<A,B>,Func<B,C>,Func<A,C>>. Due to type erasure in C++11's std::function, it is more common to use templates for higher order function parameters and type inference (auto) for closures. == Denotational semantics == The function type in programming languages does not correspond to the space of all set-theoretic functions. Given the countably infinite type of natural numbers as the domain and the booleans as range, then there are an uncountably infinite number (2ℵ0 = c) of set-theoretic functions between them. Clearly this space of functions is larger than the number of functions that can be defined in any programming language, as there exist only countably many programs (a program being a finite sequence of a finite number of symbols) and one of the set-theoretic functions effectively solves the halting problem. Denotational semantics concerns itself with finding more appropriate models (called domains) to model programming language concepts such as function types. It turns out that restricting expression to the set of computable functions is not sufficient either if the programming language allows writing non-terminating computations (which is the case if the programming language is Turing complete). Expression must be restricted to the so-called continuous functions (corresponding to continuity in the Scott topology, not continuity in the real analytical sense). Even then, the set of continuous function contains the parallel-or function, which cannot be correctly defined in all programming languages. == See also == Cartesian closed category Currying Exponential object, category-theoretic equivalent First-class function Function space, set-theoretic equivalent == References == Pierce, Benjamin C. (2002). Types and Programming Languages. The MIT Press. pp. 99–100. ISBN 9780262162098. Mitchell, John C. Foundations for Programming Languages. The MIT Press. 
function type at the nLab Homotopy Type Theory: Univalent Foundations of Mathematics, The Univalent Foundations Program, Institute for Advanced Study. See section 1.2.
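For comparison with the signatures discussed above, here is a sketch of the composition function in Python's typing notation, where Callable[[A], B] plays the role of the function type A → B (the name compose and the usage are illustrative):

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    """compose : (A -> B) -> (B -> C) -> (A -> C)"""
    return lambda x: g(f(x))

# The inferred type of inc_then_show is Callable[[int], str].
inc_then_show = compose(lambda n: n + 1, str)
print(inc_then_show(41))   # "42"
```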
Wikipedia/Function_type
In computer science, an operation, function or expression is said to have a side effect if it has any observable effect other than its primary effect of reading the value of its arguments and returning a value to the invoker of the operation. Example side effects include modifying a non-local variable, a static local variable or a mutable argument passed by reference; raising errors or exceptions; performing I/O; or calling other functions with side-effects. In the presence of side effects, a program's behaviour may depend on history; that is, the order of evaluation matters. Understanding and debugging a function with side effects requires knowledge about the context and its possible histories. Side effects play an important role in the design and analysis of programming languages. The degree to which side effects are used depends on the programming paradigm. For example, imperative programming is commonly used to produce side effects, to update a system's state. By contrast, declarative programming is commonly used to report on the state of system, without side effects. Functional programming aims to minimize or eliminate side effects. The lack of side effects makes it easier to do formal verification of a program. The functional language Haskell eliminates side effects such as I/O and other stateful computations by replacing them with monadic actions. Functional languages such as Standard ML, Scheme and Scala do not restrict side effects, but it is customary for programmers to avoid them. Effect systems extend types to keep track of effects, permitting concise notation for functions with effects, while maintaining information about the extent and nature of side effects. In particular, functions without effects correspond to pure functions. Assembly language programmers must be aware of hidden side effects—instructions that modify parts of the processor state which are not mentioned in the instruction's mnemonic. A classic example of a hidden side effect is an arithmetic instruction that implicitly modifies condition codes (a hidden side effect) while it explicitly modifies a register (the intended effect). One potential drawback of an instruction set with hidden side effects is that, if many instructions have side effects on a single piece of state, like condition codes, then the logic required to update that state sequentially may become a performance bottleneck. The problem is particularly acute on some processors designed with pipelining (since 1990) or with out-of-order execution. Such a processor may require additional control circuitry to detect hidden side effects and stall the pipeline if the next instruction depends on the results of those effects. == Referential transparency == Absence of side effects is a necessary, but not sufficient, condition for referential transparency. Referential transparency means that an expression (such as a function call) can be replaced with its value. This requires that the expression is pure, that is to say the expression must be deterministic (always give the same value for the same input) and side-effect free. == Temporal side effects == Side effects caused by the time taken for an operation to execute are usually ignored when discussing side effects and referential transparency. There are some cases, such as with hardware timing or testing, where operations are inserted specifically for their temporal side effects e.g. sleep(5000) or for (int i = 0; i < 10000; ++i) {}. 
These instructions do not change state other than taking an amount of time to complete. == Idempotence == A subroutine with side effects is idempotent if multiple applications of the subroutine have the same effect on the system state as a single application, in other words if the function from the system state space to itself associated with the subroutine is idempotent in the mathematical sense. For instance, consider a Python function setx(n) that assigns its argument to a global variable x (a minimal version is sketched below): setx is idempotent because the second application of setx to 3 has the same effect on the system state as the first application: x was already set to 3 after the first application, and it is still set to 3 after the second application. A pure function is idempotent if it is idempotent in the mathematical sense. For instance, the built-in Python function abs is idempotent because the second application of abs to the return value of the first application to -3 returns the same value as the first application to -3. == Example == One common demonstration of side effect behavior is that of the assignment operator in C. The assignment a = b is an expression that evaluates to the same value as the expression b, with the side effect of storing the R-value of b into the L-value of a. This allows multiple assignment, such as a = b = c. Because the operator right-associates, this is equivalent to a = (b = c). This presents a potential hangup for novice programmers who may confuse the assignment a = b with the equality test a == b. == See also == Action at a distance (computer programming) Don't-care term Sequence point Side-channel attack Undefined behaviour Unspecified behaviour Frame problem == References ==
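Minimal reconstructions of the two Python programs referred to in the Idempotence section above; the exact original listings were not preserved, so the bodies shown are assumptions consistent with the surrounding text:

```python
x = 0

def setx(n):            # a subroutine whose side effect is setting global x
    global x
    x = n

setx(3)
setx(3)                 # second application leaves the state unchanged
assert x == 3           # so setx is idempotent in the side-effect sense

# abs is a pure function that is idempotent in the mathematical sense:
assert abs(abs(-3)) == abs(-3)
```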
Wikipedia/Side_effect_(computer_science)
In computer programming, a function (also procedure, method, subroutine, routine, or subprogram) is a callable unit of software logic that has a well-defined interface and behavior and can be invoked multiple times. Callable units provide a powerful programming tool. The primary purpose is to allow for the decomposition of a large and/or complicated problem into chunks that have relatively low cognitive load and to assign the chunks meaningful names (unless they are anonymous). Judicious application can reduce the cost of developing and maintaining software, while increasing its quality and reliability. Callable units are present at multiple levels of abstraction in the programming environment. For example, a programmer may write a function in source code that is compiled to machine code that implements similar semantics. There is a callable unit in the source code and an associated one in the machine code, but they are different kinds of callable units – with different implications and features. == Terminology == Some programming languages, such as COBOL and BASIC, make a distinction between functions that return a value (typically called "functions") and those that do not (typically called "subprogram", "subroutine", or "procedure"). Other programming languages, such as C, C++, and Rust, only use the term "function" irrespective of whether they return a value or not. Some object-oriented languages, such as Java and C#, refer to functions inside classes as "methods". == History == The idea of a callable unit was initially conceived by John Mauchly and Kathleen Antonelli during their work on ENIAC and recorded in a January 1947 Harvard symposium on "Preparation of Problems for EDVAC-type Machines." Maurice Wilkes, David Wheeler, and Stanley Gill are generally credited with the formal invention of this concept, which they termed a closed sub-routine, contrasted with an open subroutine or macro. However, Alan Turing had discussed subroutines in a paper of 1945 on design proposals for the NPL ACE, going so far as to invent the concept of a return address stack. The idea of a subroutine was worked out after computing machines had already existed for some time. The arithmetic and conditional jump instructions were planned ahead of time and have changed relatively little, but the special instructions used for procedure calls have changed greatly over the years. The earliest computers and microprocessors, such as the Manchester Baby and the RCA 1802, did not have a single subroutine call instruction. Subroutines could be implemented, but they required programmers to use the call sequence—a series of instructions—at each call site. Subroutines were implemented in Konrad Zuse's Z4 in 1945. In 1945, Alan M. Turing used the terms "bury" and "unbury" as a means of calling and returning from subroutines. In January 1947 John Mauchly presented general notes at 'A Symposium of Large Scale Digital Calculating Machinery' under the joint sponsorship of Harvard University and the Bureau of Ordnance, United States Navy. Here he discusses serial and parallel operation suggesting ...the structure of the machine need not be complicated one bit. 
It is possible, since all the logical characteristics essential to this procedure are available, to evolve a coding instruction for placing the subroutines in the memory at places known to the machine, and in such a way that they may easily be called into use.In other words, one can designate subroutine A as division and subroutine B as complex multiplication and subroutine C as the evaluation of a standard error of a sequence of numbers, and so on through the list of subroutines needed for a particular problem. ... All these subroutines will then be stored in the machine, and all one needs to do is make a brief reference to them by number, as they are indicated in the coding. Kay McNulty had worked closely with John Mauchly on the ENIAC team and developed an idea for subroutines for the ENIAC computer she was programming during World War II. She and the other ENIAC programmers used the subroutines to help calculate missile trajectories. Goldstine and von Neumann wrote a paper dated 16 August 1948 discussing the use of subroutines. Some very early computers and microprocessors, such as the IBM 1620, the Intel 4004 and Intel 8008, and the PIC microcontrollers, have a single-instruction subroutine call that uses a dedicated hardware stack to store return addresses—such hardware supports only a few levels of subroutine nesting, but can support recursive subroutines. Machines before the mid-1960s—such as the UNIVAC I, the PDP-1, and the IBM 1130—typically use a calling convention which saved the instruction counter in the first memory location of the called subroutine. This allows arbitrarily deep levels of subroutine nesting but does not support recursive subroutines. The IBM System/360 had a subroutine call instruction that placed the saved instruction counter value into a general-purpose register; this can be used to support arbitrarily deep subroutine nesting and recursive subroutines. The Burroughs B5000 (1961) is one of the first computers to store subroutine return data on a stack. The DEC PDP-6 (1964) is one of the first accumulator-based machines to have a subroutine call instruction that saved the return address in a stack addressed by an accumulator or index register. The later PDP-10 (1966), PDP-11 (1970) and VAX-11 (1976) lines followed suit; this feature also supports both arbitrarily deep subroutine nesting and recursive subroutines. === Language support === In the very early assemblers, subroutine support was limited. Subroutines were not explicitly separated from each other or from the main program, and indeed the source code of a subroutine could be interspersed with that of other subprograms. Some assemblers would offer predefined macros to generate the call and return sequences. By the 1960s, assemblers usually had much more sophisticated support for both inline and separately assembled subroutines that could be linked together. One of the first programming languages to support user-written subroutines and functions was FORTRAN II. The IBM FORTRAN II compiler was released in 1958. ALGOL 58 and other early programming languages also supported procedural programming. === Libraries === Even with this cumbersome approach, subroutines proved very useful. They allowed the use of the same code in many different programs. Memory was a very scarce resource on early computers, and subroutines allowed significant savings in the size of programs. Many early computers loaded the program instructions into memory from a punched paper tape. 
Each subroutine could then be provided by a separate piece of tape, loaded or spliced before or after the main program (or "mainline"); and the same subroutine tape could then be used by many different programs. A similar approach was used in computers that loaded program instructions from punched cards. The name subroutine library originally meant a library, in the literal sense, which kept indexed collections of tapes or decks of cards for collective use. === Return by indirect jump === To remove the need for self-modifying code, computer designers eventually provided an indirect jump instruction, whose operand, instead of being the return address itself, was the location of a variable or processor register containing the return address. On those computers, instead of modifying the function's return jump, the calling program would store the return address in a variable so that when the function completed, it would execute an indirect jump that would direct execution to the location given by the predefined variable. === Jump to subroutine === Another advance was the jump to subroutine instruction, which combined the saving of the return address with the calling jump, thereby minimizing overhead significantly. In the IBM System/360, for example, the branch instructions BAL or BALR, designed for procedure calling, would save the return address in a processor register specified in the instruction, by convention register 14. To return, the subroutine had only to execute an indirect branch instruction (BR) through that register. If the subroutine needed that register for some other purpose (such as calling another subroutine), it would save the register's contents to a private memory location or a register stack. In systems such as the HP 2100, the JSB instruction would perform a similar task, except that the return address was stored in the memory location that was the target of the branch. Execution of the procedure would actually begin at the next memory location. In the HP 2100 assembly language, a call to a subroutine called MYSUB was written as JSB MYSUB, with the instruction following the call labelled BB. The subroutine itself began with a word labelled MYSUB, reserved to receive the return address, followed by the first instruction of its body at label AA. The JSB instruction placed the address of the NEXT instruction (namely, BB) into the location specified as its operand (namely, MYSUB), and then branched to the NEXT location after that (namely, AA = MYSUB + 1). The subroutine could then return to the main program by executing the indirect jump JMP MYSUB, I which branched to the location stored at location MYSUB. Compilers for Fortran and other languages could easily make use of these instructions when available. This approach supported multiple levels of calls; however, since the return address, parameters, and return values of a subroutine were assigned fixed memory locations, it did not allow for recursive calls. Incidentally, a similar method was used by Lotus 1-2-3, in the early 1980s, to discover the recalculation dependencies in a spreadsheet. Namely, a location was reserved in each cell to store the return address. Since circular references are not allowed for natural recalculation order, this allows a tree walk without reserving space for a stack in memory, which was very limited on small computers such as the IBM PC. === Call stack === Most modern implementations of a function call use a call stack, a special case of the stack data structure, to implement function calls and returns.
Each procedure call creates a new entry, called a stack frame, at the top of the stack; when the procedure returns, its stack frame is deleted from the stack, and its space may be used for other procedure calls. Each stack frame contains the private data of the corresponding call, which typically includes the procedure's parameters and internal variables, and the return address. The call sequence can be implemented by a sequence of ordinary instructions (an approach still used in reduced instruction set computing (RISC) and very long instruction word (VLIW) architectures), but many traditional machines designed since the late 1960s have included special instructions for that purpose. The call stack is usually implemented as a contiguous area of memory. It is an arbitrary design choice whether the bottom of the stack is the lowest or highest address within this area, so that the stack may grow forwards or backwards in memory; however, many architectures chose the latter. Some designs, notably some Forth implementations, used two separate stacks, one mainly for control information (like return addresses and loop counters) and the other for data. The former was, or worked like, a call stack and was only indirectly accessible to the programmer through other language constructs while the latter was more directly accessible. When stack-based procedure calls were first introduced, an important motivation was to save precious memory. With this scheme, the compiler does not have to reserve separate space in memory for the private data (parameters, return address, and local variables) of each procedure. At any moment, the stack contains only the private data of the calls that are currently active (namely, which have been called but haven't returned yet). Because of the ways in which programs were usually assembled from libraries, it was (and still is) not uncommon to find programs that include thousands of functions, of which only a handful are active at any given moment. For such programs, the call stack mechanism could save significant amounts of memory. Indeed, the call stack mechanism can be viewed as the earliest and simplest method for automatic memory management. However, another advantage of the call stack method is that it allows recursive function calls, since each nested call to the same procedure gets a separate instance of its private data. In a multi-threaded environment, there is generally more than one stack. An environment that fully supports coroutines or lazy evaluation may use data structures other than stacks to store their activation records. ==== Delayed stacking ==== One disadvantage of the call stack mechanism is the increased cost of a procedure call and its matching return. The extra cost includes incrementing and decrementing the stack pointer (and, in some architectures, checking for stack overflow), and accessing the local variables and parameters by frame-relative addresses, instead of absolute addresses. The cost may be realized in increased execution time, or increased processor complexity, or both. This overhead is most obvious and objectionable in leaf procedures or leaf functions, which return without making any procedure calls themselves. To reduce that overhead, many modern compilers try to delay the use of a call stack until it is really needed. For example, the call of a procedure P may store the return address and parameters of the called procedure in certain processor registers, and transfer control to the procedure's body by a simple jump. 
If the procedure P returns without making any other call, the call stack is not used at all. If P needs to call another procedure Q, it will then use the call stack to save the contents of any registers (such as the return address) that will be needed after Q returns. == Features == In general, a callable unit is a list of instructions that, starting at the first instruction, executes sequentially except as directed via its internal logic. It can be invoked (called) many times during the execution of a program. Execution continues at the next instruction after the call instruction when it returns control. == Implementations == The features of implementations of callable units evolved over time and varies by context. This section describes features of the various common implementations. === General characteristics === Most modern programming languages provide features to define and call functions, including syntax for accessing such features, including: Delimit the implementation of a function from the rest of the program Assign an identifier, name, to a function Define formal parameters with a name and data type for each Assign a data type to the return value, if any Specify a return value in the function body Call a function Provide actual parameters that correspond to a called function's formal parameters Return control to the caller at the point of call Consume the return value in the caller Dispose of the values returned by a call Provide a private naming scope for variables Identify variables outside the function that are accessible within it Propagate an exceptional condition out of a function and to handle it in the calling context Package functions into a container such as module, library, object, or class === Naming === Some languages, such as Pascal, Fortran, Ada and many dialects of BASIC, use a different name for a callable unit that returns a value (function or subprogram) vs. one that does not (subroutine or procedure). Other languages, such as C, C++, C# and Lisp, use only one name for a callable unit, function. The C-family languages use the keyword void to indicate no return value. === Call syntax === If declared to return a value, a call can be embedded in an expression in order to consume the return value. For example, a square root callable unit might be called like y = sqrt(x). A callable unit that does not return a value is called as a stand-alone statement like print("hello"). This syntax can also be used for a callable unit that returns a value, but the return value will be ignored. Some older languages require a keyword for calls that do not consume a return value, like CALL print("hello"). === Parameters === Most implementations, especially in modern languages, support parameters which the callable declares as formal parameters. A caller passes actual parameters, a.k.a. arguments, to match. Different programming languages provide different conventions for passing arguments. === Return value === In some languages, such as BASIC, a callable has different syntax (i.e. keyword) for a callable that returns a value vs. one that does not. In other languages, the syntax is the same regardless. In some of these languages an extra keyword is used to declare no return value; for example void in C, C++ and C#. In some languages, such as Python, the difference is whether the body contains a return statement with a value, and a particular callable may return with or without a value based on control flow. 
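For example, the Python behavior just described can be seen in a function whose body returns a value on one path and falls off the end (yielding None) on another; the names below are illustrative:

```python
def first_even(numbers):
    for n in numbers:
        if n % 2 == 0:
            return n        # returns a value on this path
    # no explicit return here: the caller receives None

print(first_even([1, 3, 4, 5]))   # 4
print(first_even([1, 3, 5]))      # None
```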
=== Side effects === In many contexts, a callable may have side effect behavior such as modifying passed or global data, reading from or writing to a peripheral device, accessing a file, halting the program or the machine, or temporarily pausing program execution. Side effects are considered undesirable by Robert C. Martin, who is known for promoting design principles. Martin argues that side effects can result in temporal coupling or order dependencies. In strictly functional programming languages such as Haskell, a function can have no side effects, which means it cannot change the state of the program. Functions always return the same result for the same input. Such languages typically only support functions that return a value, since there is no value in a function that has neither return value nor side effect. === Local variables === Most contexts support local variables – memory owned by a callable to hold intermediate values. These variables are typically stored in the call's activation record on the call stack along with other information such as the return address. === Nested call – recursion === If supported by the language, a callable may call itself, causing its execution to suspend while another nested execution of the same callable executes. Recursion is a useful means to simplify some complex algorithms and break down complex problems. Recursive languages provide a new copy of local variables on each call. If the programmer desires the recursive callable to use the same variables instead of using locals, they typically declare them in a shared context such as static or global. Languages going back to ALGOL, PL/I and C, as well as modern languages, almost invariably use a call stack, usually supported by the instruction sets to provide an activation record for each call. That way, a nested call can modify its local variables without affecting any of the suspended calls' variables. Recursion allows direct implementation of functionality defined by mathematical induction and recursive divide and conquer algorithms. A classic example is a recursive C or C++ function to find Fibonacci numbers: fib(n) returns n when n is less than 2 and otherwise calls itself twice, returning fib(n - 1) + fib(n - 2). Early languages like Fortran did not initially support recursion because only one set of variables and return address were allocated for each callable. Early computer instruction sets made storing return addresses and variables on a stack difficult. Machines with index registers or general-purpose registers, e.g., CDC 6000 series, PDP-6, GE 635, System/360, UNIVAC 1100 series, could use one of those registers as a stack pointer. === Nested scope === Some languages, e.g., Ada, Pascal, PL/I, Python, support declaring and defining a function inside another, e.g., inside a function body, such that the name of the inner function is only visible within the body of the outer. === Reentrancy === If a callable can be executed properly even when another execution of the same callable is already in progress, that callable is said to be reentrant. A reentrant callable is also useful in multi-threaded situations since multiple threads can call the same callable without fear of interfering with each other. In the IBM CICS transaction processing system, quasi-reentrant was a slightly less restrictive, but similar, requirement for application programs that were shared by many threads. === Overloading === Some languages support overloading – allowing multiple callables with the same name in the same scope, but operating on different types of input. Consider the square root function applied to real number, complex number and matrix input.
The algorithm for each type of input is different, and the return value may have a different type. By writing three separate callables with the same name, i.e. sqrt, the resulting code may be easier to write and to maintain since each one has a name that is relatively easy to understand and to remember instead of giving longer and more complicated names like sqrt_real, sqrt_complex, sqrt_matrix. Overloading is supported in many languages that support strong typing. Often the compiler selects the overload to call based on the type of the input arguments, or fails if the input arguments do not select an overload. Older and weakly-typed languages generally do not support overloading. In C++, for example, two overloaded functions named Area can accept different parameter types – say, a radius for a circle and a width and height for a rectangle – with the compiler choosing between them from the arguments at each call site. PL/I has the GENERIC attribute to define a generic name for a set of entry references called with different types of arguments. Example: DECLARE gen_name GENERIC( name WHEN(FIXED BINARY), flame WHEN(FLOAT), pathname OTHERWISE); Multiple argument definitions may be specified for each entry. A call to "gen_name" will result in a call to "name" when the argument is FIXED BINARY, "flame" when FLOAT, etc. If the argument matches none of the choices "pathname" will be called. === Closure === A closure is a callable plus values of some of its variables captured from the environment in which it was created. Closures were a notable feature of the Lisp programming language, introduced by John McCarthy. Depending on the implementation, closures can serve as a mechanism for side-effects. === Exception reporting === Besides its happy path behavior, a callable may need to inform the caller about an exceptional condition that occurred during its execution. Most modern languages support exceptions, which allow for exceptional control flow that pops the call stack until an exception handler is found to handle the condition. Languages that do not support exceptions can use the return value to indicate success or failure of a call. Another approach is to use a well-known location like a global variable for success indication. A callable writes the value and the caller reads it after a call. In the IBM System/360, where a return code was expected from a subroutine, the return value was often designed to be a multiple of 4—so that it could be used as a direct index into a branch table often located immediately after the call instruction, avoiding extra conditional tests and further improving efficiency. === Call overhead === A call has runtime overhead, which may include but is not limited to: allocating and reclaiming call stack storage; saving and restoring processor registers; copying input variables; copying values after the call into the caller's context; automatic testing of the return code; handling of exceptions; and dispatching, such as for a virtual method in an object-oriented language. Various techniques are employed to minimize the runtime cost of calls. ==== Compiler optimization ==== Some optimizations for minimizing call overhead may seem straightforward, but cannot be used if the callable has side effects. For example, in the expression (f(x)-1)/(f(x)+1), the function f cannot be called only once with its value used two times since the two calls may return different results.
Moreover, in the few languages which define the order of evaluation of the division operator's operands, the value of x must be fetched again before the second call, since the first call may have changed it. Determining whether a callable has a side effect is difficult – indeed, undecidable by virtue of Rice's theorem. So, while this optimization is safe in a purely functional programming language, a compiler for a language not limited to functional programming typically assumes the worst case: every callable may have side effects. ==== Inlining ==== Inlining eliminates calls for particular callables. The compiler replaces each call with the compiled code of the callable. Not only does this avoid the call overhead, but it also allows the compiler to optimize the code of the caller more effectively by taking into account the context and arguments at that call. Inlining, however, usually increases the compiled code size, except when the callable is called only once or its body is very short, such as one line. === Sharing === Callables can be defined within a program, or separately in a library that can be used by multiple programs. === Inter-operability === A compiler translates call and return statements into machine instructions according to a well-defined calling convention. For code compiled by the same or a compatible compiler, functions can be compiled separately from the programs that call them. The instruction sequences corresponding to call and return statements are called the procedure's prologue and epilogue. === Built-in functions === A built-in function, or builtin function, or intrinsic function, is a function for which the compiler generates code at compile time, or that it provides in some way other than it does for ordinary functions. A built-in function does not need to be defined like other functions since it is built in to the programming language. == Programming == === Trade-offs === ==== Advantages ==== Advantages of breaking a program into functions include:
Decomposing a complex programming task into simpler steps: this is one of the two main tools of structured programming, along with data structures
Reducing duplicate code within a program
Enabling reuse of code across multiple programs
Dividing a large programming task among various programmers or various stages of a project
Hiding implementation details from users of the function
Improving readability of code by replacing a block of code with a function call whose descriptive name serves to describe the block of code; this makes the calling code concise and readable even if the function is not meant to be reused
Improving traceability: most languages offer ways to obtain the call trace, which includes the names of the involved functions and perhaps even more information such as file names and line numbers; without decomposing the code into functions, debugging would be severely impaired
==== Disadvantages ==== Compared to using in-line code, invoking a function imposes some computational overhead in the call mechanism. A function typically requires standard housekeeping code – both at the entry to, and exit from, the function (the function prologue and epilogue) – usually saving general-purpose registers and the return address as a minimum. === Conventions === Many programming conventions have been developed regarding callables. With respect to naming, many developers name a callable with a phrase starting with a verb when it does a certain task, with an adjective when it makes an inquiry, and with a noun when it is used to stand in for a variable.
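As a minimal illustration of this naming convention, consider the following Python sketch (the names are hypothetical):

def compute_total(prices):    # verb phrase: performs a task
    return sum(prices)

def is_empty(prices):         # adjective: makes an inquiry
    return len(prices) == 0

def unit_price():             # noun: stands in for a variable
    return 100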
Some programmers suggest that a callable should perform exactly one task, and if it performs more than one task, it should be split up into multiple callables. They argue that callables are key components in software maintenance, and their roles in the program must remain distinct. Proponents of modular programming advocate that each callable should have minimal dependency on the rest of the codebase. For example, the use of global variables is generally deemed unwise, because it adds coupling between all callables that use the global variables. If such coupling is not necessary, they advise refactoring callables to accept passed parameters instead. == Examples == === Early BASIC === Early BASIC variants require each line to have a unique number (line number) that orders the lines for execution. They provide no separation of the code that is callable, no mechanism for passing arguments or returning a value, and all variables are global. The language provides the command GOSUB, where sub is short for sub procedure, subprocedure or subroutine. Control jumps to the specified line number and then continues on the next line on return.

10 INPUT "ENTER A NUMBER"; X
20 GOSUB 100
30 GOTO 10
100 REM OUTPUT THE SQUARE ROOT OF X
110 LET R = SQR(X)
120 PRINT "SQUARE ROOT OF"; X; "IS"; R
130 RETURN

This code repeatedly asks the user to enter a number and reports the square root of the value. Lines 100-130 are the callable. === Small Basic === In Microsoft Small Basic, targeted to the student first learning how to program in a text-based language, a callable unit is called a subroutine. The Sub keyword denotes the start of a subroutine and is followed by a name identifier. Subsequent lines are the body, which ends with the EndSub keyword.

Sub SayHello
  TextWindow.WriteLine("Hello!")
EndSub

This can be called as SayHello(). === Visual Basic === In later versions of Visual Basic (VB), including the latest product line and VB6, the term procedure is used for the callable unit concept. The keyword Sub is used to return no value and Function to return a value. When used in the context of a class, a procedure is a method. Each parameter has a data type that can be specified, but if not, defaults to Object for later versions based on .NET and Variant for VB6. VB supports parameter passing conventions by value and by reference via the keywords ByVal and ByRef, respectively. Unless ByRef is specified, an argument is passed ByVal. Therefore, ByVal is rarely explicitly specified. For a simple type like a number, these conventions are relatively clear. Passing ByRef allows the procedure to modify the passed variable, whereas passing ByVal does not. For an object, semantics can confuse programmers, since an object is always treated as a reference. Passing an object ByVal copies the reference, not the state of the object. The called procedure can modify the state of the object via its methods, yet cannot modify the object reference of the actual parameter.

Sub DoSomething()
    ' code that does something
End Sub

This does not return a value and has to be called stand-alone, like DoSomething

Function GiveMeFive() As Integer
    Return 5
End Function

This returns the value 5, and a call can be part of an expression like y = x + GiveMeFive()

Sub AddTwo(ByRef value As Integer)
    value = value + 2
End Sub

This has a side-effect – it modifies the variable passed by reference – and could be called for a variable v like AddTwo(v). Given that v is 5 before the call, it will be 7 after. === C and C++ === In C and C++, a callable unit is called a function. A function definition starts with the name of the type of value that it returns, or void to indicate that it does not return a value. This is followed by the function name, formal arguments in parentheses, and body lines in braces. In C++, a function declared in a class (as non-static) is called a member function or method.
A function outside of a class can be called a free function to distinguish it from a member function.

void doSomething() {
    // code that does something
}

This function does not return a value and is always called stand-alone, like doSomething()

int giveMeFive() {
    return 5;
}

This function returns the integer value 5. The call can be stand-alone or in an expression like y = x + giveMeFive()

void addTwo(int *pi) {
    *pi += 2;
}

This function has a side-effect – it modifies the value passed by address, setting it to the input value plus 2. It could be called for variable v as addTwo(&v), where the ampersand (&) tells the compiler to pass the address of a variable. Given that v is 5 before the call, it will be 7 after.

void addTwo(int &i) {
    i += 2;
}

This function requires C++ – it would not compile as C. It has the same behavior as the preceding example, but passes the actual parameter by reference rather than passing its address. A call such as addTwo(v) does not include an ampersand since the compiler handles passing by reference without syntax in the call. === PL/I === In PL/I a called procedure may be passed a descriptor providing information about the argument, such as string lengths and array bounds. This allows the procedure to be more general and eliminates the need for the programmer to pass such information. By default PL/I passes arguments by reference. A (trivial) function to change the sign of each element of a two-dimensional array might look like:

change_sign: procedure(array);
  declare array(*,*) float;
  array = -array;
end change_sign;

This could be called with various arrays as follows:

/* first array bounds from -5 to +10 and 3 to 9 */
declare array1 (-5:10, 3:9) float;
/* second array bounds from 1 to 16 and 1 to 16 */
declare array2 (16,16) float;
call change_sign(array1);
call change_sign(array2);

=== Python === In Python, the keyword def denotes the start of a function definition. The statements of the function body follow, indented, on subsequent lines, and end at the line that is indented the same as the first line, or at end of file.

def format_greeting(name):
    return "Welcome " + name

def greet_martin():
    print(format_greeting("Martin"))

The first function returns greeting text that includes the name passed by the caller. The second function calls the first and is called like greet_martin() to write "Welcome Martin" to the console. === Prolog === In the procedural interpretation of logic programs, logical implications behave as goal-reduction procedures. A rule (or clause) of the form: A :- B which has the logical reading: A if B behaves as a procedure that reduces goals that unify with A to subgoals that are instances of B. Consider, for example, the Prolog program:

mother_child(elizabeth, charles).
parent_child(X, Y) :- mother_child(X, Y).

Notice that the motherhood function, X = mother(Y), is represented by a relation, as in a relational database. However, relations in Prolog function as callable units. For example, the procedure call ?- parent_child(X, charles) produces the output X = elizabeth. But the same procedure can be called with other input-output patterns. For example:

?- parent_child(elizabeth, Y).
Y = charles

== See also ==
Asynchronous procedure call, a subprogram that is called after its parameters are set by other activities
Command–query separation (CQS)
Compound operation
Coroutines, subprograms that call each other as if both were the main programs
Evaluation strategy
Event handler, a subprogram that is called in response to an input event or interrupt
Function (mathematics)
Functional programming
Fused operation
Intrinsic function
Lambda function (computer programming), a function that is not bound to an identifier
Logic programming
Modular programming
Operator overloading
Protected procedure
Transclusion
== References ==
Wikipedia/Function_(computer_programming)
In mathematics, the restriction of a function f {\displaystyle f} is a new function, denoted f | A {\displaystyle f\vert _{A}} or f ↾ A , {\displaystyle f{\upharpoonright _{A}},} obtained by choosing a smaller domain A {\displaystyle A} for the original function f . {\displaystyle f.} The function f {\displaystyle f} is then said to extend f | A . {\displaystyle f\vert _{A}.} == Formal definition == Let f : E → F {\displaystyle f:E\to F} be a function from a set E {\displaystyle E} to a set F . {\displaystyle F.} If a set A {\displaystyle A} is a subset of E , {\displaystyle E,} then the restriction of f {\displaystyle f} to A {\displaystyle A} is the function f | A : A → F {\displaystyle {f|}_{A}:A\to F} given by f | A ( x ) = f ( x ) {\displaystyle {f|}_{A}(x)=f(x)} for x ∈ A . {\displaystyle x\in A.} Informally, the restriction of f {\displaystyle f} to A {\displaystyle A} is the same function as f , {\displaystyle f,} but is only defined on A {\displaystyle A} . If the function f {\displaystyle f} is thought of as a relation ( x , f ( x ) ) {\displaystyle (x,f(x))} on the Cartesian product E × F , {\displaystyle E\times F,} then the restriction of f {\displaystyle f} to A {\displaystyle A} can be represented by its graph, G ( f | A ) = { ( x , f ( x ) ) ∈ G ( f ) : x ∈ A } = G ( f ) ∩ ( A × F ) , {\displaystyle G({f|}_{A})=\{(x,f(x))\in G(f):x\in A\}=G(f)\cap (A\times F),} where the pairs ( x , f ( x ) ) {\displaystyle (x,f(x))} represent ordered pairs in the graph G . {\displaystyle G.} === Extensions === A function F {\displaystyle F} is said to be an extension of another function f {\displaystyle f} if whenever x {\displaystyle x} is in the domain of f {\displaystyle f} then x {\displaystyle x} is also in the domain of F {\displaystyle F} and f ( x ) = F ( x ) . {\displaystyle f(x)=F(x).} That is, if domain ⁡ f ⊆ domain ⁡ F {\displaystyle \operatorname {domain} f\subseteq \operatorname {domain} F} and F | domain ⁡ f = f . {\displaystyle F{\big \vert }_{\operatorname {domain} f}=f.} A linear extension (respectively, continuous extension, etc.) of a function f {\displaystyle f} is an extension of f {\displaystyle f} that is also a linear map (respectively, a continuous map, etc.). == Examples == The restriction of the non-injective function f : R → R , x ↦ x 2 {\displaystyle f:\mathbb {R} \to \mathbb {R} ,\ x\mapsto x^{2}} to the domain R + = [ 0 , ∞ ) {\displaystyle \mathbb {R} _{+}=[0,\infty )} is the injection f : R + → R , x ↦ x 2 . {\displaystyle f:\mathbb {R} _{+}\to \mathbb {R} ,\ x\mapsto x^{2}.} The factorial function is the restriction of the gamma function to the positive integers, with the argument shifted by one: Γ | Z + ( n ) = ( n − 1 ) ! {\displaystyle {\Gamma |}_{\mathbb {Z} ^{+}}\!(n)=(n-1)!} == Properties of restrictions == Restricting a function f : X → Y {\displaystyle f:X\rightarrow Y} to its entire domain X {\displaystyle X} gives back the original function, that is, f | X = f . {\displaystyle f|_{X}=f.} Restricting a function twice is the same as restricting it once, that is, if A ⊆ B ⊆ dom ⁡ f , {\displaystyle A\subseteq B\subseteq \operatorname {dom} f,} then ( f | B ) | A = f | A . {\displaystyle \left(f|_{B}\right)|_{A}=f|_{A}.} The restriction of the identity function on a set X {\displaystyle X} to a subset A {\displaystyle A} of X {\displaystyle X} is just the inclusion map from A {\displaystyle A} into X . {\displaystyle X.} The restriction of a continuous function is continuous. 
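For a function with a finite domain, the defining property f|A(x) = f(x) for x in A can be modelled directly. A minimal Python sketch, representing the function as a dictionary (the names are illustrative):

# the squaring function on the domain {-2, -1, 0, 1, 2}
f = {-2: 4, -1: 1, 0: 0, 1: 1, 2: 4}
A = {0, 1, 2}   # a subset of the domain

# keep only the pairs whose first component lies in A
f_restricted = {x: y for x, y in f.items() if x in A}
print(f_restricted)   # {0: 0, 1: 1, 2: 4}

As in the squaring example above, the original function is not injective but its restriction is.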
== Applications == === Inverse functions === For a function to have an inverse, it must be one-to-one. If a function f {\displaystyle f} is not one-to-one, it may be possible to define a partial inverse of f {\displaystyle f} by restricting the domain. For example, the function f ( x ) = x 2 {\displaystyle f(x)=x^{2}} defined on the whole of R {\displaystyle \mathbb {R} } is not one-to-one since x 2 = ( − x ) 2 {\displaystyle x^{2}=(-x)^{2}} for any x ∈ R . {\displaystyle x\in \mathbb {R} .} However, the function becomes one-to-one if we restrict to the domain R ≥ 0 = [ 0 , ∞ ) , {\displaystyle \mathbb {R} _{\geq 0}=[0,\infty ),} in which case f − 1 ( y ) = y . {\displaystyle f^{-1}(y)={\sqrt {y}}.} (If we instead restrict to the domain ( − ∞ , 0 ] , {\displaystyle (-\infty ,0],} then the inverse is the negative of the square root of y . {\displaystyle y.} ) Alternatively, there is no need to restrict the domain if we allow the inverse to be a multivalued function. === Selection operators === In relational algebra, a selection (sometimes called a restriction to avoid confusion with SQL's use of SELECT) is a unary operation written as σ a θ b ( R ) {\displaystyle \sigma _{a\theta b}(R)} or σ a θ v ( R ) {\displaystyle \sigma _{a\theta v}(R)} where: a {\displaystyle a} and b {\displaystyle b} are attribute names, θ {\displaystyle \theta } is a binary operation in the set { < , ≤ , = , ≠ , ≥ , > } , {\displaystyle \{<,\leq ,=,\neq ,\geq ,>\},} v {\displaystyle v} is a value constant, R {\displaystyle R} is a relation. The selection σ a θ b ( R ) {\displaystyle \sigma _{a\theta b}(R)} selects all those tuples in R {\displaystyle R} for which θ {\displaystyle \theta } holds between the a {\displaystyle a} and the b {\displaystyle b} attribute. The selection σ a θ v ( R ) {\displaystyle \sigma _{a\theta v}(R)} selects all those tuples in R {\displaystyle R} for which θ {\displaystyle \theta } holds between the a {\displaystyle a} attribute and the value v . {\displaystyle v.} Thus, the selection operator restricts to a subset of the entire database. === The pasting lemma === The pasting lemma is a result in topology that relates the continuity of a function with the continuity of its restrictions to subsets. Let X , Y {\displaystyle X,Y} be two closed subsets (or two open subsets) of a topological space A {\displaystyle A} such that A = X ∪ Y , {\displaystyle A=X\cup Y,} and let B {\displaystyle B} also be a topological space. If f : A → B {\displaystyle f:A\to B} is continuous when restricted to both X {\displaystyle X} and Y , {\displaystyle Y,} then f {\displaystyle f} is continuous. This result allows one to take two continuous functions defined on closed (or open) subsets of a topological space and create a new one. === Sheaves === Sheaves provide a way of generalizing restrictions to objects besides functions. In sheaf theory, one assigns an object F ( U ) {\displaystyle F(U)} in a category to each open set U {\displaystyle U} of a topological space, and requires that the objects satisfy certain conditions. 
The most important condition is that there are restriction morphisms between every pair of objects associated to nested open sets; that is, if V ⊆ U , {\displaystyle V\subseteq U,} then there is a morphism res V , U : F ( U ) → F ( V ) {\displaystyle \operatorname {res} _{V,U}:F(U)\to F(V)} satisfying the following properties, which are designed to mimic the restriction of a function: For every open set U {\displaystyle U} of X , {\displaystyle X,} the restriction morphism res U , U : F ( U ) → F ( U ) {\displaystyle \operatorname {res} _{U,U}:F(U)\to F(U)} is the identity morphism on F ( U ) . {\displaystyle F(U).} If we have three open sets W ⊆ V ⊆ U , {\displaystyle W\subseteq V\subseteq U,} then the composite res W , V ∘ res V , U = res W , U . {\displaystyle \operatorname {res} _{W,V}\circ \operatorname {res} _{V,U}=\operatorname {res} _{W,U}.} (Locality) If ( U i ) {\displaystyle \left(U_{i}\right)} is an open covering of an open set U , {\displaystyle U,} and if s , t ∈ F ( U ) {\displaystyle s,t\in F(U)} are such that s | U i = t | U i {\displaystyle s{\big \vert }_{U_{i}}=t{\big \vert }_{U_{i}}} for each set U i {\displaystyle U_{i}} of the covering, then s = t {\displaystyle s=t} ; and (Gluing) If ( U i ) {\displaystyle \left(U_{i}\right)} is an open covering of an open set U , {\displaystyle U,} and if for each i {\displaystyle i} a section s i ∈ F ( U i ) {\displaystyle s_{i}\in F\left(U_{i}\right)} is given such that for each pair U i , U j {\displaystyle U_{i},U_{j}} of the covering sets the restrictions of s i {\displaystyle s_{i}} and s j {\displaystyle s_{j}} agree on the overlaps: s i | U i ∩ U j = s j | U i ∩ U j , {\displaystyle s_{i}{\big \vert }_{U_{i}\cap U_{j}}=s_{j}{\big \vert }_{U_{i}\cap U_{j}},} then there is a section s ∈ F ( U ) {\displaystyle s\in F(U)} such that s | U i = s i {\displaystyle s{\big \vert }_{U_{i}}=s_{i}} for each i . {\displaystyle i.} The collection of all such objects is called a sheaf. If only the first two properties are satisfied, it is a pre-sheaf. == Left- and right-restriction == More generally, the restriction (or domain restriction or left-restriction) A ◃ R {\displaystyle A\triangleleft R} of a binary relation R {\displaystyle R} between E {\displaystyle E} and F {\displaystyle F} may be defined as a relation having domain A , {\displaystyle A,} codomain F {\displaystyle F} and graph G ( A ◃ R ) = { ( x , y ) ∈ G ( R ) : x ∈ A } . {\displaystyle G(A\triangleleft R)=\{(x,y)\in G(R):x\in A\}.} Similarly, one can define a right-restriction or range restriction R ▹ B . {\displaystyle R\triangleright B.} Indeed, one could define a restriction to n {\displaystyle n} -ary relations, as well as to subsets understood as relations, such as ones of the Cartesian product E × F {\displaystyle E\times F} for binary relations. These cases do not fit into the scheme of sheaves. == Anti-restriction == The domain anti-restriction (or domain subtraction) of a function or binary relation R {\displaystyle R} (with domain E {\displaystyle E} and codomain F {\displaystyle F} ) by a set A {\displaystyle A} may be defined as ( E ∖ A ) ◃ R {\displaystyle (E\setminus A)\triangleleft R} ; it removes all elements of A {\displaystyle A} from the domain E . {\displaystyle E.} It is sometimes denoted A {\displaystyle A} ⩤ R .
{\displaystyle R.} Similarly, the range anti-restriction (or range subtraction) of a function or binary relation R {\displaystyle R} by a set B {\displaystyle B} is defined as R ▹ ( F ∖ B ) {\displaystyle R\triangleright (F\setminus B)} ; it removes all elements of B {\displaystyle B} from the codomain F . {\displaystyle F.} It is sometimes denoted R {\displaystyle R} ⩥ B . {\displaystyle B.} == See also ==
Constraint – Condition of an optimization problem which the solution must satisfy
Deformation retract – Continuous, position-preserving mapping from a topological space into a subspace
Local property – Property which occurs on sufficiently small or arbitrarily small neighborhoods of points
Function (mathematics) § Restriction and extension
Binary relation § Restriction
Relational algebra § Selection (σ)
== References ==
Wikipedia/Restriction_of_a_function
In mathematics, the inverse function of a function f (also called the inverse of f) is a function that undoes the operation of f. The inverse of f exists if and only if f is bijective, and if it exists, is denoted by f − 1 . {\displaystyle f^{-1}.} For a function f : X → Y {\displaystyle f\colon X\to Y} , its inverse f − 1 : Y → X {\displaystyle f^{-1}\colon Y\to X} admits an explicit description: it sends each element y ∈ Y {\displaystyle y\in Y} to the unique element x ∈ X {\displaystyle x\in X} such that f(x) = y. As an example, consider the real-valued function of a real variable given by f(x) = 5x − 7. One can think of f as the function which multiplies its input by 5 then subtracts 7 from the result. To undo this, one adds 7 to the input, then divides the result by 5. Therefore, the inverse of f is the function f − 1 : R → R {\displaystyle f^{-1}\colon \mathbb {R} \to \mathbb {R} } defined by f − 1 ( y ) = y + 7 5 . {\displaystyle f^{-1}(y)={\frac {y+7}{5}}.} == Definitions == Let f be a function whose domain is the set X, and whose codomain is the set Y. Then f is invertible if there exists a function g from Y to X such that g ( f ( x ) ) = x {\displaystyle g(f(x))=x} for all x ∈ X {\displaystyle x\in X} and f ( g ( y ) ) = y {\displaystyle f(g(y))=y} for all y ∈ Y {\displaystyle y\in Y} . If f is invertible, then there is exactly one function g satisfying this property. The function g is called the inverse of f, and is usually denoted as f −1, a notation introduced by John Frederick William Herschel in 1813. The function f is invertible if and only if it is bijective. This is because the condition g ( f ( x ) ) = x {\displaystyle g(f(x))=x} for all x ∈ X {\displaystyle x\in X} implies that f is injective, and the condition f ( g ( y ) ) = y {\displaystyle f(g(y))=y} for all y ∈ Y {\displaystyle y\in Y} implies that f is surjective. The inverse function f −1 to f can be explicitly described as the function f − 1 ( y ) = ( the unique element x ∈ X such that f ( x ) = y ) {\displaystyle f^{-1}(y)=({\text{the unique element }}x\in X{\text{ such that }}f(x)=y)} . === Inverses and composition === Recall that if f is an invertible function with domain X and codomain Y, then f − 1 ( f ( x ) ) = x {\displaystyle f^{-1}\left(f(x)\right)=x} , for every x ∈ X {\displaystyle x\in X} and f ( f − 1 ( y ) ) = y {\displaystyle f\left(f^{-1}(y)\right)=y} for every y ∈ Y {\displaystyle y\in Y} . Using the composition of functions, this statement can be rewritten to the following equations between functions: f − 1 ∘ f = id X {\displaystyle f^{-1}\circ f=\operatorname {id} _{X}} and f ∘ f − 1 = id Y , {\displaystyle f\circ f^{-1}=\operatorname {id} _{Y},} where idX is the identity function on the set X; that is, the function that leaves its argument unchanged. In category theory, this statement is used as the definition of an inverse morphism. Considering function composition helps to understand the notation f −1. Repeatedly composing a function f: X→X with itself is called iteration. If f is applied n times, starting with the value x, then this is written as f n(x); so f 2(x) = f (f (x)), etc. Since f −1(f (x)) = x, composing f −1 and f n yields f n−1, "undoing" the effect of one application of f. === Notation === While the notation f −1(x) might be misunderstood, (f(x))−1 certainly denotes the multiplicative inverse of f(x) and has nothing to do with the inverse function of f. 
The notation f ⟨ − 1 ⟩ {\displaystyle f^{\langle -1\rangle }} might be used for the inverse function to avoid ambiguity with the multiplicative inverse. In keeping with the general notation, some English authors use expressions like sin−1(x) to denote the inverse of the sine function applied to x (actually a partial inverse; see below). Other authors feel that this may be confused with the notation for the multiplicative inverse of sin (x), which can be denoted as (sin (x))−1. To avoid any confusion, an inverse trigonometric function is often indicated by the prefix "arc" (for Latin arcus). For instance, the inverse of the sine function is typically called the arcsine function, written as arcsin(x). Similarly, the inverse of a hyperbolic function is indicated by the prefix "ar" (for Latin ārea). For instance, the inverse of the hyperbolic sine function is typically written as arsinh(x). The expressions like sin−1(x) can still be useful to distinguish the multivalued inverse from the partial inverse: sin − 1 ⁡ ( x ) = { ( − 1 ) n arcsin ⁡ ( x ) + π n : n ∈ Z } {\displaystyle \sin ^{-1}(x)=\{(-1)^{n}\arcsin(x)+\pi n:n\in \mathbb {Z} \}} . Other inverse special functions are sometimes prefixed with the prefix "inv", if the ambiguity of the f −1 notation should be avoided. == Examples == === Squaring and square root functions === The function f: R → [0,∞) given by f(x) = x2 is not injective because ( − x ) 2 = x 2 {\displaystyle (-x)^{2}=x^{2}} for all x ∈ R {\displaystyle x\in \mathbb {R} } . Therefore, f is not invertible. If the domain of the function is restricted to the nonnegative reals, that is, we take the function f : [ 0 , ∞ ) → [ 0 , ∞ ) ; x ↦ x 2 {\displaystyle f\colon [0,\infty )\to [0,\infty );\ x\mapsto x^{2}} with the same rule as before, then the function is bijective and so, invertible. The inverse function here is called the (positive) square root function and is denoted by x ↦ x {\displaystyle x\mapsto {\sqrt {x}}} . === Standard inverse functions === The following table shows several standard functions and their inverses: === Formula for the inverse === Many functions given by algebraic formulas possess a formula for their inverse. This is because the inverse f − 1 {\displaystyle f^{-1}} of an invertible function f : R → R {\displaystyle f\colon \mathbb {R} \to \mathbb {R} } has an explicit description as f − 1 ( y ) = ( the unique element x ∈ R such that f ( x ) = y ) {\displaystyle f^{-1}(y)=({\text{the unique element }}x\in \mathbb {R} {\text{ such that }}f(x)=y)} . This allows one to easily determine inverses of many functions that are given by algebraic formulas. For example, if f is the function f ( x ) = ( 2 x + 8 ) 3 {\displaystyle f(x)=(2x+8)^{3}} then to determine f − 1 ( y ) {\displaystyle f^{-1}(y)} for a real number y, one must find the unique real number x such that (2x + 8)3 = y. This equation can be solved: y = ( 2 x + 8 ) 3 y 3 = 2 x + 8 y 3 − 8 = 2 x y 3 − 8 2 = x . {\displaystyle {\begin{aligned}y&=(2x+8)^{3}\\{\sqrt[{3}]{y}}&=2x+8\\{\sqrt[{3}]{y}}-8&=2x\\{\dfrac {{\sqrt[{3}]{y}}-8}{2}}&=x.\end{aligned}}} Thus the inverse function f −1 is given by the formula f − 1 ( y ) = y 3 − 8 2 . {\displaystyle f^{-1}(y)={\frac {{\sqrt[{3}]{y}}-8}{2}}.} Sometimes, the inverse of a function cannot be expressed by a closed-form formula. For example, if f is the function f ( x ) = x − sin ⁡ x , {\displaystyle f(x)=x-\sin x,} then f is a bijection, and therefore possesses an inverse function f −1. 
The formula for this inverse has an expression as an infinite sum: f − 1 ( y ) = ∑ n = 1 ∞ y n / 3 n ! lim θ → 0 ( d n − 1 d θ n − 1 ( θ θ − sin ⁡ ( θ ) 3 ) n ) . {\displaystyle f^{-1}(y)=\sum _{n=1}^{\infty }{\frac {y^{n/3}}{n!}}\lim _{\theta \to 0}\left({\frac {\mathrm {d} ^{\,n-1}}{\mathrm {d} \theta ^{\,n-1}}}\left({\frac {\theta }{\sqrt[{3}]{\theta -\sin(\theta )}}}\right)^{n}\right).} == Properties == Since a function is a special type of binary relation, many of the properties of an inverse function correspond to properties of converse relations. === Uniqueness === If an inverse function exists for a given function f, then it is unique. This follows since the inverse function must be the converse relation, which is completely determined by f. === Symmetry === There is a symmetry between a function and its inverse. Specifically, if f is an invertible function with domain X and codomain Y, then its inverse f −1 has domain Y and image X, and the inverse of f −1 is the original function f. In symbols, for functions f:X → Y and f−1:Y → X, f − 1 ∘ f = id X {\displaystyle f^{-1}\circ f=\operatorname {id} _{X}} and f ∘ f − 1 = id Y . {\displaystyle f\circ f^{-1}=\operatorname {id} _{Y}.} This statement is a consequence of the implication that for f to be invertible it must be bijective. The involutory nature of the inverse can be concisely expressed by ( f − 1 ) − 1 = f . {\displaystyle \left(f^{-1}\right)^{-1}=f.} The inverse of a composition of functions is given by ( g ∘ f ) − 1 = f − 1 ∘ g − 1 . {\displaystyle (g\circ f)^{-1}=f^{-1}\circ g^{-1}.} Notice that the order of g and f has been reversed; to undo f followed by g, we must first undo g, and then undo f. For example, let f(x) = 3x and let g(x) = x + 5. Then the composition g ∘ f is the function that first multiplies by three and then adds five, ( g ∘ f ) ( x ) = 3 x + 5. {\displaystyle (g\circ f)(x)=3x+5.} To reverse this process, we must first subtract five, and then divide by three, ( g ∘ f ) − 1 ( x ) = 1 3 ( x − 5 ) . {\displaystyle (g\circ f)^{-1}(x)={\tfrac {1}{3}}(x-5).} This is the composition (f −1 ∘ g −1)(x). === Self-inverses === If X is a set, then the identity function on X is its own inverse: id X − 1 = id X . {\displaystyle {\operatorname {id} _{X}}^{-1}=\operatorname {id} _{X}.} More generally, a function f : X → X is equal to its own inverse, if and only if the composition f ∘ f is equal to idX. Such a function is called an involution. === Graph of the inverse === If f is invertible, then the graph of the function y = f − 1 ( x ) {\displaystyle y=f^{-1}(x)} is the same as the graph of the equation x = f ( y ) . {\displaystyle x=f(y).} This is identical to the equation y = f(x) that defines the graph of f, except that the roles of x and y have been reversed. Thus the graph of f −1 can be obtained from the graph of f by switching the positions of the x and y axes. This is equivalent to reflecting the graph across the line y = x. === Inverses and derivatives === By the inverse function theorem, a continuous function of a single variable f : A → R {\displaystyle f\colon A\to \mathbb {R} } (where A ⊆ R {\displaystyle A\subseteq \mathbb {R} } ) is invertible on its range (image) if and only if it is either strictly increasing or decreasing (with no local maxima or minima). For example, the function f ( x ) = x 3 + x {\displaystyle f(x)=x^{3}+x} is invertible, since the derivative f′(x) = 3x2 + 1 is always positive.
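Because invertibility does not depend on having a closed-form formula, the inverse of a continuous, strictly increasing function can be evaluated numerically. A minimal Python sketch using bisection (the names are illustrative; it assumes f is continuous and strictly increasing on [lo, hi] with f(lo) <= y <= f(hi)):

def invert(f, y, lo, hi, tol=1e-12):
    # solve f(x) = y by repeatedly halving the bracketing interval
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(invert(lambda x: x**3 + x, 10, 0.0, 3.0))   # ~2.0, since 2**3 + 2 = 10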
If the function f is differentiable on an interval I and f′(x) ≠ 0 for each x ∈ I, then the inverse f −1 is differentiable on f(I). If y = f(x), the derivative of the inverse is given by the inverse function theorem, ( f − 1 ) ′ ( y ) = 1 f ′ ( x ) . {\displaystyle \left(f^{-1}\right)^{\prime }(y)={\frac {1}{f'\left(x\right)}}.} Using Leibniz's notation the formula above can be written as d x d y = 1 d y / d x . {\displaystyle {\frac {dx}{dy}}={\frac {1}{dy/dx}}.} This result follows from the chain rule (see the article on inverse functions and differentiation). The inverse function theorem can be generalized to functions of several variables. Specifically, a continuously differentiable multivariable function f : Rn → Rn is invertible in a neighborhood of a point p as long as the Jacobian matrix of f at p is invertible. In this case, the Jacobian of f −1 at f(p) is the matrix inverse of the Jacobian of f at p. == Real-world examples == Let f be the function that converts a temperature in degrees Celsius to a temperature in degrees Fahrenheit, F = f ( C ) = 9 5 C + 32 ; {\displaystyle F=f(C)={\tfrac {9}{5}}C+32;} then its inverse function converts degrees Fahrenheit to degrees Celsius, C = f − 1 ( F ) = 5 9 ( F − 32 ) , {\displaystyle C=f^{-1}(F)={\tfrac {5}{9}}(F-32),} since f − 1 ( f ( C ) ) = f − 1 ( 9 5 C + 32 ) = 5 9 ( ( 9 5 C + 32 ) − 32 ) = C , for every value of C , and f ( f − 1 ( F ) ) = f ( 5 9 ( F − 32 ) ) = 9 5 ( 5 9 ( F − 32 ) ) + 32 = F , for every value of F . {\displaystyle {\begin{aligned}f^{-1}(f(C))={}&f^{-1}\left({\tfrac {9}{5}}C+32\right)={\tfrac {5}{9}}\left(({\tfrac {9}{5}}C+32)-32\right)=C,\\&{\text{for every value of }}C,{\text{ and }}\\[6pt]f\left(f^{-1}(F)\right)={}&f\left({\tfrac {5}{9}}(F-32)\right)={\tfrac {9}{5}}\left({\tfrac {5}{9}}(F-32)\right)+32=F,\\&{\text{for every value of }}F.\end{aligned}}} Suppose f assigns each child in a family its birth year. An inverse function would output which child was born in a given year. However, if the family has children born in the same year (for instance, twins or triplets, etc.) then the output cannot be known when the input is the common birth year. As well, if a year is given in which no child was born then a child cannot be named. But if each child was born in a separate year, and if we restrict attention to the three years in which a child was born, then we do have an inverse function. For example, f ( Allan ) = 2005 , f ( Brad ) = 2007 , f ( Cary ) = 2001 f − 1 ( 2005 ) = Allan , f − 1 ( 2007 ) = Brad , f − 1 ( 2001 ) = Cary {\displaystyle {\begin{aligned}f({\text{Allan}})&=2005,\quad &f({\text{Brad}})&=2007,\quad &f({\text{Cary}})&=2001\\f^{-1}(2005)&={\text{Allan}},\quad &f^{-1}(2007)&={\text{Brad}},\quad &f^{-1}(2001)&={\text{Cary}}\end{aligned}}} Let R be the function that leads to an x percentage rise of some quantity, and F be the function producing an x percentage fall. Applied to $100 with x = 10%, we find that applying the first function followed by the second does not restore the original value of $100, demonstrating the fact that, despite appearances, these two functions are not inverses of each other. The formula to calculate the pH of a solution is pH = −log10[H+]. In many cases we need to find the concentration of acid from a pH measurement. The inverse function [H+] = 10−pH is used. == Generalizations == === Partial inverses === Even if a function f is not one-to-one, it may be possible to define a partial inverse of f by restricting the domain. 
For example, the function f ( x ) = x 2 {\displaystyle f(x)=x^{2}} is not one-to-one, since x2 = (−x)2. However, the function becomes one-to-one if we restrict to the domain x ≥ 0, in which case f − 1 ( y ) = y . {\displaystyle f^{-1}(y)={\sqrt {y}}.} (If we instead restrict to the domain x ≤ 0, then the inverse is the negative of the square root of y.) === Full inverses === Alternatively, there is no need to restrict the domain if we are content with the inverse being a multivalued function: f − 1 ( y ) = ± y . {\displaystyle f^{-1}(y)=\pm {\sqrt {y}}.} Sometimes, this multivalued inverse is called the full inverse of f, and the portions (such as √x and −√x) are called branches. The most important branch of a multivalued function (e.g. the positive square root) is called the principal branch, and its value at y is called the principal value of f −1(y). For a continuous function on the real line, one branch is required between each pair of local extrema. For example, the inverse of a cubic function with a local maximum and a local minimum has three branches. === Trigonometric inverses === The above considerations are particularly important for defining the inverses of trigonometric functions. For example, the sine function is not one-to-one, since sin ( x + 2 π ) = sin ( x ) {\displaystyle \sin(x+2\pi )=\sin(x)} for every real x (and more generally sin(x + 2πn) = sin(x) for every integer n). However, the sine is one-to-one on the interval [−π/2, π/2], and the corresponding partial inverse is called the arcsine. This is considered the principal branch of the inverse sine, so the principal value of the inverse sine is always between −π/2 and π/2. The following table describes the principal branch of each inverse trigonometric function: === Left and right inverses === Function composition on the left and on the right need not coincide. In general, the conditions "There exists g such that g(f(x))=x" and "There exists g such that f(g(x))=x" imply different properties of f. For example, let f: R → [0, ∞) denote the squaring map, such that f(x) = x2 for all x in R, and let g: [0, ∞) → R denote the square root map, such that g(x) = √x for all x ≥ 0. Then f(g(x)) = x for all x in [0, ∞); that is, g is a right inverse to f. However, g is not a left inverse to f, since, e.g., g(f(−1)) = 1 ≠ −1. ==== Left inverses ==== If f: X → Y, a left inverse for f (or retraction of f ) is a function g: Y → X such that composing f with g from the left gives the identity function g ∘ f = id X . {\displaystyle g\circ f=\operatorname {id} _{X}{\text{.}}} That is, the function g satisfies the rule If f(x)=y, then g(y)=x. The function g must equal the inverse of f on the image of f, but may take any values for elements of Y not in the image. A function f with nonempty domain is injective if and only if it has a left inverse. An elementary proof runs as follows: If g is the left inverse of f, and f(x) = f(y), then g(f(x)) = g(f(y)) = x = y. If nonempty f: X → Y is injective, construct a left inverse g: Y → X as follows: for all y ∈ Y, if y is in the image of f, then there exists x ∈ X such that f(x) = y. Let g(y) = x; this definition is unique because f is injective. Otherwise, let g(y) be an arbitrary element of X. For all x ∈ X, f(x) is in the image of f. By construction, g(f(x)) = x, the condition for a left inverse. In classical mathematics, every injective function f with a nonempty domain necessarily has a left inverse; however, this may fail in constructive mathematics.
For instance, a left inverse of the inclusion {0,1} → R of the two-element set in the reals violates indecomposability by giving a retraction of the real line to the set {0,1}. ==== Right inverses ==== A right inverse for f (or section of f ) is a function h: Y → X such that f ∘ h = id Y . {\displaystyle f\circ h=\operatorname {id} _{Y}.} That is, the function h satisfies the rule If h ( y ) = x {\displaystyle \displaystyle h(y)=x} , then f ( x ) = y . {\displaystyle \displaystyle f(x)=y.} Thus, h(y) may be any of the elements of X that map to y under f. A function f has a right inverse if and only if it is surjective (though constructing such an inverse in general requires the axiom of choice). If h is the right inverse of f, then f is surjective. For all y ∈ Y {\displaystyle y\in Y} , there is x = h ( y ) {\displaystyle x=h(y)} such that f ( x ) = f ( h ( y ) ) = y {\displaystyle f(x)=f(h(y))=y} . If f is surjective, f has a right inverse h, which can be constructed as follows: for all y ∈ Y {\displaystyle y\in Y} , there is at least one x ∈ X {\displaystyle x\in X} such that f ( x ) = y {\displaystyle f(x)=y} (because f is surjective), so we choose one to be the value of h(y). ==== Two-sided inverses ==== An inverse that is both a left and right inverse (a two-sided inverse), if it exists, must be unique. In fact, if a function has a left inverse and a right inverse, they are both the same two-sided inverse, so it can be called the inverse. If g {\displaystyle g} is a left inverse and h {\displaystyle h} a right inverse of f {\displaystyle f} , for all y ∈ Y {\displaystyle y\in Y} , g ( y ) = g ( f ( h ( y ) ) ) = h ( y ) {\displaystyle g(y)=g(f(h(y)))=h(y)} . A function has a two-sided inverse if and only if it is bijective. A bijective function f is injective, so it has a left inverse (if f is the empty function, f : ∅ → ∅ {\displaystyle f\colon \varnothing \to \varnothing } is its own left inverse). f is surjective, so it has a right inverse. By the above, the left and right inverse are the same. If f has a two-sided inverse g, then g is a left inverse and right inverse of f, so f is injective and surjective. === Preimages === If f: X → Y is any function (not necessarily invertible), the preimage (or inverse image) of an element y ∈ Y is defined to be the set of all elements of X that map to y: f − 1 ( y ) = { x ∈ X : f ( x ) = y } . {\displaystyle f^{-1}(y)=\left\{x\in X:f(x)=y\right\}.} The preimage of y can be thought of as the image of y under the (multivalued) full inverse of the function f. The notion can be generalized to subsets of the range. Specifically, if S is any subset of Y, the preimage of S, denoted by f − 1 ( S ) {\displaystyle f^{-1}(S)} , is the set of all elements of X that map to S: f − 1 ( S ) = { x ∈ X : f ( x ) ∈ S } . {\displaystyle f^{-1}(S)=\left\{x\in X:f(x)\in S\right\}.} For example, take the function f: R → R; x ↦ x2. This function is not invertible as it is not bijective, but preimages may be defined for subsets of the codomain, e.g. f − 1 ( { 1 , 4 , 9 , 16 } ) = { − 4 , − 3 , − 2 , − 1 , 1 , 2 , 3 , 4 } {\displaystyle f^{-1}(\left\{1,4,9,16\right\})=\left\{-4,-3,-2,-1,1,2,3,4\right\}} . The original notion and its generalization are related by the identity f − 1 ( y ) = f − 1 ( { y } ) . {\displaystyle f^{-1}(y)=f^{-1}(\{y\}).} The preimage of a single element y ∈ Y – a singleton set {y} – is sometimes called the fiber of y. When Y is the set of real numbers, it is common to refer to f −1({y}) as a level set.
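When the domain is finite, a preimage can be computed by direct enumeration. A minimal Python sketch (the names are illustrative):

def preimage(f, domain, S):
    # all elements of the domain that f maps into the set S
    return {x for x in domain if f(x) in S}

print(preimage(lambda x: x**2, range(-4, 5), {1, 4, 9, 16}))
# {-4, -3, -2, -1, 1, 2, 3, 4}, matching the example above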
== See also ==
Lagrange inversion theorem, which gives the Taylor series expansion of the inverse function of an analytic function
Integral of inverse functions
Inverse Fourier transform
Reversible computing
== Notes == == References == == External links == "Inverse function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Left_inverse_function
In computer science, partial application (or partial function application) refers to the process of fixing a number of arguments of a function, producing another function of smaller arity. Given a function f : ( X × Y × Z ) → N {\displaystyle f\colon (X\times Y\times Z)\to N} , we might fix (or 'bind') the first argument, producing a function of type partial ( f ) : ( Y × Z ) → N {\displaystyle {\text{partial}}(f)\colon (Y\times Z)\to N} . Evaluation of this function might be represented as f partial ( 2 , 3 ) {\displaystyle f_{\text{partial}}(2,3)} . Note that the result of partial function application in this case is a function that takes two arguments. Partial application is sometimes incorrectly called currying, which is a related, but distinct concept. == Motivation == Intuitively, partial function application says "if you fix the first arguments of the function, you get a function of the remaining arguments". For example, if function div(x,y) = x/y, then div with the parameter x fixed at 1 is another function: div1(y) = div(1,y) = 1/y. This is the same as the function inv that returns the multiplicative inverse of its argument, defined by inv(y) = 1/y. The practical motivation for partial application is that very often the functions obtained by supplying some but not all of the arguments to a function are useful; for example, many languages have a function or operator similar to plus_one. Partial application makes it easy to define these functions, for example by creating a function that represents the addition operator with 1 bound as its first argument. == Implementations == In languages such as ML, Haskell and F#, functions are defined in curried form by default. Supplying fewer than the total number of arguments is referred to as partial application. In languages with first-class functions, one can define curry, uncurry and papply to perform currying and partial application explicitly. This might incur a greater run-time overhead due to the creation of additional closures, while Haskell can use more efficient techniques. Scala implements optional partial application with placeholder, e.g. def add(x: Int, y: Int) = {x+y}; add(1, _: Int) returns an incrementing function. Scala also supports multiple parameter lists as currying, e.g. def add(x: Int)(y: Int) = {x+y}; add(1) _. Clojure implements partial application using the partial function defined in its core library. The C++ standard library provides bind(function, args..) to return a function object that is the result of partial application of the given arguments to the given function. Since C++20 the function bind_front(function, args...) is also provided which binds the first sizeof...(args) arguments of the function to the args. In contrast, bind allows binding any of the arguments of the function passed to it, not just the first ones. Alternatively, lambda expressions can be used: In Java, MethodHandle.bindTo partially applies a function to its first argument. Alternatively, since Java 8, lambdas can be used: In Raku, the assuming method creates a new function with fewer parameters. The Python standard library module functools includes the partial function, allowing positional and named argument bindings, returning a new function. In XQuery, an argument placeholder (?) is used for each non-fixed argument in a partial function application. 
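As a concrete illustration, the Python functools.partial function mentioned above can produce both the div1 function and a plus_one function like those discussed earlier:

from functools import partial
from operator import add

def div(x, y):
    return x / y

div1 = partial(div, 1)        # fixes x = 1, so div1(y) computes 1/y
plus_one = partial(add, 1)    # the addition operator with 1 bound as its first argument

print(div1(4))       # 0.25
print(plus_one(41))  # 42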
== Definitions == In the simply typed lambda calculus with function and product types (λ→,×) partial application, currying and uncurrying can be defined as papply (((a × b) → c) × a) → (b → c) = λ(f, x). λy. f (x, y) curry ((a × b) → c) → (a → (b → c)) = λf. λx. λy. f (x, y) uncurry (a → (b → c)) → ((a × b) → c) = λf. λ(x, y). f x y Note that curry papply = curry. == Mathematical formulation and examples == Partial application can be a useful way to define several useful notions in mathematics. Given sets X , Y {\displaystyle X,Y} and Z {\displaystyle Z} , and a function f : X × Y → Z {\displaystyle f:X\times Y\rightarrow Z} , one can define the function f ( ⋅ , − ) : X → ( Y → Z ) , {\displaystyle f(\,\cdot \,,-):X\rightarrow (Y\rightarrow Z),} where ( Y → Z ) {\displaystyle (Y\rightarrow Z)} is the set of functions Y → Z {\displaystyle Y\rightarrow Z} . The image of x ∈ X {\displaystyle x\in X} under this map is f ( x , ⋅ ) : Y → Z {\displaystyle f(x,\,\cdot \,):Y\rightarrow Z} . This is the function which sends y ∈ Y {\displaystyle y\in Y} to f ( x , y ) {\displaystyle f(x,y)} . There are often structures on X , Y , Z {\displaystyle X,Y,Z} which mean that the image of f ( ⋅ , − ) {\displaystyle f(\,\cdot \,,-)} restricts to some subset of functions Y → Z {\displaystyle Y\rightarrow Z} , as illustrated in the following examples. === Group actions === A group action can be understood as a function ∗ : G × X → X {\displaystyle *:G\times X\rightarrow X} . The partial evaluation ρ : G → Sym ( X ) ⊂ ( X → X ) {\displaystyle \rho :G\rightarrow {\text{Sym}}(X)\subset (X\rightarrow X)} restricts to the group of bijections from X {\displaystyle X} to itself. The group action axioms further ensure ρ {\displaystyle \rho } is a group homomorphism. === Inner-products and canonical map to the dual === An inner-product on a vector space V {\displaystyle V} over a field K {\displaystyle K} is a map ϕ : V × V → K {\displaystyle \phi :V\times V\rightarrow K} . The partial evaluation provides a canonical map to the dual vector space, ϕ ( ⋅ , − ) : V → V ∗ ⊂ ( V → K ) {\displaystyle \phi (\,\cdot \,,-):V\rightarrow V^{*}\subset (V\rightarrow K)} . If this is the inner-product of a Hilbert space, the Riesz representation theorem ensures this is an isomorphism. === Cross-products and the adjoint map for Lie algebras === The partial application of the cross product × {\displaystyle \times } on R 3 {\displaystyle \mathbb {R} ^{3}} is × ( ⋅ , − ) : R 3 ↦ End ( R 3 ) {\displaystyle \times (\,\cdot \,,-):\mathbb {R} ^{3}\mapsto {\text{End}}(\mathbb {R} ^{3})} . The image of the vector u {\displaystyle \mathbf {u} } is a linear map T u {\displaystyle T_{\mathbf {u} }} such that T u ( v ) = u × v {\displaystyle T_{\mathbf {u} }(\mathbf {v} )=\mathbf {u} \times \mathbf {v} } . The components of T u {\displaystyle T_{\mathbf {u} }} can be found to be ( T u ) i j = ϵ i j k u k {\displaystyle (T_{\mathbf {u} })_{ij}=\epsilon _{ijk}u_{k}} . This is closely related to the adjoint map for Lie algebras. Lie algebras are equipped with a bracket [ ⋅ , ⋅ ] : g × g → g {\displaystyle [\,\cdot \,,\,\cdot \,]:{\mathfrak {g}}\times {\mathfrak {g}}\rightarrow {\mathfrak {g}}} . The partial application gives a map ad : g → End ( g ) {\displaystyle {\text{ad}}:{\mathfrak {g}}\rightarrow {\text{End}}({\mathfrak {g}})} . The axioms for the bracket ensure this map is a homomorphism of Lie algebras. 
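The papply, curry, and uncurry operations defined at the start of this section can be transcribed directly into any language with first-class functions. A minimal Python sketch (the function f here is an illustrative example):

def papply(f, x):
    return lambda y: f(x, y)

def curry(f):
    return lambda x: lambda y: f(x, y)

def uncurry(g):
    return lambda x, y: g(x)(y)

def f(x, y):
    return x + y

print(papply(f, 2)(3))            # 5
print(curry(f)(2)(3))             # 5
print(uncurry(curry(f))(2, 3))    # 5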
== See also ==
η-conversion
POP-2
Restriction (mathematics), the more general phenomenon of restricting a function to a subset of its domain
== References == == External links ==
Partial function application on Rosetta code.
Partial application at Haskell Wiki
Constant applicative form at Haskell Wiki
The dangers of being too partial
Wikipedia/Partial_application
In mathematics, exponentiation, denoted bn, is an operation involving two numbers: the base, b, and the exponent or power, n. When n is a positive integer, exponentiation corresponds to repeated multiplication of the base: that is, bn is the product of multiplying n bases: b n = b × b × ⋯ × b × b ⏟ n times . {\displaystyle b^{n}=\underbrace {b\times b\times \dots \times b\times b} _{n{\text{ times}}}.} In particular, b 1 = b {\displaystyle b^{1}=b} . The exponent is usually shown as a superscript to the right of the base as bn or in computer code as b^n. This binary operation is often read as "b to the power n"; it may also be referred to as "b raised to the nth power", "the nth power of b", or, most briefly, "b to the n". The above definition of b n {\displaystyle b^{n}} immediately implies several properties, in particular the multiplication rule: b n × b m = b × ⋯ × b ⏟ n times × b × ⋯ × b ⏟ m times = b × ⋯ × b ⏟ n + m times = b n + m . {\displaystyle {\begin{aligned}b^{n}\times b^{m}&=\underbrace {b\times \dots \times b} _{n{\text{ times}}}\times \underbrace {b\times \dots \times b} _{m{\text{ times}}}\\[1ex]&=\underbrace {b\times \dots \times b} _{n+m{\text{ times}}}\ =\ b^{n+m}.\end{aligned}}} That is, when multiplying a base raised to one power times the same base raised to another power, the powers add. Extending this rule to the power zero gives b 0 × b n = b 0 + n = b n {\displaystyle b^{0}\times b^{n}=b^{0+n}=b^{n}} , and, where b is non-zero, dividing both sides by b n {\displaystyle b^{n}} gives b 0 = b n / b n = 1 {\displaystyle b^{0}=b^{n}/b^{n}=1} . That is the multiplication rule implies the definition b 0 = 1. {\displaystyle b^{0}=1.} A similar argument implies the definition for negative integer powers: b − n = 1 / b n . {\displaystyle b^{-n}=1/b^{n}.} That is, extending the multiplication rule gives b − n × b n = b − n + n = b 0 = 1 {\displaystyle b^{-n}\times b^{n}=b^{-n+n}=b^{0}=1} . Dividing both sides by b n {\displaystyle b^{n}} gives b − n = 1 / b n {\displaystyle b^{-n}=1/b^{n}} . This also implies the definition for fractional powers: b n / m = b n m . {\displaystyle b^{n/m}={\sqrt[{m}]{b^{n}}}.} For example, b 1 / 2 × b 1 / 2 = b 1 / 2 + 1 / 2 = b 1 = b {\displaystyle b^{1/2}\times b^{1/2}=b^{1/2\,+\,1/2}=b^{1}=b} , meaning ( b 1 / 2 ) 2 = b {\displaystyle (b^{1/2})^{2}=b} , which is the definition of square root: b 1 / 2 = b {\displaystyle b^{1/2}={\sqrt {b}}} . The definition of exponentiation can be extended in a natural way (preserving the multiplication rule) to define b x {\displaystyle b^{x}} for any positive real base b {\displaystyle b} and any real number exponent x {\displaystyle x} . More involved definitions allow complex base and exponent, as well as certain types of matrices as base or exponent. Exponentiation is used extensively in many fields, including economics, biology, chemistry, physics, and computer science, with applications such as compound interest, population growth, chemical reaction kinetics, wave behavior, and public-key cryptography. == Etymology == The term exponent originates from the Latin exponentem, the present participle of exponere, meaning "to put forth". The term power (Latin: potentia, potestas, dignitas) is a mistranslation of the ancient Greek δύναμις (dúnamis, here: "amplification") used by the Greek mathematician Euclid for the square of a line, following Hippocrates of Chios. The word exponent was coined in 1544 by Michael Stifel. 
In the 16th century, Robert Recorde used the terms "square", "cube", "zenzizenzic" (fourth power), "sursolid" (fifth), "zenzicube" (sixth), "second sursolid" (seventh), and "zenzizenzizenzic" (eighth). "Biquadrate" has been used to refer to the fourth power as well. == History == In The Sand Reckoner, Archimedes proved the law of exponents, 10a · 10b = 10a+b, necessary to manipulate powers of 10. He then used powers of 10 to estimate the number of grains of sand that can be contained in the universe. In the 9th century, the Persian mathematician Al-Khwarizmi used the terms مَال (māl, "possessions", "property") for a square—the Muslims, "like most mathematicians of those and earlier times, thought of a squared number as a depiction of an area, especially of land, hence property"—and كَعْبَة (Kaʿbah, "cube") for a cube, which later Islamic mathematicians represented in mathematical notation as the letters mīm (m) and kāf (k), respectively, by the 15th century, as seen in the work of Abu'l-Hasan ibn Ali al-Qalasadi. Nicolas Chuquet used a form of exponential notation in the 15th century, for example 122 to represent 12x2. This was later used by Henricus Grammateus and Michael Stifel in the 16th century. In the late 16th century, Jost Bürgi would use Roman numerals for exponents in a way similar to that of Chuquet, for example iii4 for 4x3. In 1636, James Hume used in essence modern notation, when in L'algèbre de Viète he wrote Aiii for A3. Early in the 17th century, the first form of our modern exponential notation was introduced by René Descartes in his text titled La Géométrie; there, the notation is introduced in Book I. I designate ... aa, or a2 in multiplying a by itself; and a3 in multiplying it once more again by a, and thus to infinity. Some mathematicians (such as Descartes) used exponents only for powers greater than two, preferring to represent squares as repeated multiplication. Thus they would write polynomials, for example, as ax + bxx + cx3 + d. Samuel Jeake introduced the term indices in 1696. The term involution was used synonymously with the term indices, but had declined in usage and should not be confused with its more common meaning. In 1748, Leonhard Euler introduced variable exponents, and, implicitly, non-integer exponents by writing: Consider exponentials or powers in which the exponent itself is a variable. It is clear that quantities of this kind are not algebraic functions, since in those the exponents must be constant. === 20th century === As calculation was mechanized, notation was adapted to numerical capacity by conventions in exponential notation. For example, Konrad Zuse introduced floating-point arithmetic in his 1938 computer Z1. One register contained representation of leading digits, and a second contained representation of the exponent of 10. Earlier, Leonardo Torres Quevedo had contributed Essays on Automation (1914), which suggested the floating-point representation of numbers. The more flexible decimal floating-point representation was introduced in 1946 with a Bell Laboratories computer. Eventually educators and engineers adopted scientific notation of numbers, consistent with common reference to order of magnitude in a ratio scale. For instance, in 1961 the School Mathematics Study Group developed the notation in connection with units used in the metric system. Exponents also came to be used to describe units of measurement and quantity dimensions. For instance, since force is mass times acceleration, it is measured in kg m/sec2.
Using M for mass, L for length, and T for time, the expression M L T–2 is used in dimensional analysis to describe force. == Terminology == The expression b2 = b · b is called "the square of b" or "b squared", because the area of a square with side-length b is b2. (It is true that it could also be called "b to the second power", but "the square of b" and "b squared" are more traditional) Similarly, the expression b3 = b · b · b is called "the cube of b" or "b cubed", because the volume of a cube with side-length b is b3. When an exponent is a positive integer, that exponent indicates how many copies of the base are multiplied together. For example, 35 = 3 · 3 · 3 · 3 · 3 = 243. The base 3 appears 5 times in the multiplication, because the exponent is 5. Here, 243 is the 5th power of 3, or 3 raised to the 5th power. The word "raised" is usually omitted, and sometimes "power" as well, so 35 can be simply read "3 to the 5th", or "3 to the 5". == Integer exponents == The exponentiation operation with integer exponents may be defined directly from elementary arithmetic operations. === Positive exponents === The definition of the exponentiation as an iterated multiplication can be formalized by using induction, and this definition can be used as soon as one has an associative multiplication: The base case is b 1 = b {\displaystyle b^{1}=b} and the recurrence is b n + 1 = b n ⋅ b . {\displaystyle b^{n+1}=b^{n}\cdot b.} The associativity of multiplication implies that for any positive integers m and n, b m + n = b m ⋅ b n , {\displaystyle b^{m+n}=b^{m}\cdot b^{n},} and ( b m ) n = b m n . {\displaystyle (b^{m})^{n}=b^{mn}.} === Zero exponent === As mentioned earlier, a (nonzero) number raised to the 0 power is 1: b 0 = 1. {\displaystyle b^{0}=1.} This value is also obtained by the empty product convention, which may be used in every algebraic structure with a multiplication that has an identity. This way the formula b m + n = b m ⋅ b n {\displaystyle b^{m+n}=b^{m}\cdot b^{n}} also holds for n = 0 {\displaystyle n=0} . The case of 00 is controversial. In contexts where only integer powers are considered, the value 1 is generally assigned to 00 but, otherwise, the choice of whether to assign it a value and what value to assign may depend on context. For more details, see Zero to the power of zero. === Negative exponents === Exponentiation with negative exponents is defined by the following identity, which holds for any integer n and nonzero b: b − n = 1 b n {\displaystyle b^{-n}={\frac {1}{b^{n}}}} . Raising 0 to a negative exponent is undefined but, in some circumstances, it may be interpreted as infinity ( ∞ {\displaystyle \infty } ). This definition of exponentiation with negative exponents is the only one that allows extending the identity b m + n = b m ⋅ b n {\displaystyle b^{m+n}=b^{m}\cdot b^{n}} to negative exponents (consider the case m = − n {\displaystyle m=-n} ). The same definition applies to invertible elements in a multiplicative monoid, that is, an algebraic structure, with an associative multiplication and a multiplicative identity denoted 1 (for example, the square matrices of a given dimension). In particular, in such a structure, the inverse of an invertible element x is standardly denoted x − 1 . 
{\displaystyle x^{-1}.} === Identities and properties === The following identities, often called exponent rules, hold for all integer exponents, provided that the base is non-zero: b m ⋅ b n = b m + n ( b m ) n = b m ⋅ n b n ⋅ c n = ( b ⋅ c ) n {\displaystyle {\begin{aligned}b^{m}\cdot b^{n}&=b^{m+n}\\\left(b^{m}\right)^{n}&=b^{m\cdot n}\\b^{n}\cdot c^{n}&=(b\cdot c)^{n}\end{aligned}}} Unlike addition and multiplication, exponentiation is not commutative: for example, 2 3 = 8 {\displaystyle 2^{3}=8} , but reversing the operands gives the different value 3 2 = 9 {\displaystyle 3^{2}=9} . Also unlike addition and multiplication, exponentiation is not associative: for example, (23)2 = 82 = 64, whereas 2(32) = 29 = 512. Without parentheses, the conventional order of operations for serial exponentiation in superscript notation is top-down (or right-associative), not bottom-up (or left-associative). That is, b p q = b ( p q ) , {\displaystyle b^{p^{q}}=b^{\left(p^{q}\right)},} which, in general, is different from ( b p ) q = b p q . {\displaystyle \left(b^{p}\right)^{q}=b^{pq}.} === Powers of a sum === The powers of a sum can normally be computed from the powers of the summands by the binomial formula ( a + b ) n = ∑ i = 0 n ( n i ) a i b n − i = ∑ i = 0 n n ! i ! ( n − i ) ! a i b n − i . {\displaystyle (a+b)^{n}=\sum _{i=0}^{n}{\binom {n}{i}}a^{i}b^{n-i}=\sum _{i=0}^{n}{\frac {n!}{i!(n-i)!}}a^{i}b^{n-i}.} However, this formula is true only if the summands commute (i.e., ab = ba), which is implied if they belong to a structure that is commutative. Otherwise, if a and b are, say, square matrices of the same size, this formula cannot be used. It follows that in computer algebra, many algorithms involving integer exponents must be changed when the exponentiation bases do not commute. Some general-purpose computer algebra systems use a different notation (sometimes ^^ instead of ^) for exponentiation with non-commuting bases, which is then called non-commutative exponentiation. === Combinatorial interpretation === For nonnegative integers n and m, the value of nm is the number of functions from a set of m elements to a set of n elements (see cardinal exponentiation). Such functions can be represented as m-tuples from an n-element set (or as m-letter words from an n-letter alphabet). For example, 23 = 8 is the number of functions from a set of 3 elements to a set of 2 elements, while 32 = 9 is the number of functions from a set of 2 elements to a set of 3 elements. === Particular bases === ==== Powers of ten ==== In the base ten (decimal) number system, integer powers of 10 are written as the digit 1 followed or preceded by a number of zeroes determined by the sign and magnitude of the exponent. For example, 103 = 1000 and 10−4 = 0.0001. Exponentiation with base 10 is used in scientific notation to denote large or small numbers. For instance, 299792458 m/s (the speed of light in vacuum, in metres per second) can be written as 2.99792458×108 m/s and then approximated as 2.998×108 m/s. SI prefixes based on powers of 10 are also used to describe small or large quantities. For example, the prefix kilo means 103 = 1000, so a kilometre is 1000 m. ==== Powers of two ==== The first negative powers of 2 have special names: 2 − 1 {\displaystyle 2^{-1}} is a half; 2 − 2 {\displaystyle 2^{-2}} is a quarter. Powers of 2 appear in set theory, since a set with n members has a power set, the set of all of its subsets, which has 2n members. Integer powers of 2 are important in computer science.
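The power-set fact can be checked directly in code; a short Python sketch (the helper name power_set is illustrative, not a standard library function) enumerates the subsets of a 3-element set:

```python
from itertools import chain, combinations

def power_set(s):
    """Return all subsets of s, each as a tuple."""
    items = list(s)
    return list(chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1)))

subsets = power_set({"a", "b", "c"})
print(len(subsets))             # 8
print(len(subsets) == 2 ** 3)   # True: a 3-element set has 2^3 subsets
```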
The positive integer powers 2n give the number of possible values for an n-bit integer binary number; for example, a byte may take 28 = 256 different values. The binary number system expresses any number as a sum of powers of 2, and denotes it as a sequence of 0 and 1, separated by a binary point, where 1 indicates a power of 2 that appears in the sum; the exponent is determined by the place of this 1: the nonnegative exponents are the rank of the 1 on the left of the point (starting from 0), and the negative exponents are determined by the rank on the right of the point. ==== Powers of one ==== Every power of one equals one: 1n = 1. ==== Powers of zero ==== For a positive exponent n > 0, the nth power of zero is zero: 0n = 0. For a negative exponent, 0 − n = 1 / 0 n = 1 / 0 {\displaystyle 0^{-n}=1/0^{n}=1/0} is undefined. In some contexts (e.g., combinatorics), the expression 00 is defined to be equal to 1 {\displaystyle 1} ; in others (e.g., analysis), it is often undefined. ==== Powers of negative one ==== Since a negative number times another negative number is positive, we have: ( − 1 ) n = { 1 for even n , − 1 for odd n . {\displaystyle (-1)^{n}=\left\{{\begin{array}{rl}1&{\text{for even }}n,\\-1&{\text{for odd }}n.\\\end{array}}\right.} Because of this, powers of −1 are useful for expressing alternating sequences. For a similar discussion of powers of the complex number i, see § nth roots of a complex number. === Large exponents === The limit of a sequence of powers of a number greater than one diverges; in other words, the sequence grows without bound: bn → ∞ as n → ∞ when b > 1. This can be read as "b to the power of n tends to +∞ as n tends to infinity when b is greater than one". Powers of a number with absolute value less than one tend to zero: bn → 0 as n → ∞ when |b| < 1. Any power of one is always one: bn = 1 for all n when b = 1. Powers of a negative number b ≤ − 1 {\displaystyle b\leq -1} alternate between positive and negative as n alternates between even and odd, and thus do not tend to any limit as n grows. If the exponentiated number varies while tending to 1 as the exponent tends to infinity, then the limit is not necessarily one of those above. A particularly important case is (1 + 1/n)n → e as n → ∞; see § Exponential function below. Other limits, in particular those of expressions that take on an indeterminate form, are described in § Limits of powers below. === Power functions === Real functions of the form f ( x ) = c x n {\displaystyle f(x)=cx^{n}} , where c ≠ 0 {\displaystyle c\neq 0} , are sometimes called power functions. When n {\displaystyle n} is an integer and n ≥ 1 {\displaystyle n\geq 1} , two primary families exist: for n {\displaystyle n} even, and for n {\displaystyle n} odd. In general for c > 0 {\displaystyle c>0} , when n {\displaystyle n} is even f ( x ) = c x n {\displaystyle f(x)=cx^{n}} will tend towards positive infinity with increasing x {\displaystyle x} , and also towards positive infinity with decreasing x {\displaystyle x} . All graphs from the family of even power functions have the general shape of y = c x 2 {\displaystyle y=cx^{2}} , flattening more in the middle as n {\displaystyle n} increases. Functions with this kind of symmetry ( f ( − x ) = f ( x ) {\displaystyle f(-x)=f(x)} ) are called even functions. When n {\displaystyle n} is odd, f ( x ) {\displaystyle f(x)} 's asymptotic behavior reverses from positive x {\displaystyle x} to negative x {\displaystyle x} .
For c > 0 {\displaystyle c>0} , f ( x ) = c x n {\displaystyle f(x)=cx^{n}} will also tend towards positive infinity with increasing x {\displaystyle x} , but towards negative infinity with decreasing x {\displaystyle x} . All graphs from the family of odd power functions have the general shape of y = c x 3 {\displaystyle y=cx^{3}} , flattening more in the middle as n {\displaystyle n} increases and losing all flatness there in the straight line for n = 1 {\displaystyle n=1} . Functions with this kind of symmetry ( f ( − x ) = − f ( x ) {\displaystyle f(-x)=-f(x)} ) are called odd functions. For c < 0 {\displaystyle c<0} , the opposite asymptotic behavior is true in each case. === Table of powers of decimal digits === == Rational exponents == If x is a nonnegative real number, and n is a positive integer, x 1 / n {\displaystyle x^{1/n}} or x n {\displaystyle {\sqrt[{n}]{x}}} denotes the unique nonnegative real nth root of x, that is, the unique nonnegative real number y such that y n = x . {\displaystyle y^{n}=x.} If x is a positive real number, and p q {\displaystyle {\frac {p}{q}}} is a rational number, with p and q > 0 integers, then x p / q {\textstyle x^{p/q}} is defined as x p q = ( x p ) 1 q = ( x 1 q ) p . {\displaystyle x^{\frac {p}{q}}=\left(x^{p}\right)^{\frac {1}{q}}=(x^{\frac {1}{q}})^{p}.} The equality on the right may be derived by setting y = x 1 q , {\displaystyle y=x^{\frac {1}{q}},} and writing ( x 1 q ) p = y p = ( ( y p ) q ) 1 q = ( ( y q ) p ) 1 q = ( x p ) 1 q . {\displaystyle (x^{\frac {1}{q}})^{p}=y^{p}=\left((y^{p})^{q}\right)^{\frac {1}{q}}=\left((y^{q})^{p}\right)^{\frac {1}{q}}=(x^{p})^{\frac {1}{q}}.} If r is a positive rational number, 0r = 0, by definition. All these definitions are required for extending the identity ( x r ) s = x r s {\displaystyle (x^{r})^{s}=x^{rs}} to rational exponents. On the other hand, there are problems with the extension of these definitions to bases that are not positive real numbers. For example, a negative real number has a real nth root, which is negative, if n is odd, and no real root if n is even. In the latter case, whichever complex nth root one chooses for x 1 n , {\displaystyle x^{\frac {1}{n}},} the identity ( x a ) b = x a b {\displaystyle (x^{a})^{b}=x^{ab}} cannot be satisfied. For example, ( ( − 1 ) 2 ) 1 2 = 1 1 2 = 1 ≠ ( − 1 ) 2 ⋅ 1 2 = ( − 1 ) 1 = − 1. {\displaystyle \left((-1)^{2}\right)^{\frac {1}{2}}=1^{\frac {1}{2}}=1\neq (-1)^{2\cdot {\frac {1}{2}}}=(-1)^{1}=-1.} See § Real exponents and § Non-integer powers of complex numbers for details on the way these problems may be handled. == Real exponents == For positive real numbers, exponentiation to real powers can be defined in two equivalent ways, either by extending the rational powers to reals by continuity (§ Limits of rational exponents, below), or in terms of the logarithm of the base and the exponential function (§ Powers via logarithms, below). The result is always a positive real number, and the identities and properties shown above for integer exponents remain true with these definitions for real exponents. The second definition is more commonly used, since it generalizes straightforwardly to complex exponents. On the other hand, exponentiation to a real power of a negative real number is much more difficult to define consistently, as it may be non-real and have several values. 
One may choose one of these values, called the principal value, but there is no choice of the principal value for which the identity ( b r ) s = b r s {\displaystyle \left(b^{r}\right)^{s}=b^{rs}} is true; see § Failure of power and logarithm identities. Therefore, exponentiation with a basis that is not a positive real number is generally viewed as a multivalued function. === Limits of rational exponents === Since any irrational number can be expressed as the limit of a sequence of rational numbers, exponentiation of a positive real number b with an arbitrary real exponent x can be defined by continuity with the rule b x = lim r ( ∈ Q ) → x b r ( b ∈ R + , x ∈ R ) , {\displaystyle b^{x}=\lim _{r(\in \mathbb {Q} )\to x}b^{r}\quad (b\in \mathbb {R} ^{+},\,x\in \mathbb {R} ),} where the limit is taken over rational values of r only. This limit exists for every positive b and every real x. For example, if x = π, the non-terminating decimal representation π = 3.14159... and the monotonicity of the rational powers can be used to obtain intervals bounded by rational powers that are as small as desired, and must contain b π : {\displaystyle b^{\pi }:} [ b 3 , b 4 ] , [ b 3.1 , b 3.2 ] , [ b 3.14 , b 3.15 ] , [ b 3.141 , b 3.142 ] , [ b 3.1415 , b 3.1416 ] , [ b 3.14159 , b 3.14160 ] , … {\displaystyle \left[b^{3},b^{4}\right],\left[b^{3.1},b^{3.2}\right],\left[b^{3.14},b^{3.15}\right],\left[b^{3.141},b^{3.142}\right],\left[b^{3.1415},b^{3.1416}\right],\left[b^{3.14159},b^{3.14160}\right],\ldots } So, the upper bounds and the lower bounds of the intervals form two sequences that have the same limit, denoted b π . {\displaystyle b^{\pi }.} This defines b x {\displaystyle b^{x}} for every positive b and real x as a continuous function of b and x. See also Well-defined expression. === Exponential function === The exponential function may be defined as x ↦ e x , {\displaystyle x\mapsto e^{x},} where e ≈ 2.718 {\displaystyle e\approx 2.718} is Euler's number, but to avoid circular reasoning, this definition cannot be used here. Rather, we give an independent definition of the exponential function exp ⁡ ( x ) , {\displaystyle \exp(x),} and of e = exp ⁡ ( 1 ) {\displaystyle e=\exp(1)} , relying only on positive integer powers (repeated multiplication). Then we sketch the proof that this agrees with the previous definition: exp ⁡ ( x ) = e x . {\displaystyle \exp(x)=e^{x}.} There are many equivalent ways to define the exponential function, one of them being exp ⁡ ( x ) = lim n → ∞ ( 1 + x n ) n . {\displaystyle \exp(x)=\lim _{n\rightarrow \infty }\left(1+{\frac {x}{n}}\right)^{n}.} One has exp ⁡ ( 0 ) = 1 , {\displaystyle \exp(0)=1,} and the exponential identity (or multiplication rule) exp ⁡ ( x ) exp ⁡ ( y ) = exp ⁡ ( x + y ) {\displaystyle \exp(x)\exp(y)=\exp(x+y)} holds as well, since exp ⁡ ( x ) exp ⁡ ( y ) = lim n → ∞ ( 1 + x n ) n ( 1 + y n ) n = lim n → ∞ ( 1 + x + y n + x y n 2 ) n , {\displaystyle \exp(x)\exp(y)=\lim _{n\rightarrow \infty }\left(1+{\frac {x}{n}}\right)^{n}\left(1+{\frac {y}{n}}\right)^{n}=\lim _{n\rightarrow \infty }\left(1+{\frac {x+y}{n}}+{\frac {xy}{n^{2}}}\right)^{n},} and the second-order term x y n 2 {\displaystyle {\frac {xy}{n^{2}}}} does not affect the limit, yielding exp ⁡ ( x ) exp ⁡ ( y ) = exp ⁡ ( x + y ) {\displaystyle \exp(x)\exp(y)=\exp(x+y)} . Euler's number can be defined as e = exp ⁡ ( 1 ) {\displaystyle e=\exp(1)} . 
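The convergence of this limit can be checked numerically; a minimal Python sketch (an illustration, not part of the formal development) compares the finite products with the library value math.exp:

```python
import math

def exp_limit(x, n):
    """Approximate exp(x) by the finite product (1 + x/n)**n."""
    return (1 + x / n) ** n

for n in (10, 1000, 100_000):
    print(n, exp_limit(1.0, n))   # approaches e = exp(1) ≈ 2.71828...
print(math.exp(1.0))              # reference value
```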
It follows from the preceding equations that exp ⁡ ( x ) = e x {\displaystyle \exp(x)=e^{x}} when x is an integer (this results from the repeated-multiplication definition of the exponentiation). If x is real, exp ⁡ ( x ) = e x {\displaystyle \exp(x)=e^{x}} results from the definitions given in preceding sections, by using the exponential identity if x is rational, and the continuity of the exponential function otherwise. The limit that defines the exponential function converges for every complex value of x, and therefore it can be used to extend the definition of exp ⁡ ( z ) {\displaystyle \exp(z)} , and thus e z , {\displaystyle e^{z},} from the real numbers to any complex argument z. This extended exponential function still satisfies the exponential identity, and is commonly used for defining exponentiation for complex base and exponent. === Powers via logarithms === The definition of ex as the exponential function allows defining bx for every positive real number b, in terms of the exponential and logarithm functions. Specifically, the fact that the natural logarithm ln(x) is the inverse of the exponential function ex means that one has b = exp ⁡ ( ln ⁡ b ) = e ln ⁡ b {\displaystyle b=\exp(\ln b)=e^{\ln b}} for every b > 0. For preserving the identity ( e x ) y = e x y , {\displaystyle (e^{x})^{y}=e^{xy},} one must have b x = ( e ln ⁡ b ) x = e x ln ⁡ b . {\displaystyle b^{x}=\left(e^{\ln b}\right)^{x}=e^{x\ln b}.} So, e x ln ⁡ b {\displaystyle e^{x\ln b}} can be used as an alternative definition of bx for any positive real b. This agrees with the definition given above using rational exponents and continuity, with the advantage of extending straightforwardly to any complex exponent. == Complex exponents with a positive real base == If b is a positive real number, exponentiation with base b and complex exponent z is defined by means of the exponential function with complex argument (see the end of § Exponential function, above) as b z = e ( z ln ⁡ b ) , {\displaystyle b^{z}=e^{(z\ln b)},} where ln ⁡ b {\displaystyle \ln b} denotes the natural logarithm of b. This satisfies the identity b z + t = b z b t . {\displaystyle b^{z+t}=b^{z}b^{t}.} In general, ( b z ) t {\textstyle \left(b^{z}\right)^{t}} is not defined, since bz is not a real number. If a meaning is given to the exponentiation of a complex number (see § Non-integer powers of complex numbers, below), one has, in general, ( b z ) t ≠ b z t , {\displaystyle \left(b^{z}\right)^{t}\neq b^{zt},} unless z is real or t is an integer. Euler's formula, e i y = cos ⁡ y + i sin ⁡ y , {\displaystyle e^{iy}=\cos y+i\sin y,} allows expressing the polar form of b z {\displaystyle b^{z}} in terms of the real and imaginary parts of z, namely b x + i y = b x ( cos ⁡ ( y ln ⁡ b ) + i sin ⁡ ( y ln ⁡ b ) ) , {\displaystyle b^{x+iy}=b^{x}(\cos(y\ln b)+i\sin(y\ln b)),} where the absolute value of the trigonometric factor is one. This results from b x + i y = b x b i y = b x e i y ln ⁡ b = b x ( cos ⁡ ( y ln ⁡ b ) + i sin ⁡ ( y ln ⁡ b ) ) . {\displaystyle b^{x+iy}=b^{x}b^{iy}=b^{x}e^{iy\ln b}=b^{x}(\cos(y\ln b)+i\sin(y\ln b)).} == Non-integer exponents with a complex base == In the preceding sections, exponentiation with non-integer exponents has been defined for positive real bases only. For other bases, difficulties appear already with the apparently simple case of nth roots, that is, of exponents 1 / n , {\displaystyle 1/n,} where n is a positive integer.
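Before turning to complex bases, the formula b^(x+iy) = b^x(cos(y ln b) + i sin(y ln b)) derived above can be verified numerically; a short Python sketch using the standard cmath module (the function name here is ours):

```python
import cmath
import math

def power_positive_base(b, z):
    """b**z for positive real b and complex z, defined as exp(z ln b)."""
    return cmath.exp(z * math.log(b))

b, z = 2.0, complex(3, 4)   # compute 2**(3+4i)
w = power_positive_base(b, z)

# Compare with b^x (cos(y ln b) + i sin(y ln b)):
x, y = z.real, z.imag
formula = (b ** x) * complex(math.cos(y * math.log(b)),
                             math.sin(y * math.log(b)))
print(w, formula)           # the two values agree up to rounding
```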
Although the general theory of exponentiation with non-integer exponents applies to nth roots, this case deserves to be considered first, since it does not need to use complex logarithms, and is therefore easier to understand. === nth roots of a complex number === Every nonzero complex number z may be written in polar form as z = ρ e i θ = ρ ( cos ⁡ θ + i sin ⁡ θ ) , {\displaystyle z=\rho e^{i\theta }=\rho (\cos \theta +i\sin \theta ),} where ρ {\displaystyle \rho } is the absolute value of z, and θ {\displaystyle \theta } is its argument. The argument is defined up to an integer multiple of 2π; this means that, if θ {\displaystyle \theta } is the argument of a complex number, then θ + 2 k π {\displaystyle \theta +2k\pi } is also an argument of the same complex number for every integer k {\displaystyle k} . The polar form of the product of two complex numbers is obtained by multiplying the absolute values and adding the arguments. It follows that the polar form of an nth root of a complex number can be obtained by taking the nth root of the absolute value and dividing its argument by n: ( ρ e i θ ) 1 n = ρ n e i θ n . {\displaystyle \left(\rho e^{i\theta }\right)^{\frac {1}{n}}={\sqrt[{n}]{\rho }}\,e^{\frac {i\theta }{n}}.} If 2 π {\displaystyle 2\pi } is added to θ {\displaystyle \theta } , the complex number is not changed, but this adds 2 π / n {\displaystyle 2\pi /n} to the argument of the nth root, and provides a new nth root. This can be done n times ( k = 0 , 1 , . . . , n − 1 {\displaystyle k=0,1,...,n-1} ), and provides the n nth roots of the complex number: ( ρ e i ( θ + 2 k π ) ) 1 n = ρ n e i ( θ + 2 k π ) n . {\displaystyle \left(\rho e^{i(\theta +2k\pi )}\right)^{\frac {1}{n}}={\sqrt[{n}]{\rho }}\,e^{\frac {i(\theta +2k\pi )}{n}}.} It is usual to choose one of the n nth roots as the principal root. The common choice is the nth root for which − π < θ ≤ π , {\displaystyle -\pi <\theta \leq \pi ,} that is, the nth root that has the largest real part, and, if there are two, the one with positive imaginary part. This makes the principal nth root a continuous function in the whole complex plane, except for negative real values of the radicand. This function equals the usual nth root for positive real radicands. For negative real radicands, and odd exponents, the principal nth root is not real, although the usual nth root is real. Analytic continuation shows that the principal nth root is the unique complex differentiable function that extends the usual nth root to the complex plane without the nonpositive real numbers. If the complex number is moved around zero by increasing its argument, after an increment of 2 π , {\displaystyle 2\pi ,} the complex number comes back to its initial position, and its nth roots are permuted circularly (they are multiplied by e 2 i π / n {\displaystyle e^{2i\pi /n}} ). This shows that it is not possible to define an nth root function that is continuous in the whole complex plane. ==== Roots of unity ==== The nth roots of unity are the n complex numbers w such that wn = 1, where n is a positive integer. They arise in various areas of mathematics, such as the discrete Fourier transform or algebraic solutions of algebraic equations (Lagrange resolvent). The n nth roots of unity are the n first powers of ω = e 2 π i n {\displaystyle \omega =e^{\frac {2\pi i}{n}}} , that is 1 = ω 0 = ω n , ω = ω 1 , ω 2 , . . . , ω n − 1 .
{\displaystyle 1=\omega ^{0}=\omega ^{n},\omega =\omega ^{1},\omega ^{2},...,\omega ^{n-1}.} The nth roots of unity that have this generating property are called primitive nth roots of unity; they have the form ω k = e 2 k π i n , {\displaystyle \omega ^{k}=e^{\frac {2k\pi i}{n}},} with k coprime to n. The unique primitive square root of unity is − 1 ; {\displaystyle -1;} the primitive fourth roots of unity are i {\displaystyle i} and − i . {\displaystyle -i.} The nth roots of unity allow expressing all nth roots of a complex number z as the n products of a given nth root of z with an nth root of unity. Geometrically, the nth roots of unity lie on the unit circle of the complex plane at the vertices of a regular n-gon with one vertex on the real number 1. As the number e 2 π i n {\displaystyle e^{\frac {2\pi i}{n}}} is the primitive nth root of unity with the smallest positive argument, it is called the principal primitive nth root of unity, sometimes shortened as principal nth root of unity, although this terminology can be confused with the principal value of 1 1 / n {\displaystyle 1^{1/n}} , which is 1. === Complex exponentiation === Defining exponentiation with complex bases leads to difficulties that are similar to those described in the preceding section, except that there are, in general, infinitely many possible values for z w {\displaystyle z^{w}} . So, either a principal value is defined, which is not continuous for the values of z that are real and nonpositive, or z w {\displaystyle z^{w}} is defined as a multivalued function. In all cases, the complex logarithm is used to define complex exponentiation as z w = e w log ⁡ z , {\displaystyle z^{w}=e^{w\log z},} where log ⁡ z {\displaystyle \log z} is the variant of the complex logarithm that is used, which is a function or a multivalued function such that e log ⁡ z = z {\displaystyle e^{\log z}=z} for every z in its domain of definition. ==== Principal value ==== The principal value of the complex logarithm is the unique continuous function, commonly denoted log , {\displaystyle \log ,} such that, for every nonzero complex number z, e log ⁡ z = z , {\displaystyle e^{\log z}=z,} and the argument of z satisfies − π < Arg ⁡ z ≤ π . {\displaystyle -\pi <\operatorname {Arg} z\leq \pi .} The principal value of the complex logarithm is not defined for z = 0 ; {\displaystyle z=0;} it is discontinuous at negative real values of z, and it is holomorphic (that is, complex differentiable) elsewhere. If z is real and positive, the principal value of the complex logarithm is the natural logarithm: log ⁡ z = ln ⁡ z . {\displaystyle \log z=\ln z.} The principal value of z w {\displaystyle z^{w}} is defined as z w = e w log ⁡ z , {\displaystyle z^{w}=e^{w\log z},} where log ⁡ z {\displaystyle \log z} is the principal value of the logarithm. The function ( z , w ) → z w {\displaystyle (z,w)\to z^{w}} is holomorphic except in the neighbourhood of the points where z is real and nonpositive. If z is real and positive, the principal value of z w {\displaystyle z^{w}} equals its usual value defined above. If w = 1 / n , {\displaystyle w=1/n,} where n is an integer, this principal value is the same as the one defined above. ==== Multivalued function ==== In some contexts, there is a problem with the discontinuity of the principal values of log ⁡ z {\displaystyle \log z} and z w {\displaystyle z^{w}} at the negative real values of z. In this case, it is useful to consider these functions as multivalued functions.
If log ⁡ z {\displaystyle \log z} denotes one of the values of the multivalued logarithm (typically its principal value), the other values are 2 i k π + log ⁡ z , {\displaystyle 2ik\pi +\log z,} where k is any integer. Similarly, if z w {\displaystyle z^{w}} is one value of the exponentiation, then the other values are given by e w ( 2 i k π + log ⁡ z ) = z w e 2 i k π w , {\displaystyle e^{w(2ik\pi +\log z)}=z^{w}e^{2ik\pi w},} where k is any integer. Different values of k give different values of z w {\displaystyle z^{w}} unless w is a rational number, that is, there is an integer d such that dw is an integer. This results from the periodicity of the exponential function, more specifically, that e a = e b {\displaystyle e^{a}=e^{b}} if and only if a − b {\displaystyle a-b} is an integer multiple of 2 π i . {\displaystyle 2\pi i.} If w = m n {\displaystyle w={\frac {m}{n}}} is a rational number with m and n coprime integers with n > 0 , {\displaystyle n>0,} then z w {\displaystyle z^{w}} has exactly n values. In the case m = 1 , {\displaystyle m=1,} these values are the same as those described in § nth roots of a complex number. If w is an integer, there is only one value that agrees with that of § Integer exponents. The multivalued exponentiation is holomorphic for z ≠ 0 , {\displaystyle z\neq 0,} in the sense that its graph consists of several sheets, each of which defines a holomorphic function in the neighborhood of every point. If z varies continuously along a circle around 0, then, after a turn, the value of z w {\displaystyle z^{w}} has moved to another sheet. ==== Computation ==== The canonical form x + i y {\displaystyle x+iy} of z w {\displaystyle z^{w}} can be computed from the canonical forms of z and w. Although this can be described by a single formula, it is clearer to split the computation into several steps.

Polar form of z. If z = a + i b {\displaystyle z=a+ib} is the canonical form of z (a and b being real), then its polar form is z = ρ e i θ = ρ ( cos ⁡ θ + i sin ⁡ θ ) , {\displaystyle z=\rho e^{i\theta }=\rho (\cos \theta +i\sin \theta ),} with ρ = a 2 + b 2 {\textstyle \rho ={\sqrt {a^{2}+b^{2}}}} and θ = atan2 ⁡ ( b , a ) {\displaystyle \theta =\operatorname {atan2} (b,a)} , where atan2 {\displaystyle \operatorname {atan2} } is the two-argument arctangent function.

Logarithm of z. The principal value of this logarithm is log ⁡ z = ln ⁡ ρ + i θ , {\displaystyle \log z=\ln \rho +i\theta ,} where ln {\displaystyle \ln } denotes the natural logarithm. The other values of the logarithm are obtained by adding 2 i k π {\displaystyle 2ik\pi } for any integer k.

Canonical form of w log ⁡ z . {\displaystyle w\log z.} If w = c + d i {\displaystyle w=c+di} with c and d real, the values of w log ⁡ z {\displaystyle w\log z} are w log ⁡ z = ( c ln ⁡ ρ − d θ − 2 d k π ) + i ( d ln ⁡ ρ + c θ + 2 c k π ) , {\displaystyle w\log z=(c\ln \rho -d\theta -2dk\pi )+i(d\ln \rho +c\theta +2ck\pi ),} the principal value corresponding to k = 0. {\displaystyle k=0.}

Final result. Using the identities e x + y = e x e y {\displaystyle e^{x+y}=e^{x}e^{y}} and e y ln ⁡ x = x y , {\displaystyle e^{y\ln x}=x^{y},} one gets z w = ρ c e − d ( θ + 2 k π ) ( cos ⁡ ( d ln ⁡ ρ + c θ + 2 c k π ) + i sin ⁡ ( d ln ⁡ ρ + c θ + 2 c k π ) ) , {\displaystyle z^{w}=\rho ^{c}e^{-d(\theta +2k\pi )}\left(\cos(d\ln \rho +c\theta +2ck\pi )+i\sin(d\ln \rho +c\theta +2ck\pi )\right),} with k = 0 {\displaystyle k=0} for the principal value.
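These steps translate directly into code; the following Python sketch computes the principal value (k = 0), and Python's built-in ** on complex numbers uses the same principal branch:

```python
import cmath
import math

def principal_power(z, w):
    """Principal value of z**w, computed via the polar form of z (k = 0)."""
    rho = abs(z)                          # absolute value of z
    theta = math.atan2(z.imag, z.real)    # principal argument of z
    log_z = math.log(rho) + 1j * theta    # principal logarithm of z
    return cmath.exp(w * log_z)

print(principal_power(1j, 1j))            # i**i ≈ 0.2079, a real number
print(principal_power(-2 + 0j, 3 + 4j))   # principal value of (-2)**(3+4i)
print((1j) ** 1j)                         # built-in **, same principal value
```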
===== Examples =====

i i {\displaystyle i^{i}} . The polar form of i is i = e i π / 2 , {\displaystyle i=e^{i\pi /2},} and the values of log ⁡ i {\displaystyle \log i} are thus log ⁡ i = i ( π 2 + 2 k π ) . {\displaystyle \log i=i\left({\frac {\pi }{2}}+2k\pi \right).} It follows that i i = e i log ⁡ i = e − π 2 e − 2 k π . {\displaystyle i^{i}=e^{i\log i}=e^{-{\frac {\pi }{2}}}e^{-2k\pi }.} So, all values of i i {\displaystyle i^{i}} are real, the principal one being e − π 2 ≈ 0.2079. {\displaystyle e^{-{\frac {\pi }{2}}}\approx 0.2079.}

( − 2 ) 3 + 4 i {\displaystyle (-2)^{3+4i}} . Similarly, the polar form of −2 is − 2 = 2 e i π . {\displaystyle -2=2e^{i\pi }.} So, the above-described method gives the values ( − 2 ) 3 + 4 i = 2 3 e − 4 ( π + 2 k π ) ( cos ⁡ ( 4 ln ⁡ 2 + 3 ( π + 2 k π ) ) + i sin ⁡ ( 4 ln ⁡ 2 + 3 ( π + 2 k π ) ) ) = − 2 3 e − 4 ( π + 2 k π ) ( cos ⁡ ( 4 ln ⁡ 2 ) + i sin ⁡ ( 4 ln ⁡ 2 ) ) . {\displaystyle {\begin{aligned}(-2)^{3+4i}&=2^{3}e^{-4(\pi +2k\pi )}(\cos(4\ln 2+3(\pi +2k\pi ))+i\sin(4\ln 2+3(\pi +2k\pi )))\\&=-2^{3}e^{-4(\pi +2k\pi )}(\cos(4\ln 2)+i\sin(4\ln 2)).\end{aligned}}} In this case, all the values have the same argument 4 ln ⁡ 2 , {\displaystyle 4\ln 2,} and different absolute values. In both examples, all values of z w {\displaystyle z^{w}} have the same argument. More generally, this is true if and only if the real part of w is an integer. ==== Failure of power and logarithm identities ==== Some identities for powers and logarithms for positive real numbers will fail for complex numbers, no matter how complex powers and complex logarithms are defined as single-valued functions. For example, the identity log(bx) = x log b holds for positive real numbers, but fails for the principal complex logarithm: taking b = e and x = 2πi gives bx = e2πi = 1, whose principal logarithm is 0, not 2πi. == Irrationality and transcendence == If b is a positive real algebraic number, and x is a rational number, then bx is an algebraic number. This results from the theory of algebraic extensions. This remains true if b is any algebraic number, in which case, all values of bx (as a multivalued function) are algebraic. If x is irrational (that is, not rational), and both b and x are algebraic, the Gelfond–Schneider theorem asserts that all values of bx are transcendental (that is, not algebraic), except if b equals 0 or 1. In other words, if x is irrational and b ∉ { 0 , 1 } , {\displaystyle b\not \in \{0,1\},} then at least one of b, x and bx is transcendental. == Integer powers in algebra == The definition of exponentiation with positive integer exponents as repeated multiplication may apply to any associative operation denoted as a multiplication. The definition of x0 further requires the existence of a multiplicative identity. An algebraic structure consisting of a set together with an associative operation denoted multiplicatively, and a multiplicative identity denoted by 1 is a monoid. In such a monoid, exponentiation of an element x is defined inductively by x 0 = 1 , {\displaystyle x^{0}=1,} x n + 1 = x x n {\displaystyle x^{n+1}=xx^{n}} for every nonnegative integer n. If n is a negative integer, x n {\displaystyle x^{n}} is defined only if x has a multiplicative inverse. In this case, the inverse of x is denoted x−1, and xn is defined as ( x − 1 ) − n . {\displaystyle \left(x^{-1}\right)^{-n}.} Exponentiation with integer exponents obeys the following laws, for x and y in the algebraic structure, and m and n integers: x 0 = 1 x m + n = x m x n ( x m ) n = x m n ( x y ) n = x n y n if x y = y x , and, in particular, if the multiplication is commutative.
{\displaystyle {\begin{aligned}x^{0}&=1\\x^{m+n}&=x^{m}x^{n}\\(x^{m})^{n}&=x^{mn}\\(xy)^{n}&=x^{n}y^{n}\quad {\text{if }}xy=yx,{\text{and, in particular, if the multiplication is commutative.}}\end{aligned}}} These definitions are widely used in many areas of mathematics, notably for groups, rings, fields, and square matrices (which form a ring). They apply also to functions from a set to itself, which form a monoid under function composition. This includes, as specific instances, geometric transformations, and endomorphisms of any mathematical structure. When there are several operations that may be repeated, it is common to indicate the repeated operation by placing its symbol in the superscript, before the exponent. For example, if f is a real function whose values can be multiplied, f n {\displaystyle f^{n}} denotes the exponentiation with respect to multiplication, and f ∘ n {\displaystyle f^{\circ n}} may denote exponentiation with respect to function composition. That is, ( f n ) ( x ) = ( f ( x ) ) n = f ( x ) f ( x ) ⋯ f ( x ) , {\displaystyle (f^{n})(x)=(f(x))^{n}=f(x)\,f(x)\cdots f(x),} and ( f ∘ n ) ( x ) = f ( f ( ⋯ f ( f ( x ) ) ⋯ ) ) . {\displaystyle (f^{\circ n})(x)=f(f(\cdots f(f(x))\cdots )).} Commonly, ( f n ) ( x ) {\displaystyle (f^{n})(x)} is denoted f ( x ) n , {\displaystyle f(x)^{n},} while ( f ∘ n ) ( x ) {\displaystyle (f^{\circ n})(x)} is denoted f n ( x ) . {\displaystyle f^{n}(x).} === In a group === A multiplicative group is a set with an associative operation denoted as multiplication, that has an identity element, and such that every element has an inverse. So, if G is a group, x n {\displaystyle x^{n}} is defined for every x ∈ G {\displaystyle x\in G} and every integer n. The set of all powers of an element of a group forms a subgroup. A group (or subgroup) that consists of all powers of a specific element x is the cyclic group generated by x. If all the powers of x are distinct, the group is isomorphic to the additive group Z {\displaystyle \mathbb {Z} } of the integers. Otherwise, the cyclic group is finite (it has a finite number of elements), and its number of elements is the order of x. If the order of x is n, then x n = x 0 = 1 , {\displaystyle x^{n}=x^{0}=1,} and the cyclic group generated by x consists of the n first powers of x (starting indifferently from the exponent 0 or 1). Orders of elements play a fundamental role in group theory. For example, the order of an element in a finite group is always a divisor of the number of elements of the group (the order of the group). The possible orders of group elements are important in the study of the structure of a group (see Sylow theorems), and in the classification of finite simple groups. Superscript notation is also used for conjugation; that is, gh = h−1gh, where g and h are elements of a group. This notation cannot be confused with exponentiation, since the superscript is not an integer. The motivation of this notation is that conjugation obeys some of the laws of exponentiation, namely ( g h ) k = g h k {\displaystyle (g^{h})^{k}=g^{hk}} and ( g h ) k = g k h k . {\displaystyle (gh)^{k}=g^{k}h^{k}.} === In a ring === In a ring, it may occur that some nonzero elements satisfy x n = 0 {\displaystyle x^{n}=0} for some integer n. Such an element is said to be nilpotent. In a commutative ring, the nilpotent elements form an ideal, called the nilradical of the ring.
If the nilradical is reduced to the zero ideal (that is, if x ≠ 0 {\displaystyle x\neq 0} implies x n ≠ 0 {\displaystyle x^{n}\neq 0} for every positive integer n), the commutative ring is said to be reduced. Reduced rings are important in algebraic geometry, since the coordinate ring of an affine algebraic set is always a reduced ring. More generally, given an ideal I in a commutative ring R, the set of the elements of R that have a power in I is an ideal, called the radical of I. The nilradical is the radical of the zero ideal. A radical ideal is an ideal that equals its own radical. In a polynomial ring k [ x 1 , … , x n ] {\displaystyle k[x_{1},\ldots ,x_{n}]} over a field k, an ideal is radical if and only if it is the set of all polynomials that are zero on an affine algebraic set (this is a consequence of Hilbert's Nullstellensatz). === Matrices and linear operators === If A is a square matrix, then the product of A with itself n times is called the matrix power. Also A 0 {\displaystyle A^{0}} is defined to be the identity matrix, and if A is invertible, then A − n = ( A − 1 ) n {\displaystyle A^{-n}=\left(A^{-1}\right)^{n}} . Matrix powers appear often in the context of discrete dynamical systems, where the matrix A expresses a transition from a state vector x of some system to the next state Ax of the system. This is the standard interpretation of a Markov chain, for example. Then A 2 x {\displaystyle A^{2}x} is the state of the system after two time steps, and so forth: A n x {\displaystyle A^{n}x} is the state of the system after n time steps. The matrix power A n {\displaystyle A^{n}} is the transition matrix between the state now and the state at a time n steps in the future. So computing matrix powers is equivalent to solving the evolution of the dynamical system. In many cases, matrix powers can be expediently computed by using eigenvalues and eigenvectors. Apart from matrices, more general linear operators can also be exponentiated. An example is the derivative operator of calculus, d / d x {\displaystyle d/dx} , which is a linear operator acting on functions f ( x ) {\displaystyle f(x)} to give a new function ( d / d x ) f ( x ) = f ′ ( x ) {\displaystyle (d/dx)f(x)=f'(x)} . The nth power of the differentiation operator is the nth derivative: ( d d x ) n f ( x ) = d n d x n f ( x ) = f ( n ) ( x ) . {\displaystyle \left({\frac {d}{dx}}\right)^{n}f(x)={\frac {d^{n}}{dx^{n}}}f(x)=f^{(n)}(x).} These examples are for discrete exponents of linear operators, but in many circumstances it is also desirable to define powers of such operators with continuous exponents. This is the starting point of the mathematical theory of semigroups. Just as computing matrix powers with discrete exponents solves discrete dynamical systems, so does computing matrix powers with continuous exponents solve systems with continuous dynamics. Examples include approaches to solving the heat equation, Schrödinger equation, wave equation, and other partial differential equations including a time evolution. The special case of exponentiating the derivative operator to a non-integer power is called the fractional derivative which, together with the fractional integral, is one of the basic operations of the fractional calculus. === Finite fields === A field is an algebraic structure in which multiplication, addition, subtraction, and division are defined and satisfy the properties that multiplication is associative and every nonzero element has a multiplicative inverse. 
This implies that exponentiation with integer exponents is well-defined, except for nonpositive powers of 0. Common examples are the field of complex numbers, the real numbers and the rational numbers, considered earlier in this article, which are all infinite. A finite field is a field with a finite number of elements. This number of elements is either a prime number or a prime power; that is, it has the form q = p k , {\displaystyle q=p^{k},} where p is a prime number, and k is a positive integer. For every such q, there are fields with q elements. The fields with q elements are all isomorphic, which allows, in general, working as if there were only one field with q elements, denoted F q . {\displaystyle \mathbb {F} _{q}.} One has x q = x {\displaystyle x^{q}=x} for every x ∈ F q . {\displaystyle x\in \mathbb {F} _{q}.} A primitive element in F q {\displaystyle \mathbb {F} _{q}} is an element g such that the set of the q − 1 first powers of g (that is, { g 1 = g , g 2 , … , g q − 1 = g 0 = 1 } {\displaystyle \{g^{1}=g,g^{2},\ldots ,g^{q-1}=g^{0}=1\}} ) equals the set of the nonzero elements of F q . {\displaystyle \mathbb {F} _{q}.} There are φ ( q − 1 ) {\displaystyle \varphi (q-1)} primitive elements in F q , {\displaystyle \mathbb {F} _{q},} where φ {\displaystyle \varphi } is Euler's totient function. In F q , {\displaystyle \mathbb {F} _{q},} the freshman's dream identity ( x + y ) p = x p + y p {\displaystyle (x+y)^{p}=x^{p}+y^{p}} is true for the exponent p. As x p = x {\displaystyle x^{p}=x} in the prime field F p , {\displaystyle \mathbb {F} _{p},} it follows that the map F : F q → F q x ↦ x p {\displaystyle {\begin{aligned}F\colon {}&\mathbb {F} _{q}\to \mathbb {F} _{q}\\&x\mapsto x^{p}\end{aligned}}} is linear over F p , {\displaystyle \mathbb {F} _{p},} and is a field automorphism, called the Frobenius automorphism. If q = p k , {\displaystyle q=p^{k},} the field F q {\displaystyle \mathbb {F} _{q}} has k automorphisms, which are the k first powers (under composition) of F. In other words, the Galois group of F q {\displaystyle \mathbb {F} _{q}} is cyclic of order k, generated by the Frobenius automorphism. The Diffie–Hellman key exchange is an application of exponentiation in finite fields that is widely used for secure communications. It uses the fact that exponentiation is computationally inexpensive, whereas the inverse operation, the discrete logarithm, is computationally expensive. More precisely, if g is a primitive element in F q , {\displaystyle \mathbb {F} _{q},} then g e {\displaystyle g^{e}} can be efficiently computed with exponentiation by squaring for any e, even if q is large, while there is no known computationally practical algorithm that allows retrieving e from g e {\displaystyle g^{e}} if q is sufficiently large. == Powers of sets == The Cartesian product of two sets S and T is the set of the ordered pairs ( x , y ) {\displaystyle (x,y)} such that x ∈ S {\displaystyle x\in S} and y ∈ T . {\displaystyle y\in T.} This operation is not properly commutative nor associative, but has these properties up to canonical isomorphisms, that allow identifying, for example, ( x , ( y , z ) ) , {\displaystyle (x,(y,z)),} ( ( x , y ) , z ) , {\displaystyle ((x,y),z),} and ( x , y , z ) . {\displaystyle (x,y,z).} This allows defining the nth power S n {\displaystyle S^{n}} of a set S as the set of all n-tuples ( x 1 , … , x n ) {\displaystyle (x_{1},\ldots ,x_{n})} of elements of S.
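In code, the nth Cartesian power of a finite set can be enumerated directly; a minimal Python sketch using itertools:

```python
from itertools import product

S = {0, 1, 2}
n = 2
S_n = set(product(S, repeat=n))    # all n-tuples of elements of S
print(len(S_n) == len(S) ** n)     # True: |S^n| = |S|^n, here 3^2 = 9
```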
When S is endowed with some structure, it is frequent that S n {\displaystyle S^{n}} is naturally endowed with a similar structure. In this case, the term "direct product" is generally used instead of "Cartesian product", and exponentiation denotes product structure. For example R n {\displaystyle \mathbb {R} ^{n}} (where R {\displaystyle \mathbb {R} } denotes the real numbers) denotes the Cartesian product of n copies of R , {\displaystyle \mathbb {R} ,} as well as their direct product as vector spaces, topological spaces, rings, etc. === Sets as exponents === An n-tuple ( x 1 , … , x n ) {\displaystyle (x_{1},\ldots ,x_{n})} of elements of S can be considered as a function from { 1 , … , n } {\displaystyle \{1,\ldots ,n\}} to S. This generalizes to the following notation. Given two sets S and T, the set of all functions from T to S is denoted S T {\displaystyle S^{T}} . This exponential notation is justified by the following canonical isomorphisms (for the first one, see Currying): ( S T ) U ≅ S T × U , {\displaystyle (S^{T})^{U}\cong S^{T\times U},} S T ⊔ U ≅ S T × S U , {\displaystyle S^{T\sqcup U}\cong S^{T}\times S^{U},} where × {\displaystyle \times } denotes the Cartesian product, and ⊔ {\displaystyle \sqcup } the disjoint union. One can use sets as exponents for other operations on sets, typically for direct sums of abelian groups, vector spaces, or modules. For distinguishing direct sums from direct products, the exponent of a direct sum is placed between parentheses. For example, R N {\displaystyle \mathbb {R} ^{\mathbb {N} }} denotes the vector space of the infinite sequences of real numbers, and R ( N ) {\displaystyle \mathbb {R} ^{(\mathbb {N} )}} the vector space of those sequences that have a finite number of nonzero elements. The latter has a basis consisting of the sequences with exactly one nonzero element that equals 1, while the Hamel bases of the former cannot be explicitly described (because their existence involves Zorn's lemma). In this context, 2 can represent the set { 0 , 1 } . {\displaystyle \{0,1\}.} So, 2 S {\displaystyle 2^{S}} denotes the power set of S, that is the set of the functions from S to { 0 , 1 } , {\displaystyle \{0,1\},} which can be identified with the set of the subsets of S, by mapping each function to the inverse image of 1. This fits in with the exponentiation of cardinal numbers, in the sense that |ST| = |S||T|, where |X| is the cardinality of X. === In category theory === In the category of sets, the morphisms between sets X and Y are the functions from X to Y. It follows that the set of the functions from X to Y that is denoted Y X {\displaystyle Y^{X}} in the preceding section can also be denoted hom ⁡ ( X , Y ) . {\displaystyle \hom(X,Y).} The isomorphism ( S T ) U ≅ S T × U {\displaystyle (S^{T})^{U}\cong S^{T\times U}} can be rewritten hom ⁡ ( U , S T ) ≅ hom ⁡ ( T × U , S ) . {\displaystyle \hom(U,S^{T})\cong \hom(T\times U,S).} This means that the functor "exponentiation to the power T " is a right adjoint to the functor "direct product with T ". This generalizes to the definition of exponentiation in a category in which finite direct products exist: in such a category, the functor X → X T {\displaystyle X\to X^{T}} is, if it exists, a right adjoint to the functor Y → T × Y . {\displaystyle Y\to T\times Y.} A category is called a Cartesian closed category if direct products exist and the functor Y → X × Y {\displaystyle Y\to X\times Y} has a right adjoint for every X.
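Returning to sets as exponents, the counting identity |ST| = |S||T| can be checked on small finite sets; a short Python sketch that represents each function from T to S as a dict:

```python
from itertools import product

S, T = {"a", "b"}, {1, 2, 3}
T_list = sorted(T)

# Each function T -> S assigns an element of S to every element of T;
# enumerate them by choosing one image per element of T.
functions = [dict(zip(T_list, images))
             for images in product(S, repeat=len(T_list))]
print(len(functions) == len(S) ** len(T))   # True: 2^3 = 8 functions
```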
== Repeated exponentiation == Just as exponentiation of natural numbers is motivated by repeated multiplication, it is possible to define an operation based on repeated exponentiation; this operation is sometimes called hyper-4 or tetration. Iterating tetration leads to another operation, and so on, a concept named hyperoperation. This sequence of operations is expressed by the Ackermann function and Knuth's up-arrow notation. Just as exponentiation grows faster than multiplication, which is faster-growing than addition, tetration is faster-growing than exponentiation. Evaluated at (3, 3), the functions addition, multiplication, exponentiation, and tetration yield 6, 9, 27, and 7625597484987 (= 3^27 = 3^(3^3) = 3↑↑3) respectively. == Limits of powers == Zero to the power of zero gives a number of examples of limits that are of the indeterminate form 00. The limits in these examples exist, but have different values, showing that the two-variable function xy has no limit at the point (0, 0). One may consider at what points this function does have a limit. More precisely, consider the function f ( x , y ) = x y {\displaystyle f(x,y)=x^{y}} defined on D = { ( x , y ) ∈ R 2 : x > 0 } {\displaystyle D=\{(x,y)\in \mathbf {R} ^{2}:x>0\}} . Then D can be viewed as a subset of R2 (that is, the set of all pairs (x, y) with x, y belonging to the extended real number line R = [−∞, +∞], endowed with the product topology), which will contain the points at which the function f has a limit. In fact, f has a limit at all accumulation points of D, except for (0, 0), (+∞, 0), (1, +∞) and (1, −∞). Accordingly, this allows one to define the powers xy by continuity whenever 0 ≤ x ≤ +∞, −∞ ≤ y ≤ +∞, except for 00, (+∞)0, 1+∞ and 1−∞, which remain indeterminate forms. Under this definition by continuity, we obtain:
x+∞ = +∞ and x−∞ = 0, when 1 < x ≤ +∞.
x+∞ = 0 and x−∞ = +∞, when 0 < x < 1.
0y = 0 and (+∞)y = +∞, when 0 < y ≤ +∞.
0y = +∞ and (+∞)y = 0, when −∞ ≤ y < 0.
These powers are obtained by taking limits of xy for positive values of x. This method does not permit a definition of xy when x < 0, since pairs (x, y) with x < 0 are not accumulation points of D. On the other hand, when n is an integer, the power xn is already meaningful for all values of x, including negative ones. This may make the definition 0n = +∞ obtained above for negative n problematic when n is odd, since in this case xn → +∞ as x tends to 0 through positive values, but not negative ones. == Efficient computation with integer exponents == Computing bn using iterated multiplication requires n − 1 multiplication operations, but it can be computed more efficiently than that, as illustrated by the following example. To compute 2100, apply Horner's rule to the exponent 100 written in binary: 100 = 2 2 + 2 5 + 2 6 = 2 2 ( 1 + 2 3 ( 1 + 2 ) ) {\displaystyle 100=2^{2}+2^{5}+2^{6}=2^{2}(1+2^{3}(1+2))} . Then compute the following terms in order, reading Horner's rule from right to left:
2^2 = 4; 2^3 = 2 × 2^2 = 8; 2^6 = (2^3)^2 = 64; 2^12 = (2^6)^2; 2^24 = (2^12)^2; 2^25 = 2 × 2^24; 2^50 = (2^25)^2; 2^100 = (2^50)^2.
This series of steps only requires 8 multiplications instead of 99. In general, the number of multiplication operations required to compute bn can be reduced to ♯ n + ⌊ log 2 ⁡ n ⌋ − 1 , {\displaystyle \sharp n+\lfloor \log _{2}n\rfloor -1,} by using exponentiation by squaring, where ♯ n {\displaystyle \sharp n} denotes the number of 1s in the binary representation of n. For some exponents (100 is not among them), the number of multiplications can be further reduced by computing and using the minimal addition-chain exponentiation.
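A generic implementation of exponentiation by squaring is short; in the following Python sketch the multiplication counter is added for illustration. This simple loop performs two avoidable multiplications (the initial product by 1 and the final squaring), so its count can slightly exceed the bound ♯n + ⌊log2 n⌋ − 1 given above. The same idea, applied modulo a number, is what Python's built-in pow(b, n, m) provides and what the Diffie–Hellman exchange mentioned earlier relies on.

```python
def power_by_squaring(b, n):
    """Compute b**n for a nonnegative integer n, counting multiplications."""
    result, mults = 1, 0
    while n > 0:
        if n & 1:            # current binary digit of the exponent is 1
            result *= b
            mults += 1
        b *= b               # square the base
        mults += 1
        n >>= 1
    return result, mults

value, mults = power_by_squaring(2, 100)
print(value == 2 ** 100, mults)   # True 10 -- O(log n), far fewer than 99
```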
Finding the minimal sequence of multiplications (the minimal-length addition chain for the exponent) for bn is a difficult problem, for which no efficient algorithms are currently known (see Subset sum problem), but many reasonably efficient heuristic algorithms are available. However, in practical computations, exponentiation by squaring is efficient enough, and much easier to implement. == Iterated functions == Function composition is a binary operation that is defined on functions such that the codomain of the function written on the right is included in the domain of the function written on the left. It is denoted g ∘ f , {\displaystyle g\circ f,} and defined as ( g ∘ f ) ( x ) = g ( f ( x ) ) {\displaystyle (g\circ f)(x)=g(f(x))} for every x in the domain of f. If the domain of a function f equals its codomain, one may compose the function with itself an arbitrary number of times, and this defines the nth power of the function under composition, commonly called the nth iterate of the function. Thus f n {\displaystyle f^{n}} generally denotes the nth iterate of f; for example, f 3 ( x ) {\displaystyle f^{3}(x)} means f ( f ( f ( x ) ) ) . {\displaystyle f(f(f(x))).} When a multiplication is defined on the codomain of the function, this defines a multiplication on functions, the pointwise multiplication, which induces another exponentiation. When using functional notation, the two kinds of exponentiation are generally distinguished by placing the exponent of the functional iteration before the parentheses enclosing the arguments of the function, and placing the exponent of pointwise multiplication after the parentheses. Thus f 2 ( x ) = f ( f ( x ) ) , {\displaystyle f^{2}(x)=f(f(x)),} and f ( x ) 2 = f ( x ) ⋅ f ( x ) . {\displaystyle f(x)^{2}=f(x)\cdot f(x).} When functional notation is not used, disambiguation is often done by placing the composition symbol before the exponent; for example f ∘ 3 = f ∘ f ∘ f , {\displaystyle f^{\circ 3}=f\circ f\circ f,} and f 3 = f ⋅ f ⋅ f . {\displaystyle f^{3}=f\cdot f\cdot f.} For historical reasons, the exponent of a repeated multiplication is placed before the argument for some specific functions, typically the trigonometric functions. So, sin 2 ⁡ x {\displaystyle \sin ^{2}x} and sin 2 ⁡ ( x ) {\displaystyle \sin ^{2}(x)} both mean sin ⁡ ( x ) ⋅ sin ⁡ ( x ) {\displaystyle \sin(x)\cdot \sin(x)} and not sin ⁡ ( sin ⁡ ( x ) ) , {\displaystyle \sin(\sin(x)),} which, in any case, is rarely considered. Historically, several variants of these notations were used by different authors. In this context, the exponent − 1 {\displaystyle -1} always denotes the inverse function, if it exists. So sin − 1 ⁡ x = sin − 1 ⁡ ( x ) = arcsin ⁡ x . {\displaystyle \sin ^{-1}x=\sin ^{-1}(x)=\arcsin x.} For the multiplicative inverse, fractions are generally used, as in 1 / sin ⁡ ( x ) = 1 sin ⁡ x . {\displaystyle 1/\sin(x)={\frac {1}{\sin x}}.} == In programming languages == Programming languages generally express exponentiation either as an infix operator or as a function application, as they do not support superscripts. The most common operator symbol for exponentiation is the caret (^). The original version of ASCII included an up-arrow symbol (↑), intended for exponentiation, but this was replaced by the caret in 1967, so the caret became usual in programming languages.
The notations include:
x ^ y: AWK, BASIC, J, MATLAB, Wolfram Language (Mathematica), R, Microsoft Excel, Analytica, TeX (and its derivatives), TI-BASIC, bc (for integer exponents), Haskell (for nonnegative integer exponents), Lua, and most computer algebra systems.
x ** y: The Fortran character set did not include lowercase characters or punctuation symbols other than +-*/()&=.,' and so used ** for exponentiation (the initial version used a xx b instead). Many other languages followed suit: Ada, Z shell, KornShell, Bash, COBOL, CoffeeScript, Fortran, FoxPro, Gnuplot, Groovy, JavaScript, OCaml, ooRexx, F#, Perl, PHP, PL/I, Python, Rexx, Ruby, SAS, Seed7, Tcl, ABAP, Mercury, Haskell (for floating-point exponents), Turing, and VHDL.
x ↑ y: Algol Reference language, Commodore BASIC, TRS-80 Level II/III BASIC.
x ^^ y: Haskell (for fractional base, integer exponents), D.
x ⋆ y: APL.
In most programming languages with an infix exponentiation operator, it is right-associative, that is, a^b^c is interpreted as a^(b^c). This is because (a^b)^c is equal to a^(b*c) and thus not as useful. In some languages, it is left-associative, notably in Algol, MATLAB, and the Microsoft Excel formula language. Other programming languages use functional notation:
(expt x y): Common Lisp.
pown x y: F# (for integer base, integer exponent).
Still others only provide exponentiation as part of standard libraries:
pow(x, y): C, C++ (in math library).
Math.Pow(x, y): C#.
math:pow(X, Y): Erlang.
Math.pow(x, y): Java.
[Math]::Pow(x, y): PowerShell.
In some statically typed languages that prioritize type safety, such as Rust, exponentiation is performed via a multitude of methods:
x.pow(y) for x and y as integers.
x.powf(y) for x and y as floating-point numbers.
x.powi(y) for x as a float and y as an integer.
Wikipedia/Exponential_functions
In mathematical logic and computer science, a general recursive function, partial recursive function, or μ-recursive function is a partial function from natural numbers to natural numbers that is "computable" in an intuitive sense – as well as in a formal one. If the function is total, it is also called a total recursive function (sometimes shortened to recursive function). In computability theory, it is shown that the μ-recursive functions are precisely the functions that can be computed by Turing machines (this is one of the theorems that supports the Church–Turing thesis). The μ-recursive functions are closely related to primitive recursive functions, and their inductive definition (below) builds upon that of the primitive recursive functions. However, not every total recursive function is a primitive recursive function—the most famous example is the Ackermann function. Other equivalent classes of functions are the functions of lambda calculus and the functions that can be computed by Markov algorithms. The subset of all total recursive functions with values in {0,1} is known in computational complexity theory as the complexity class R. == Definition == The μ-recursive functions (or general recursive functions) are partial functions that take finite tuples of natural numbers and return a single natural number. They are the smallest class of partial functions that includes the initial functions and is closed under composition, primitive recursion, and the minimization operator μ. The smallest class of functions including the initial functions and closed under composition and primitive recursion (i.e. without minimisation) is the class of primitive recursive functions. While all primitive recursive functions are total, this is not true of partial recursive functions; for example, the minimisation of the successor function is undefined. The primitive recursive functions are a subset of the total recursive functions, which are a subset of the partial recursive functions. For example, the Ackermann function can be proven to be total recursive, and to be non-primitive. Primitive or "basic" functions: Constant functions Ckn: For each natural number n and every k C n k ( x 1 , … , x k ) = d e f n {\displaystyle C_{n}^{k}(x_{1},\ldots ,x_{k})\ {\stackrel {\mathrm {def} }{=}}\ n} Alternative definitions use instead a zero function as a primitive function that always returns zero, and build the constant functions from the zero function, the successor function and the composition operator. Successor function S: S ( x ) = d e f x + 1 {\displaystyle S(x)\ {\stackrel {\mathrm {def} }{=}}\ x+1\,} Projection function P i k {\displaystyle P_{i}^{k}} (also called the Identity function): For all natural numbers i , k {\displaystyle i,k} such that 1 ≤ i ≤ k {\displaystyle 1\leq i\leq k} : P i k ( x 1 , … , x k ) = d e f x i . 
{\displaystyle P_{i}^{k}(x_{1},\ldots ,x_{k})\ {\stackrel {\mathrm {def} }{=}}\ x_{i}\,.} Operators (the domain of a function defined by an operator is the set of the values of the arguments such that every function application that must be done during the computation provides a well-defined result): Composition operator ∘ {\displaystyle \circ \,} (also called the substitution operator): Given an m-ary function h ( x 1 , … , x m ) {\displaystyle h(x_{1},\ldots ,x_{m})\,} and m k-ary functions g 1 ( x 1 , … , x k ) , … , g m ( x 1 , … , x k ) {\displaystyle g_{1}(x_{1},\ldots ,x_{k}),\ldots ,g_{m}(x_{1},\ldots ,x_{k})} : h ∘ ( g 1 , … , g m ) = d e f f , where f ( x 1 , … , x k ) = h ( g 1 ( x 1 , … , x k ) , … , g m ( x 1 , … , x k ) ) . {\displaystyle h\circ (g_{1},\ldots ,g_{m})\ {\stackrel {\mathrm {def} }{=}}\ f,\quad {\text{where}}\quad f(x_{1},\ldots ,x_{k})=h(g_{1}(x_{1},\ldots ,x_{k}),\ldots ,g_{m}(x_{1},\ldots ,x_{k})).} This means that f ( x 1 , … , x k ) {\displaystyle f(x_{1},\ldots ,x_{k})} is defined only if g 1 ( x 1 , … , x k ) , … , g m ( x 1 , … , x k ) , {\displaystyle g_{1}(x_{1},\ldots ,x_{k}),\ldots ,g_{m}(x_{1},\ldots ,x_{k}),} and h ( g 1 ( x 1 , … , x k ) , … , g m ( x 1 , … , x k ) ) {\displaystyle h(g_{1}(x_{1},\ldots ,x_{k}),\ldots ,g_{m}(x_{1},\ldots ,x_{k}))} are all defined. Primitive recursion operator ρ: Given the k-ary function g ( x 1 , … , x k ) {\displaystyle g(x_{1},\ldots ,x_{k})\,} and k+2 -ary function h ( y , z , x 1 , … , x k ) {\displaystyle h(y,z,x_{1},\ldots ,x_{k})\,} : ρ ( g , h ) = d e f f where the k+1 -ary function f is defined by f ( 0 , x 1 , … , x k ) = g ( x 1 , … , x k ) f ( S ( y ) , x 1 , … , x k ) = h ( y , f ( y , x 1 , … , x k ) , x 1 , … , x k ) . {\displaystyle {\begin{aligned}\rho (g,h)&\ {\stackrel {\mathrm {def} }{=}}\ f\quad {\text{where the k+1 -ary function }}f{\text{ is defined by}}\\f(0,x_{1},\ldots ,x_{k})&=g(x_{1},\ldots ,x_{k})\\f(S(y),x_{1},\ldots ,x_{k})&=h(y,f(y,x_{1},\ldots ,x_{k}),x_{1},\ldots ,x_{k})\,.\end{aligned}}} This means that f ( y , x 1 , … , x k ) {\displaystyle f(y,x_{1},\ldots ,x_{k})} is defined only if g ( x 1 , … , x k ) {\displaystyle g(x_{1},\ldots ,x_{k})} and h ( z , f ( z , x 1 , … , x k ) , x 1 , … , x k ) {\displaystyle h(z,f(z,x_{1},\ldots ,x_{k}),x_{1},\ldots ,x_{k})} are defined for all z < y . {\displaystyle z<y.} Minimization operator μ: Given a (k+1)-ary function f ( y , x 1 , … , x k ) {\displaystyle f(y,x_{1},\ldots ,x_{k})\,} , the k-ary function μ ( f ) {\displaystyle \mu (f)} is defined by: μ ( f ) ( x 1 , … , x k ) = z ⟺ d e f f ( i , x 1 , … , x k ) > 0 for i = 0 , … , z − 1 and f ( z , x 1 , … , x k ) = 0 {\displaystyle {\begin{aligned}\mu (f)(x_{1},\ldots ,x_{k})=z{\stackrel {\mathrm {def} }{\iff }}\ f(i,x_{1},\ldots ,x_{k})&>0\quad {\text{for}}\quad i=0,\ldots ,z-1\quad {\text{and}}\\f(z,x_{1},\ldots ,x_{k})&=0\quad \end{aligned}}} Intuitively, minimisation seeks—beginning the search from 0 and proceeding upwards—the smallest argument that causes the function to return zero; if there is no such argument, or if one encounters an argument for which f is not defined, then the search never terminates, and μ ( f ) {\displaystyle \mu (f)} is not defined for the argument ( x 1 , … , x k ) . {\displaystyle (x_{1},\ldots ,x_{k}).} While some textbooks use the μ-operator as defined here, others demand that the μ-operator is applied to total functions f only. 
Although this restricts the μ-operator as compared to the definition given here, the class of μ-recursive functions remains the same, which follows from Kleene's Normal Form Theorem (see below). The only difference is that it becomes undecidable whether a specific function definition defines a μ-recursive function, as it is undecidable whether a computable (i.e. μ-recursive) function is total. The strong equality relation ≃ {\displaystyle \simeq } can be used to compare partial μ-recursive functions. This is defined for all partial functions f and g so that f ( x 1 , … , x k ) ≃ g ( x 1 , … , x l ) {\displaystyle f(x_{1},\ldots ,x_{k})\simeq g(x_{1},\ldots ,x_{l})} holds if and only if for any choice of arguments either both functions are defined and their values are equal or both functions are undefined. == Examples == Examples not involving the minimization operator can be found at Primitive recursive function#Examples. The following examples are intended just to demonstrate the use of the minimization operator; they could also be defined without it, albeit in a more complicated way, since they are all primitive recursive. The following examples define general recursive functions that are not primitive recursive; hence they cannot avoid using the minimization operator. == Total recursive function == A general recursive function is called a total recursive function if it is defined for every input, or, equivalently, if it can be computed by a total Turing machine. There is no way to computably tell whether a given general recursive function is total – see Halting problem. == Equivalence with other models of computability == In the equivalence of models of computability, a parallel is drawn between Turing machines that do not terminate for certain inputs and an undefined result for that input in the corresponding partial recursive function. The unbounded search operator is not definable by the rules of primitive recursion, as those do not provide a mechanism for "infinite loops" (undefined values). == Normal form theorem == A normal form theorem due to Kleene says that for each k there are primitive recursive functions U ( y ) {\displaystyle U(y)\!} and T ( y , e , x 1 , … , x k ) {\displaystyle T(y,e,x_{1},\ldots ,x_{k})\!} such that for any μ-recursive function f ( x 1 , … , x k ) {\displaystyle f(x_{1},\ldots ,x_{k})\!} with k free variables there is an e such that f ( x 1 , … , x k ) ≃ U ( μ ( T ) ( e , x 1 , … , x k ) ) {\displaystyle f(x_{1},\ldots ,x_{k})\simeq U(\mu (T)(e,x_{1},\ldots ,x_{k}))} . The number e is called an index or Gödel number for the function f.: 52–53  A consequence of this result is that any μ-recursive function can be defined using a single instance of the μ operator applied to a (total) primitive recursive function. Minsky observes that the U {\displaystyle U} defined above is in essence the μ-recursive equivalent of the universal Turing machine: To construct U is to write down the definition of a general-recursive function U(n, x) that correctly interprets the number n and computes the appropriate function of x. To construct U directly would involve essentially the same amount of effort, and essentially the same ideas, as we have invested in constructing the universal Turing machine. == Symbolism == A number of different symbolisms are used in the literature. An advantage of using such a symbolism is that a derivation of a function by "nesting" of the operators one inside the other is easier to write in a compact form.
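Such nesting is easy to see in executable form. The following is a minimal Python sketch (an illustration of the definitions above, not a standard library) of the initial functions and the three operators; it builds addition by primitive recursion exactly as in Kleene's derivation given below, and uses minimization for a simple unbounded search:

```python
def S(x):          # successor function
    return x + 1

def P(i, k):       # projection P_i^k: returns the i-th of k arguments (1-indexed)
    return lambda *xs: xs[i - 1]

def compose(h, *gs):
    # substitution: f(x1..xk) = h(g1(x1..xk), ..., gm(x1..xk))
    return lambda *xs: h(*(g(*xs) for g in gs))

def prim_rec(g, h):
    # rho(g, h): f(0, x) = g(x);  f(y+1, x) = h(y, f(y, x), x)
    def f(y, *xs):
        acc = g(*xs)
        for z in range(y):
            acc = h(z, acc, *xs)
        return acc
    return f

def mu(f):
    # minimization: least z with f(z, x) = 0; loops forever if none exists
    def m(*xs):
        z = 0
        while f(z, *xs) != 0:
            z += 1
        return z
    return m

# Addition as rho(P_1^1, S o P_2^3):  add(b, a) = b + a
add = prim_rec(P(1, 1), compose(S, P(2, 3)))
assert add(3, 4) == 7

# An unbounded search: least z with z*z >= x (the ceiling of the square root)
ceil_sqrt = mu(lambda z, x: 0 if z * z >= x else 1)
assert ceil_sqrt(10) == 4
```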
In the following the string of parameters x1, ..., xn is abbreviated as x:
Constant function: Kleene uses " C_q^n(x) = q " and Boolos-Burgess-Jeffrey (2002) (B-B-J) use the abbreviation " constn( x ) = n ":
e.g. C_13^7 ( r, s, t, u, v, w, x ) = 13
e.g. const13 ( r, s, t, u, v, w, x ) = 13
Successor function: Kleene uses x' and S for "Successor". As "successor" is considered to be primitive, most texts use the apostrophe as follows:
S(a) = a + 1 =def a', where 1 =def 0', 2 =def 0'', etc.
Identity function: Kleene (1952) uses " U_i^n " to indicate the identity function over the variables xi; B-B-J use the identity function id_i^n over the variables x1 to xn:
U_i^n( x ) = id_i^n( x ) = xi
e.g. U_3^7 = id_3^7 ( r, s, t, u, v, w, x ) = t
Composition (Substitution) operator: Kleene uses a bold-face S_n^m (not to be confused with his S for "Successor"!). The superscript "m" refers to the mth function "fm", whereas the subscript "n" refers to the nth variable "xn":
If we are given h( x ) = g( f1(x), ..., fm(x) ), then
h( x ) = S_n^m( g, f1, ..., fm )
In a similar manner, but without the sub- and superscripts, B-B-J write:
h( x ) = Cn[ g, f1, ..., fm ]( x )
Primitive recursion: Kleene uses the symbol " R^n(base step, induction step) ", where n indicates the number of variables; B-B-J use " Pr(base step, induction step)( x ) ". Given:
base step: h( 0, x ) = f( x ), and
induction step: h( y+1, x ) = g( y, h(y, x), x )
Example: primitive recursion definition of a + b:
base step: f( 0, a ) = a = U_1^1(a)
induction step: f( b', a ) = ( f( b, a ) )' = g( b, f( b, a ), a ) = g( b, c, a ) = c' = S( U_2^3( b, c, a ) )
Kleene: R^2{ U_1^1(a), S( U_2^3( b, c, a ) ) }
B-B-J: Pr{ U_1^1(a), S( U_2^3( b, c, a ) ) }
Example: Kleene gives an example of how to perform the recursive derivation of f(b, a) = b + a (notice the reversal of the variables a and b). He starts with three initial functions:
S(a) = a'
U_1^1(a) = a
U_2^3( b, c, a ) = c
g( b, c, a ) = S( U_2^3( b, c, a ) ) = c'
base step: h( 0, a ) = U_1^1(a)
induction step: h( b', a ) = g( b, h( b, a ), a )
He arrives at: a + b = R^2[ U_1^1, S_3^1( S, U_2^3 ) ]
== Examples ==
Fibonacci number
McCarthy 91 function
== See also ==
Recursion theory
Recursion
Recursion (computer science)
== References ==
== External links ==
Stanford Encyclopedia of Philosophy entry
A compiler for transforming a recursive function into an equivalent Turing machine
Wikipedia/General_recursive_function
In multivariable calculus, the implicit function theorem is a tool that allows relations to be converted to functions of several real variables. It does so by representing the relation as the graph of a function. There may not be a single function whose graph can represent the entire relation, but there may be such a function on a restriction of the domain of the relation. The implicit function theorem gives a sufficient condition to ensure that there is such a function. More precisely, given a system of m equations fi (x1, ..., xn, y1, ..., ym) = 0, i = 1, ..., m (often abbreviated into F(x, y) = 0), the theorem states that, under a mild condition on the partial derivatives (with respect to each yi ) at a point, the m variables yi are differentiable functions of the xj in some neighborhood of the point. As these functions generally cannot be expressed in closed form, they are implicitly defined by the equations, and this motivated the name of the theorem. In other words, under a mild condition on the partial derivatives, the set of zeros of a system of equations is locally the graph of a function. == History == Augustin-Louis Cauchy (1789–1857) is credited with the first rigorous form of the implicit function theorem. Ulisse Dini (1845–1918) generalized the real-variable version of the implicit function theorem to the context of functions of any number of real variables. == Two variables case == Let f : R 2 → R {\displaystyle f:\mathbb {R} ^{2}\to \mathbb {R} } be a continuously differentiable function defining the implicit equation of a curve f ( x , y ) = 0 {\displaystyle f(x,y)=0} . Let ( x 0 , y 0 ) {\displaystyle (x_{0},y_{0})} be a point on the curve, that is, a point such that f ( x 0 , y 0 ) = 0 {\displaystyle f(x_{0},y_{0})=0} . In this simple case, the implicit function theorem can be stated as follows: if ∂f/∂y(x0, y0) ≠ 0, then there exist an open interval I containing x0 and a unique continuously differentiable function φ : I → R such that φ(x0) = y0 and f(x, φ(x)) = 0 for all x in I. Proof. By differentiating the equation ⁠ f ( x , φ ( x ) ) = 0 {\displaystyle f(x,\varphi (x))=0} ⁠, one gets ∂ f ∂ x ( x , φ ( x ) ) + φ ′ ( x ) ∂ f ∂ y ( x , φ ( x ) ) = 0. {\displaystyle {\frac {\partial f}{\partial x}}(x,\varphi (x))+\varphi '(x)\,{\frac {\partial f}{\partial y}}(x,\varphi (x))=0.} and thus φ ′ ( x ) = − ∂ f ∂ x ( x , φ ( x ) ) ∂ f ∂ y ( x , φ ( x ) ) . {\displaystyle \varphi '(x)=-{\frac {{\frac {\partial f}{\partial x}}(x,\varphi (x))}{{\frac {\partial f}{\partial y}}(x,\varphi (x))}}.} This gives an ordinary differential equation for ⁠ φ {\displaystyle \varphi } ⁠, with the initial condition ⁠ φ ( x 0 ) = y 0 {\displaystyle \varphi (x_{0})=y_{0}} ⁠. Since ∂ f ∂ y ( x 0 , y 0 ) ≠ 0 , {\textstyle {\frac {\partial f}{\partial y}}(x_{0},y_{0})\neq 0,} the right-hand side of the differential equation is continuous. Hence, the Peano existence theorem applies so there is a (possibly non-unique) solution. To see why φ {\textstyle \varphi } is unique, note that the function g x ( y ) = f ( x , y ) {\textstyle g_{x}(y)=f(x,y)} is strictly monotone in a neighborhood of x 0 , y 0 {\textstyle x_{0},y_{0}} (as ∂ f ∂ y ( x 0 , y 0 ) ≠ 0 {\textstyle {\frac {\partial f}{\partial y}}(x_{0},y_{0})\neq 0} ), thus it is injective. If φ , ϕ {\textstyle \varphi ,\phi } are solutions to the differential equation, then g x ( φ ( x ) ) = g x ( ϕ ( x ) ) = 0 {\textstyle g_{x}(\varphi (x))=g_{x}(\phi (x))=0} and by injectivity we get φ ( x ) = ϕ ( x ) {\textstyle \varphi (x)=\phi (x)} . == First example == If we define the function f(x, y) = x2 + y2, then the equation f(x, y) = 1 cuts out the unit circle as the level set {(x, y) | f(x, y) = 1}.
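As a quick symbolic check of the two-variable statement on the unit-circle example just introduced, the following Python sketch (assuming the SymPy library) recovers the implicit derivative −x/y derived later in the article:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2 - 1          # the circle as a level set f(x, y) = 0

# The theorem's hypothesis: df/dy must be nonzero at the base point.
dfdy = sp.diff(f, y)
point = {x: sp.Rational(3, 5), y: sp.Rational(4, 5)}  # (3/5, 4/5) lies on the circle
assert f.subs(point) == 0 and dfdy.subs(point) != 0

# phi'(x) = -(df/dx)/(df/dy), evaluated along the curve
phi_prime = -sp.diff(f, x) / dfdy
print(sp.simplify(phi_prime))   # -x/y
print(phi_prime.subs(point))    # -3/4, the slope of the circle at (3/5, 4/5)
```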
There is no way to represent the unit circle as the graph of a function of one variable y = g(x) because for each choice of x ∈ (−1, 1), there are two choices of y, namely ± 1 − x 2 {\displaystyle \pm {\sqrt {1-x^{2}}}} . However, it is possible to represent part of the circle as the graph of a function of one variable. If we let g 1 ( x ) = 1 − x 2 {\displaystyle g_{1}(x)={\sqrt {1-x^{2}}}} for −1 ≤ x ≤ 1, then the graph of y = g1(x) provides the upper half of the circle. Similarly, if g 2 ( x ) = − 1 − x 2 {\displaystyle g_{2}(x)=-{\sqrt {1-x^{2}}}} , then the graph of y = g2(x) gives the lower half of the circle. The purpose of the implicit function theorem is to tell us that functions like g1(x) and g2(x) almost always exist, even in situations where we cannot write down explicit formulas. It guarantees that g1(x) and g2(x) are differentiable, and it even works in situations where we do not have a formula for f(x, y). == Definitions == Let f : R n + m → R m {\displaystyle f:\mathbb {R} ^{n+m}\to \mathbb {R} ^{m}} be a continuously differentiable function. We think of R n + m {\displaystyle \mathbb {R} ^{n+m}} as the Cartesian product R n × R m , {\displaystyle \mathbb {R} ^{n}\times \mathbb {R} ^{m},} and we write a point of this product as ( x , y ) = ( x 1 , … , x n , y 1 , … y m ) . {\displaystyle (\mathbf {x} ,\mathbf {y} )=(x_{1},\ldots ,x_{n},y_{1},\ldots y_{m}).} Starting from the given function f {\displaystyle f} , our goal is to construct a function g : R n → R m {\displaystyle g:\mathbb {R} ^{n}\to \mathbb {R} ^{m}} whose graph ( x , g ( x ) ) {\displaystyle ({\textbf {x}},g({\textbf {x}}))} is precisely the set of all ( x , y ) {\displaystyle ({\textbf {x}},{\textbf {y}})} such that f ( x , y ) = 0 {\displaystyle f({\textbf {x}},{\textbf {y}})={\textbf {0}}} . As noted above, this may not always be possible. We will therefore fix a point ( a , b ) = ( a 1 , … , a n , b 1 , … , b m ) {\displaystyle ({\textbf {a}},{\textbf {b}})=(a_{1},\dots ,a_{n},b_{1},\dots ,b_{m})} which satisfies f ( a , b ) = 0 {\displaystyle f({\textbf {a}},{\textbf {b}})={\textbf {0}}} , and we will ask for a g {\displaystyle g} that works near the point ( a , b ) {\displaystyle ({\textbf {a}},{\textbf {b}})} . In other words, we want an open set U ⊂ R n {\displaystyle U\subset \mathbb {R} ^{n}} containing a {\displaystyle {\textbf {a}}} , an open set V ⊂ R m {\displaystyle V\subset \mathbb {R} ^{m}} containing b {\displaystyle {\textbf {b}}} , and a function g : U → V {\displaystyle g:U\to V} such that the graph of g {\displaystyle g} satisfies the relation f = 0 {\displaystyle f={\textbf {0}}} on U × V {\displaystyle U\times V} , and that no other points within U × V {\displaystyle U\times V} do so. In symbols, { ( x , g ( x ) ) ∣ x ∈ U } = { ( x , y ) ∈ U × V ∣ f ( x , y ) = 0 } . {\displaystyle \{(\mathbf {x} ,g(\mathbf {x} ))\mid \mathbf {x} \in U\}=\{(\mathbf {x} ,\mathbf {y} )\in U\times V\mid f(\mathbf {x} ,\mathbf {y} )=\mathbf {0} \}.} To state the implicit function theorem, we need the Jacobian matrix of f {\displaystyle f} , which is the matrix of the partial derivatives of f {\displaystyle f} . 
Abbreviating ( a 1 , … , a n , b 1 , … , b m ) {\displaystyle (a_{1},\dots ,a_{n},b_{1},\dots ,b_{m})} to ( a , b ) {\displaystyle ({\textbf {a}},{\textbf {b}})} , the Jacobian matrix is ( D f ) ( a , b ) = [ ∂ f 1 ∂ x 1 ( a , b ) ⋯ ∂ f 1 ∂ x n ( a , b ) ∂ f 1 ∂ y 1 ( a , b ) ⋯ ∂ f 1 ∂ y m ( a , b ) ⋮ ⋱ ⋮ ⋮ ⋱ ⋮ ∂ f m ∂ x 1 ( a , b ) ⋯ ∂ f m ∂ x n ( a , b ) ∂ f m ∂ y 1 ( a , b ) ⋯ ∂ f m ∂ y m ( a , b ) ] = [ X Y ] {\displaystyle (Df)(\mathbf {a} ,\mathbf {b} )=\left[{\begin{array}{ccc|ccc}{\frac {\partial f_{1}}{\partial x_{1}}}(\mathbf {a} ,\mathbf {b} )&\cdots &{\frac {\partial f_{1}}{\partial x_{n}}}(\mathbf {a} ,\mathbf {b} )&{\frac {\partial f_{1}}{\partial y_{1}}}(\mathbf {a} ,\mathbf {b} )&\cdots &{\frac {\partial f_{1}}{\partial y_{m}}}(\mathbf {a} ,\mathbf {b} )\\\vdots &\ddots &\vdots &\vdots &\ddots &\vdots \\{\frac {\partial f_{m}}{\partial x_{1}}}(\mathbf {a} ,\mathbf {b} )&\cdots &{\frac {\partial f_{m}}{\partial x_{n}}}(\mathbf {a} ,\mathbf {b} )&{\frac {\partial f_{m}}{\partial y_{1}}}(\mathbf {a} ,\mathbf {b} )&\cdots &{\frac {\partial f_{m}}{\partial y_{m}}}(\mathbf {a} ,\mathbf {b} )\end{array}}\right]=\left[{\begin{array}{c|c}X&Y\end{array}}\right]} where X {\displaystyle X} is the matrix of partial derivatives in the variables x i {\displaystyle x_{i}} and Y {\displaystyle Y} is the matrix of partial derivatives in the variables y j {\displaystyle y_{j}} . The implicit function theorem says that if Y {\displaystyle Y} is an invertible matrix, then there are U {\displaystyle U} , V {\displaystyle V} , and g {\displaystyle g} as desired. Writing all the hypotheses together gives the following statement. == Statement of the theorem == Let f : R n + m → R m {\displaystyle f:\mathbb {R} ^{n+m}\to \mathbb {R} ^{m}} be a continuously differentiable function, and let R n + m {\displaystyle \mathbb {R} ^{n+m}} have coordinates ( x , y ) {\displaystyle ({\textbf {x}},{\textbf {y}})} . Fix a point ( a , b ) = ( a 1 , … , a n , b 1 , … , b m ) {\displaystyle ({\textbf {a}},{\textbf {b}})=(a_{1},\dots ,a_{n},b_{1},\dots ,b_{m})} with f ( a , b ) = 0 {\displaystyle f({\textbf {a}},{\textbf {b}})=\mathbf {0} } , where 0 ∈ R m {\displaystyle \mathbf {0} \in \mathbb {R} ^{m}} is the zero vector. If the Jacobian matrix (this is the right-hand panel of the Jacobian matrix shown in the previous section): J f , y ( a , b ) = [ ∂ f i ∂ y j ( a , b ) ] {\displaystyle J_{f,\mathbf {y} }(\mathbf {a} ,\mathbf {b} )=\left[{\frac {\partial f_{i}}{\partial y_{j}}}(\mathbf {a} ,\mathbf {b} )\right]} is invertible, then there exists an open set U ⊂ R n {\displaystyle U\subset \mathbb {R} ^{n}} containing a {\displaystyle {\textbf {a}}} such that there exists a unique function g : U → R m {\displaystyle g:U\to \mathbb {R} ^{m}} such that g ( a ) = b {\displaystyle g(\mathbf {a} )=\mathbf {b} } , and f ( x , g ( x ) ) = 0 for all x ∈ U {\displaystyle f(\mathbf {x} ,g(\mathbf {x} ))=\mathbf {0} ~{\text{for all}}~\mathbf {x} \in U} . 
Moreover, g {\displaystyle g} is continuously differentiable and, denoting the left-hand panel of the Jacobian matrix shown in the previous section as: J f , x ( a , b ) = [ ∂ f i ∂ x j ( a , b ) ] , {\displaystyle J_{f,\mathbf {x} }(\mathbf {a} ,\mathbf {b} )=\left[{\frac {\partial f_{i}}{\partial x_{j}}}(\mathbf {a} ,\mathbf {b} )\right],} the Jacobian matrix of partial derivatives of g {\displaystyle g} in U {\displaystyle U} is given by the matrix product: [ ∂ g i ∂ x j ( x ) ] m × n = − [ J f , y ( x , g ( x ) ) ] m × m − 1 [ J f , x ( x , g ( x ) ) ] m × n {\displaystyle \left[{\frac {\partial g_{i}}{\partial x_{j}}}(\mathbf {x} )\right]_{m\times n}=-\left[J_{f,\mathbf {y} }(\mathbf {x} ,g(\mathbf {x} ))\right]_{m\times m}^{-1}\,\left[J_{f,\mathbf {x} }(\mathbf {x} ,g(\mathbf {x} ))\right]_{m\times n}} For a proof, see Inverse function theorem#Implicit_function_theorem. Here, the two-dimensional case is detailed. === Higher derivatives === If, moreover, f {\displaystyle f} is analytic or continuously differentiable k {\displaystyle k} times in a neighborhood of ( a , b ) {\displaystyle ({\textbf {a}},{\textbf {b}})} , then one may choose U {\displaystyle U} in order that the same holds true for g {\displaystyle g} inside U {\displaystyle U} . In the analytic case, this is called the analytic implicit function theorem. == The circle example == Let us go back to the example of the unit circle. In this case n = m = 1 and f ( x , y ) = x 2 + y 2 − 1 {\displaystyle f(x,y)=x^{2}+y^{2}-1} . The matrix of partial derivatives is just a 1 × 2 matrix, given by ( D f ) ( a , b ) = [ ∂ f ∂ x ( a , b ) ∂ f ∂ y ( a , b ) ] = [ 2 a 2 b ] {\displaystyle (Df)(a,b)={\begin{bmatrix}{\dfrac {\partial f}{\partial x}}(a,b)&{\dfrac {\partial f}{\partial y}}(a,b)\end{bmatrix}}={\begin{bmatrix}2a&2b\end{bmatrix}}} Thus, here, the Y in the statement of the theorem is just the number 2b; the linear map defined by it is invertible if and only if b ≠ 0. By the implicit function theorem we see that we can locally write the circle in the form y = g(x) for all points where y ≠ 0. For (±1, 0) we run into trouble, as noted before. The implicit function theorem may still be applied to these two points, by writing x as a function of y, that is, x = h ( y ) {\displaystyle x=h(y)} ; now the graph of the function will be ( h ( y ) , y ) {\displaystyle \left(h(y),y\right)} , since where b = 0 we have a = 1, and the conditions to locally express the function in this form are satisfied. The implicit derivative of y with respect to x, and that of x with respect to y, can be found by totally differentiating the implicit function x 2 + y 2 − 1 {\displaystyle x^{2}+y^{2}-1} and equating to 0: 2 x d x + 2 y d y = 0 , {\displaystyle 2x\,dx+2y\,dy=0,} giving d y d x = − x y {\displaystyle {\frac {dy}{dx}}=-{\frac {x}{y}}} and d x d y = − y x . {\displaystyle {\frac {dx}{dy}}=-{\frac {y}{x}}.} == Application: change of coordinates == Suppose we have an m-dimensional space, parametrised by a set of coordinates ( x 1 , … , x m ) {\displaystyle (x_{1},\ldots ,x_{m})} . We can introduce a new coordinate system ( x 1 ′ , … , x m ′ ) {\displaystyle (x'_{1},\ldots ,x'_{m})} by supplying m functions h 1 … h m {\displaystyle h_{1}\ldots h_{m}} each being continuously differentiable. 
These functions allow us to calculate the new coordinates ( x 1 ′ , … , x m ′ ) {\displaystyle (x'_{1},\ldots ,x'_{m})} of a point, given the point's old coordinates ( x 1 , … , x m ) {\displaystyle (x_{1},\ldots ,x_{m})} using x 1 ′ = h 1 ( x 1 , … , x m ) , … , x m ′ = h m ( x 1 , … , x m ) {\displaystyle x'_{1}=h_{1}(x_{1},\ldots ,x_{m}),\ldots ,x'_{m}=h_{m}(x_{1},\ldots ,x_{m})} . One might want to verify if the opposite is possible: given coordinates ( x 1 ′ , … , x m ′ ) {\displaystyle (x'_{1},\ldots ,x'_{m})} , can we 'go back' and calculate the same point's original coordinates ( x 1 , … , x m ) {\displaystyle (x_{1},\ldots ,x_{m})} ? The implicit function theorem will provide an answer to this question. The (new and old) coordinates ( x 1 ′ , … , x m ′ , x 1 , … , x m ) {\displaystyle (x'_{1},\ldots ,x'_{m},x_{1},\ldots ,x_{m})} are related by f = 0, with f ( x 1 ′ , … , x m ′ , x 1 , … , x m ) = ( h 1 ( x 1 , … , x m ) − x 1 ′ , … , h m ( x 1 , … , x m ) − x m ′ ) . {\displaystyle f(x'_{1},\ldots ,x'_{m},x_{1},\ldots ,x_{m})=(h_{1}(x_{1},\ldots ,x_{m})-x'_{1},\ldots ,h_{m}(x_{1},\ldots ,x_{m})-x'_{m}).} Now the Jacobian matrix of f at a certain point (a, b) [ where a = ( x 1 ′ , … , x m ′ ) , b = ( x 1 , … , x m ) {\displaystyle a=(x'_{1},\ldots ,x'_{m}),b=(x_{1},\ldots ,x_{m})} ] is given by ( D f ) ( a , b ) = [ − 1 ⋯ 0 ⋮ ⋱ ⋮ 0 ⋯ − 1 | ∂ h 1 ∂ x 1 ( b ) ⋯ ∂ h 1 ∂ x m ( b ) ⋮ ⋱ ⋮ ∂ h m ∂ x 1 ( b ) ⋯ ∂ h m ∂ x m ( b ) ] = [ − I m | J ] . {\displaystyle (Df)(a,b)=\left[{\begin{matrix}-1&\cdots &0\\\vdots &\ddots &\vdots \\0&\cdots &-1\end{matrix}}\left|{\begin{matrix}{\frac {\partial h_{1}}{\partial x_{1}}}(b)&\cdots &{\frac {\partial h_{1}}{\partial x_{m}}}(b)\\\vdots &\ddots &\vdots \\{\frac {\partial h_{m}}{\partial x_{1}}}(b)&\cdots &{\frac {\partial h_{m}}{\partial x_{m}}}(b)\\\end{matrix}}\right.\right]=[-I_{m}|J].} where Im denotes the m × m identity matrix, and J is the m × m matrix of partial derivatives, evaluated at (a, b). (In the above, these blocks were denoted by X and Y. As it happens, in this particular application of the theorem, neither matrix depends on a.) The implicit function theorem now states that we can locally express ( x 1 , … , x m ) {\displaystyle (x_{1},\ldots ,x_{m})} as a function of ( x 1 ′ , … , x m ′ ) {\displaystyle (x'_{1},\ldots ,x'_{m})} if J is invertible. Demanding J is invertible is equivalent to det J ≠ 0, thus we see that we can go back from the primed to the unprimed coordinates if the determinant of the Jacobian J is non-zero. This statement is also known as the inverse function theorem. === Example: polar coordinates === As a simple application of the above, consider the plane, parametrised by polar coordinates (R, θ). We can go to a new coordinate system (cartesian coordinates) by defining functions x(R, θ) = R cos(θ) and y(R, θ) = R sin(θ). This makes it possible given any point (R, θ) to find corresponding Cartesian coordinates (x, y). When can we go back and convert Cartesian into polar coordinates? By the previous example, it is sufficient to have det J ≠ 0, with J = [ ∂ x ( R , θ ) ∂ R ∂ x ( R , θ ) ∂ θ ∂ y ( R , θ ) ∂ R ∂ y ( R , θ ) ∂ θ ] = [ cos ⁡ θ − R sin ⁡ θ sin ⁡ θ R cos ⁡ θ ] . 
{\displaystyle J={\begin{bmatrix}{\frac {\partial x(R,\theta )}{\partial R}}&{\frac {\partial x(R,\theta )}{\partial \theta }}\\{\frac {\partial y(R,\theta )}{\partial R}}&{\frac {\partial y(R,\theta )}{\partial \theta }}\\\end{bmatrix}}={\begin{bmatrix}\cos \theta &-R\sin \theta \\\sin \theta &R\cos \theta \end{bmatrix}}.} Since det J = R, conversion back to polar coordinates is possible if R ≠ 0. So it remains to check the case R = 0. It is easy to see that in case R = 0, our coordinate transformation is not invertible: at the origin, the value of θ is not well-defined. == Generalizations == === Banach space version === Based on the inverse function theorem in Banach spaces, it is possible to extend the implicit function theorem to Banach space valued mappings. Let X, Y, Z be Banach spaces. Let the mapping f : X × Y → Z be continuously Fréchet differentiable. If ( x 0 , y 0 ) ∈ X × Y {\displaystyle (x_{0},y_{0})\in X\times Y} , f ( x 0 , y 0 ) = 0 {\displaystyle f(x_{0},y_{0})=0} , and y ↦ D f ( x 0 , y 0 ) ( 0 , y ) {\displaystyle y\mapsto Df(x_{0},y_{0})(0,y)} is a Banach space isomorphism from Y onto Z, then there exist neighbourhoods U of x0 and V of y0 and a Fréchet differentiable function g : U → V such that f(x, g(x)) = 0 and f(x, y) = 0 if and only if y = g(x), for all ( x , y ) ∈ U × V {\displaystyle (x,y)\in U\times V} . === Implicit functions from non-differentiable functions === Various forms of the implicit function theorem exist for the case when the function f is not differentiable. It is standard that local strict monotonicity suffices in one dimension. The following more general form was proven by Kumagai based on an observation by Jittorntrum. Consider a continuous function f : R n × R m → R n {\displaystyle f:\mathbb {R} ^{n}\times \mathbb {R} ^{m}\to \mathbb {R} ^{n}} such that f ( x 0 , y 0 ) = 0 {\displaystyle f(x_{0},y_{0})=0} . If there exist open neighbourhoods A ⊂ R n {\displaystyle A\subset \mathbb {R} ^{n}} and B ⊂ R m {\displaystyle B\subset \mathbb {R} ^{m}} of x0 and y0, respectively, such that, for all y in B, f ( ⋅ , y ) : A → R n {\displaystyle f(\cdot ,y):A\to \mathbb {R} ^{n}} is locally one-to-one, then there exist open neighbourhoods A 0 ⊂ R n {\displaystyle A_{0}\subset \mathbb {R} ^{n}} and B 0 ⊂ R m {\displaystyle B_{0}\subset \mathbb {R} ^{m}} of x0 and y0, such that, for all y ∈ B 0 {\displaystyle y\in B_{0}} , the equation f(x, y) = 0 has a unique solution x = g ( y ) ∈ A 0 , {\displaystyle x=g(y)\in A_{0},} where g is a continuous function from B0 into A0. === Collapsing manifolds === Perelman’s collapsing theorem for 3-manifolds, the capstone of his proof of Thurston's geometrization conjecture, can be understood as an extension of the implicit function theorem. == See also == Inverse function theorem Constant rank theorem: Both the implicit function theorem and the inverse function theorem can be seen as special cases of the constant rank theorem. == Notes == == References == == Further reading == Allendoerfer, Carl B. (1974). "Theorems about Differentiable Functions". Calculus of Several Variables and Differentiable Manifolds. New York: Macmillan. pp. 54–88. ISBN 0-02-301840-2. Binmore, K. G. (1983). "Implicit Functions". Calculus. New York: Cambridge University Press. pp. 198–211. ISBN 0-521-28952-1. Loomis, Lynn H.; Sternberg, Shlomo (1990). Advanced Calculus (Revised ed.). Boston: Jones and Bartlett. pp. 164–171. ISBN 0-86720-122-3. Protter, Murray H.; Morrey, Charles B. Jr. (1985). "Implicit Function Theorems. Jacobians". 
Intermediate Calculus (2nd ed.). New York: Springer. pp. 390–420. ISBN 0-387-96058-9.
Wikipedia/Implicit_function_theorem
In mathematics, a square is the result of multiplying a number by itself. The verb "to square" is used to denote this operation. Squaring is the same as raising to the power 2, and is denoted by a superscript 2; for instance, the square of 3 may be written as 32, which is the number 9. In some cases when superscripts are not available, as for instance in programming languages or plain text files, the notations x^2 (caret) or x**2 may be used in place of x2. The adjective which corresponds to squaring is quadratic. The square of an integer may also be called a square number or a perfect square. In algebra, the operation of squaring is often generalized to polynomials, other expressions, or values in systems of mathematical values other than the numbers. For instance, the square of the linear polynomial x + 1 is the quadratic polynomial (x + 1)2 = x2 + 2x + 1. One of the important properties of squaring, for numbers as well as in many other mathematical systems, is that (for all numbers x), the square of x is the same as the square of its additive inverse −x. That is, the square function satisfies the identity x2 = (−x)2. This can also be expressed by saying that the square function is an even function. == In real numbers == The squaring operation defines a real function called the square function or the squaring function. Its domain is the whole real line, and its image is the set of nonnegative real numbers. The square function preserves the order of positive numbers: larger numbers have larger squares. In other words, the square is a monotonic function on the interval [0, +∞). On the negative numbers, numbers with greater absolute value have greater squares, so the square is a monotonically decreasing function on (−∞,0]. Hence, zero is the (global) minimum of the square function. The square x2 of a number x is less than x (that is x2 < x) if and only if 0 < x < 1, that is, if x belongs to the open interval (0,1). This implies that the square of an integer is never less than the original number x. Every positive real number is the square of exactly two numbers, one of which is strictly positive and the other of which is strictly negative. Zero is the square of only one number, itself. For this reason, it is possible to define the square root function, which associates with a non-negative real number the non-negative number whose square is the original number. No square root can be taken of a negative number within the system of real numbers, because squares of all real numbers are non-negative. The lack of real square roots for the negative numbers can be used to expand the real number system to the complex numbers, by postulating the imaginary unit i, which is one of the square roots of −1. The property "every non-negative real number is a square" has been generalized to the notion of a real closed field, which is an ordered field such that every non-negative element is a square and every polynomial of odd degree has a root. The real closed fields cannot be distinguished from the field of real numbers by their algebraic properties: every property of the real numbers, which may be expressed in first-order logic (that is expressed by a formula in which the variables that are quantified by ∀ or ∃ represent elements, not sets), is true for every real closed field, and conversely every property of the first-order logic, which is true for a specific real closed field is also true for the real numbers. == In geometry == There are several major uses of the square function in geometry. 
The name of the square function shows its importance in the definition of the area: it comes from the fact that the area of a square with sides of length l is equal to l2. The area depends quadratically on the size: the area of a shape n times larger is n2 times greater. This holds for areas in three dimensions as well as in the plane: for instance, the surface area of a sphere is proportional to the square of its radius, a fact that is manifested physically by the inverse-square law describing how the strength of physical forces such as gravity varies according to distance. The square function is related to distance through the Pythagorean theorem and its generalization, the parallelogram law. Euclidean distance is not a smooth function: the three-dimensional graph of distance from a fixed point forms a cone, with a non-smooth point at the tip of the cone. However, the square of the distance (denoted d2 or r2), which has a paraboloid as its graph, is a smooth and analytic function. The dot product of a Euclidean vector with itself is equal to the square of its length: v⋅v = v2. This is further generalised to quadratic forms in linear spaces via the inner product. The inertia tensor in mechanics is an example of a quadratic form. It demonstrates a quadratic relation of the moment of inertia to the size (length). There are infinitely many Pythagorean triples, sets of three positive integers such that the sum of the squares of the first two equals the square of the third. Each of these triples gives the integer sides of a right triangle. == In abstract algebra and number theory == The square function is defined in any field or ring. An element in the image of this function is called a square, and the inverse images of a square are called square roots. The notion of squaring is particularly important in the finite fields Z/pZ formed by the numbers modulo an odd prime number p. A non-zero element of this field is called a quadratic residue if it is a square in Z/pZ, and otherwise, it is called a quadratic non-residue. Zero, while a square, is not considered to be a quadratic residue. Every finite field of this type has exactly (p − 1)/2 quadratic residues and exactly (p − 1)/2 quadratic non-residues. The quadratic residues form a group under multiplication. The properties of quadratic residues are widely used in number theory. More generally, in rings, the square function may have different properties that are sometimes used to classify rings. Zero may be the square of some non-zero elements. A commutative ring such that the square of a non zero element is never zero is called a reduced ring. More generally, in a commutative ring, a radical ideal is an ideal I such that x 2 ∈ I {\displaystyle x^{2}\in I} implies x ∈ I {\displaystyle x\in I} . Both notions are important in algebraic geometry, because of Hilbert's Nullstellensatz. An element of a ring that is equal to its own square is called an idempotent. In any ring, 0 and 1 are idempotents. There are no other idempotents in fields and more generally in integral domains. However, the ring of the integers modulo n has 2k idempotents, where k is the number of distinct prime factors of n. A commutative ring in which every element is equal to its square (every element is idempotent) is called a Boolean ring; an example from computer science is the ring whose elements are binary numbers, with bitwise AND as the multiplication operation and bitwise XOR as the addition operation. In a totally ordered ring, x2 ≥ 0 for any x. 
Moreover, x2 = 0 if and only if x = 0. In a supercommutative algebra where 2 is invertible, the square of any odd element equals zero. If A is a commutative semigroup, then one has ∀ x , y ∈ A ( x y ) 2 = x y x y = x x y y = x 2 y 2 . {\displaystyle \forall x,y\in A\quad (xy)^{2}=xyxy=xxyy=x^{2}y^{2}.} In the language of quadratic forms, this equality says that the square function is a "form permitting composition". In fact, the square function is the foundation upon which other quadratic forms are constructed which also permit composition. The procedure was introduced by L. E. Dickson to produce the octonions out of quaternions by doubling. The doubling method was formalized by A. A. Albert who started with the real number field R {\displaystyle \mathbb {R} } and the square function, doubling it to obtain the complex number field with quadratic form x2 + y2, and then doubling again to obtain quaternions. The doubling procedure is called the Cayley–Dickson construction, and has been generalized to form algebras of dimension 2n over a field F with involution. The square function z2 is the "norm" of the composition algebra C {\displaystyle \mathbb {C} } , where the identity function forms a trivial involution to begin the Cayley–Dickson constructions leading to bicomplex, biquaternion, and bioctonion composition algebras. == In complex numbers == On complex numbers, the square function z → z 2 {\displaystyle z\to z^{2}} is a twofold cover in the sense that each non-zero complex number has exactly two square roots. The square of the absolute value of a complex number is called its absolute square, squared modulus, or squared magnitude. It is the product of the complex number with its complex conjugate, and equals the sum of the squares of the real and imaginary parts of the complex number. The absolute square of a complex number is always a nonnegative real number, that is zero if and only if the complex number is zero. It is easier to compute than the absolute value (no square root), and is a smooth real-valued function. Because of these two properties, the absolute square is often preferred to the absolute value for explicit computations and when methods of mathematical analysis are involved (for example optimization or integration). For complex vectors, the dot product can be defined involving the conjugate transpose, leading to the squared norm. == Other uses == Squares are ubiquitous in algebra, more generally, in almost every branch of mathematics, and also in physics where many units are defined using squares and inverse squares: see below. Least squares is the standard method used with overdetermined systems. Squaring is used in statistics and probability theory in determining the standard deviation of a set of values, or a random variable. The deviation of each value xi from the mean x ¯ {\displaystyle {\overline {x}}} of the set is defined as the difference x i − x ¯ {\displaystyle x_{i}-{\overline {x}}} . These deviations are squared, then a mean is taken of the new set of numbers (each of which is positive). This mean is the variance, and its square root is the standard deviation. 
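As a small executable illustration of this statistical use of squaring, here is a Python sketch computing a population variance and standard deviation from squared deviations:

```python
import math

def variance(values):
    """Population variance: the mean of squared deviations from the mean."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]   # sample data, mean = 5.0
var = variance(data)
std = math.sqrt(var)   # the standard deviation is the square root of the variance
print(var, std)        # 4.0 2.0
```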
== See also == Cube (algebra) Euclidean distance Exponentiation by squaring Hilbert's seventeenth problem, for the representation of positive polynomials as a sum of squares of rational functions Metric tensor Polynomial ring Polynomial SOS, the representation of a non-negative polynomial as the sum of squares of polynomials Quadratic equation Square-free polynomial Sums of squares (disambiguation page with various relevant links) === Related identities === Algebraic (need a commutative ring) Brahmagupta–Fibonacci identity, related to complex numbers in the sense discussed above Degen's eight-square identity, related to octonions in the same way Difference of two squares Euler's four-square identity, related to quaternions in the same way Lagrange's identity Other Parseval's identity Pythagorean trigonometric identity === Related physical quantities === acceleration, length per square time coupling constant (has square charge in the denominator, and may be expressed with square distance in the numerator) cross section (physics), an area-dimensioned quantity kinetic energy (quadratic dependence on velocity) specific energy, a (square velocity)-dimensioned quantity == Footnotes == == Further reading == Marshall, Murray Positive polynomials and sums of squares. Mathematical Surveys and Monographs, 146. American Mathematical Society, Providence, RI, 2008. xii+187 pp. ISBN 978-0-8218-4402-1, ISBN 0-8218-4402-4 Rajwade, A. R. (1993). Squares. London Mathematical Society Lecture Note Series. Vol. 171. Cambridge University Press. ISBN 0-521-42668-5. Zbl 0785.11022.
Wikipedia/Square_function
In mathematics and physics, a scalar field is a function associating a single number to each point in a region of space – possibly physical space. The scalar may either be a pure mathematical number (dimensionless) or a scalar physical quantity (with units). In a physical context, scalar fields are required to be independent of the choice of reference frame. That is, any two observers using the same units will agree on the value of the scalar field at the same absolute point in space (or spacetime) regardless of their respective points of origin. Examples used in physics include the temperature distribution throughout space, the pressure distribution in a fluid, and spin-zero quantum fields, such as the Higgs field. These fields are the subject of scalar field theory. == Definition == Mathematically, a scalar field on a region U is a real or complex-valued function or distribution on U. The region U may be a set in some Euclidean space, Minkowski space, or more generally a subset of a manifold, and it is typical in mathematics to impose further conditions on the field, such that it be continuous or often continuously differentiable to some order. A scalar field is a tensor field of order zero, and the term "scalar field" may be used to distinguish a function of this kind with a more general tensor field, density, or differential form. Physically, a scalar field is additionally distinguished by having units of measurement associated with it. In this context, a scalar field should also be independent of the coordinate system used to describe the physical system—that is, any two observers using the same units must agree on the numerical value of a scalar field at any given point of physical space. Scalar fields are contrasted with other physical quantities such as vector fields, which associate a vector to every point of a region, as well as tensor fields and spinor fields. More subtly, scalar fields are often contrasted with pseudoscalar fields. == Uses in physics == In physics, scalar fields often describe the potential energy associated with a particular force. The force is a vector field, which can be obtained as a factor of the gradient of the potential energy scalar field. Examples include: Potential fields, such as the Newtonian gravitational potential, or the electric potential in electrostatics, are scalar fields which describe the more familiar forces. A temperature, humidity, or pressure field, such as those used in meteorology. === Examples in quantum theory and relativity === In quantum field theory, a scalar field is associated with spin-0 particles. The scalar field may be real or complex valued. Complex scalar fields represent charged particles. These include the Higgs field of the Standard Model, as well as the charged pions mediating the strong nuclear interaction. In the Standard Model of elementary particles, a scalar Higgs field is used to give the leptons and massive vector bosons their mass, via a combination of the Yukawa interaction and the spontaneous symmetry breaking. This mechanism is known as the Higgs mechanism. A candidate for the Higgs boson was first detected at CERN in 2012. In scalar theories of gravitation scalar fields are used to describe the gravitational field. Scalar–tensor theories represent the gravitational interaction through both a tensor and a scalar. Such attempts are for example the Jordan theory as a generalization of the Kaluza–Klein theory and the Brans–Dicke theory. 
Scalar fields like the Higgs field can be found within scalar–tensor theories, using the Higgs field of the Standard Model as the scalar field. This field interacts gravitationally and Yukawa-like (short-ranged) with the particles that get mass through it. Scalar fields are found within superstring theories as dilaton fields, breaking the conformal symmetry of the string, though balancing the quantum anomalies of this tensor. Scalar fields are hypothesized to have caused the highly accelerated expansion of the early universe (inflation), helping to solve the horizon problem and giving a hypothetical reason for the non-vanishing cosmological constant of cosmology. Massless (i.e. long-ranged) scalar fields in this context are known as inflatons. Massive (i.e. short-ranged) scalar fields are also proposed, using for example Higgs-like fields. == Other kinds of fields == Vector fields, which associate a vector to every point in space. Examples of vector fields include the air flow (wind) in meteorology. Tensor fields, which associate a tensor to every point in space. For example, in general relativity gravitation is associated with the tensor field called the Einstein tensor. In Kaluza–Klein theory, spacetime is extended to five dimensions and its Riemann curvature tensor can be separated out into ordinary four-dimensional gravitation plus an extra set, which is equivalent to Maxwell's equations for the electromagnetic field, plus an extra scalar field known as the "dilaton". (The dilaton scalar is also found among the massless bosonic fields in string theory.) == See also == Scalar field theory Vector boson Vector-valued function == References ==
Wikipedia/Scalar-valued_function
A view model or viewpoints framework in systems engineering, software engineering, and enterprise engineering is a framework which defines a coherent set of views to be used in the construction of a system architecture, software architecture, or enterprise architecture. A view is a representation of the whole system from the perspective of a related set of concerns. Since the early 1990s there have been a number of efforts to prescribe approaches for describing and analyzing system architectures. A result of these efforts has been the definition of a set of views (or viewpoints). They are sometimes referred to as architecture frameworks or enterprise architecture frameworks, but are usually called "view models". Usually a view is a work product that presents specific architecture data for a given system. However, the same term is sometimes used to refer to a view definition, including the particular viewpoint and the corresponding guidance that defines each concrete view. The term view model is related to view definitions. == Overview == The purpose of views and viewpoints is to enable humans to comprehend very complex systems, to organize the elements of the problem and the solution around domains of expertise and to separate concerns. In the engineering of physically intensive systems, viewpoints often correspond to capabilities and responsibilities within the engineering organization. Most complex system specifications are so extensive that no single individual can fully comprehend all aspects of the specifications. Furthermore, we all have different interests in a given system and different reasons for examining the system's specifications. A business executive will ask different questions of a system make-up than would a system implementer. The concept of viewpoints framework, therefore, is to provide separate viewpoints into the specification of a given complex system in order to facilitate communication with the stakeholders. Each viewpoint satisfies an audience with interest in a particular set of aspects of the system. Each viewpoint may use a specific viewpoint language that optimizes the vocabulary and presentation for the audience of that viewpoint. Viewpoint modeling has become an effective approach for dealing with the inherent complexity of large distributed systems. Architecture description practices, as described in IEEE Std 1471-2000, utilize multiple views to address several areas of concern, each one focusing on a specific aspect of the system. Examples of architecture frameworks using multiple views include Kruchten's "4+1" view model, the Zachman Framework, TOGAF, DoDAF, and RM-ODP. == History == In the 1970s, methods began to appear in software engineering for modeling with multiple views. Douglas T. Ross and K.E. Schoman in 1977 introduced the constructs context, viewpoint, and vantage point to organize the modeling process in systems requirements definition. According to Ross and Schoman, a viewpoint "makes clear what aspects are considered relevant to achieving ... the overall purpose [of the model]" and determines "How do we look at [a subject being modelled]?" As examples of viewpoints, the paper offers: Technical, Operational and Economic viewpoints. In 1992, Anthony Finkelstein and others published an influential paper on viewpoints. In that work: "A viewpoint can be thought of as a combination of the idea of an “actor”, “knowledge source”, “role” or “agent” in the development process and the idea of a “view” or “perspective” which an actor maintains."
An important idea in this paper was to distinguish "a representation style, the scheme and notation by which the viewpoint expresses what it can see" and "a specification, the statements expressed in the viewpoint's style describing particular domains". Subsequent work, such as IEEE 1471, preserved this distinction by utilizing two separate terms: viewpoint and view, respectively. Since the early 1990s there have been a number of efforts to codify approaches for describing and analyzing system architectures. These are often termed architecture frameworks or sometimes viewpoint sets. Many of these have been funded by the United States Department of Defense, but some have sprung from international or national efforts in ISO or the IEEE. Among these, the IEEE Recommended Practice for Architectural Description of Software-Intensive Systems (IEEE Std 1471-2000) established useful definitions of view, viewpoint, stakeholder and concern and guidelines for documenting a system architecture through the use of multiple views by applying viewpoints to address stakeholder concerns. The advantage of multiple views is that hidden requirements and stakeholder disagreements can be discovered more readily. However, studies show that in practice, the added complexity of reconciling multiple views can undermine this advantage. IEEE 1471 (now ISO/IEC/IEEE 42010:2011, Systems and software engineering — Architecture description) prescribes the contents of architecture descriptions and describes their creation and use under a number of scenarios, including precedented and unprecedented design, evolutionary design, and capture of design of existing systems. In all of these scenarios the overall process is the same: identify stakeholders, elicit concerns, identify a set of viewpoints to be used, and then apply these viewpoint specifications to develop the set of views relevant to the system of interest. Rather than define a particular set of viewpoints, the standard provides uniform mechanisms and requirements for architects and organizations to define their own viewpoints. In 1996 the ISO Reference Model for Open Distributed Processing (RM-ODP) was published to provide a useful framework for describing the architecture and design of large-scale distributed systems. == View model topics == === View === A view of a system is a representation of the system from the perspective of a viewpoint. This viewpoint on a system involves a perspective focusing on specific concerns regarding the system, which suppresses details to provide a simplified model having only those elements related to the concerns of the viewpoint. For example, a security viewpoint focuses on security concerns and a security viewpoint model contains those elements that are related to security from a more general model of a system. A view allows a user to examine a portion of a particular interest area. For example, an Information View may present all functions, organizations, technology, etc. that use a particular piece of information, while the Organizational View may present all functions, technology, and information of concern to a particular organization. In the Zachman Framework views comprise a group of work products whose development requires a particular analytical and technical expertise because they focus on either the “what,” “how,” “who,” “where,” “when,” or “why” of the enterprise. 
For example, Functional View work products answer the question “how is the mission carried out?” They are most easily developed by experts in functional decomposition using process and activity modeling. They show the enterprise from the point of view of functions. They also may show organizational and information components, but only as they relate to functions. === Viewpoints === In systems engineering, a viewpoint is a partitioning or restriction of concerns in a system. Adopting a viewpoint makes it possible to address the issues within those concerns separately. A good selection of viewpoints also partitions the design of the system into specific areas of expertise. Viewpoints provide the conventions, rules, and languages for constructing, presenting and analysing views. In ISO/IEC 42010:2007 (IEEE-Std-1471-2000) a viewpoint is a specification for an individual view. A view is a representation of a whole system from the perspective of a viewpoint. A view may consist of one or more architectural models. Each such architectural model is developed using the methods established by its associated architectural system, as well as for the system as a whole. === Modeling perspectives === Modeling perspectives are a set of different ways to represent pre-selected aspects of a system. Each perspective has a different focus, conceptualization, dedication and visualization of what the model is representing. In information systems, the traditional way to divide modeling perspectives is to distinguish the structural, functional and behavioral/processual perspectives. This, together with rule, object, communication and actor and role perspectives, is one way of classifying modeling approaches. === Viewpoint model === In any given viewpoint, it is possible to make a model of the system that contains only the objects that are visible from that viewpoint, but also captures all of the objects, relationships and constraints that are present in the system and relevant to that viewpoint. Such a model is said to be a viewpoint model, or a view of the system from that viewpoint. A given view is a specification for the system at a particular level of abstraction from a given viewpoint. Different levels of abstraction contain different levels of detail. Higher-level views allow the engineer to fashion and comprehend the whole design and identify and resolve problems in the large. Lower-level views allow the engineer to concentrate on a part of the design and develop the detailed specifications. In the system itself, however, all of the specifications appearing in the various viewpoint models must be addressed in the realized components of the system. And the specifications for any given component may be drawn from many different viewpoints. On the other hand, the specifications induced by the distribution of functions over specific components and component interactions will typically reflect a different partitioning of concerns than that reflected in the original viewpoints. Thus additional viewpoints, addressing the concerns of the individual components and the bottom-up synthesis of the system, may also be useful. === Architecture description === An architecture description is a representation of a system architecture, at any time, in terms of its component parts, how those parts function, the rules and constraints under which those parts function, and how those parts relate to each other and to the environment. In an architecture description the architecture data is shared across several views and products.
At the data layer are the architecture data elements and their defining attributes and relationships. At the presentation layer are the products and views that support a visual means to communicate and understand the purpose of the architecture, what it describes, and the various architectural analyses performed. Products provide a way for visualizing architecture data as graphical, tabular, or textual representations. Views provide the ability to visualize architecture data that spans products, logically organizing the data for a specific or holistic perspective of the architecture. == Types of system view models == === Three-schema approach === The Three-schema approach for data modeling, introduced in 1977, can be considered one of the first view models. It is an approach to building information systems and systems information management that promotes the conceptual model as the key to achieving data integration. The Three schema approach defines three schemas and views:
External schema for user views
Conceptual schema integrates external schemata
Internal schema that defines physical storage structures
At the center, the conceptual schema defines the ontology of the concepts as the users think of them and talk about them. The physical schema describes the internal formats of the data stored in the database, and the external schema defines the view of the data presented to the application programs. The framework attempted to permit multiple data models to be used for external schemata. Over the years, the skill and interest in building information systems has grown tremendously. However, for the most part, the traditional approach to building systems has only focused on defining data from two distinct views, the "user view" and the "computer view". From the user view, which will be referred to as the “external schema,” the definition of data is in the context of reports and screens designed to aid individuals in doing their specific jobs. The required structure of data from a usage view changes with the business environment and the individual preferences of the user. From the computer view, which will be referred to as the “internal schema,” data is defined in terms of file structures for storage and retrieval. The required structure of data for computer storage depends upon the specific computer technology employed and the need for efficient processing of data. === 4+1 view model of architecture === 4+1 is a view model designed by Philippe Kruchten in 1995 for describing the architecture of software-intensive systems, based on the use of multiple, concurrent views. The views are used to describe the system from the viewpoint of different stakeholders, such as end-users, developers and project managers. The four views of the model are the logical, development, process, and physical views:
Logical view: is concerned with the functionality that the system provides to end-users.
Development view: illustrates a system from a programmer's perspective and is concerned with software management.
Process view: deals with the dynamic aspect of the system, explains the system processes and how they communicate, and focuses on the runtime behavior of the system.
Physical view: depicts the system from a system engineer's point of view. It is concerned with the topology of software components on the physical layer, as well as communication between these components.
In addition, selected use cases or scenarios are utilized to illustrate the architecture.
Hence the model contains 4+1 views. == Types of enterprise architecture views == An enterprise architecture framework defines how to organize the structure and views associated with an enterprise architecture. Because the discipline of Enterprise Architecture and Engineering is so broad, and because enterprises can be large and complex, the models associated with the discipline also tend to be large and complex. To manage this scale and complexity, an architecture framework provides tools and methods that can bring the task into focus and allow valuable artifacts to be produced when they are most needed. Architecture frameworks are commonly used in information technology and information system governance. An organization may wish to mandate that certain models be produced before a system design can be approved. Similarly, it may wish to specify that certain views be used in the documentation of procured systems – the U.S. Department of Defense stipulates that specific DoDAF views be provided by equipment suppliers for capital projects above a certain value. === Zachman Framework === The Zachman Framework, originally conceived by John Zachman at IBM in 1987, is a framework for enterprise architecture, which provides a formal and highly structured way of viewing and defining an enterprise. The Framework is used for organizing architectural "artifacts" in a way that takes into account both who the artifact targets (for example, business owner and builder) and what particular issue (for example, data and functionality) is being addressed. These artifacts may include design documents, specifications, and models. The Zachman Framework is often referenced as a standard approach for expressing the basic elements of enterprise architecture. The Zachman Framework has been recognized by the U.S. Federal Government as having "... received worldwide acceptance as an integrated framework for managing change in enterprises and the systems that support them." === RM-ODP views === The International Organization for Standardization (ISO) Reference Model for Open Distributed Processing (RM-ODP) specifies a set of viewpoints for partitioning the design of a distributed software/hardware system. Since most integration problems arise in the design of such systems or in very analogous situations, these viewpoints may prove useful in separating integration concerns.
The RM-ODP viewpoints are:
the enterprise viewpoint, which is concerned with the purpose and behaviors of the system as it relates to the business objective and the business processes of the organization
the information viewpoint, which is concerned with the nature of the information handled by the system and constraints on the use and interpretation of that information
the computational viewpoint, which is concerned with the functional decomposition of the system into a set of components that exhibit specific behaviors and interact at interfaces
the engineering viewpoint, which is concerned with the mechanisms and functions required to support the interactions of the computational components
the technology viewpoint, which is concerned with the explicit choice of technologies for the implementation of the system, and particularly for the communications among the components
RM-ODP further defines a requirement for a design to contain specifications of consistency between viewpoints, including:
the use of enterprise objects and processes in defining information units
the use of enterprise objects and behaviors in specifying the behaviors of computational components, and the use of the information units in defining computational interfaces
the association of engineering choices with computational interfaces and behavior requirements
the satisfaction of information, computational and engineering requirements in the chosen technologies
=== DoDAF views === The Department of Defense Architecture Framework (DoDAF) defines a standard way to organize an enterprise architecture (EA) or systems architecture into complementary and consistent views. It is especially suited to large systems with complex integration and interoperability challenges, and is apparently unique in its use of "operational views" detailing the external customer's operating domain in which the developing system will operate. The DoDAF defines a set of products that act as mechanisms for visualizing, understanding, and assimilating the broad scope and complexities of an architecture description through graphic, tabular, or textual means. These products are organized under four views: the overarching All View (AV), the Operational View (OV), the Systems View (SV), and the Technical Standards View (TV). Each view depicts certain perspectives of an architecture as described below. Only a subset of the full DoDAF viewset is usually created for each system development. The figure represents the information that links the operational view, systems and services view, and technical standards view. The three views and their interrelationships – driven by common architecture data elements – provide the basis for deriving measures such as interoperability or performance, and for measuring the impact of the values of these metrics on operational mission and task effectiveness. === Federal Enterprise Architecture views === In the US Federal Enterprise Architecture, enterprise, segment, and solution architecture provide different business perspectives by varying the level of detail and addressing related but distinct concerns. Just as enterprises are themselves hierarchically organized, so are the different views provided by each type of architecture. The Federal Enterprise Architecture Practice Guidance (2006) has defined three types of architecture: enterprise architecture, segment architecture, and solution architecture.
By definition, Enterprise Architecture (EA) is fundamentally concerned with identifying common or shared assets – whether they are strategies, business processes, investments, data, systems, or technologies. EA is driven by strategy; it helps an agency identify whether its resources are properly aligned to the agency mission and strategic goals and objectives. From an investment perspective, EA is used to drive decisions about the IT investment portfolio as a whole. Consequently, the primary stakeholders of the EA are the senior managers and executives tasked with ensuring the agency fulfills its mission as effectively and efficiently as possible. By contrast, segment architecture defines a simple roadmap for a core mission area, business service, or enterprise service. Segment architecture is driven by business management and delivers products that improve the delivery of services to citizens and agency staff. From an investment perspective, segment architecture drives decisions for a business case or group of business cases supporting a core mission area or common or shared service. The primary stakeholders for segment architecture are business owners and managers. Segment architecture is related to EA through three principles: structure, reuse, and alignment. First, segment architecture inherits the framework used by the EA, although it may be extended and specialized to meet the specific needs of a core mission area or common or shared service. Second, segment architecture reuses important assets defined at the enterprise level, including data; common business processes and investments; and applications and technologies. Third, segment architecture aligns with elements defined at the enterprise level, such as business strategies, mandates, standards, and performance measures. === Nominal set of views === In their "Framework for Modeling Space Systems Architectures", Peter Shames and Joseph Skipper (2006) defined a "nominal set of views", derived from CCSDS RASDS, RM-ODP, ISO 10746 and compliant with IEEE 1471. This "set of views", as described below, is a listing of possible modeling viewpoints. Not all of these views may be used for any one project, and other views may be defined as necessary. Note that for some analyses, elements from multiple viewpoints may be combined into a new view, possibly using a layered representation. In a later presentation, this nominal set of views was presented as an Extended RASDS Semantic Information Model Derivation, where RASDS stands for Reference Architecture for Space Data Systems. Enterprise Viewpoint Organization view – Includes organizational elements and their structures and relationships. May include agreements, contracts, policies and organizational interactions. Requirements view – Describes the requirements, goals, and objectives that drive the system. Says what the system must be able to do. Scenario view – Describes the way that the system is intended to be used (see scenario planning). Includes user views and descriptions of how the system is expected to behave. Information viewpoint Metamodel view – An abstract view that defines information model elements and their structures and relationships. Defines the classes of data that are created and managed by the system and the data architecture. Information view – Describes the actual data and information as it is realized and manipulated within the system. Data elements are defined by the metamodel view and they are referred to by functional objects in other views.
Functional viewpoint Functional Dataflow view – An abstract view that describes the functional elements in the system, their interactions, behavior, provided services, constraints and data flows among them. Defines which functions the system is capable of performing, regardless of how these functions are actually implemented. Functional Control view – Describes the control flows and interactions among functional elements within the system. Includes overall system control interactions, interactions between control elements and sensor/effector elements, and management interactions. Physical viewpoint Data System view – Describes instruments, computers, and data storage components, their data system attributes and the communications connectors (buses, networks, point-to-point links) that are used in the system. Telecomm view – Describes the telecomm components (antenna, transceiver), their attributes and their connectors (RF or optical links). Navigation view – Describes the motion of the major elements within the system (trajectory, path, orbit), including their interaction with external elements and forces that are outside of the control of the system, but that must be modeled with it to understand system behavior (planets, asteroids, solar pressure, gravity). Structural view – Describes the structural components in the system (s/c bus, struts, panels, articulation), their physical attributes and connectors, along with the relevant structural aspects of other components (mass, stiffness, attachment). Thermal view – Describes the active and passive thermal components in the system (radiators, coolers, vents) and their connectors (physical and free-space radiation) and attributes, along with the thermal properties of other components (e.g. an antenna as a sun shade). Power view – Describes the active and passive power components in the system (solar panels, batteries, RTGs) and their connectors, along with the power properties of other components (data system and propulsion elements as power sinks and structural panels as grounding plane). Propulsion view – Describes the active and passive propulsion components in the system (thrusters, gyros, motors, wheels) and their connectors, along with the propulsive properties of other components. Engineering viewpoint Allocation view – Describes the allocation of functional objects to engineered physical and computational components within the system; permits analysis of performance and is used to verify satisfaction of requirements. Software view – Describes the software engineering aspects of the system: software design and implementation of functionality within software components, selection of the languages and libraries to be used, definition of APIs, and the engineering of abstract functional objects into tangible software elements. Some functional elements, described using a software language, may actually be implemented as hardware (FPGA, ASIC). Hardware views – Describe the hardware engineering aspects of the system: hardware design, selection and implementation of all of the physical components to be assembled into the system. There may be many of these views, each specific to a different engineering discipline. Communications Protocol view – Describes the end-to-end design of the communications protocols and related data transport and data management services; shows the protocol stacks as they are implemented on each of the physical components of the system.
Risk view – Describes the risks associated with the system design, processes, and technologies, and assigns additional risk assessment attributes to other elements described in the architecture. Control Engineering view – Analyzes the system from the perspective of its controllability, including the allocation of elements into the system under control and the control system. Integration and Test view – Looks at the system from the perspective of what must be done to assemble, integrate, and test the system, its sub-systems, and assemblies. Includes verification of proper functionality, driven by scenarios, in satisfaction of requirements. IV&V view – Independent validation and verification of the functionality and proper operation of the system in satisfaction of requirements: does the system as designed and developed meet its goals and objectives? Technology viewpoint Standards view – Defines the standards to be adopted during design of the system (e.g. communication protocols, radiation tolerance, soldering). These are essentially constraints on the design and implementation processes. Infrastructure view – Defines the infrastructure elements that are to support the engineering, design, and fabrication process. May include data system elements (design repositories, frameworks, tools, networks) and hardware elements (chip fabrication, thermal vacuum facility, machine shop, RF testing lab). Technology Development & Assessment view – Includes descriptions of technology development programs designed to produce algorithms or components that may be included in a system development project. Includes evaluation of the properties of selected hardware and software components to determine whether they are at a sufficient state of maturity to be adopted for the mission being designed. In contrast to the previously listed view models, this "nominal set of views" lists a whole range of views, making it possible to develop powerful and extensible approaches for describing a general class of software-intensive system architectures. == See also == Enterprise architecture framework Organizational architecture Software development methodology Treasury Enterprise Architecture Framework TOGAF Zachman Framework Ontology (information science) Knowledge acquisition == References == Attribution: This article incorporates public domain material from the National Institute of Standards and Technology. == External links == Media related to View models at Wikimedia Commons
Wikipedia/View_model
In mathematics and computer science, an algorithm is a finite sequence of mathematically rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes (referred to as automated decision-making) and deduce valid inferences (referred to as automated reasoning). In contrast, a heuristic is an approach to solving problems without well-defined correct or optimal results. For example, although social media recommender systems are commonly called "algorithms", they actually rely on heuristics as there is no truly "correct" recommendation. As an effective method, an algorithm can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. == Etymology == Around 825 AD, the Persian scientist and polymath Muḥammad ibn Mūsā al-Khwārizmī wrote kitāb al-ḥisāb al-hindī ("Book of Indian computation") and kitab al-jam' wa'l-tafriq al-ḥisāb al-hindī ("Addition and subtraction in Indian arithmetic"). In the early 12th century, Latin translations of these texts involving the Hindu–Arabic numeral system and arithmetic appeared, for example Liber Alghoarismi de practica arismetrice, attributed to John of Seville, and Liber Algorismi de numero Indorum, attributed to Adelard of Bath. Here, alghoarismi or algorismi is the Latinization of Al-Khwarizmi's name; the text starts with the phrase Dixit Algorismi, or "Thus spoke Al-Khwarizmi". The word algorism in English came to mean the use of place-value notation in calculations; it occurs in the Ancrene Wisse from circa 1225. By the time Geoffrey Chaucer wrote The Canterbury Tales in the late 14th century, he used a variant of the same word in describing augrym stones, stones used for place-value calculation. In the 15th century, under the influence of the Greek word ἀριθμός (arithmos, "number"; cf. "arithmetic"), the Latin word was altered to algorithmus. By 1596, this form of the word was used in English, as algorithm, by Thomas Hood. == Definition == One informal definition is "a set of rules that precisely defines a sequence of operations", which would include all computer programs (including programs that do not perform numeric calculations), and any prescribed bureaucratic procedure or cook-book recipe. In general, a program is an algorithm only if it stops eventually—even though infinite loops may sometimes prove desirable. Boolos and Jeffrey (1974, 1999) define an algorithm to be an explicit set of instructions for determining an output that can be followed by a computing machine or a human who could only carry out specific elementary operations on symbols. Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain performing arithmetic or an insect looking for food), in an electrical circuit, or a mechanical device.
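To make these notions concrete, the following is a minimal sketch of Euclid's algorithm for the greatest common divisor, written here in Python for illustration (the code is an addition, not part of the article's sources): starting from an initial input, it proceeds through a finite number of well-defined states and terminates with an output.

def gcd(a: int, b: int) -> int:
    # Each loop iteration is one well-defined state transition;
    # the loop terminates because the remainder strictly decreases.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # prints 21

Because every step is an elementary operation on the current state, the same procedure can be followed by a machine or carried out by hand.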
== History == === Ancient algorithms === Step-by-step procedures for solving mathematical problems have been recorded since antiquity. This includes Babylonian mathematics (around 2500 BC), Egyptian mathematics (around 1550 BC), Indian mathematics (around 800 BC and later), the Ifa Oracle (around 500 BC), Greek mathematics (around 240 BC), Chinese mathematics (around 200 BC and later), and Arabic mathematics (around 800 AD). The earliest evidence of algorithms is found in ancient Mesopotamian mathematics. A Sumerian clay tablet found in Shuruppak near Baghdad and dated to c. 2500 BC describes the earliest division algorithm. During the Hammurabi dynasty, c. 1800 – c. 1600 BC, Babylonian clay tablets described algorithms for computing formulas. Algorithms were also used in Babylonian astronomy. Babylonian clay tablets describe and employ algorithmic procedures to compute the time and place of significant astronomical events. Algorithms for arithmetic are also found in ancient Egyptian mathematics, dating back to the Rhind Mathematical Papyrus, c. 1550 BC. Algorithms were later used in ancient Hellenistic mathematics. Two examples are the Sieve of Eratosthenes, which was described in the Introduction to Arithmetic by Nicomachus, and the Euclidean algorithm, which was first described in Euclid's Elements (c. 300 BC). Examples of ancient Indian mathematics included the Shulba Sutras, the Kerala School, and the Brāhmasphuṭasiddhānta. The first cryptographic algorithm for deciphering encrypted code was developed by Al-Kindi, a 9th-century Arab mathematician, in A Manuscript On Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest codebreaking algorithm. === Computers === ==== Weight-driven clocks ==== Bolter credits the invention of the weight-driven clock as "the key invention [of Europe in the Middle Ages]," specifically the verge escapement mechanism producing the tick and tock of a mechanical clock. "The accurate automatic machine" led immediately to "mechanical automata" in the 13th century and "computational machines"—the difference and analytical engines of Charles Babbage and Ada Lovelace in the mid-19th century. Lovelace designed the first algorithm intended for processing on a computer, Babbage's analytical engine, which is the first device considered a real Turing-complete computer instead of just a calculator. Although the full implementation of Babbage's second device was not realized for decades after her lifetime, Lovelace has been called "history's first programmer". ==== Electromechanical relay ==== Bell and Newell (1971) write that the Jacquard loom, a precursor to Hollerith cards (punch cards), and "telephone switching technologies" led to the development of the first computers. By the mid-19th century, the telegraph, the precursor of the telephone, was in use throughout the world. By the late 19th century, the ticker tape (c. 1870s) was in use, as were Hollerith cards (c. 1890). Then came the teleprinter (c. 1910) with its punched-paper use of Baudot code on tape. Telephone-switching networks of electromechanical relays were invented in 1835. These led to the invention of the digital adding device by George Stibitz in 1937. While working in Bell Laboratories, he observed the "burdensome" use of mechanical calculators with gears. "He went home one evening in 1937 intending to test his idea... When the tinkering was over, Stibitz had constructed a binary adding device".
=== Formalization === In 1928, a partial formalization of the modern concept of algorithms began with attempts to solve the Entscheidungsproblem (decision problem) posed by David Hilbert. Later formalizations were framed as attempts to define "effective calculability" or "effective method". Those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939. == Representations == Algorithms can be expressed in many kinds of notation, including natural languages, pseudocode, flowcharts, drakon-charts, programming languages or control tables (processed by interpreters). Natural language expressions of algorithms tend to be verbose and ambiguous and are rarely used for complex or technical algorithms. Pseudocode, flowcharts, drakon-charts, and control tables are structured expressions of algorithms that avoid common ambiguities of natural language. Programming languages are primarily for expressing algorithms in a computer-executable form but are also used to define or document algorithms. === Turing machines === There are many possible representations, and Turing machine programs can be expressed as a sequence of machine tables (see finite-state machine, state-transition table, and control table for more), as flowcharts and drakon-charts (see state diagram for more), as a form of rudimentary machine code or assembly code called "sets of quadruples", and more. Algorithm representations can also be classified into three accepted levels of Turing machine description: high-level description, implementation description, and formal description. A high-level description describes the qualities of the algorithm itself, ignoring how it is implemented on the Turing machine. An implementation description describes the general manner in which the machine moves its head and stores data to carry out the algorithm, but does not give exact states. In the most detail, a formal description gives the exact state table and list of transitions of the Turing machine. === Flowchart representation === The graphical aid called a flowchart offers a way to describe and document an algorithm (and a computer program corresponding to it). It has four primary symbols: arrows showing program flow, rectangles (SEQUENCE, GOTO), diamonds (IF-THEN-ELSE), and dots (OR-tie). Sub-structures can "nest" in rectangles, but only if a single exit occurs from the superstructure. == Algorithmic analysis == It is often important to know how much time, storage, or other cost an algorithm may require. Methods have been developed for the analysis of algorithms to obtain such quantitative answers (estimates); for example, an algorithm that adds up the elements of a list of n numbers would have a time requirement of O(n), using big O notation. The algorithm only needs to remember two values: the sum of all the elements so far, and its current position in the input list. If the space required to store the input numbers is not counted, it has a space requirement of O(1); otherwise O(n) is required. Different algorithms may complete the same task with a different set of instructions in less or more time, space, or 'effort' than others.
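To make the preceding analysis concrete, here is a minimal Python sketch of that summation algorithm (an illustration added here, not taken from the article's sources): it scans the list once, so its time requirement is O(n), while the only values it stores are the running sum and, implicitly, its position in the list, for an O(1) auxiliary space requirement.

def sum_list(numbers):
    total = 0          # the running sum: the only stored value besides the loop position
    for x in numbers:  # a single pass over the input: O(n) time
        total += x
    return total

print(sum_list([3, 1, 4, 1, 5]))  # prints 14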
For example, a binary search algorithm (with cost O(log n)) outperforms a sequential search (cost O(n)) when used for table lookups on sorted lists or arrays. === Formal versus empirical === The analysis and study of algorithms is a discipline of computer science. Algorithms are often studied abstractly, without referencing any specific programming language or implementation. Algorithm analysis resembles other mathematical disciplines as it focuses on the algorithm's properties, not implementation. Pseudocode is typical for analysis as it is a simple and general representation. Most algorithms are implemented on particular hardware/software platforms and their algorithmic efficiency is tested using real code. The efficiency of a particular algorithm may be insignificant for many "one-off" problems but it may be critical for algorithms designed for fast interactive, commercial, or long-life scientific usage. Scaling from small n to large n frequently exposes inefficient algorithms that are otherwise benign. Empirical testing is useful for uncovering unexpected interactions that affect performance. Benchmarks may be used to compare before/after potential improvements to an algorithm after program optimization. Empirical tests cannot replace formal analysis, though, and are non-trivial to perform fairly. === Execution efficiency === To illustrate the potential improvements possible even in well-established algorithms, a recent significant innovation relating to FFT algorithms (used heavily in the field of image processing) can decrease processing time up to 1,000 times for applications like medical imaging. In general, speed improvements depend on special properties of the problem, which are very common in practical applications. Speedups of this magnitude enable computing devices that make extensive use of image processing (like digital cameras and medical equipment) to consume less power. === Best case and worst case === The best case of an algorithm refers to the scenario or input for which the algorithm or data structure takes the least time and resources to complete its tasks. The worst case of an algorithm is the case that causes the algorithm or data structure to consume the maximum period of time and computational resources. == Design == Algorithm design is a method or mathematical process for problem-solving and engineering algorithms. The design of algorithms is part of many solution theories, such as divide-and-conquer or dynamic programming within operations research. Techniques for designing and implementing algorithm designs are also called algorithm design patterns, with examples including the template method pattern and the decorator pattern. One of the most important aspects of algorithm design is resource (run-time, memory usage) efficiency; the big O notation is used to describe, e.g., an algorithm's run-time growth as the size of its input increases. === Structured programming === Per the Church–Turing thesis, any algorithm can be computed by any Turing-complete model. Turing completeness only requires four instruction types—conditional GOTO, unconditional GOTO, assignment, HALT.
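To illustrate how little is needed, the following Python sketch interprets programs written with only those four instruction types. The tuple-based instruction encoding is an invention for this example, not a standard notation; it is a sketch of the idea, not a definitive construction.

def run(program, env):
    # Interpret a list of instructions of four types only:
    # ("set", var, f), ("ifgoto", cond, target), ("goto", target), ("halt",)
    pc = 0  # program counter
    while True:
        op = program[pc]
        if op[0] == "set":        # assignment
            _, var, f = op
            env[var] = f(env)
            pc += 1
        elif op[0] == "ifgoto":   # conditional GOTO
            _, cond, target = op
            pc = target if cond(env) else pc + 1
        elif op[0] == "goto":     # unconditional GOTO
            pc = op[1]
        elif op[0] == "halt":     # HALT
            return env

# Example: compute 5! using only these four instruction types.
result = run(
    [
        ("set", "acc", lambda e: 1),                  # 0: acc := 1
        ("ifgoto", lambda e: e["n"] <= 1, 5),         # 1: if n <= 1 goto 5
        ("set", "acc", lambda e: e["acc"] * e["n"]),  # 2: acc := acc * n
        ("set", "n", lambda e: e["n"] - 1),           # 3: n := n - 1
        ("goto", 1),                                  # 4: goto 1
        ("halt",),                                    # 5: stop
    ],
    {"n": 5},
)
print(result["acc"])  # prints 120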
However, Kemeny and Kurtz observe that, while "undisciplined" use of unconditional GOTOs and conditional IF-THEN GOTOs can result in "spaghetti code", a programmer can write structured programs using only these instructions; on the other hand, "it is also possible, and not too hard, to write badly structured programs in a structured language". Tausworthe augments the three Böhm-Jacopini canonical structures, SEQUENCE, IF-THEN-ELSE, and WHILE-DO, with two more: DO-WHILE and CASE. An additional benefit of a structured program is that it lends itself to proofs of correctness using mathematical induction. == Legal status == By themselves, algorithms are not usually patentable. In the United States, a claim consisting solely of simple manipulations of abstract concepts, numbers, or signals does not constitute "processes" (USPTO 2006), so algorithms are not patentable (as in Gottschalk v. Benson). However, practical applications of algorithms are sometimes patentable. For example, in Diamond v. Diehr, the application of a simple feedback algorithm to aid in the curing of synthetic rubber was deemed patentable. The patenting of software is controversial, and there are criticized patents involving algorithms, especially data compression algorithms, such as Unisys's LZW patent. Additionally, some cryptographic algorithms have export restrictions (see export of cryptography). == Classification == === By implementation === Recursion A recursive algorithm invokes itself repeatedly until meeting a termination condition and is a common functional programming method. Iterative algorithms use repetitions such as loops or data structures like stacks to solve problems. Problems may be suited for one implementation or the other. The Tower of Hanoi is a puzzle commonly solved using a recursive implementation. Every recursive version has an equivalent (but possibly more or less complex) iterative version, and vice versa. Serial, parallel or distributed Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time. Serial algorithms are designed for these environments, unlike parallel or distributed algorithms. Parallel algorithms take advantage of computer architectures where multiple processors can work on a problem at the same time. Distributed algorithms use multiple machines connected via a computer network. Parallel and distributed algorithms divide the problem into subproblems and collect the results back together. Resource consumption in these algorithms is not only processor cycles on each processor but also the communication overhead between the processors. Some sorting algorithms can be parallelized efficiently, but their communication overhead is expensive. Iterative algorithms are generally parallelizable, but some problems have no parallel algorithms and are called inherently serial problems. Deterministic or non-deterministic Deterministic algorithms solve the problem with exact decisions at every step, whereas non-deterministic algorithms solve problems via guessing. Guesses are typically made more accurate through the use of heuristics. Exact or approximate While many algorithms reach an exact solution, approximation algorithms seek an approximation that is close to the true solution. Such algorithms have practical value for many hard problems. An example is the Knapsack problem, where there is a set of items and the goal is to pack the knapsack to get the maximum total value. Each item has some weight and some value.
The total weight that can be carried is no more than some fixed number X. So, the solution must consider the weights of items as well as their value. Quantum algorithm Quantum algorithms run on a realistic model of quantum computation. The term is usually used for those algorithms that seem inherently quantum or use some essential feature of quantum computing such as quantum superposition or quantum entanglement. === By design paradigm === Another way of classifying algorithms is by their design methodology or paradigm. Some common paradigms are: Brute-force or exhaustive search Brute force is a problem-solving method of systematically trying every possible option until the optimal solution is found. This approach can be very time-consuming, testing every possible combination of variables. It is often used when other methods are unavailable or too complex. Brute force can solve a variety of problems, including finding the shortest path between two points and cracking passwords. Divide and conquer A divide-and-conquer algorithm repeatedly reduces a problem to one or more smaller instances of itself (usually recursively) until the instances are small enough to solve easily. Merge sorting is an example of divide and conquer, where an unordered list is repeatedly split into smaller lists, which are sorted in the same way and then merged. A simpler variant of divide and conquer, called prune and search or decrease and conquer, solves one smaller instance of itself and does not require a merge step. An example of a prune and search algorithm is the binary search algorithm. Search and enumeration Many problems (such as playing chess) can be modelled as problems on graphs. A graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes search algorithms, branch and bound enumeration, and backtracking. Randomized algorithm Such algorithms make some choices randomly (or pseudo-randomly). They find approximate solutions when finding exact solutions may be impractical (see heuristic method below). For some problems, the fastest approximations must involve some randomness. Whether randomized algorithms with polynomial time complexity can be the fastest algorithms for some problems is an open question known as the P versus NP problem. There are two large classes of such algorithms: Monte Carlo algorithms return a correct answer with high probability; for example, RP is the subclass of these that run in polynomial time. Las Vegas algorithms always return the correct answer, but their running time is only probabilistically bounded, e.g. ZPP. Reduction of complexity This technique transforms difficult problems into better-known problems solvable with (hopefully) asymptotically optimal algorithms. The goal is to find a reducing algorithm whose complexity is not dominated by the resulting reduced algorithms. For example, one selection algorithm finds the median of an unsorted list by first sorting the list (the expensive portion), and then pulling out the middle element in the sorted list (the cheap portion). This technique is also known as transform and conquer. Backtracking In this approach, multiple solutions are built incrementally and abandoned when it is determined that they cannot lead to a valid full solution.
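As an illustration of the backtracking approach just described, here is a minimal Python sketch (an invented example, not from the article's sources) that searches for a subset of non-negative numbers summing to a target, abandoning each partial solution as soon as it can no longer lead to a valid full solution.

def subset_sum(numbers, target, partial=()):
    # Backtracking: extend the partial solution one element at a time,
    # pruning any branch whose sum already exceeds the target
    # (valid because the inputs are assumed non-negative).
    s = sum(partial)
    if s == target:
        return list(partial)       # a valid full solution
    if s > target or not numbers:  # dead end: abandon this branch
        return None
    head, rest = numbers[0], numbers[1:]
    # Try including the head element first, then try skipping it.
    return (subset_sum(rest, target, partial + (head,))
            or subset_sum(rest, target, partial))

print(subset_sum([3, 9, 8, 4, 5, 7], 15))  # prints [3, 8, 4]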
=== Optimization problems === For optimization problems there is a more specific classification of algorithms; an algorithm for such problems may fall into one or more of the general categories described above as well as into one of the following: Linear programming When searching for optimal solutions to a linear function bound by linear equality and inequality constraints, the constraints can be used directly to produce optimal solutions. There are algorithms that can solve any problem in this category, such as the popular simplex algorithm. Problems that can be solved with linear programming include the maximum flow problem for directed graphs. If a problem also requires that any of the unknowns be integers, then it is classified in integer programming. A linear programming algorithm can solve such a problem if it can be proved that all restrictions for integer values are superficial, i.e., the solutions satisfy these restrictions anyway. In the general case, a specialized algorithm or an algorithm that finds approximate solutions is used, depending on the difficulty of the problem. Dynamic programming When a problem shows optimal substructures—meaning the optimal solution can be constructed from optimal solutions to subproblems—and overlapping subproblems, meaning the same subproblems are used to solve many different problem instances, a quicker approach called dynamic programming avoids recomputing solutions. For example, in the Floyd–Warshall algorithm, the shortest path between a start and goal vertex in a weighted graph can be found using the shortest paths to the goal from all adjacent vertices. Dynamic programming and memoization go together. Unlike divide and conquer, dynamic programming subproblems often overlap. The difference between dynamic programming and simple recursion is the caching or memoization of recursive calls. When subproblems are independent and do not repeat, memoization does not help; hence dynamic programming is not applicable to all complex problems. Using memoization, dynamic programming reduces the complexity of many problems from exponential to polynomial. The greedy method Greedy algorithms, similarly to dynamic programming, work by examining substructures, in this case not of the problem but of a given solution. Such algorithms start with some solution and improve it by making small modifications. For some problems, they always find the optimal solution, but for others they may stop at local optima. The most popular use of greedy algorithms is finding minimal spanning trees of graphs without negative cycles. Huffman tree construction and the algorithms of Kruskal, Prim, and Sollin are greedy algorithms that can solve this optimization problem. The heuristic method In optimization problems, heuristic algorithms find solutions close to the optimal solution when finding the optimal solution is impractical. These algorithms get closer and closer to the optimal solution as they progress. In principle, if run for an infinite amount of time, they will find the optimal solution. They can ideally find a solution very close to the optimal solution in a relatively short time. These algorithms include local search, tabu search, simulated annealing, and genetic algorithms. Some, like simulated annealing, are non-deterministic algorithms while others, like tabu search, are deterministic. When a bound on the error of the non-optimal solution is known, the algorithm is further categorized as an approximation algorithm.
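To make the caching of recursive calls concrete, here is a minimal Python sketch (an illustrative example added here, not taken from the article's sources) that memoizes the overlapping subproblems of the naive Fibonacci recursion, reducing its complexity from exponential to linear.

from functools import lru_cache

@lru_cache(maxsize=None)  # memoization: cache the result of each subproblem
def fib(n: int) -> int:
    if n < 2:
        return n
    # fib(n - 1) and fib(n - 2) share almost all of their subproblems,
    # so caching collapses the exponential recursion tree into O(n) distinct calls.
    return fib(n - 1) + fib(n - 2)

print(fib(90))  # prints 2880067194370816120

Without the cache, the same call would repeat an astronomical number of overlapping subcomputations; with it, each fib(k) is computed exactly once.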
== Examples == One of the simplest algorithms finds the largest number in a list of numbers in random order. Finding the solution requires looking at every number in the list. From this follows a simple algorithm, which can be described in plain English as: High-level description: If a set of numbers is empty, then there is no highest number. Assume the first number in the set is the largest. For each remaining number in the set: if this number is greater than the current largest, it becomes the new largest. When there are no unchecked numbers left in the set, consider the current largest number to be the largest in the set. (Quasi-)formal description: Written in prose but much closer to the high-level language of a computer program, the following is the more formal coding of the algorithm in pseudocode or pidgin code:
Algorithm LargestNumber
Input: A list of numbers L.
Output: The largest number in the list L.
if L.size = 0 return null
largest ← L[0]
for each item in L, do
    if item > largest, then
        largest ← item
return largest
== See also == == Notes == == Bibliography == == Further reading == == External links == "Algorithm". Encyclopedia of Mathematics. EMS Press. 2001 [1994]. Weisstein, Eric W. "Algorithm". MathWorld. Dictionary of Algorithms and Data Structures – National Institute of Standards and Technology Algorithm repositories The Stony Brook Algorithm Repository – State University of New York at Stony Brook Collected Algorithms of the ACM – Association for Computing Machinery The Stanford GraphBase Archived December 6, 2015, at the Wayback Machine – Stanford University
Wikipedia/Algorithmics
The systems modeling language (SysML) is a general-purpose modeling language for systems engineering applications. It supports the specification, analysis, design, verification and validation of a broad range of systems and systems-of-systems. SysML was originally developed by an open source specification project, and includes an open source license for distribution and use. SysML is defined as an extension of a subset of the Unified Modeling Language (UML) using UML's profile mechanism. The language's extensions were designed to support systems engineering activities. == Contrast with UML == SysML offers several systems engineering specific improvements over UML, which was developed as a software modeling language. These improvements include the following: SysML's diagrams express systems engineering concepts better, owing to the removal of UML's software-centric restrictions, and SysML adds two new diagram types, requirement and parametric diagrams. The former can be used for requirements engineering; the latter can be used for performance analysis and quantitative analysis. Consequent to these enhancements, SysML is able to model a wide range of systems, which may include hardware, software, information, processes, personnel, and facilities. SysML is a comparatively small language that is easier to learn and apply. Since SysML removes many of UML's software-centric constructs, the overall language is smaller both in diagram types and total constructs. SysML allocation tables support common kinds of allocations. Whereas UML provides only limited support for tabular notations, SysML furnishes flexible allocation tables that support requirements allocation, functional allocation, and structural allocation. This capability facilitates automated verification and validation (V&V) and gap analysis. SysML model management constructs support models, views, and viewpoints. These constructs extend UML's capabilities and are architecturally aligned with IEEE-Std-1471-2000 (IEEE Recommended Practice for Architectural Description of Software Intensive Systems). SysML reuses seven of UML 2's fourteen diagram types, and adds two diagrams (requirement and parametric diagrams) for a total of nine diagram types. SysML also supports allocation tables, a tabular format that can be dynamically derived from SysML allocation relationships. A table which compares SysML and UML 2 diagrams is available in the SysML FAQ. Consider modeling an automotive system: with SysML one can use Requirement diagrams to efficiently capture functional, performance, and interface requirements, whereas with UML one is subject to the limitations of use case diagrams to define high-level functional requirements. Likewise, with SysML one can use Parametric diagrams to precisely define performance and quantitative constraints like maximum acceleration, minimum curb weight, and total air conditioning capacity. UML provides no straightforward mechanism to capture this sort of essential performance and quantitative information. Concerning the rest of the automotive system, enhanced activity diagrams and state machine diagrams can be used to specify the embedded software control logic and information flows for the on-board automotive computers. Other SysML structural and behavioral diagrams can be used to model factories that build the automobiles, as well as the interfaces between the organizations that work in the factories.
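To suggest what this kind of requirement traceability looks like when tooled, here is a small hypothetical Python sketch; SysML itself is a graphical language, and the classes, names, and figures below are inventions for illustration only, loosely mimicking an allocation table of satisfy relationships.

from dataclasses import dataclass, field

@dataclass
class Requirement:
    rid: str
    text: str

@dataclass
class Block:
    name: str
    satisfies: list = field(default_factory=list)  # requirements this block satisfies

# Hypothetical automotive requirements mirroring the prose above.
r1 = Requirement("R1", "Maximum acceleration: 0-100 km/h in under 8 s")
r2 = Requirement("R2", "Total air conditioning capacity of at least 5 kW")
powertrain = Block("Powertrain", satisfies=[r1])
hvac = Block("HVAC", satisfies=[r2])

# Derive a simple allocation table: requirement -> satisfying block.
for block in (powertrain, hvac):
    for req in block.satisfies:
        print(f"{req.rid} is satisfied by {block.name}: {req.text}")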
== History == The SysML initiative originated in a January 2001 decision by the International Council on Systems Engineering (INCOSE) Model Driven Systems Design workgroup to customize the UML for systems engineering applications. Following this decision, INCOSE and the Object Management Group (OMG), which maintains the UML specification, jointly chartered the OMG Systems Engineering Domain Special Interest Group (SE DSIG) in July 2001. The SE DSIG, with support from INCOSE and the ISO AP 233 workgroup, developed the requirements for the modeling language, which were subsequently issued by the OMG as part of the UML for Systems Engineering Request for Proposal (UML for SE RFP; OMG document ad/03-03-41) in March 2003. In 2003, David Oliver and Sanford Friedenthal of INCOSE requested that Cris Kobryn, who had successfully led the UML 1 and UML 2 language design teams, lead their joint effort to respond to the UML for SE RFP. As Chair of the SysML Partners, Kobryn coined the language name "SysML" (short for "Systems Modeling Language"), designed the original SysML logo, and organized the SysML Language Design team as an open source specification project. Friedenthal served as Deputy Chair, and helped organize the original SysML Partners team. In January 2005, the SysML Partners published the SysML v0.9 draft specification. Later, in August 2005, Friedenthal and several other original SysML Partners left to establish a competing SysML Submission Team (SST). The SysML Partners released the SysML v1.0 Alpha specification in November 2005. === OMG SysML === After a series of competing SysML specification proposals, a SysML Merge Team was proposed to the OMG in April 2006. This proposal was voted upon and adopted by the OMG in July 2006 as OMG SysML, to differentiate it from the original open source specification from which it was derived. Because OMG SysML is derived from open source SysML, it also includes an open source license for distribution and use. The OMG SysML v1.0 specification was issued by the OMG as an Available Specification in September 2007. The current version of OMG SysML is v1.6, which was issued by the OMG in December 2019. In addition, SysML was published by the International Organization for Standardization (ISO) in 2017 as a full International Standard (IS), ISO/IEC 19514:2017 (Information technology -- Object management group systems modeling language). The OMG has been working on the next generation of SysML and issued a Request for Proposals (RFP) for version 2 on December 8, 2017, following its open standardization process. The resulting specification, which will incorporate language enhancements from experience applying the language, will include a UML profile, a metamodel, and a mapping between the profile and metamodel. A second RFP, for a SysML v2 Application Programming Interface (API) and Services, was issued in June 2018. Its aim is to enhance the interoperability of model-based systems engineering tools. == Diagrams == SysML includes nine types of diagram, some of which are taken from UML: Activity diagram Block definition diagram Internal block diagram Package diagram Parametric diagram Requirement diagram Sequence diagram State machine diagram Use case diagram == Tools == There are several modeling tool vendors offering SysML support. Lists of tool vendors who support SysML or OMG SysML can be found on the SysML Forum and SysML websites, respectively.
=== Model exchange === As an OMG UML 2.0 profile, SysML models are designed to be exchanged using the XML Metadata Interchange (XMI) standard. In addition, architectural alignment work is underway to support the ISO 10303 (also known as STEP, the Standard for the Exchange of Product model data) AP-233 standard for exchanging and sharing information between systems engineering software applications and tools. == See also == SoaML Energy systems language Object process methodology Universal Systems Language List of SysML tools == References == == Further reading == Balmelli, Laurent (2007). An Overview of the Systems Modeling Language for Products and Systems Development (PDF). Journal of Object Technology, vol. 6, no. 6, July–August 2007, pp. 149-177. Delligatti, Lenny (2013). SysML Distilled: A Brief Guide to the Systems Modeling Language. Addison-Wesley Professional. ISBN 978-0-321-92786-6. Holt, Jon (2008). SysML for Systems Engineering. The Institution of Engineering and Technology. ISBN 978-0-86341-825-9. Weilkiens, Tim (2008). Systems Engineering with SysML/UML: Modeling, Analysis, Design. Morgan Kaufmann / The OMG Press. ISBN 978-0-12-374274-2. Friedenthal, Sanford; Moore, Alan; Steiner, Rick (2016). A Practical Guide to SysML: The Systems Modeling Language (Third ed.). Morgan Kaufmann / The OMG Press. ISBN 978-0-12-800202-5. Douglass, Bruce (2015). Agile Systems Engineering. Morgan Kaufmann. ISBN 978-0128021200. == External links == Introduction to Systems Modeling Language (SysML), Part 1 and Part 2. YouTube. SysML Open Source Specification Project Provides information related to SysML open source specifications, FAQ, mailing lists, and open source licenses. OMG SysML Website Furnishes information related to the OMG SysML specification, SysML tutorial, papers, and tool vendor information. Article "EE Times article on SysML (May 8, 2006)" SE^2 MBSE Challenge team: "Telescope Modeling" Paper "System Modelling Language explained" (PDF format) Bruce Douglass: Real-Time Agile Systems and Software Development List of Popular SysML Modeling Tools
Wikipedia/Systems_modeling_language
Processor design is a subfield of computer science and computer engineering (fabrication) that deals with creating a processor, a key component of computer hardware. The design process involves choosing an instruction set and a certain execution paradigm (e.g. VLIW or RISC) and results in a microarchitecture, which might be described in e.g. VHDL or Verilog. For microprocessor design, this description is then manufactured employing some of the various semiconductor device fabrication processes, resulting in a die which is bonded onto a chip carrier. This chip carrier is then soldered onto, or inserted into a socket on, a printed circuit board (PCB). The mode of operation of any processor is the execution of lists of instructions. Instructions typically include those to compute or manipulate data values using registers, change or retrieve values in read/write memory, perform relational tests between data values, and control program flow. Processor designs are often tested and validated on one or several FPGAs before sending the design of the processor to a foundry for semiconductor fabrication. == Details == === Basics === CPU design is divided into multiple components. Information is transferred through datapaths (such as ALUs and pipelines). These datapaths are controlled through logic by control units. Memory components include register files and caches that retain information. Clock circuitry maintains internal rhythms and timing through clock drivers, PLLs, and clock distribution networks. Pad transceiver circuitry allows signals to be received and sent, and a logic gate cell library is used to implement the logic. Logic gates are the foundation for processor design as they are used to implement most of the processor's components. CPUs designed for high-performance markets might require custom (optimized or application-specific (see below)) designs for each of these items to achieve frequency, power-dissipation, and chip-area goals, whereas CPUs designed for lower-performance markets might lessen the implementation burden by purchasing some of these items as intellectual property. Control logic implementation techniques (logic synthesis using CAD tools) can be used to implement datapaths, register files, and clocks. Common logic styles used in CPU design include unstructured random logic, finite-state machines, microprogramming (common from 1965 to 1985), and programmable logic arrays (common in the 1980s, no longer common). === Implementation logic === Device types used to implement the logic include:
Individual vacuum tubes, individual transistors and semiconductor diodes, and transistor-transistor logic small-scale integration logic chips – no longer used for CPUs
Programmable array logic and programmable logic devices – no longer used for CPUs
Emitter-coupled logic (ECL) gate arrays – no longer common
CMOS gate arrays – no longer used for CPUs
CMOS mass-produced ICs – the vast majority of CPUs by volume
CMOS ASICs – only for a minority of special applications due to expense
Field-programmable gate arrays (FPGA) – common for soft microprocessors, and more or less required for reconfigurable computing
A CPU design project generally has these major tasks:
Programmer-visible instruction set architecture, which can be implemented by a variety of microarchitectures
Architectural study and performance modeling in ANSI C/C++ or SystemC
High-level synthesis (HLS) or register transfer level (RTL, e.g.
logic) implementation
RTL verification
Circuit design of speed-critical components (caches, registers, ALUs)
Logic synthesis or logic-gate-level design
Timing analysis to confirm that all logic and circuits will run at the specified operating frequency
Physical design, including floorplanning and place and route of logic gates
Checking that RTL, gate-level, transistor-level and physical-level representations are equivalent
Checks for signal integrity and chip manufacturability
Re-designing a CPU core to a smaller die area helps to shrink everything (a "photomask shrink"), resulting in the same number of transistors on a smaller die. It improves performance (smaller transistors switch faster), reduces power (smaller wires have less parasitic capacitance) and reduces cost (more CPUs fit on the same wafer of silicon). Releasing a CPU on the same size die, but with a smaller CPU core, keeps the cost about the same but allows higher levels of integration within one very-large-scale integration chip (additional cache, multiple CPUs or other components), improving performance and reducing overall system cost. As with most complex electronic designs, the logic verification effort (proving that the design does not have bugs) now dominates the project schedule of a CPU. Key CPU architectural innovations include the index register, cache, virtual memory, instruction pipelining, superscalar execution, CISC, RISC, virtual machines, emulators, microprogramming, and the stack. === Microarchitectural concepts === === Research topics === A variety of new CPU design ideas have been proposed, including reconfigurable logic, clockless CPUs, computational RAM, and optical computing. === Performance analysis and benchmarking === Benchmarking is a way of testing CPU speed. Examples include SPECint and SPECfp, developed by the Standard Performance Evaluation Corporation, and ConsumerMark, developed by the Embedded Microprocessor Benchmark Consortium (EEMBC). Some of the commonly used metrics include:
Instructions per second – Most consumers pick a computer architecture (normally Intel IA-32 architecture) to be able to run a large base of pre-existing, pre-compiled software. Being relatively uninformed on computer benchmarks, some of them pick a particular CPU based on operating frequency (see megahertz myth).
FLOPS – The number of floating point operations per second is often important in selecting computers for scientific computations.
Performance per watt – System designers building parallel computers, such as Google, pick CPUs based on their speed per watt of power, because the cost of powering the CPU outweighs the cost of the CPU itself. Some system designers building parallel computers pick CPUs based on the speed per dollar.
System designers building real-time computing systems want to guarantee worst-case response. That is easier to do when the CPU has low interrupt latency and when it has deterministic response (DSP).
Computer programmers who program directly in assembly language want a CPU to support a full-featured instruction set.
Low power – For systems with limited power sources (e.g. solar, batteries, human power).
Small size or low weight – For portable embedded systems and systems for spacecraft.
Environmental impact – Minimizing the environmental impact of computers during manufacturing and recycling as well as during use, reducing waste and reducing hazardous materials (see green computing).
There may be tradeoffs in optimizing some of these metrics.
In particular, many design techniques that make a CPU run faster make the "performance per watt", "performance per dollar", and "deterministic response" much worse, and vice versa. == Markets == There are several different markets in which CPUs are used. Since each of these markets differs in its requirements for CPUs, the devices designed for one market are in most cases inappropriate for the other markets. === General-purpose computing === As of 2010, in the general-purpose computing market, that is, desktop, laptop, and server computers commonly used in businesses and homes, the Intel IA-32 and the 64-bit version x86-64 architecture dominate the market, with its rivals PowerPC and SPARC maintaining much smaller customer bases. Yearly, hundreds of millions of IA-32 architecture CPUs are used by this market. A growing percentage of these processors are for mobile implementations such as netbooks and laptops. Since these devices are used to run countless different types of programs, these CPU designs are not specifically targeted at one type of application or one function. The demands of being able to run a wide range of programs efficiently have made these CPU designs among the more advanced technically, along with some disadvantages of being relatively costly and having high power consumption. ==== High-end processor economics ==== In 1984, most high-performance CPUs required four to five years to develop. === Scientific computing === Scientific computing is a much smaller niche market (in revenue and units shipped). It is used in government research labs and universities. Before 1990, CPU design was often done for this market, but mass-market CPUs organized into large clusters have proven to be more affordable. The main remaining area of active hardware design and research for scientific computing is for high-speed data transmission systems to connect mass-market CPUs. === Embedded design === As measured by units shipped, most CPUs are embedded in other machinery, such as telephones, clocks, appliances, vehicles, and infrastructure. Embedded processors sell in the volume of many billions of units per year, however, mostly at much lower price points than those of the general-purpose processors. These single-function devices differ from the more familiar general-purpose CPUs in several ways: Low cost is of high importance. It is important to maintain a low power dissipation, as embedded devices often have a limited battery life and it is often impractical to include cooling fans. To give lower system cost, peripherals are integrated with the processor on the same silicon chip. Keeping peripherals on-chip also reduces power consumption, as external GPIO ports typically require buffering so that they can source or sink the relatively high current loads that are required to maintain a strong signal outside of the chip. Many embedded applications have a limited amount of physical space for circuitry; keeping peripherals on-chip will reduce the space required for the circuit board. The program and data memories are often integrated on the same chip. When the only allowed program memory is ROM, the device is known as a microcontroller. For many embedded applications, interrupt latency will be more critical than in some general-purpose processors. ==== Embedded processor economics ==== The embedded CPU family with the largest number of total units shipped is the 8051, averaging nearly a billion units per year. The 8051 is widely used because it is very inexpensive.
The design time is now roughly zero, because it is widely available as commercial intellectual property. It is now often embedded as a small part of a larger system on a chip. The silicon cost of an 8051 is now as low as US$0.001, because some implementations use as few as 2,200 logic gates and occupy 0.4730 square millimeters of silicon. As of 2009, more CPUs are produced using the ARM architecture family instruction sets than any other 32-bit instruction set. The ARM architecture and the first ARM chip were designed in about one and a half years and 5 person-years of work time. The 32-bit Parallax Propeller microcontroller architecture and the first chip were designed by two people in about 10 person-years of work time. The 8-bit AVR architecture and the first AVR microcontroller were conceived and designed by two students at the Norwegian Institute of Technology. The 8-bit 6502 architecture and the first MOS Technology 6502 chip were designed in 13 months by a group of about 9 people. ==== Research and educational CPU design ==== The 32-bit Berkeley RISC I and RISC II processors were mostly designed by a series of students as part of a four-quarter sequence of graduate courses. This design became the basis of the commercial SPARC processor design. For about a decade, every student taking the 6.004 class at MIT was part of a team; each team had one semester to design and build a simple 8-bit CPU out of 7400 series integrated circuits. One team of 4 students designed and built a simple 32-bit CPU during that semester. Some undergraduate courses require a team of 2 to 5 students to design, implement, and test a simple CPU on an FPGA in a single 15-week semester. The MultiTitan CPU was designed with 2.5 person-years of effort, which was considered "relatively little design effort" at the time. Twenty-four people contributed to the 3.5-year MultiTitan research project, which included designing and building a prototype CPU. ==== Soft microprocessor cores ==== For embedded systems, the highest performance levels are often not needed or desired due to power consumption requirements. This allows for the use of processors which can be totally implemented by logic synthesis techniques. These synthesized processors can be implemented in a much shorter amount of time, giving quicker time-to-market. == See also ==
Amdahl's law
Central processing unit
Comparison of instruction set architectures
Complex instruction set computer
CPU cache
Electronic design automation
Heterogeneous computing
High-level synthesis
History of general-purpose CPUs
Integrated circuit design
Microarchitecture
Microprocessor
Minimal instruction set computer
Moore's law
Reduced instruction set computer
System on a chip
Network on a chip
Process design kit – a set of documents created or accumulated for a semiconductor device production process
Uncore
== References == === General references === Hwang, Enoch (2006). Digital Logic and Microprocessor Design with VHDL. Thomson. ISBN 0-534-46593-5. Processor Design: An Introduction
Wikipedia/Processor_design
The spiral model is a risk-driven software development process model. Based on the unique risk patterns of a given project, the spiral model guides a team to adopt elements of one or more process models, such as incremental, waterfall, or evolutionary prototyping. == History == This model was first described by Barry Boehm in his 1986 paper, "A Spiral Model of Software Development and Enhancement." In 1988 Boehm published a similar paper to a wider audience. These papers introduce a diagram that has been reproduced in many subsequent publications discussing the spiral model. These early papers use the term "process model" to refer to the spiral model as well as to incremental, waterfall, prototyping, and other approaches. However, the spiral model's characteristic risk-driven blending of other process models' features is already present: [R]isk-driven subsetting of the spiral model steps allows the model to accommodate any appropriate mixture of a specification-oriented, prototype-oriented, simulation-oriented, automatic transformation-oriented, or other approach to software development. In later publications, Boehm describes the spiral model as a "process model generator," where choices based on a project's risks generate an appropriate process model for the project. Thus, the incremental, waterfall, prototyping, and other process models are special cases of the spiral model that fit the risk patterns of certain projects. Boehm also identifies a number of misconceptions arising from oversimplifications in the original spiral model diagram. He says the most dangerous of these misconceptions are: that the spiral is simply a sequence of waterfall increments; that all project activities follow a single spiral sequence; and that every activity in the diagram must be performed, and in the order shown. While these misconceptions may fit the risk patterns of a few projects, they are not true for most projects. In a National Research Council report this model was extended to include risks related to human users. To better distinguish them from "hazardous spiral look-alikes," Boehm lists six characteristics common to all authentic applications of the spiral model. == The six invariants of spiral model == Authentic applications of the spiral model are driven by cycles that always display six characteristics. Boehm illustrates each with an example of a "dangerous spiral look-alike" that violates the invariant. === Define artifacts concurrently === Sequentially defining the key artifacts for a project often decreases the possibility of developing a system that meets stakeholder "win conditions" (objectives and constraints). This invariant excludes “hazardous spiral look-alike” processes that use a sequence of incremental waterfall passes in settings where the underlying assumptions of the waterfall model do not apply. Boehm lists these assumptions as follows:
The requirements are known in advance of implementation.
The requirements have no unresolved, high-risk implications, such as risks due to cost, schedule, performance, safety, user interfaces, organizational impacts, etc.
The nature of the requirements will not change very much during development or evolution.
The requirements are compatible with all the key system stakeholders’ expectations, including users, customer, developers, maintainers, and investors.
The right architecture for implementing the requirements is well understood.
There is enough calendar time to proceed sequentially.
In situations where these assumptions do apply, it is a project risk not to specify the requirements and proceed sequentially. The waterfall model thus becomes a risk-driven special case of the spiral model. === Perform four basic activities in every cycle === This invariant identifies the four activities that must occur in each cycle of the spiral model:
Consider the win conditions of all success-critical stakeholders.
Identify and evaluate alternative approaches for satisfying the win conditions.
Identify and resolve risks that stem from the selected approach(es).
Obtain approval from all success-critical stakeholders, plus commitment to pursue the next cycle.
Project cycles that omit or shortchange any of these activities risk wasting effort by pursuing options that are unacceptable to key stakeholders, or that are too risky. Some "hazardous spiral look-alike" processes violate this invariant by excluding key stakeholders from certain sequential phases or cycles. For example, system maintainers and administrators might not be invited to participate in the definition and development of the system. As a result, the system is at risk of failing to satisfy their win conditions. === Risk determines level of effort === For any project activity (e.g., requirements analysis, design, prototyping, testing), the project team must decide how much effort is enough. In authentic spiral process cycles, these decisions are made by minimizing overall risk. For example, investing additional time testing a software product often reduces the risk of the marketplace rejecting a shoddy product. However, additional testing time might increase the risk posed by a competitor's early market entry. From a spiral model perspective, testing should be performed until the total risk is minimized, and no further. "Hazardous spiral look-alikes" that violate this invariant include evolutionary processes that ignore risk due to scalability issues, and incremental processes that invest heavily in a technical architecture that must be redesigned or replaced to accommodate future increments of the product. === Risk determines degree of details === For any project artifact (e.g., requirements specification, design document, test plan), the project team must decide how much detail is enough. In authentic spiral process cycles, these decisions are made by minimizing overall risk. Considering a requirements specification as an example, the project should precisely specify those features where risk is reduced through precise specification (e.g., interfaces between hardware and software, interfaces between prime contractors and sub-contractors). Conversely, the project should not precisely specify those features where precise specification increases the risk (e.g., graphical screen layouts, the behavior of off-the-shelf components). === Use anchor point milestones === Boehm's original description of the spiral model did not include any process milestones. In later refinements, he introduces three anchor point milestones that serve as progress indicators and points of commitment. These anchor point milestones can be characterized by key questions. Life Cycle Objectives (LCO). Is there a sufficient definition of a technical and management approach to satisfying everyone's win conditions? If the stakeholders agree that the answer is "Yes", then the project has cleared this LCO milestone. Otherwise, the project can be abandoned, or the stakeholders can commit to another cycle to try to get to "Yes." Life Cycle Architecture (LCA).
Is there a sufficient definition of the preferred approach to satisfying everyone's win conditions, and are all significant risks eliminated or mitigated? If the stakeholders agree that the answer is "Yes", then the project has cleared this LCA milestone. Otherwise, the project can be abandoned, or the stakeholders can commit to another cycle to try to get to "Yes." Initial Operational Capability (IOC). Is there sufficient preparation of the software, site, users, operators, and maintainers to satisfy everyone's win conditions by launching the system? If the stakeholders agree that the answer is "Yes", then the project has cleared the IOC milestone and is launched. Otherwise, the project can be abandoned, or the stakeholders can commit to another cycle to try to get to "Yes." "Hazardous spiral look-alikes" that violate this invariant include evolutionary and incremental processes that commit significant resources to implementing a solution with a poorly defined architecture. The three anchor point milestones fit easily into the Rational Unified Process (RUP), with LCO marking the boundary between RUP's Inception and Elaboration phases, LCA marking the boundary between the Elaboration and Construction phases, and IOC marking the boundary between the Construction and Transition phases. === Focus on the system and its life cycle === This invariant highlights the importance of the overall system and the long-term concerns spanning its entire life cycle. It excludes "hazardous spiral look-alikes" that focus too much on the initial development of software code. These processes can result from following published approaches to object-oriented or structured software analysis and design while neglecting other aspects of the project's process needs. == References ==
Wikipedia/Spiral_model
An information model in software engineering is a representation of concepts and the relationships, constraints, rules, and operations to specify data semantics for a chosen domain of discourse. Typically it specifies relations between kinds of things, but may also include relations with individual things. It can provide a sharable, stable, and organized structure of information requirements or knowledge for the domain context. == Overview == The term information model in general is used for models of individual things, such as facilities, buildings, process plants, etc. In those cases, the concept is specialised to facility information model, building information model, plant information model, etc. Such an information model is an integration of a model of the facility with the data and documents about the facility. Within the field of software engineering and data modeling, an information model is usually an abstract, formal representation of entity types that may include their properties, relationships and the operations that can be performed on them. The entity types in the model may be kinds of real-world objects, such as devices in a network, or occurrences, or they may themselves be abstract, such as the entities used in a billing system. Typically, they are used to model a constrained domain that can be described by a closed set of entity types, properties, relationships and operations. An information model provides formalism to the description of a problem domain without constraining how that description is mapped to an actual implementation in software. There may be many mappings of the information model. Such mappings are called data models, irrespective of whether they are object models (e.g. using UML), entity relationship models or XML schemas. == Information modeling languages == In 1976, an entity-relationship (ER) graphic notation was introduced by Peter Chen. He stressed that it was a "semantic" modelling technique, independent of any database modelling technique such as hierarchical, CODASYL, or relational. Since then, languages for information models have continued to evolve. Some examples are the Integrated Definition Language 1 Extended (IDEF1X), the EXPRESS language and the Unified Modeling Language (UML). Research by contemporaries of Peter Chen, such as J. R. Abrial (1974) and G. M. Nijssen (1976), led to today's Fact Oriented Modeling (FOM) languages, which are based on linguistic propositions rather than on "entities". FOM tools can be used to generate an ER model, which means that the modeler can avoid the time-consuming and error-prone practice of manual normalization. The Object-Role Modeling language (ORM) and Fully Communication Oriented Information Modeling (FCO-IM) are both research results developed in the early 1990s, based upon earlier research. In the 1980s there were several approaches to extend Chen’s entity-relationship model. Also important in this decade was REMORA by Colette Rolland. The ICAM Definition (IDEF) Language was developed from the U.S. Air Force ICAM Program during the 1976 to 1982 timeframe. The objective of the ICAM Program, according to Lee (1999), was to increase manufacturing productivity through the systematic application of computer technology. IDEF includes three different modeling methods: IDEF0, IDEF1, and IDEF2, for producing a functional model, an information model, and a dynamic model respectively. IDEF1X is an extended version of IDEF1. The language is in the public domain.
It is a graphical representation and is designed using the ER approach and relational theory. It is used to represent the “real world” in terms of entities, attributes, and relationships between entities. Normalization is enforced by KEY Structures and KEY Migration. The language identifies property groupings (aggregation) to form complete entity definitions. EXPRESS was created as ISO 10303-11 for formally specifying the information requirements of a product data model. It is part of a suite of standards informally known as the STandard for the Exchange of Product model data (STEP). It was first introduced in the early 1990s. The language, according to Lee (1999), is a textual representation. In addition, a graphical subset of EXPRESS called EXPRESS-G is available. EXPRESS is based on programming languages and the object-oriented paradigm. A number of languages have contributed to EXPRESS, in particular Ada, Algol, C, C++, Euler, Modula-2, Pascal, PL/1, and SQL. EXPRESS consists of language elements that allow an unambiguous object definition and the specification of constraints on the objects defined. It uses a SCHEMA declaration to provide partitioning, and it supports the specification of data properties, constraints, and operations. UML is a modeling language for specifying, visualizing, constructing, and documenting the artifacts, rather than the processes, of software systems. It was conceived originally by Grady Booch, James Rumbaugh, and Ivar Jacobson. UML was approved by the Object Management Group (OMG) as a standard in 1997. The language, according to Lee (1999), is non-proprietary and is available to the public. It is a graphical representation. The language is based on the object-oriented paradigm. UML contains notations and rules and is designed to represent data requirements in terms of O-O diagrams. UML organizes a model into a number of views that present different aspects of a system. The contents of a view are described in diagrams, which are graphs with model elements. A diagram contains model elements that represent common O-O concepts such as classes, objects, messages, and relationships among these concepts. IDEF1X, EXPRESS, and UML can all be used to create a conceptual model and, according to Lee (1999), each has its own characteristics. Although some may lead to a natural usage (e.g., implementation), one is not necessarily better than another. In practice, it may require more than one language to develop all the information models when an application is complex. In fact, the modeling practice is often more important than the language chosen. Information models can also be expressed in formalized natural languages, such as Gellish. Gellish, which has natural language variants Gellish Formal English, Gellish Formal Dutch (Gellish Formeel Nederlands), etc., is an information representation language or modeling language that is defined in the Gellish smart Dictionary-Taxonomy, which has the form of a taxonomy/ontology. A Gellish database is suitable for storing not only information models, but also knowledge models, requirements models and dictionaries, taxonomies and ontologies. Information models in Gellish English use Gellish Formal English expressions.
For example, a geographic information model might consist of a number of Gellish Formal English expressions, such as:
- the Eiffel tower <is located in> Paris
- Paris <is classified as a> city
whereas information requirements and knowledge can be expressed, for example, as follows:
- tower <shall be located in a> geographical area
- city <is a kind of> geographical area
Such Gellish expressions use names of concepts (such as 'city') and relation types (such as ⟨is located in⟩ and ⟨is classified as a⟩) that should be selected from the Gellish Formal English Dictionary-Taxonomy (or from one's own domain dictionary). The Gellish English Dictionary-Taxonomy enables the creation of semantically rich information models, because the dictionary contains definitions of more than 40,000 concepts, including more than 600 standard relation types. Thus, an information model in Gellish consists of a collection of Gellish expressions that use those phrases and dictionary concepts to express facts or make statements, queries and answers. == Standard sets of information models == The Distributed Management Task Force (DMTF) provides a standard set of information models for various enterprise domains under the general title of the Common Information Model (CIM). Specific information models are derived from CIM for particular management domains. The TeleManagement Forum (TMF) has defined an advanced model for the telecommunication domain (the Shared Information/Data model, or SID) as another. This includes views from the business, service and resource domains within the telecommunication industry. The TMF has established a set of principles that an OSS integration should adopt, along with a set of models that provide standardized approaches. The models interact with the information model (the Shared Information/Data Model, or SID) via a process model (the Business Process Framework, or eTOM) and a life cycle model. == See also ==
Building information modeling
Concept map
Conceptual model (computer science)
System information modelling
== Notes == == References == ISO/IEC TR9007 Conceptual Schema, 1986 Andries van Renssen, Gellish, A Generic Extensible Ontological Language (PhD, Delft University of Technology, 2005) This article incorporates public domain material from the National Institute of Standards and Technology == Further reading == Richard Veryard (1992). Information Modelling: Practical Guidance. New York: Prentice Hall. Repa, Vaclav (2012). Information Modeling of Organizations. Bruckner Publishing. ISBN 978-80-904661-3-5. Berner, Stefan (2019). Information Modelling: A Method for Improving Understanding and Accuracy in Your Collaboration. vdf Zurich. ISBN 978-3-7281-3943-6. == External links == RFC 3198 – Terminology for Policy-Based Management
Wikipedia/Information_model
A system architecture is the conceptual model that defines the structure, behavior, and views of a system. An architecture description is a formal description and representation of a system, organized in a way that supports reasoning about the structures and behaviors of the system. A system architecture can consist of system components and the sub-systems developed that will work together to implement the overall system. There have been efforts to formalize languages to describe system architecture; collectively, these are called architecture description languages (ADLs). == Overview == Various organizations can define systems architecture in different ways, including:
The fundamental organization of a system, embodied in its components, their relationships to each other and to the environment, and the principles governing its design and evolution.
A representation of a system, including a mapping of functionality onto hardware and software components, a mapping of the software architecture onto the hardware architecture, and human interaction with these components.
An allocated arrangement of physical elements which provides the design solution for a consumer product or life-cycle process intended to satisfy the requirements of the functional architecture and the requirements baseline.
An architecture consists of the most important, pervasive, top-level, strategic inventions, decisions, and their associated rationales about the overall structure (i.e., essential elements and their relationships) and associated characteristics and behavior.
A description of the design and contents of a computer system. If documented, it may include information such as a detailed inventory of current hardware, software and networking capabilities; a description of long-range plans and priorities for future purchases; and a plan for upgrading and/or replacing dated equipment and software.
A formal description of a system, or a detailed plan of the system at component level to guide its implementation.
The composite of the design architectures for products and their life-cycle processes.
The structure of components, their interrelationships, and the principles and guidelines governing their design and evolution over time.
One can think of system architecture as a set of representations of an existing (or future) system. These representations initially describe a general, high-level functional organization, and are progressively refined to more detailed and concrete descriptions. System architecture conveys the informational content of the elements constituting a system, the relationships among those elements, and the rules governing those relationships. The architectural components, and the set of relationships between them, that an architecture description addresses may include hardware, software, documentation, facilities, manual procedures, or roles played by organizations or people. A system architecture primarily concentrates on the internal interfaces among the system's components or subsystems, and on the interface(s) between the system and its external environment, especially the user. (In the specific case of computer systems, this latter, special, interface is known as the computer human interface, also known as the human computer interface, or HCI; formerly called the man-machine interface.)
One can contrast a system architecture with system architecture engineering (SAE), the method and discipline for effectively implementing the architecture of a system: SAE is a method, because a sequence of steps is prescribed to produce or to change the architecture of a system within a set of constraints; SAE is a discipline, because a body of knowledge is used to inform practitioners as to the most effective way to design the system within a set of constraints. == History == Systems architecture depends heavily on practices and techniques which were developed over thousands of years in many other fields, perhaps the most important being civil architecture. Prior to the advent of digital computers, the electronics and other engineering disciplines used the term "system" as it is still commonly used today. However, with the arrival of digital computers and the development of software engineering as a separate discipline, it was often necessary to distinguish among engineered hardware artifacts, software artifacts, and the combined artifacts. A programmable hardware artifact, or computing machine, that lacks its computer program is impotent, just as a software artifact, or program, is equally impotent unless it can be used to alter the sequential states of a suitable (hardware) machine. However, a hardware machine and its programming can be designed to perform an almost illimitable number of abstract and physical tasks. Within the computer and software engineering disciplines (and, often, other engineering disciplines, such as communications), then, the term system came to be defined as containing all of the elements necessary (which generally includes both hardware and software) to perform a useful function. Consequently, within these engineering disciplines, a system generally refers to a programmable hardware machine and its included program. And a systems engineer is defined as one concerned with the complete device, both hardware and software and, more particularly, all of the interfaces of the device, including that between hardware and software, and especially between the complete device and its user (the CHI). The hardware engineer deals (more or less) exclusively with the hardware device; the software engineer deals (more or less) exclusively with the computer program; and the systems engineer is responsible for seeing that the program is capable of properly running within the hardware device, and that the system composed of the two entities is capable of properly interacting with its external environment, especially the user, and performing its intended function. A systems architecture makes use of elements of both software and hardware and is used to enable the design of such a composite system. A good architecture may be viewed as a 'partitioning scheme,' or algorithm, which partitions all of the system's present and foreseeable requirements into a workable set of cleanly bounded subsystems with nothing left over. That is, it is a partitioning scheme which is exclusive, inclusive, and exhaustive. A major purpose of the partitioning is to arrange the elements in the subsystems so that there is a minimum of interdependencies needed among them. In both software and hardware, a good subsystem tends to be a meaningful "object". Moreover, a good architecture provides for an easy mapping to the user's requirements and the validation tests of the user's requirements. Ideally, a mapping also exists from every lowest-level element to every requirement and test.
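As a toy illustration of this partitioning view, the following C sketch (the requirement and subsystem counts and the assignment table are invented for illustration) records which subsystem owns each requirement and checks that the assignment is exhaustive; because the table maps each requirement to exactly one subsystem, exclusivity holds by construction:

    #include <stdio.h>

    #define N_REQS 5   /* requirements, numbered 0..N_REQS-1 */
    #define N_SUBS 3   /* subsystems, numbered 0..N_SUBS-1   */

    int main(void) {
        /* owner[r] is the subsystem responsible for requirement r;
           -1 marks a requirement that no subsystem covers. */
        int owner[N_REQS] = { 0, 0, 1, 2, -1 };

        int exhaustive = 1;
        for (int r = 0; r < N_REQS; r++) {
            if (owner[r] < 0 || owner[r] >= N_SUBS) {
                printf("requirement %d is left over\n", r);
                exhaustive = 0;
            }
        }
        printf(exhaustive ? "partition is exhaustive\n"
                          : "partition leaves requirements uncovered\n");
        return 0;
    }

A real architecture evaluation would also weigh the interdependencies among the subsystems; the sketch checks only the "nothing left over" property.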
== Modern trends in systems architecture == With the increasing complexity of digital systems, modern systems architecture has evolved to incorporate advanced principles such as modularization, microservices, and artificial intelligence-driven optimizations. Cloud computing, edge computing, and distributed ledger technologies (DLTs) have also influenced architectural decisions, enabling more scalable, secure, and fault-tolerant designs. One of the most significant shifts in recent years has been the adoption of Software-Defined Architectures (SDA), which decouple hardware from software, allowing systems to be more flexible and adaptable to changing requirements. This trend is particularly evident in network architectures, where Software-Defined Networking (SDN) and Network Function Virtualization (NFV) enable more dynamic management of network resources. In addition, AI-enhanced system architectures have gained traction, leveraging machine learning for predictive maintenance, anomaly detection, and automated system optimization. The rise of cyber-physical systems (CPS) and digital twins has further extended system architecture principles beyond traditional computing, integrating real-world data into virtual models for better decision-making. With the rise of edge computing, system architectures now focus on decentralization and real-time processing, reducing dependency on centralized data centers and improving latency-sensitive applications such as autonomous vehicles, robotics, and IoT networks. These advancements continue to redefine how systems are designed, leading to more resilient, scalable, and intelligent architectures suited for the digital age. == Types == Several types of system architectures exist, each catering to different domains and applications. While all system architectures share fundamental principles of structure, behavior, and interaction, they vary in design based on their intended purpose. The following types, underlain by the same fundamental principles, have been identified:
Hardware Architecture: Hardware architecture defines the physical components of a system, including processors, memory hierarchies, buses, and input/output interfaces. It encompasses the design and integration of computing hardware elements to ensure performance, reliability, and scalability.
Software Architecture: Software architecture focuses on the high-level organization of software systems, including modules, components, and communication patterns. It plays a crucial role in defining system behavior, security, and maintainability. Examples include monolithic, microservices, event-driven, and layered architectures.
Enterprise Architecture: Enterprise architecture provides a strategic blueprint for an organization’s IT infrastructure, ensuring that business goals align with technology investments. It includes frameworks such as TOGAF (The Open Group Architecture Framework) and the Zachman Framework to standardize IT governance and business operations.
Collaborative Systems Architecture: This category includes large-scale interconnected systems designed for seamless interaction among multiple entities. Examples include the Internet, intelligent transportation systems, air traffic control networks, and defense systems. These architectures emphasize interoperability, distributed control, and resilience.
Manufacturing Systems Architecture: Manufacturing system architectures integrate automation, robotics, IoT, and AI-driven decision-making to optimize production workflows. Emerging trends include Industry 4.0, cyber-physical systems (CPS), and digital twins, enabling predictive maintenance and real-time monitoring.
Cloud and Edge Computing Architecture: With the shift toward cloud-based infrastructures, cloud architecture defines how resources are distributed across data centers and virtualized environments. Edge computing architecture extends this by processing data closer to the source, reducing latency for applications like autonomous vehicles, industrial automation, and smart cities.
AI-Driven System Architecture: Artificial intelligence (AI) and machine learning-driven architectures optimize decision-making by dynamically adapting system behavior based on real-time data. This is widely used in autonomous systems, cybersecurity, and intelligent automation.
== See also ==
Arcadia (engineering)
Architectural pattern (computer science)
Department of Defense Architecture Framework
Enterprise architecture framework
Enterprise information security architecture
Process architecture
Requirements analysis
Software architecture
Software engineering
Systems architect
Systems analysis
Systems design
Systems engineering
== References == == External links ==
Principles of system architecture
What is Systems Architecture?
INCOSE Systems Architecture Working Group
Journal of Systems Architecture (Embedded Software Design)
Wikipedia/Systems_architecture
Essential systems analysis is a methodology for software specification published in 1984 by Stephen M. McMenamin and John F. Palmer for performing structured systems analysis, based on the concept of event partitioning. The essence of a system is "its required behavior independent of the technology used to implement the system". It is an abstract model of what the system must do, without describing how it will do it. The methodology proposed that finding the true requirements for an information system entails the development of an essential model of the system, based on the concept of a perfect internal technology, composed of: a perfect memory, which is infinitely big and fast, and a perfect processor, which is infinitely powerful and fast. Edward Yourdon later adapted it to develop modern structured analysis. The main result was a new and more systematic way to develop the data-flow diagrams that are the most characteristic tool of structured analysis. Essential analysis, as adopted in Yourdon's modern structured analysis, was the main software development methodology until object-oriented analysis became mainstream. == References ==
Wikipedia/Essential_systems_analysis
In computer programming, a type system is a logical system comprising a set of rules that assigns a property called a type (for example, integer, floating point, string) to every term (a word, phrase, or other set of symbols). Usually the terms are various language constructs of a computer program, such as variables, expressions, functions, or modules. A type system dictates the operations that can be performed on a term. For variables, the type system determines the allowed values of that term. Type systems formalize and enforce the otherwise implicit categories the programmer uses for algebraic data types, data structures, or other data types, such as "string", "array of float", "function returning boolean". Type systems are often specified as part of programming languages and built into interpreters and compilers, although the type system of a language can be extended by optional tools that perform added checks using the language's original type syntax and grammar. The main purpose of a type system in a programming language is to reduce possibilities for bugs in computer programs due to type errors. The type system in question determines what constitutes a type error, but in general, the aim is to prevent operations expecting a certain kind of value from being used with values for which that operation does not make sense (validity errors). Type systems allow defining interfaces between different parts of a computer program, and then checking that the parts have been connected in a consistent way. This checking can happen statically (at compile time), dynamically (at run time), or as a combination of both. Type systems have other purposes as well, such as expressing business rules, enabling certain compiler optimizations, allowing for multiple dispatch, and providing a form of documentation. == Usage overview == An example of a simple type system is that of the C language. The portions of a C program are the function definitions. One function is invoked by another function. The interface of a function states the name of the function and a list of parameters that are passed to the function's code. The code of an invoking function states the name of the invoked function, along with the names of variables that hold values to pass to it. During a computer program's execution, the values are placed into temporary storage, then execution jumps to the code of the invoked function. The invoked function's code accesses the values and makes use of them. If the instructions inside the function are written with the assumption of receiving an integer value, but the calling code passed a floating-point value, then the wrong result will be computed by the invoked function. The C compiler checks the types of the arguments passed to a function when it is called against the types of the parameters declared in the function's definition. If the types do not match, the compiler issues a compile-time error or warning. A compiler may also use the static type of a value to optimize the storage it needs and the choice of algorithms for operations on the value. In many C compilers the float data type, for example, is represented in 32 bits, in accord with the IEEE specification for single-precision floating point numbers. They will thus use floating-point-specific microprocessor operations on those values (floating-point addition, multiplication, etc.). The depth of type constraints and the manner of their evaluation affect the typing of the language.
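As a minimal sketch of the argument checking just described (the function half and its caller are invented for this illustration, not taken from any particular codebase):

    #include <stdio.h>

    /* The interface of this function states its name and a single int parameter. */
    static int half(int n) {
        return n / 2;
    }

    int main(void) {
        int ok = half(42);        /* argument type matches the declared parameter */
        /* int bad = half("42");     uncommenting this line passes a char pointer
           where an int is expected; a conforming C compiler reports a
           compile-time error or warning for the mismatch */
        printf("%d\n", ok);
        return 0;
    }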
A programming language may further associate an operation with various resolutions for each type, in the case of type polymorphism. Type theory is the study of type systems. The concrete types of some programming languages, such as integers and strings, depend on practical issues of computer architecture, compiler implementation, and language design. == Fundamentals == Formally, type theory studies type systems. A programming language must make it possible to type check programs against the type system, whether at compile time or at runtime, and whether types are manually annotated or automatically inferred. As Mark Manasse concisely put it: The fundamental problem addressed by a type theory is to ensure that programs have meaning. The fundamental problem caused by a type theory is that meaningful programs may not have meanings ascribed to them. The quest for richer type systems results from this tension. Assigning a data type, termed typing, gives meaning to a sequence of bits such as a value in memory or some object such as a variable. The hardware of a general-purpose computer is unable to discriminate between, for example, a memory address and an instruction code, or between a character, an integer, or a floating-point number, because it makes no intrinsic distinction between any of the possible values that a sequence of bits might mean. Associating a sequence of bits with a type conveys that meaning to the programmable hardware to form a symbolic system composed of that hardware and some program. A program associates each value with at least one specific type, but it can also occur that one value is associated with many subtypes. Other entities, such as objects, modules, communication channels, and dependencies can become associated with a type. Even a type can become associated with a type. An implementation of a type system could in theory associate identifications called data type (a type of a value), class (a type of an object), and kind (a type of a type, or metatype). These are the abstractions that typing can go through, on a hierarchy of levels contained in a system. When a programming language evolves a more elaborate type system, it gains a more finely grained rule set than basic type checking, but this comes at a price when the type inferences (and other properties) become undecidable, and when more attention must be paid by the programmer to annotate code or to consider computer-related operations and functioning. It is challenging to find a sufficiently expressive type system that satisfies all programming practices in a type-safe manner. A programming language compiler can also implement a dependent type or an effect system, which enables even more program specifications to be verified by a type checker. Beyond simple value-type pairs, a virtual "region" of code is associated with an "effect" component describing what is being done with what, enabling, for example, the "throwing" of an error report. Thus the symbolic system may be a type and effect system, which endows it with more safety checking than type checking alone. Whether automated by the compiler or specified by a programmer, a type system renders program behavior illegal if it falls outside the type-system rules. Advantages provided by programmer-specified type systems include: Abstraction (or modularity) – Types enable programmers to think at a higher level than the bit or byte, not bothering with low-level implementation. For example, programmers can begin to think of a string as a set of character values instead of as an array of bytes.
Higher still, types enable programmers to think about and express interfaces between two subsystems of any size. This enables more levels of localization, so that the definitions required for interoperability of the subsystems remain consistent when those two subsystems communicate. Documentation – In more expressive type systems, types can serve as a form of documentation clarifying the intent of the programmer. For example, if a programmer declares a function as returning a timestamp type, this documents the function even when the timestamp type is explicitly declared deeper in the code to be an integer type. Advantages provided by compiler-specified type systems include: Optimization – Static type-checking may provide useful compile-time information. For example, if a type requires that a value must align in memory at a multiple of four bytes, the compiler may be able to use more efficient machine instructions. Safety – A type system enables the compiler to detect meaningless or invalid code. For example, we can identify an expression 3 / "Hello, World" as invalid, because the rules do not specify how to divide an integer by a string. Strong typing offers more safety, but cannot guarantee complete type safety. === Type errors === A type error occurs when an operation receives a different type of data than it expected. For example, a type error would happen if a line of code that divides two integers is passed a string of letters instead of an integer. It is an unintended condition which might manifest in multiple stages of a program's development. Thus a facility for detecting the error is needed in the type system. In some languages, such as Haskell, for which type inference is automated, lint might be available to the compiler to aid in the detection of errors. Type safety contributes to program correctness, but might only guarantee correctness at the cost of making the type checking itself an undecidable problem (as in the halting problem). In a type system with automated type checking, a program may prove to run incorrectly yet produce no compiler errors. Division by zero is an unsafe and incorrect operation, but a type checker which only runs at compile time does not scan for division by zero in most languages; that division would surface as a runtime error. To prove the absence of these defects, other kinds of formal methods, collectively known as program analyses, are in common use. Alternatively, a sufficiently expressive type system, such as in dependently typed languages, can prevent these kinds of errors (for example, by expressing the type of non-zero numbers). In addition, software testing is an empirical method for finding errors that such a type checker would not detect. == Type checking == The process of verifying and enforcing the constraints of types—type checking—may occur at compile time (a static check) or at run-time (a dynamic check). If a language specification strongly enforces its typing rules, more or less allowing only those automatic type conversions that do not lose information, one can refer to the language as strongly typed; if not, as weakly typed. The terms are not usually used in a strict sense. === Static type checking === Static type checking is the process of verifying the type safety of a program based on analysis of a program's text (source code). If a program passes a static type checker, then the program is guaranteed to satisfy some set of type safety properties for all possible inputs.
Static type checking can be considered a limited form of program verification (see type safety), and in a type-safe language, can also be considered an optimization. If a compiler can prove that a program is well-typed, then it does not need to emit dynamic safety checks, allowing the resulting compiled binary to run faster and to be smaller. Static type checking for Turing-complete languages is inherently conservative. That is, if a type system is both sound (meaning that it rejects all incorrect programs) and decidable (meaning that it is possible to write an algorithm that determines whether a program is well-typed), then it must be incomplete (meaning that there are correct programs that are also rejected, even though they do not encounter runtime errors). For example, consider a program containing the code:
if <complex test> then <do something> else <signal that there is a type error>
Even if the expression <complex test> always evaluates to true at run-time, most type checkers will reject the program as ill-typed, because it is difficult (if not impossible) for a static analyzer to determine that the else branch will not be taken. Consequently, a static type checker will quickly detect type errors in rarely used code paths. Without static type checking, even code coverage tests with 100% coverage may be unable to find such type errors. The tests may fail to detect such type errors, because the combination of all places where values are created and all places where a certain value is used must be taken into account. A number of useful and common programming language features cannot be checked statically, such as downcasting. Thus, many languages will have both static and dynamic type checking; the static type checker verifies what it can, and dynamic checks verify the rest. Many languages with static type checking provide a way to bypass the type checker. Some languages allow programmers to choose between static and dynamic type safety. For example, historically C# declares variables statically, but C# 4.0 introduces the dynamic keyword, which is used to declare variables to be checked dynamically at runtime. Other languages allow writing code that is not type-safe; for example, in C, programmers can freely cast a value between any two types that have the same size, effectively subverting the type concept. === Dynamic type checking and runtime type information === Dynamic type checking is the process of verifying the type safety of a program at runtime. Implementations of dynamically type-checked languages generally associate each runtime object with a type tag (i.e., a reference to a type) containing its type information. This runtime type information (RTTI) can also be used to implement dynamic dispatch, late binding, downcasting, reflective programming (reflection), and similar features. Most type-safe languages include some form of dynamic type checking, even if they also have a static type checker. The reason for this is that many useful features or properties are difficult or impossible to verify statically. For example, suppose that a program defines two types, A and B, where B is a subtype of A. If the program tries to convert a value of type A to type B, which is known as downcasting, then the operation is legal only if the value being converted is actually a value of type B. Thus, a dynamic check is needed to verify that the operation is safe. This requirement is one of the criticisms of downcasting.
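The following C sketch suggests how such a type tag can support a checked downcast; the animal and dog structures and the as_dog helper are invented for illustration and are not part of any standard API:

    #include <stdio.h>

    /* A hand-rolled type tag, mimicking the runtime type information that
       dynamically checked languages attach to every object. */
    enum tag { TAG_ANIMAL, TAG_DOG };

    struct animal { enum tag tag; const char *name; };
    struct dog    { struct animal base; int good_boy; };

    /* A checked downcast: legal only if the value really is a dog. */
    static struct dog *as_dog(struct animal *a) {
        return (a->tag == TAG_DOG) ? (struct dog *)a : NULL;
    }

    int main(void) {
        struct dog d = { { TAG_DOG, "Rex" }, 1 };
        struct animal plain = { TAG_ANIMAL, "generic" };

        struct dog *p = as_dog(&d.base);   /* succeeds: the tag matches  */
        struct dog *q = as_dog(&plain);    /* fails safely: returns NULL */

        printf("%s, %s\n", p ? p->base.name : "(none)",
                           q ? "unexpected" : "downcast rejected");
        return 0;
    }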
By definition, dynamic type checking may cause a program to fail at runtime. In some programming languages, it is possible to anticipate and recover from these failures. In others, type-checking errors are considered fatal. Programming languages that include dynamic type checking but not static type checking are often called "dynamically typed programming languages". === Combining static and dynamic type checking === Certain languages allow both static and dynamic typing. For example, Java and some other ostensibly statically typed languages support downcasting types to their subtypes, querying an object to discover its dynamic type, and other type operations that depend on runtime type information. Another example is C++ RTTI. More generally, most programming languages include mechanisms for dispatching over different 'kinds' of data, such as disjoint unions, runtime polymorphism, and variant types. Even when not interacting with type annotations or type checking, such mechanisms are materially similar to dynamic typing implementations. See programming language for more discussion of the interactions between static and dynamic typing. Objects in object-oriented languages are usually accessed by a reference whose static target type (or manifest type) is equal to either the object's run-time type (its latent type) or a supertype thereof. This is conformant with the Liskov substitution principle, which states that all operations performed on an instance of a given type can also be performed on an instance of a subtype. This concept is also known as subsumption or subtype polymorphism. In some languages subtypes may also possess covariant or contravariant return types and argument types respectively. Certain languages, for example Clojure, Common Lisp, or Cython, are dynamically type checked by default, but allow programs to opt into static type checking by providing optional annotations. One reason to use such hints would be to optimize the performance of critical sections of a program. This is formalized by gradual typing. The programming environment DrRacket, a pedagogic environment based on Lisp and a precursor of the language Racket, is also soft-typed. Conversely, as of version 4.0, the C# language provides a way to indicate that a variable should not be statically type checked. A variable whose type is dynamic will not be subject to static type checking. Instead, the program relies on runtime type information to determine how the variable may be used. In Rust, the dyn std::any::Any type provides dynamic typing of 'static types. === Static and dynamic type checking in practice === The choice between static and dynamic typing requires certain trade-offs. Static typing can find type errors reliably at compile time, which increases the reliability of the delivered program. However, programmers disagree over how commonly type errors occur, resulting in further disagreement over what proportion of the bugs that are coded would be caught by appropriately representing the designed types in code. Static typing advocates believe programs are more reliable when they have been well type-checked, whereas dynamic-typing advocates point to distributed code that has proven reliable and to small bug databases. The value of static typing increases as the strength of the type system is increased.
Advocates of dependent typing, implemented in languages such as Dependent ML and Epigram, have suggested that almost all bugs can be considered type errors, if the types used in a program are properly declared by the programmer or correctly inferred by the compiler. Static typing usually results in compiled code that executes faster. When the compiler knows the exact data types that are in use (which is necessary for static verification, either through declaration or inference), it can produce optimized machine code. Some dynamically typed languages such as Common Lisp allow optional type declarations for optimization for this reason. By contrast, dynamic typing may allow compilers to run faster and interpreters to dynamically load new code, because changes to source code in dynamically typed languages may result in less checking to perform and less code to revisit. This too may reduce the edit-compile-test-debug cycle. Statically typed languages that lack type inference (such as C and Java prior to version 10) require that programmers declare the types that a method or function must use. This can serve as added program documentation that is active and dynamic, rather than static. Because the compiler checks this documentation, it cannot drift out of synchrony with the code or be ignored by programmers. However, a language can be statically typed without requiring type declarations (examples include Haskell, Scala, OCaml, F#, Swift, and to a lesser extent C# and C++), so explicit type declaration is not a necessary requirement for static typing in all languages. Dynamic typing allows constructs that some (simple) static type checking would reject as illegal. For example, eval functions, which execute arbitrary data as code, become possible. An eval function is possible with static typing, but requires advanced uses of algebraic data types. Further, dynamic typing better accommodates transitional code and prototyping, such as allowing a placeholder data structure (mock object) to be transparently used in place of a full data structure (usually for the purposes of experimentation and testing). Dynamic typing typically allows duck typing (which enables easier code reuse). Many languages with static typing also feature duck typing or other mechanisms like generic programming that also enable easier code reuse. Dynamic typing typically makes metaprogramming easier to use. For example, C++ templates are typically more cumbersome to write than the equivalent Ruby or Python code since C++ has stronger rules regarding type definitions (for both functions and variables). This forces a developer to write more boilerplate code for a template than a Python developer would need to. More advanced run-time constructs such as metaclasses and introspection are often harder to use in statically typed languages. In some languages, such features may also be used, e.g., to generate new types and behaviors on the fly, based on run-time data. Such advanced constructs are often provided by dynamic programming languages; many of these are dynamically typed, although dynamic typing need not be related to dynamic programming languages. === Strong and weak type systems === Languages are often colloquially referred to as strongly typed or weakly typed. In fact, there is no universally accepted definition of what these terms mean. In general, there are more precise terms to represent the differences between type systems that lead people to call them "strong" or "weak".
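For instance, two of the more precise behaviours that often lead people to call C "weakly typed" are implicit lossy conversion and reinterpretation of a value's bit pattern. The following sketch (variable names are illustrative, and it assumes, as on most platforms, that float and unsigned int are both 32 bits) shows both:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* Implicit conversion: C silently narrows a double to an int. */
        int n = 3.75;              /* n becomes 3, usually with at most a warning */

        /* Reinterpretation: the same 32 bits viewed as a different type. */
        float f = 1.0f;
        unsigned u;
        memcpy(&u, &f, sizeof u);  /* a well-defined way to inspect the bit pattern */

        printf("%d 0x%08x\n", n, u);   /* prints: 3 0x3f800000 */
        return 0;
    }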
=== Type safety and memory safety === A third way of categorizing the type system of a programming language is by the safety of typed operations and conversions. Computer scientists use the term type-safe language to describe languages that do not allow operations or conversions that violate the rules of the type system. Computer scientists use the term memory-safe language (or just safe language) to describe languages that do not allow programs to access memory that has not been assigned for their use. For example, a memory-safe language will check array bounds, or else statically guarantee (i.e., at compile time, before execution) that array accesses outside the array boundaries will cause compile-time and perhaps runtime errors. Consider the following program, in a hypothetical language that is both type-safe and memory-safe and that implicitly converts strings to numbers where needed (the exact syntax is illustrative):
var x := 5;
var y := "37";
var z := x + y;
In this example, the variable z will have the value 42. Although this may not be what the programmer anticipated, it is a well-defined result. If y were a different string, one that could not be converted to a number (e.g. "Hello World"), the result would be well-defined as well. Note that a program can be type-safe or memory-safe and still crash on an invalid operation. This is the case for languages where the type system is not sufficiently advanced to precisely specify the validity of operations on all possible operands. But if a program encounters an operation that is not type-safe, terminating the program is often the only option. Now consider a similar example in C:
int x = 5;
char y[] = "37";
char* z = x + y;
In this example z will point to a memory address five characters beyond y, equivalent to three characters after the terminating zero character of the string pointed to by y. This is memory that the program is not expected to access. In C terms this is simply undefined behaviour, and the program may do anything; with a simple compiler it might actually print whatever byte is stored after the string "37". As this example shows, C is not memory-safe. As arbitrary data was assumed to be a character, it is also not a type-safe language. In general, type safety and memory safety go hand in hand. For example, a language that supports pointer arithmetic and number-to-pointer conversions (like C) is neither memory-safe nor type-safe, because it allows arbitrary memory to be accessed as if it were valid memory of any type. === Variable levels of type checking === Some languages allow different levels of checking to apply to different regions of code. Examples include:
The use strict directive in JavaScript and Perl applies stronger checking.
The declare(strict_types=1) directive in PHP, applied on a per-file basis, requires that a value passed for a parameter be of exactly the declared type; otherwise a TypeError is thrown.
The Option Strict On directive in VB.NET allows the compiler to require a conversion between objects.
Additional tools such as lint and IBM Rational Purify can also be used to achieve a higher level of strictness. === Optional type systems === It has been proposed, chiefly by Gilad Bracha, that the choice of type system be made independent of the choice of language; that a type system should be a module that can be plugged into a language as needed. He believes this is advantageous, because what he calls mandatory type systems make languages less expressive and code more fragile. The requirement that the type system does not affect the semantics of the language is difficult to fulfill. Optional typing is related to, but distinct from, gradual typing.
== Polymorphism and types ==
The term polymorphism refers to the ability of code (especially, functions or classes) to act on values of multiple types, or to the ability of different instances of the same data structure to contain elements of different types. Type systems that allow polymorphism generally do so in order to improve the potential for code re-use: in a language with polymorphism, programmers need only implement a data structure such as a list or an associative array once, rather than once for each type of element with which they plan to use it. For this reason computer scientists sometimes call the use of certain forms of polymorphism generic programming. The type-theoretic foundations of polymorphism are closely related to those of abstraction, modularity and (in some cases) subtyping.

== Specialized type systems ==
Many type systems have been created that are specialized for use in certain environments with certain types of data, or for out-of-band static program analysis. Frequently, these are based on ideas from formal type theory and are only available as part of prototype research systems. In what follows, the names M, N, O range over terms and the names σ, τ range over types. The following notation will be used: M : σ means that M has type σ; M(N) is the application of M to N; τ[α := σ] (resp. τ[x := N]) describes the type which results from replacing all occurrences of the type variable α (resp. term variable x) in τ by the type σ (resp. the term N).

=== Dependent types ===
Dependent types are based on the idea of using scalars or values to more precisely describe the type of some other value. For example, matrix(3, 3) might be the type of a 3 × 3 matrix. We can then define typing rules such as the following rule for matrix multiplication:

matrix_multiply : matrix(k, m) × matrix(m, n) → matrix(k, n)

where k, m, n are arbitrary positive integer values. A variant of ML called Dependent ML has been created based on this type system, but because type checking for conventional dependent types is undecidable, not all programs using them can be type-checked without some kind of limits. Dependent ML limits the sort of equality it can decide to Presburger arithmetic. Other languages such as Epigram make the value of all expressions in the language decidable so that type checking can be decidable. However, proving that arbitrary programs terminate is in general undecidable, so many programs require hand-written annotations that may be very non-trivial. As this impedes the development process, many language implementations provide an easy way out in the form of an option to disable this condition. This, however, comes at the cost of making the type-checker liable to run in an infinite loop when fed programs that do not type-check, in which case the compilation never completes.
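Mainstream languages lack full dependent types, but the flavor of the matrix rule above can be approximated. The following TypeScript sketch (invented names; not a real dependent type system) tracks dimensions as number literal types, so the shared inner dimension is enforced statically:

// Dimensions are carried as number literal types, a rough stand-in
// for the values used by true dependent types.
interface Matrix<R extends number, C extends number> {
  rows: R;
  cols: C;
  data: number[][];
}

// matrix(k, m) × matrix(m, n) → matrix(k, n): the type variable M
// must be the same in both arguments, mirroring the dependent rule.
function multiply<K extends number, M extends number, N extends number>(
  a: Matrix<K, M>,
  b: Matrix<M, N>
): Matrix<K, N> {
  // actual multiplication elided; only the typing rule matters here
  return { rows: a.rows, cols: b.cols, data: [] };
}

declare const a: Matrix<3, 2>;
declare const b: Matrix<2, 4>;
declare const c: Matrix<3, 3>;
const ok = multiply(a, b);   // accepted; result has type Matrix<3, 4>
// multiply<3, 2, 4>(a, c);  // rejected: c is Matrix<3, 3>, not Matrix<2, 4>

Note that the rejected call pins the type arguments explicitly; with plain inference the checker would unify the mismatched literal dimensions into a union and accept the call, which is one of the ways this approximation falls short of true dependent typing.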
=== Linear types ===
Linear types, based on the theory of linear logic, and closely related to uniqueness types, are types assigned to values having the property that they have one and only one reference to them at all times. These are valuable for describing large immutable values such as files, strings, and so on, because any operation that simultaneously destroys a linear object and creates a similar object (such as str = str + "a") can be optimized "under the hood" into an in-place mutation. Normally this is not possible, as such mutations could cause side effects on parts of the program holding other references to the object, violating referential transparency. They are also used in the prototype operating system Singularity for interprocess communication, statically ensuring that processes cannot share objects in shared memory in order to prevent race conditions. The Clean language (a Haskell-like language) uses this type system in order to gain a lot of speed (compared to performing a deep copy) while remaining safe.

=== Intersection types ===
Intersection types are types describing values that belong to both of two other given types with overlapping value sets. For example, in most implementations of C the signed char has range -128 to 127 and the unsigned char has range 0 to 255, so the intersection type of these two types would have range 0 to 127. Such an intersection type could be safely passed into functions expecting either signed or unsigned chars, because it is compatible with both types. Intersection types are useful for describing overloaded function types: for example, if "int → int" is the type of functions taking an integer argument and returning an integer, and "float → float" is the type of functions taking a float argument and returning a float, then the intersection of these two types can be used to describe functions that do one or the other, based on what type of input they are given. Such a function could be passed into another function expecting an "int → int" function safely; it simply would not use the "float → float" functionality. In a subclassing hierarchy, the intersection of a type and an ancestor type (such as its parent) is the most derived type. The intersection of sibling types is empty. The Forsythe language includes a general implementation of intersection types. A restricted form is refinement types.

=== Union types ===
Union types are types describing values that belong to either of two types. For example, in C, the signed char has a -128 to 127 range, and the unsigned char has a 0 to 255 range, so the union of these two types would have an overall "virtual" range of -128 to 255 that may be used partially depending on which union member is accessed. Any function handling this union type would have to deal with integers in this complete range. More generally, the only valid operations on a union type are operations that are valid on both types being unioned. C's "union" concept is similar to union types, but is not type-safe, as it permits operations that are valid on either type, rather than both. Union types are important in program analysis, where they are used to represent symbolic values whose exact nature (e.g., value or type) is not known. In a subclassing hierarchy, the union of a type and an ancestor type (such as its parent) is the ancestor type. The union of sibling types is a subtype of their common ancestor (that is, all operations permitted on their common ancestor are permitted on the union type, but they may also have other valid operations in common).
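Both constructs exist directly in TypeScript, whose | and & type operators behave as described above; a small sketch with invented names:

// A union type: only operations valid on BOTH alternatives are allowed.
function describe(id: string | number): string {
  // id.toUpperCase() would be rejected: not valid on number
  return id.toString(); // toString() exists on both members
}

// An intersection type: the value must satisfy both types at once,
// so the members of both are available.
interface Named { name: string; }
interface Aged { age: number; }

function show(p: Named & Aged): string {
  return p.name + " is " + p.age;
}

describe("abc42");
describe(7);
show({ name: "Ada", age: 36 });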
=== Existential types ===
Existential types are frequently used in connection with record types to represent modules and abstract data types, due to their ability to separate implementation from interface. For example, the type "T = ∃X { a: X; f: (X → int); }" describes a module interface that has a data member named a of type X and a function named f that takes a parameter of the same type X and returns an integer. This could be implemented in different ways; for example:

intT = { a: int; f: (int → int); }
floatT = { a: float; f: (float → int); }

These types are both subtypes of the more general existential type T and correspond to concrete implementation types, so any value of one of these types is a value of type T. Given a value "t" of type "T", we know that "t.f(t.a)" is well-typed, regardless of what the abstract type X is. This gives flexibility for choosing types suited to a particular implementation, while clients that use only values of the interface type—the existential type—are isolated from these choices. In general it is impossible for the typechecker to infer which existential type a given module belongs to. In the above example intT { a: int; f: (int → int); } could also have the type ∃X { a: X; f: (int → int); }. The simplest solution is to annotate every module with its intended type, for example:

intT = { a: int; f: (int → int); } as ∃X { a: X; f: (X → int); }

Although abstract data types and modules had been implemented in programming languages for quite some time, it was not until 1988 that John C. Mitchell and Gordon Plotkin established the formal theory under the slogan: "Abstract [data] types have existential type". The theory is a second-order typed lambda calculus similar to System F, but with existential instead of universal quantification.

=== Gradual typing ===
In a type system with gradual typing, variables may be assigned a type either at compile-time (which is static typing), or at run-time (which is dynamic typing). This allows software developers to choose either type paradigm as appropriate, from within a single language. Gradual typing uses a special type named dynamic to represent statically unknown types; it replaces the notion of type equality with a new relation called consistency that relates the dynamic type to every other type. The consistency relation is symmetric but not transitive.

== Explicit or implicit declaration and inference ==
Many static type systems, such as those of C and Java, require type declarations: the programmer must explicitly associate each variable with a specific type. Others, such as Haskell's, perform type inference: the compiler draws conclusions about the types of variables based on how programmers use those variables. For example, given a function f(x, y) that adds x and y together, the compiler can infer that x and y must be numbers—since addition is only defined for numbers. Thus, any call to f elsewhere in the program that specifies a non-numeric type (such as a string or list) as an argument would signal an error. Numerical and string constants and expressions in code can and often do imply type in a particular context. For example, an expression 3.14 might imply a type of floating-point, while [1, 2, 3] might imply a list of integers—typically an array.
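As a small illustration, the following TypeScript sketch (invented names) leaves every binding unannotated except the function parameters. Note that Hindley–Milner systems such as Haskell's go further and infer parameter types from their use, whereas TypeScript requires parameters to be declared:

// No annotations on the bindings below: the compiler infers each type.
let pi = 3.14;                // inferred as number
const xs = [1, 2, 3];         // inferred as number[]

function add(x: number, y: number) {
  return x + y;               // return type inferred as number
}

let sum = add(pi, xs[0]);     // inferred as number
// add("one", 2);             // rejected: string is not assignable to number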
Type inference is possible in general only if it is computable in the type system in question. Moreover, even if inference is not computable in general for a given type system, inference is often possible for a large subset of real-world programs. Haskell's type system, a version of Hindley–Milner, is a restriction of System Fω to so-called rank-1 polymorphic types, in which type inference is computable. Most Haskell compilers allow arbitrary-rank polymorphism as an extension, but this makes type inference not computable. (Type checking is decidable, however, and rank-1 programs still have type inference; higher-rank polymorphic programs are rejected unless given explicit type annotations.)

== Decision problems ==
A type system that assigns types to terms in type environments using typing rules is naturally associated with the decision problems of type checking, typability, and type inhabitation:

Type checking: given a type environment Γ, a term e, and a type τ, decide whether the term e can be assigned the type τ in the type environment Γ.
Typability: given a term e, decide whether there exists a type environment Γ and a type τ such that the term e can be assigned the type τ in the type environment Γ.
Type inhabitation: given a type environment Γ and a type τ, decide whether there exists a term e that can be assigned the type τ in the type environment Γ.

== Unified type system ==
Some languages, like C# or Scala, have a unified type system. This means that all C# types, including primitive types, inherit from a single root object: every type in C# inherits from the Object class. Some languages, like Java and Raku, have a root type but also have primitive types that are not objects. Java provides wrapper object types that exist together with the primitive types, so developers can use either the wrapper object types or the simpler non-object primitive types. Raku automatically converts primitive types to objects when their methods are accessed.

== Compatibility: equivalence and subtyping ==
A type checker for a statically typed language must verify that the type of any expression is consistent with the type expected by the context in which that expression appears. For example, in an assignment statement of the form x := e, the inferred type of the expression e must be consistent with the declared or inferred type of the variable x. This notion of consistency, called compatibility, is specific to each programming language. If the type of e and the type of x are the same, and assignment is allowed for that type, then this is a valid expression. Thus, in the simplest type systems, the question of whether two types are compatible reduces to that of whether they are equal (or equivalent). Different languages, however, have different criteria for when two type expressions are understood to denote the same type. These different equational theories of types vary widely, two extreme cases being structural type systems, in which any two types that describe values with the same structure are equivalent, and nominative type systems, in which no two syntactically distinct type expressions denote the same type (i.e., types must have the same "name" in order to be equal).
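The structural case is easy to demonstrate in TypeScript, whose type system is structural; the type names below are invented:

// Two syntactically distinct type names with the same structure.
type Point  = { x: number; y: number };
type Vector = { x: number; y: number };

const p: Point = { x: 1, y: 2 };
const v: Vector = p; // accepted: structural equivalence ignores the names

// In a nominative system (e.g. Java's classes), two distinct
// declarations would be incompatible even with identical members.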
In languages with subtyping, the compatibility relation is more complex: if B is a subtype of A, then a value of type B can be used in a context where one of type A is expected (covariant), even if the reverse is not true. Like equivalence, the subtype relation is defined differently for each programming language, with many variations possible. The presence of parametric or ad hoc polymorphism in a language may also have implications for type compatibility.

== See also ==
Comparison of type systems
Covariance and contravariance (computer science)
Polymorphism in object-oriented programming
Type signature
Type theory

== Notes ==

== References ==

== Further reading ==
Cardelli, Luca; Wegner, Peter (December 1985). "On Understanding Types, Data Abstraction, and Polymorphism" (PDF). ACM Computing Surveys. 17 (4): 471–523. CiteSeerX 10.1.1.117.695. doi:10.1145/6041.6042. S2CID 2921816.
Pierce, Benjamin C. (2002). Types and Programming Languages. MIT Press. ISBN 978-0-262-16209-8.
Cardelli, Luca (2004). "Type systems" (PDF). In Allen B. Tucker (ed.). CRC Handbook of Computer Science and Engineering (2nd ed.). CRC Press. ISBN 978-1584883609.
Tratt, Laurence (July 2009). "5. Dynamically Typed Languages". Advances in Computers. Vol. 77. Elsevier. pp. 149–184. doi:10.1016/S0065-2458(09)01205-4. ISBN 978-0-12-374812-6.

== External links ==
Media related to Type systems at Wikimedia Commons
Smith, Chris (2011). "What to Know Before Debating Type Systems".
Wikipedia/Type_systems
In software development, the V-model represents a development process that may be considered an extension of the waterfall model and is an example of the more general V-model. Instead of moving down linearly, the process steps are bent upwards after the coding phase, to form the typical V shape. The V-model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing. The horizontal and vertical axes represent time or project completeness (left-to-right) and level of abstraction (coarsest-grain abstraction uppermost), respectively.

== Project definition phases ==

=== Requirements analysis ===
In the requirements analysis phase, the first step in the verification process, the requirements of the system are collected by analyzing the needs of the user(s). This phase is concerned with establishing what the ideal system has to perform. However, it does not determine how the software will be designed or built. Usually, the users are interviewed and a document called the user requirements document is generated. The user requirements document will typically describe the system's functional, interface, performance, data, security, etc. requirements as expected by the user. It is used by business analysts to communicate their understanding of the system to the users. The users carefully review this document, as it will serve as the guideline for the system designers in the system design phase. The user acceptance tests are designed in this phase. See also Functional requirements. There are different methods for gathering requirements, drawn from both soft and hard methodologies, including: interviews, questionnaires, document analysis, observation, throw-away prototypes, use cases, and static and dynamic views with users.

=== System design ===
Systems design is the phase where system engineers analyze and understand the business of the proposed system by studying the user requirements document. They figure out possibilities and techniques by which the user requirements can be implemented. If any of the requirements are not feasible, the user is informed of the issue; a resolution is found and the user requirements document is edited accordingly. The software specification document, which serves as a blueprint for the development phase, is generated. This document contains the general system organization, menu structures, data structures, etc. It may also hold example business scenarios, sample windows, and reports to aid understanding. Other technical documentation, like entity diagrams and data dictionaries, will also be produced in this phase. The documents for system testing are prepared.

=== Architecture design ===
The phase of the design of computer architecture and software architecture can also be referred to as high-level design. The baseline in selecting the architecture is that it should realize all that the requirements document specifies. The high-level design typically consists of the list of modules, brief functionality of each module, their interface relationships, dependencies, database tables, architecture diagrams, technology details, etc. The integration testing design is carried out in this phase.

=== Module design ===
The module design phase can also be referred to as low-level design. The designed system is broken up into smaller units or modules and each of them is explained so that the programmer can start coding directly.
The low-level design document or program specifications will contain a detailed functional logic of the module, in pseudocode, covering:

database tables, with all elements, including their type and size
all interface details with complete API references
all dependency issues
error message listings
complete inputs and outputs for a module

The unit test design is developed in this stage.

== Validation phases ==
In the V-model, each stage of the design phase has a corresponding stage in the validation phase. The following are the typical phases of validation in the V-model, though they may be known by other names.

=== Unit testing ===
In the V-model, Unit Test Plans (UTPs) are developed during the module design phase. These UTPs are executed to eliminate bugs at the code level or unit level. A unit is the smallest entity that can independently exist, e.g. a program module. Unit testing verifies that the smallest entity can function correctly when isolated from the rest of the code/units.

=== Integration testing ===
Integration Test Plans are developed during the architecture design phase. These tests verify that units created and tested independently can coexist and communicate among themselves. Test results are shared with the customer's team.

=== System testing ===
System Test Plans are developed during the system design phase. Unlike Unit and Integration Test Plans, System Test Plans are composed by the client's business team. System testing ensures that expectations from the application developed are met. The whole application is tested for its functionality, interdependency, and communication. System testing verifies that functional and non-functional requirements have been met. Load and performance testing, stress testing, regression testing, etc., are subsets of system testing.

=== User acceptance testing ===
User Acceptance Test (UAT) Plans are developed during the requirements analysis phase. Test plans are composed by business users. UAT is performed in a user environment that resembles the production environment, using realistic data. UAT verifies that the delivered system meets the user's requirements and that the system is ready for use in real time.

== Criticism ==
The V-model has been criticized by Agile advocates and others as an inadequate model of software development for numerous reasons. Criticisms include:

It is too simple to accurately reflect the software development process, and can lead managers into a false sense of security. The V-model reflects a project management view of software development and fits the needs of project managers, accountants and lawyers rather than software developers or users.
Although it is easily understood by novices, that early understanding is useful only if the novice goes on to acquire a deeper understanding of the development process and how the V-model must be adapted and extended in practice. If practitioners persist with their naive view of the V-model they will have great difficulty applying it successfully.
It is inflexible and encourages a rigid and linear view of software development and has no inherent ability to respond to change.
It provides only a slight variant on the waterfall model and is therefore subject to the same criticisms as that model. It provides greater emphasis on testing, and particularly the importance of early test planning.
However, a common practical criticism of the V-model is that it leads to testing being squeezed into tight windows at the end of development when earlier stages have overrun but the implementation date remains fixed.
It is consistent with, and therefore implicitly encourages, inefficient and ineffective approaches to testing. It implicitly promotes writing test scripts in advance rather than exploratory testing; it encourages testers to look for what they expect to find, rather than discover what is truly there. It also encourages a rigid link between the equivalent levels of either leg (e.g. user acceptance test plans being derived from user requirements documents), rather than encouraging testers to select the most effective and efficient way to plan and execute testing.
It lacks coherence and precision. There is widespread confusion about what exactly the V-model is. If one boils it down to those elements that most people would agree upon, it becomes a trite and unhelpful representation of software development. Disagreement about the merits of the V-model often reflects a lack of shared understanding of its definition.

== Current state ==
Supporters of the V-model argue that it has evolved and supports flexibility and agility throughout the development process. They argue that in addition to being a highly disciplined approach, it promotes meticulous design, development, and documentation necessary to build stable software products. Lately, it is being adopted by the medical device industry.

== See also ==
Product lifecycle management
Systems development life cycle

== References ==

== Further reading ==
Roger S. Pressman: Software Engineering: A Practitioner's Approach, The McGraw-Hill Companies, ISBN 0-07-301933-X
Mark Hoffman & Ted Beaumont: Application Development: Managing the Project Life Cycle, Mc Press, ISBN 1-883884-45-4
Boris Beizer: Software Testing Techniques. Second Edition, International Thomson Computer Press, 1990, ISBN 1-85032-880-3
Wikipedia/V-model_(software_development)
In computing, a compiler is a computer program that translates computer code written in one programming language (the source language) into another language (the target language). The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a low-level programming language (e.g. assembly language, object code, or machine code) to create an executable program. There are many different types of compilers which produce output in different useful forms. A cross-compiler produces code for a different CPU or operating system than the one on which the cross-compiler itself runs. A bootstrap compiler is often a temporary compiler, used for compiling a more permanent or better optimised compiler for a language. Related software includes decompilers, programs that translate from low-level languages to higher level ones; programs that translate between high-level languages, usually called source-to-source compilers or transpilers; language rewriters, usually programs that translate the form of expressions without a change of language; and compiler-compilers, compilers that produce compilers (or parts of them), often in a generic and reusable way so as to be able to produce many differing compilers. A compiler is likely to perform some or all of the following operations, often called phases: preprocessing, lexical analysis, parsing, semantic analysis (syntax-directed translation), conversion of input programs to an intermediate representation, code optimization and machine-specific code generation. Compilers generally implement these phases as modular components, promoting efficient design and correctness of transformations of source input to target output. Program faults caused by incorrect compiler behavior can be very difficult to track down and work around; therefore, compiler implementers invest significant effort to ensure compiler correctness.

== Comparison with interpreter ==
With respect to making source code runnable, an interpreter provides a similar function as a compiler, but via a different mechanism. An interpreter executes code without converting it to machine code. Some interpreters execute source code while others execute an intermediate form such as bytecode. A program compiled to native code tends to run faster than if interpreted. Environments with a bytecode intermediate form tend toward intermediate speed. Just-in-time compilation allows for native execution speed with a one-time startup processing time cost. Low-level programming languages, such as assembly and C, are typically compiled, especially when speed is a significant concern, rather than cross-platform support. For such languages, there are more one-to-one correspondences between the source code and the resulting machine code, making it easier for programmers to control the use of hardware. In theory, a programming language can be used via either a compiler or an interpreter, but in practice, each language tends to be used with only one or the other. Nonetheless, it is possible to write a compiler for a language that is commonly interpreted. For example, Common Lisp can be compiled to Java bytecode (then interpreted by the Java virtual machine), C code (then compiled to native machine code), or directly to native code.

== History ==
Theoretical computing concepts developed by scientists, mathematicians, and engineers formed the basis of modern digital computing development during World War II.
Primitive binary languages evolved because digital devices only understand ones and zeros and the circuit patterns in the underlying machine architecture. In the late 1940s, assembly languages were created to offer a more workable abstraction of the computer architectures. The limited memory capacity of early computers led to substantial technical challenges when the first compilers were designed. Therefore, the compilation process needed to be divided into several small programs. The front end programs produce the analysis products used by the back end programs to generate target code. As computer technology provided more resources, compiler designs could align better with the compilation process.

It is usually more productive for a programmer to use a high-level language, so the development of high-level languages followed naturally from the capabilities offered by digital computers. High-level languages are formal languages that are strictly defined by their syntax and semantics, which form the high-level language architecture. Elements of these formal languages include: an alphabet, any finite set of symbols; a string, a finite sequence of symbols; and a language, any set of strings on an alphabet. The sentences in a language may be defined by a set of rules called a grammar. Backus–Naur form (BNF) describes the syntax of "sentences" of a language. It was developed by John Backus and used for the syntax of Algol 60. The ideas derive from the context-free grammar concepts of linguist Noam Chomsky. "BNF and its extensions have become standard tools for describing the syntax of programming notations. In many cases, parts of compilers are generated automatically from a BNF description."

Between 1942 and 1945, Konrad Zuse designed the first (algorithmic) programming language for computers, called Plankalkül ("Plan Calculus"). Zuse also envisioned a Planfertigungsgerät ("Plan assembly device") to automatically translate the mathematical formulation of a program into machine-readable punched film stock. While no actual implementation occurred until the 1970s, it presented concepts later seen in APL, designed by Ken Iverson in the late 1950s. APL is a language for mathematical computations. Between 1949 and 1951, Heinz Rutishauser proposed Superplan, a high-level language and automatic translator. His ideas were later refined by Friedrich L. Bauer and Klaus Samelson.

High-level language design during the formative years of digital computing provided useful programming tools for a variety of applications:

FORTRAN (Formula Translation), for engineering and science applications, is considered to be one of the first actually implemented high-level languages, and its compiler is often considered the first optimizing compiler.
COBOL (Common Business-Oriented Language) evolved from A-0 and FLOW-MATIC to become the dominant high-level language for business applications.
LISP (List Processor), for symbolic computation.

Compiler technology evolved from the need for a strictly defined transformation of the high-level source program into a low-level target program for the digital computer. The compiler could be viewed as a front end to deal with the analysis of the source code and a back end to synthesize the analysis into the target code. Optimization between the front end and back end could produce more efficient target code.
Some early milestones in the development of compiler technology:

May 1952: Grace Hopper's team at Remington Rand wrote the compiler for the A-0 programming language (and coined the term compiler to describe it), although the A-0 compiler functioned more as a loader or linker than the modern notion of a full compiler.
1952, before September: An Autocode compiler developed by Alick Glennie for the Manchester Mark I computer at the University of Manchester is considered by some to be the first compiled programming language.
1954–1957: A team led by John Backus at IBM developed FORTRAN, which is usually considered the first high-level language. In 1957, they completed a FORTRAN compiler that is generally credited as having introduced the first unambiguously complete compiler.
1959: The Conference on Data Systems Language (CODASYL) initiated development of COBOL. The COBOL design drew on A-0 and FLOW-MATIC. By the early 1960s COBOL was compiled on multiple architectures.
1958–1960: Algol 58 was the precursor to ALGOL 60. It introduced code blocks, a key advance in the rise of structured programming. ALGOL 60 was the first language to implement nested function definitions with lexical scope. It included recursion. Its syntax was defined using BNF. ALGOL 60 inspired many languages that followed it. Tony Hoare remarked: "... it was not only an improvement on its predecessors but also on nearly all its successors."
1958–1962: John McCarthy at MIT designed LISP. The symbol processing capabilities provided useful features for artificial intelligence research. In 1962, the LISP 1.5 release included an interpreter written by Stephen Russell and Daniel J. Edwards, and a compiler and assembler written by Tim Hart and Mike Levin.

Early operating systems and software were written in assembly language. In the 1960s and early 1970s, the use of high-level languages for system programming was still controversial due to resource limitations. However, several research and industry efforts began the shift toward high-level systems programming languages, for example, BCPL, BLISS, B, and C.

BCPL (Basic Combined Programming Language), designed in 1966 by Martin Richards at the University of Cambridge, was originally developed as a compiler writing tool. Several compilers have been implemented; Richards' book provides insights into the language and its compiler. BCPL was not only an influential systems programming language that is still used in research but also provided a basis for the design of the B and C languages.

BLISS (Basic Language for Implementation of System Software) was developed for a Digital Equipment Corporation (DEC) PDP-10 computer by W. A. Wulf's Carnegie Mellon University (CMU) research team. The CMU team went on to develop the BLISS-11 compiler one year later in 1970.

Multics (Multiplexed Information and Computing Service), a time-sharing operating system project, involved MIT, Bell Labs, General Electric (later Honeywell) and was led by Fernando Corbató from MIT. Multics was written in the PL/I language developed by IBM and the IBM User Group. IBM's goal was to satisfy business, scientific, and systems programming requirements. There were other languages that could have been considered, but PL/I offered the most complete solution even though it had not been implemented. For the first few years of the Multics project, a subset of the language could be compiled to assembly language with the Early PL/I (EPL) compiler by Doug McIlroy and Bob Morris from Bell Labs.
EPL supported the project until a boot-strapping compiler for the full PL/I could be developed. Bell Labs left the Multics project in 1969 and developed a system programming language, B, based on BCPL concepts, written by Dennis Ritchie and Ken Thompson. Ritchie created a boot-strapping compiler for B and wrote the Unics (Uniplexed Information and Computing Service) operating system for a PDP-7 in B. Unics eventually became spelled Unix.

Bell Labs started the development and expansion of C based on B and BCPL. The BCPL compiler had been transported to Multics by Bell Labs and BCPL was a preferred language at Bell Labs. Initially, a front-end program to Bell Labs' B compiler was used while a C compiler was developed. In 1971, a new PDP-11 provided the resources to define extensions to B and rewrite the compiler. By 1973 the design of the C language was essentially complete and the Unix kernel for a PDP-11 was rewritten in C. Steve Johnson started development of the Portable C Compiler (PCC) to support retargeting of C compilers to new machines.

Object-oriented programming (OOP) offered some interesting possibilities for application development and maintenance. OOP concepts go back further, having already appeared in LISP and Simula. Bell Labs became interested in OOP with the development of C++. C++ was first used in 1980 for systems programming. The initial design leveraged C language systems programming capabilities with Simula concepts. Object-oriented facilities were added in 1983. The Cfront program implemented a C++ front-end for the C84 language compiler. In subsequent years several C++ compilers were developed as C++ popularity grew.

In many application domains, the idea of using a higher-level language quickly caught on. Because of the expanding functionality supported by newer programming languages and the increasing complexity of computer architectures, compilers became more complex. DARPA (Defense Advanced Research Projects Agency) sponsored a compiler project with Wulf's CMU research team in 1970. The Production Quality Compiler-Compiler (PQCC) design would produce a Production Quality Compiler (PQC) from formal definitions of the source language and the target. PQCC tried to extend the term compiler-compiler beyond the traditional meaning as a parser generator (e.g., Yacc) without much success. PQCC might more properly be referred to as a compiler generator. PQCC research into the code generation process sought to build a truly automatic compiler-writing system. The effort discovered and designed the phase structure of the PQC. The BLISS-11 compiler provided the initial structure. The phases included analyses (front end), intermediate translation to virtual machine (middle end), and translation to the target (back end). TCOL was developed for the PQCC research to handle language-specific constructs in the intermediate representation. Variations of TCOL supported various languages. The PQCC project investigated techniques of automated compiler construction. The design concepts proved useful in optimizing compilers and compilers for the (since 1995, object-oriented) programming language Ada. The Ada STONEMAN document formalized the program support environment (APSE) along with the kernel (KAPSE) and minimal (MAPSE). An Ada interpreter, NYU/ED, supported development and standardization efforts with the American National Standards Institute (ANSI) and the International Standards Organization (ISO). Initial Ada compiler development by the U.S.
Military Services included the compilers in a complete integrated design environment along the lines of the STONEMAN document. The Army and Navy worked on the Ada Language System (ALS) project targeted to the DEC/VAX architecture while the Air Force started on the Ada Integrated Environment (AIE) targeted to the IBM 370 series. While the projects did not provide the desired results, they did contribute to the overall effort on Ada development. Other Ada compiler efforts got underway in Britain at the University of York and in Germany at the University of Karlsruhe. In the U.S., Verdix (later acquired by Rational) delivered the Verdix Ada Development System (VADS) to the Army. VADS provided a set of development tools including a compiler. Unix/VADS could be hosted on a variety of Unix platforms such as DEC Ultrix and the Sun 3/60 Solaris targeted to Motorola 68020 in an Army CECOM evaluation. There were soon many Ada compilers available that passed the Ada Validation tests. The Free Software Foundation GNU project developed the GNU Compiler Collection (GCC), which provides a core capability to support multiple languages and targets. The Ada version, GNAT, is one of the most widely used Ada compilers. GNAT is free, but there is also commercial support; for example, AdaCore was founded in 1994 to provide commercial software solutions for Ada. GNAT Pro includes the GNU GCC-based GNAT with a tool suite to provide an integrated development environment.

High-level languages continued to drive compiler research and development. Focus areas included optimization and automatic code generation. Trends in programming languages and development environments influenced compiler technology. More compilers became included in language distributions (PERL, Java Development Kit) and as a component of an IDE (VADS, Eclipse, Ada Pro). The interrelationship and interdependence of technologies grew. The advent of web services promoted growth of web languages and scripting languages. Scripts trace back to the early days of Command Line Interfaces (CLI), where the user could enter commands to be executed by the system. User Shell concepts developed with languages to write shell programs. Early Windows designs offered a simple batch programming capability. The conventional transformation of these languages used an interpreter. While not widely used, Bash and Batch compilers have been written. More recently, sophisticated interpreted languages became part of the developer's toolkit. Modern scripting languages include PHP, Python, Ruby and Lua. (Lua is widely used in game development.) All of these have interpreter and compiler support.

"When the field of compiling began in the late 50s, its focus was limited to the translation of high-level language programs into machine code ... The compiler field is increasingly intertwined with other disciplines including computer architecture, programming languages, formal methods, software engineering, and computer security." The "Compiler Research: The Next 50 Years" article noted the importance of object-oriented languages and Java. Security and parallel computing were cited among the future research targets.

== Compiler construction ==
A compiler implements a formal transformation from a high-level source program to a low-level target program. Compiler design can define an end-to-end solution or tackle a defined subset that interfaces with other compilation tools, e.g. preprocessors, assemblers, linkers.
Design requirements include rigorously defined interfaces both internally between compiler components and externally between supporting toolsets. In the early days, the approach taken to compiler design was directly affected by the complexity of the computer language to be processed, the experience of the person(s) designing it, and the resources available. Resource limitations led to the need to pass through the source code more than once. A compiler for a relatively simple language written by one person might be a single, monolithic piece of software. However, as the source language grows in complexity the design may be split into a number of interdependent phases. Separate phases provide design improvements that focus development on the functions in the compilation process. === One-pass vis-à-vis multi-pass compilers === Classifying compilers by number of passes has its background in the hardware resource limitations of computers. Compiling involves performing much work and early computers did not have enough memory to contain one program that did all of this work. As a result, compilers were split up into smaller programs which each made a pass over the source (or some representation of it) performing some of the required analysis and translations. The ability to compile in a single pass has classically been seen as a benefit because it simplifies the job of writing a compiler and one-pass compilers generally perform compilations faster than multi-pass compilers. Thus, partly driven by the resource limitations of early systems, many early languages were specifically designed so that they could be compiled in a single pass (e.g., Pascal). In some cases, the design of a language feature may require a compiler to perform more than one pass over the source. For instance, consider a declaration appearing on line 20 of the source which affects the translation of a statement appearing on line 10. In this case, the first pass needs to gather information about declarations appearing after statements that they affect, with the actual translation happening during a subsequent pass. The disadvantage of compiling in a single pass is that it is not possible to perform many of the sophisticated optimizations needed to generate high quality code. It can be difficult to count exactly how many passes an optimizing compiler makes. For instance, different phases of optimization may analyse one expression many times but only analyse another expression once. Splitting a compiler up into small programs is a technique used by researchers interested in producing provably correct compilers. Proving the correctness of a set of small programs often requires less effort than proving the correctness of a larger, single, equivalent program. === Three-stage compiler structure === Regardless of the exact number of phases in the compiler design, the phases can be assigned to one of three stages. The stages include a front end, a middle end, and a back end. The front end scans the input and verifies syntax and semantics according to a specific source language. For statically typed languages it performs type checking by collecting type information. If the input program is syntactically incorrect or has a type error, it generates error and/or warning messages, usually identifying the location in the source code where the problem was detected; in some cases the actual error may be (much) earlier in the program. Aspects of the front end include lexical analysis, syntax analysis, and semantic analysis. 
The front end transforms the input program into an intermediate representation (IR) for further processing by the middle end. This IR is usually a lower-level representation of the program with respect to the source code.

The middle end performs optimizations on the IR that are independent of the CPU architecture being targeted. This source code/machine code independence is intended to enable generic optimizations to be shared between versions of the compiler supporting different languages and target processors. Examples of middle end optimizations are removal of useless code (dead-code elimination) or unreachable code (reachability analysis), discovery and propagation of constant values (constant propagation), relocation of computation to a less frequently executed place (e.g., out of a loop), or specialization of computation based on the context, eventually producing the "optimized" IR that is used by the back end.

The back end takes the optimized IR from the middle end. It may perform more analysis, transformations and optimizations that are specific for the target CPU architecture. The back end generates the target-dependent assembly code, performing register allocation in the process. The back end performs instruction scheduling, which re-orders instructions to keep parallel execution units busy by filling delay slots. Although most optimization problems are NP-hard, heuristic techniques for solving them are well-developed and implemented in production-quality compilers. Typically the output of a back end is machine code specialized for a particular processor and operating system.

This front/middle/back-end approach makes it possible to combine front ends for different languages with back ends for different CPUs while sharing the optimizations of the middle end. Practical examples of this approach are the GNU Compiler Collection, Clang (LLVM-based C/C++ compiler), and the Amsterdam Compiler Kit, which have multiple front-ends, shared optimizations and multiple back-ends.

==== Front end ====
The front end analyzes the source code to build an internal representation of the program, called the intermediate representation (IR). It also manages the symbol table, a data structure mapping each symbol in the source code to associated information such as location, type and scope. While the frontend can be a single monolithic function or program, as in a scannerless parser, it was traditionally implemented and analyzed as several phases, which may execute sequentially or concurrently. This method is favored due to its modularity and separation of concerns. Most commonly, the frontend is broken into three phases: lexical analysis (also known as lexing or scanning), syntax analysis (also known as parsing), and semantic analysis. Lexing and parsing comprise the syntactic analysis (word syntax and phrase syntax, respectively), and in simple cases, these modules (the lexer and parser) can be automatically generated from a grammar for the language, though in more complex cases these require manual modification. The lexical grammar and phrase grammar are usually context-free grammars, which simplifies analysis significantly, with context-sensitivity handled at the semantic analysis phase. The semantic analysis phase is generally more complex and written by hand, but can be partially or fully automated using attribute grammars.
These phases themselves can be further broken down: lexing as scanning and evaluating, and parsing as building a concrete syntax tree (CST, parse tree) and then transforming it into an abstract syntax tree (AST, syntax tree). In some cases additional phases are used, notably line reconstruction and preprocessing, but these are rare. The main phases of the front end include the following:

Line reconstruction converts the input character sequence to a canonical form ready for the parser. Languages which strop their keywords or allow arbitrary spaces within identifiers require this phase. The top-down, recursive-descent, table-driven parsers used in the 1960s typically read the source one character at a time and did not require a separate tokenizing phase. Atlas Autocode and Imp (and some implementations of ALGOL and Coral 66) are examples of stropped languages whose compilers would have a Line Reconstruction phase.
Preprocessing supports macro substitution and conditional compilation. Typically the preprocessing phase occurs before syntactic or semantic analysis; e.g. in the case of C, the preprocessor manipulates lexical tokens rather than syntactic forms. However, some languages such as Scheme support macro substitutions based on syntactic forms.
Lexical analysis (also known as lexing or tokenization) breaks the source code text into a sequence of small pieces called lexical tokens. This phase can be divided into two stages: the scanning, which segments the input text into syntactic units called lexemes and assigns them a category; and the evaluating, which converts lexemes into a processed value. A token is a pair consisting of a token name and an optional token value. Common token categories may include identifiers, keywords, separators, operators, literals and comments, although the set of token categories varies in different programming languages. The lexeme syntax is typically a regular language, so a finite-state automaton constructed from a regular expression can be used to recognize it. The software doing lexical analysis is called a lexical analyzer. This may not be a separate step—it can be combined with the parsing step in scannerless parsing, in which case parsing is done at the character level, not the token level.
Syntax analysis (also known as parsing) involves parsing the token sequence to identify the syntactic structure of the program. This phase typically builds a parse tree, which replaces the linear sequence of tokens with a tree structure built according to the rules of a formal grammar which define the language's syntax. The parse tree is often analyzed, augmented, and transformed by later phases in the compiler.
Semantic analysis adds semantic information to the parse tree and builds the symbol table. This phase performs semantic checks such as type checking (checking for type errors), or object binding (associating variable and function references with their definitions), or definite assignment (requiring all local variables to be initialized before use), rejecting incorrect programs or issuing warnings. Semantic analysis usually requires a complete parse tree, meaning that this phase logically follows the parsing phase, and logically precedes the code generation phase, though it is often possible to fold multiple phases into one pass over the code in a compiler implementation.
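To make the lexical-analysis phase concrete, the following toy lexer in TypeScript (with invented token categories, not drawn from any real compiler) recognizes lexemes using regular expressions, matching the finite-automaton view described above:

// A toy lexer: each rule pairs a token category with a regular
// expression anchored at the current position in the input.
type Token = { kind: string; text: string };

const rules: [string, RegExp][] = [
  ["whitespace", /^\s+/],
  ["number",     /^\d+/],
  ["identifier", /^[A-Za-z_]\w*/],
  ["operator",   /^[+\-*\/=]/],
];

function lex(source: string): Token[] {
  const tokens: Token[] = [];
  let rest = source;
  while (rest.length > 0) {
    let matched = false;
    for (const [kind, pattern] of rules) {
      const m = pattern.exec(rest);
      if (m) {
        // Whitespace is recognized but produces no token.
        if (kind !== "whitespace") tokens.push({ kind, text: m[0] });
        rest = rest.slice(m[0].length);
        matched = true;
        break;
      }
    }
    if (!matched) throw new Error("lexical error at: " + rest);
  }
  return tokens;
}

// lex("x = y + 42") yields identifier, operator, identifier, operator, number.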
==== Middle end ====
The middle end, also known as optimizer, performs optimizations on the intermediate representation in order to improve the performance and the quality of the produced machine code. The middle end contains those optimizations that are independent of the CPU architecture being targeted. The main phases of the middle end include the following:

Analysis: the gathering of program information from the intermediate representation derived from the input; data-flow analysis is used to build use-define chains, together with dependence analysis, alias analysis, pointer analysis, escape analysis, etc. Accurate analysis is the basis for any compiler optimization. The control-flow graph of every compiled function and the call graph of the program are usually also built during the analysis phase.
Optimization: the intermediate language representation is transformed into functionally equivalent but faster (or smaller) forms. Popular optimizations are inline expansion, dead-code elimination, constant propagation, loop transformation and even automatic parallelization.

Compiler analysis is the prerequisite for any compiler optimization, and the two work tightly together. For example, dependence analysis is crucial for loop transformation. The scope of compiler analysis and optimizations varies greatly; it may range from operating within a basic block, to whole procedures, or even the whole program. There is a trade-off between the granularity of the optimizations and the cost of compilation. For example, peephole optimizations are fast to perform during compilation but only affect a small local fragment of the code, and can be performed independently of the context in which the code fragment appears. In contrast, interprocedural optimization requires more compilation time and memory space, but enables optimizations that are only possible by considering the behavior of multiple functions simultaneously. Interprocedural analysis and optimizations are common in modern commercial compilers from HP, IBM, SGI, Intel, Microsoft, and Sun Microsystems. The free software GCC was criticized for a long time for lacking powerful interprocedural optimizations, but it is changing in this respect. Another open source compiler with a full analysis and optimization infrastructure is Open64, which is used by many organizations for research and commercial purposes. Due to the extra time and space needed for compiler analysis and optimizations, some compilers skip them by default. Users have to use compilation options to explicitly tell the compiler which optimizations should be enabled.

==== Back end ====
The back end is responsible for the CPU architecture-specific optimizations and for code generation. The main phases of the back end include the following:

Machine dependent optimizations: optimizations that depend on the details of the CPU architecture that the compiler targets. A prominent example is peephole optimization, which rewrites short sequences of assembler instructions into more efficient instructions.
Code generation: the transformed intermediate language is translated into the output language, usually the native machine language of the system. This involves resource and storage decisions, such as deciding which variables to fit into registers and memory and the selection and scheduling of appropriate machine instructions along with their associated addressing modes (see also Sethi–Ullman algorithm). Debug data may also need to be generated to facilitate debugging.
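As a concrete illustration of the kind of machine-independent transformation performed in the middle end, the following toy TypeScript pass (over an invented, miniature expression IR) performs constant folding, a simple companion to the constant propagation mentioned above:

// A miniature expression IR: constants, variables, binary operations.
type Expr =
  | { op: "const"; value: number }
  | { op: "var"; name: string }
  | { op: "add" | "mul"; left: Expr; right: Expr };

// Constant folding: evaluate at compile time any subtree whose
// operands are themselves constants.
function fold(e: Expr): Expr {
  if (e.op === "const" || e.op === "var") return e;
  const left = fold(e.left);
  const right = fold(e.right);
  if (left.op === "const" && right.op === "const") {
    const value = e.op === "add"
      ? left.value + right.value
      : left.value * right.value;
    return { op: "const", value };
  }
  return { op: e.op, left, right };
}

// (2 + 3) * x folds to 5 * x: the addition disappears from the IR.
const before: Expr = {
  op: "mul",
  left: { op: "add", left: { op: "const", value: 2 }, right: { op: "const", value: 3 } },
  right: { op: "var", name: "x" },
};
const after = fold(before); // { op: "mul", left: { op: "const", value: 5 }, ... }

A production middle end would run such rewrites over a full control-flow graph rather than a single expression tree, but the principle of replacing computation with its statically known result is the same.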
=== Compiler correctness ===
Compiler correctness is the branch of software engineering that deals with trying to show that a compiler behaves according to its language specification. Techniques include developing the compiler using formal methods and using rigorous testing (often called compiler validation) on an existing compiler.

== Compiled vis-à-vis interpreted languages ==
Higher-level programming languages usually appear with a type of translation in mind: they are designed as either a compiled language or an interpreted language. However, in practice there is rarely anything about a language that requires it to be exclusively compiled or exclusively interpreted, although it is possible to design languages that rely on re-interpretation at run time. The categorization usually reflects the most popular or widespread implementations of a language – for instance, BASIC is sometimes called an interpreted language, and C a compiled one, despite the existence of BASIC compilers and C interpreters. Interpretation does not replace compilation completely. It only hides it from the user and makes it gradual. Even though an interpreter can itself be interpreted, a set of directly executed machine instructions is needed somewhere at the bottom of the execution stack (see machine language). Furthermore, for optimization, compilers can contain interpreter functionality, and interpreters may include ahead-of-time compilation techniques. For example, where an expression can be executed during compilation and the results inserted into the output program, this prevents it from having to be recalculated each time the program runs, which can greatly speed up the final program. Modern trends toward just-in-time compilation and bytecode interpretation at times blur the traditional categorizations of compilers and interpreters even further. Some language specifications spell out that implementations must include a compilation facility; for example, Common Lisp. However, there is nothing inherent in the definition of Common Lisp that stops it from being interpreted. Other languages have features that are very easy to implement in an interpreter, but make writing a compiler much harder; for example, APL, SNOBOL4, and many scripting languages allow programs to construct arbitrary source code at runtime with regular string operations, and then execute that code by passing it to a special evaluation function. To implement these features in a compiled language, programs must usually be shipped with a runtime library that includes a version of the compiler itself.

== Types ==
One classification of compilers is by the platform on which their generated code executes. This is known as the target platform. A native or hosted compiler is one whose output is intended to directly run on the same type of computer and operating system that the compiler itself runs on. The output of a cross compiler is designed to run on a different platform. Cross compilers are often used when developing software for embedded systems that are not intended to support a software development environment. The output of a compiler that produces code for a virtual machine (VM) may or may not be executed on the same platform as the compiler that produced it. For this reason, such compilers are not usually classified as native or cross compilers. The lower-level language that is the target of a compiler may itself be a high-level programming language. C, viewed by some as a sort of portable assembly language, is frequently the target language of such compilers.
For example, Cfront, the original compiler for C++, used C as its target language. The C code generated by such a compiler is usually not intended to be readable and maintained by humans, so indent style and creating pretty C intermediate code are ignored. Some of the features of C that make it a good target language include the #line directive, which can be generated by the compiler to support debugging of the original source, and the wide platform support available with C compilers. While a common compiler type outputs machine code, there are many other types: Source-to-source compilers are a type of compiler that takes a high-level language as its input and outputs a high-level language. For example, an automatic parallelizing compiler will frequently take in a high-level language program as an input and then transform the code and annotate it with parallel code annotations (e.g. OpenMP) or language constructs (e.g. Fortran's DOALL statements). Other terms for a source-to-source compiler are transcompiler or transpiler. Bytecode compilers compile to the assembly language of a theoretical machine, as do some Prolog implementations. This Prolog machine is also known as the Warren Abstract Machine (or WAM). Bytecode compilers for Java and Python are also examples of this category. Just-in-time compilers (JIT compilers) defer compilation until runtime. JIT compilers exist for many modern languages including Python, JavaScript, Smalltalk, Java, Microsoft .NET's Common Intermediate Language (CIL) and others. A JIT compiler generally runs inside an interpreter. When the interpreter detects that a code path is "hot", meaning it is executed frequently, the JIT compiler will be invoked to compile the "hot" code for increased performance. For some languages, such as Java, applications are first compiled using a bytecode compiler and delivered in a machine-independent intermediate representation. A bytecode interpreter executes the bytecode, but the JIT compiler will translate the bytecode to machine code when increased performance is necessary. Hardware compilers (also known as synthesis tools) are compilers whose input is a hardware description language and whose output is a description, in the form of a netlist or otherwise, of a hardware configuration. The output of these compilers targets computer hardware at a very low level, for example a field-programmable gate array (FPGA) or structured application-specific integrated circuit (ASIC). Such compilers are said to be hardware compilers, because the source code they compile effectively controls the final configuration of the hardware and how it operates. The output of the compilation is only an interconnection of transistors or lookup tables. An example of a hardware compiler is XST, the Xilinx Synthesis Tool used for configuring FPGAs. Similar tools are available from Altera, Synplicity, Synopsys and other hardware vendors. Research systems compile subsets of high-level serial languages, such as Python or C++, directly into parallelized digital logic. This is typically easier to do for functional languages or functional subsets of multi-paradigm languages. A program that translates from a low-level language to a higher-level one is a decompiler. A program that translates into an object code format that is not supported on the compilation machine is called a cross compiler and is commonly used to prepare code for execution on embedded software applications.
A program that rewrites object code back into the same type of object code while applying optimisations and transformations is a binary recompiler. Assemblers, which translate human readable assembly language to the machine code instructions executed by hardware, are not considered compilers. (The inverse program that translates machine code to assembly language is called a disassembler.) == See also == == Notes and references == == Further reading == == External links == Incremental Approach to Compiler Construction – a PDF tutorial Basics of Compiler Design at the Wayback Machine (archived 15 May 2018) Short animation on YouTube explaining the key conceptual difference between compilers and interpreters Syntax Analysis & LL1 Parsing on YouTube Let's Build a Compiler, by Jack Crenshaw Forum about compiler development at the Wayback Machine (archived 10 October 2014)
Wikipedia/Compiler_design
A database model is a type of data model that determines the logical structure of a database. It fundamentally determines in which manner data can be stored, organized and manipulated. The most popular example of a database model is the relational model, which uses a table-based format. == Types == Common logical data models for databases include: Hierarchical database model This is the oldest form of database model. It was developed by IBM for IMS (Information Management System), and organizes data in a tree structure. A database record is a tree consisting of many groups called segments. It uses one-to-many relationships, and data access is predictable. Network model Relational model Entity–relationship model Enhanced entity–relationship model Object model Document model Entity–attribute–value model Star schema An object–relational database combines the two related structures. Physical data models include: Inverted index Flat file Other models include: Multidimensional model Multivalue model Semantic model XML database Named graph Triplestore == Relationships and functions == A given database management system may provide one or more models. The optimal structure depends on the natural organization of the application's data, and on the application's requirements, which include transaction rate (speed), reliability, maintainability, scalability, and cost. Most database management systems are built around one particular data model, although it is possible for products to offer support for more than one model. Various physical data models can implement any given logical model. Most database software will offer the user some level of control in tuning the physical implementation, since the choices that are made have a significant effect on performance. A model is not just a way of structuring data: it also defines a set of operations that can be performed on the data. The relational model, for example, defines operations such as select, project and join. Although these operations may not be explicit in a particular query language, they provide the foundation on which a query language is built. == Flat model == The flat (or table) model consists of a single, two-dimensional array of data elements, where all members of a given column are assumed to be similar values, and all members of a row are assumed to be related to one another. For instance, a table might have columns for name and password, used as part of a system security database. Each row would hold the specific password associated with an individual user. Columns of the table often have a type associated with them, defining them as character data, date or time information, integers, or floating point numbers. This tabular format is a precursor to the relational model. == Early data models == These models were popular in the 1960s and 1970s, but nowadays can be found primarily in old legacy systems. They are characterized primarily by being navigational, with strong connections between their logical and physical representations, and by deficiencies in data independence. === Hierarchical model === In a hierarchical model, data is organized into a tree-like structure, implying a single parent for each record. A sort field keeps sibling records in a particular order. Hierarchical structures were widely used in the early mainframe database management systems, such as the Information Management System (IMS) by IBM, and now describe the structure of XML documents. This structure allows a one-to-many relationship between two types of data.
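As a rough illustration, a minimal Python sketch of such a tree of records follows; the `Segment` class and its methods are invented here and do not correspond to IMS's actual interface. Each segment has exactly one parent, and every access path runs downward from the root:

```python
# Minimal sketch of a hierarchical (tree-structured) record.
# Each segment has exactly one parent; access navigates downward.
# Names such as `Segment` are invented for illustration.

class Segment:
    def __init__(self, name, fields=None):
        self.name = name
        self.fields = fields or {}
        self.children = []          # ordered, as a sort field would keep them

    def add(self, child):
        self.children.append(child)
        return child

    def find(self, path):
        """Follow a downward path like 'order/line' from this segment."""
        first, _, rest = path.partition("/")
        for child in self.children:
            if child.name == first:
                return child if not rest else child.find(rest)
        return None

root = Segment("customer", {"id": 42})
order = root.add(Segment("order", {"date": "2024-01-15"}))
order.add(Segment("line", {"sku": "A1", "qty": 3}))

print(root.find("order/line").fields)   # {'sku': 'A1', 'qty': 3}
```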
This structure is very efficient for describing many relationships in the real world: recipes, tables of contents, the ordering of paragraphs/verses, and any nested and sorted information. This hierarchy is used as the physical order of records in storage. Record access is done by navigating downward through the data structure using pointers combined with sequential accessing. Because of this, the hierarchical structure is inefficient for certain database operations when a full path (as opposed to an upward link and sort field) is not also included for each record. Such limitations have been compensated for in later IMS versions by additional logical hierarchies imposed on the base physical hierarchy. === Network model === The network model expands upon the hierarchical structure, allowing many-to-many relationships in a tree-like structure that permits multiple parents. It was most popular before being replaced by the relational model, and is defined by the CODASYL specification. The network model organizes data using two fundamental concepts, called records and sets. Records contain fields (which may be organized hierarchically, as in the programming language COBOL). Sets (not to be confused with mathematical sets) define one-to-many relationships between records: one owner, many members. A record may be an owner in any number of sets, and a member in any number of sets. A set consists of circular linked lists where one record type, the set owner or parent, appears once in each circle, and a second record type, the subordinate or child, may appear multiple times in each circle. In this way a hierarchy may be established between any two record types, e.g., type A is the owner of B. At the same time another set may be defined where B is the owner of A. Thus all the sets comprise a general directed graph (ownership defines a direction), or network construct. Access to records is either sequential (usually in each record type) or by navigation in the circular linked lists. The network model is able to represent redundancy in data more efficiently than the hierarchical model, and there can be more than one path from an ancestor node to a descendant. The operations of the network model are navigational in style: a program maintains a current position, and navigates from one record to another by following the relationships in which the record participates. Records can also be located by supplying key values. Although it is not an essential feature of the model, network databases generally implement the set relationships by means of pointers that directly address the location of a record on disk. This gives excellent retrieval performance, at the expense of operations such as database loading and reorganization. Popular DBMS products that utilized it were Cincom Systems' Total and Cullinet's IDMS. IDMS gained a considerable customer base; in the 1980s, it adopted the relational model and SQL in addition to its original tools and languages. Most object databases (invented in the 1990s) use the navigational concept to provide fast navigation across networks of objects, generally using object identifiers as "smart" pointers to related objects. Objectivity/DB, for instance, implements named one-to-one, one-to-many, many-to-one, and many-to-many relationships that can cross databases. Many object databases also support SQL, combining the strengths of both models.
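The owner/member mechanics described above can be sketched in a few lines of Python. This is a toy illustration only: the class names are invented, and real CODASYL systems chain records through on-disk pointers rather than in-memory lists:

```python
# Toy sketch of CODASYL-style records and sets: one owner record,
# many member records, navigable in either direction.
# Class names here are invented for illustration.

class Record:
    def __init__(self, **fields):
        self.fields = fields
        self.memberships = {}        # set name -> owner record

class DbSet:
    """A named one-to-many relationship: one owner, many members."""
    def __init__(self, name):
        self.name = name
        self.members = {}            # owner record -> list of member records

    def connect(self, owner, member):
        self.members.setdefault(owner, []).append(member)
        member.memberships[self.name] = owner

dept = Record(name="Sales")
alice = Record(name="Alice")
works_in = DbSet("works_in")
works_in.connect(dept, alice)

# Navigate owner -> members, and member -> owner:
print([m.fields["name"] for m in works_in.members[dept]])  # ['Alice']
print(alice.memberships["works_in"].fields["name"])        # 'Sales'
```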
=== Inverted file model === In an inverted file or inverted index, the contents of the data are used as keys in a lookup table, and the values in the table are pointers to the location of each instance of a given content item. This is also the logical structure of contemporary database indexes, which might only use the contents from particular columns in the lookup table. The inverted file data model can put indexes in a set of files next to existing flat database files, in order to access needed records in these files directly and efficiently. Notable for using this data model is the ADABAS DBMS of Software AG, introduced in 1970. ADABAS gained a considerable customer base and is still supported today. In the 1980s it adopted the relational model and SQL in addition to its original tools and languages. The document-oriented database Clusterpoint, for example, uses an inverted indexing model to provide fast full-text search for XML or JSON data objects. == Relational model == The relational model was introduced by E.F. Codd in 1970 as a way to make database management systems more independent of any particular application. It is a mathematical model defined in terms of predicate logic and set theory, and implementations of it have been used by mainframe, midrange and microcomputer systems. The products that are generally referred to as relational databases in fact implement a model that is only an approximation to the mathematical model defined by Codd. Three key terms are used extensively in relational database models: relations, attributes, and domains. A relation is a table with columns and rows. The named columns of the relation are called attributes, and the domain is the set of values the attributes are allowed to take. The basic data structure of the relational model is the table, where information about a particular entity (say, an employee) is represented in rows (also called tuples) and columns. Thus, the "relation" in "relational database" refers to the various tables in the database; a relation is a set of tuples. The columns enumerate the various attributes of the entity (the employee's name, address or phone number, for example), and a row is an actual instance of the entity (a specific employee) that is represented by the relation. As a result, each tuple of the employee table represents various attributes of a single employee. All relations (and, thus, tables) in a relational database have to adhere to some basic rules to qualify as relations. First, the ordering of columns is immaterial in a table. Second, there cannot be identical tuples or rows in a table. And third, each tuple will contain a single value for each of its attributes. A relational database contains multiple tables, each similar to the one in the "flat" database model. One of the strengths of the relational model is that, in principle, any value occurring in two different records (belonging to the same table or to different tables) implies a relationship between those two records. Yet, in order to enforce explicit integrity constraints, relationships between records in tables can also be defined explicitly, by identifying or non-identifying parent-child relationships characterized by assigning cardinality (1:1, (0)1:M, M:M). Tables can also have a designated single attribute or a set of attributes that can act as a "key", which can be used to uniquely identify each tuple in the table. A key that can be used to uniquely identify a row in a table is called a primary key.
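The relational operations named earlier (select, project, and join) can be sketched directly over such relations. The following rough Python illustration treats each relation as a list of tuples represented as dictionaries; it is not the API of any real DBMS, and the sample tables are invented:

```python
# Rough sketch of relational operations over relations represented
# as lists of dicts (rows). Illustrative only, not a DBMS API.

employees = [
    {"id": 1, "name": "Ada",   "dept_id": 10},
    {"id": 2, "name": "Grace", "dept_id": 20},
]
departments = [
    {"dept_id": 10, "dept": "Research"},
    {"dept_id": 20, "dept": "Systems"},
]

def select(relation, predicate):
    """Keep only the tuples (rows) satisfying the predicate."""
    return [row for row in relation if predicate(row)]

def project(relation, attributes):
    """Keep only the named attributes (columns) of each tuple."""
    return [{a: row[a] for a in attributes} for row in relation]

def join(left, right, key):
    """Join two relations on a shared key attribute."""
    return [{**l, **r} for l in left for r in right if l[key] == r[key]]

rows = join(employees, departments, "dept_id")
print(project(select(rows, lambda r: r["dept"] == "Systems"), ["name"]))
# [{'name': 'Grace'}]
```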
Keys are commonly used to join or combine data from two or more tables. For example, an Employee table may contain a column named Location which contains a value that matches the key of a Location table. Keys are also critical in the creation of indexes, which facilitate fast retrieval of data from large tables. Any column can be a key, or multiple columns can be grouped together into a compound key. It is not necessary to define all the keys in advance; a column can be used as a key even if it was not originally intended to be one. A key that has an external, real-world meaning (such as a person's name, a book's ISBN, or a car's serial number) is sometimes called a "natural" key. If no natural key is suitable (think of the many people named Brown), an arbitrary or surrogate key can be assigned (such as by giving employees ID numbers). In practice, most databases have both generated and natural keys, because generated keys can be used internally to create links between rows that cannot break, while natural keys can be used, less reliably, for searches and for integration with other databases. (For example, records in two independently developed databases could be matched up by social security number, except when the social security numbers are incorrect, missing, or have changed.) The most common query language used with the relational model is the Structured Query Language (SQL). === Dimensional model === The dimensional model is a specialized adaptation of the relational model used to represent data in data warehouses in a way that data can be easily summarized using online analytical processing, or OLAP queries. In the dimensional model, a database schema consists of a single large table of facts that are described using dimensions and measures. A dimension provides the context of a fact (such as who participated, when and where it happened, and its type) and is used in queries to group related facts together. Dimensions tend to be discrete and are often hierarchical; for example, the location might include the building, state, and country. A measure is a quantity describing the fact, such as revenue. It is important that measures can be meaningfully aggregated—for example, the revenue from different locations can be added together. In an OLAP query, dimensions are chosen and the facts are grouped and aggregated together to create a summary. The dimensional model is often implemented on top of the relational model using a star schema, consisting of one highly normalized table containing the facts, and surrounding denormalized tables containing each dimension. An alternative physical implementation, called a snowflake schema, normalizes multi-level hierarchies within a dimension into multiple tables. A data warehouse can contain multiple dimensional schemas that share dimension tables, allowing them to be used together. Coming up with a standard set of dimensions is an important part of dimensional modeling. Its high performance has made the dimensional model the most popular database structure for OLAP. == Post-relational database models == Products offering a more general data model than the relational model are sometimes classified as post-relational. Alternate terms include "hybrid database", "Object-enhanced RDBMS" and others. The data model in such products incorporates relations but is not constrained by E.F. 
Codd's Information Principle, which requires that all information in the database must be cast explicitly in terms of values in relations and in no other way. Some of these extensions to the relational model integrate concepts from technologies that pre-date the relational model. For example, they allow representation of a directed graph with trees on the nodes. The German company sones implements this concept in its GraphDB. Some post-relational products extend relational systems with non-relational features. Others arrived in much the same place by adding relational features to pre-relational systems. Paradoxically, this allows products that are historically pre-relational, such as PICK and MUMPS, to make a plausible claim to be post-relational. The resource space model (RSM) is a non-relational data model based on multi-dimensional classification. === Graph model === Graph databases allow an even more general structure than a network database; any node may be connected to any other node. === Multivalue model === Multivalue databases contain "lumpy" data, in that they can store data exactly the same way as relational databases, but they also permit a level of depth which the relational model can only approximate using sub-tables. This is nearly identical to the way XML expresses data, where a given field/attribute can have multiple right answers at the same time. Multivalue can be thought of as a compressed form of XML. An example is an invoice, which in either multivalue or relational data could be seen as (A) Invoice Header Table - one entry per invoice, and (B) Invoice Detail Table - one entry per line item. In the multivalue model, we have the option of storing the data as one table, with an embedded table to represent the detail: (A) Invoice Table - one entry per invoice, no other tables needed. The advantage is that the atomicity of the Invoice (conceptual) and the Invoice (data representation) are one-to-one. This also results in fewer reads, fewer referential integrity issues, and a dramatic decrease in the hardware needed to support a given transaction volume. === Object-oriented database models === In the 1990s, the object-oriented programming paradigm was applied to database technology, creating a new database model known as object databases. This aims to avoid the object–relational impedance mismatch – the overhead of converting information between its representation in the database (for example as rows in tables) and its representation in the application program (typically as objects). Even further, the type system used in a particular application can be defined directly in the database, allowing the database to enforce the same data integrity invariants. Object databases also introduce the key ideas of object programming, such as encapsulation and polymorphism, into the world of databases. A variety of ways have been tried for storing objects in a database. Some products have approached the problem from the application programming end, by making the objects manipulated by the program persistent. This typically requires the addition of some kind of query language, since conventional programming languages do not have the ability to find objects based on their information content. Others have attacked the problem from the database end, by defining an object-oriented data model for the database, and defining a database programming language that allows full programming capabilities as well as traditional query facilities.
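A rough Python sketch of the first approach follows: program objects are made persistent and later found by their content. The `ObjectStore` class, its methods, and the file name are invented for illustration; real object databases add object identity, transactions, indexing, and a proper query language:

```python
# Toy sketch of an object store: persistence plus content-based lookup.
# `ObjectStore` and its methods are invented for illustration.

import pickle

class Customer:
    def __init__(self, name, city):
        self.name = name
        self.city = city

class ObjectStore:
    def __init__(self, path):
        self.path = path
        try:
            with open(path, "rb") as f:
                self.objects = pickle.load(f)   # reload persisted objects
        except FileNotFoundError:
            self.objects = []

    def persist(self, obj):
        self.objects.append(obj)
        with open(self.path, "wb") as f:
            pickle.dump(self.objects, f)        # write all objects back out

    def find(self, predicate):
        """Content-based query: return the objects matching the predicate."""
        return [o for o in self.objects if predicate(o)]

store = ObjectStore("customers.db")             # hypothetical file name
store.persist(Customer("Ada", "London"))
print([c.name for c in store.find(lambda c: c.city == "London")])
```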
Object databases suffered because of a lack of standardization: although standards were defined by ODMG, they were never implemented well enough to ensure interoperability between products. Nevertheless, object databases have been used successfully in many applications: usually specialized applications such as engineering databases or molecular biology databases rather than mainstream commercial data processing. However, object database ideas were picked up by the relational vendors and influenced extensions made to these products and indeed to the SQL language. An alternative to translating between objects and relational databases is to use an object–relational mapping (ORM) library. == See also == Database design == References ==
Wikipedia/Database_model
Data modeling in software engineering is the process of creating a data model for an information system by applying certain formal techniques. It may be applied as part of the broader model-driven engineering (MDE) concept. == Overview == Data modeling is a process used to define and analyze data requirements needed to support the business processes within the scope of corresponding information systems in organizations. Therefore, the process of data modeling involves professional data modelers working closely with business stakeholders, as well as potential users of the information system. There are three different types of data models produced while progressing from requirements to the actual database to be used for the information system. The data requirements are initially recorded as a conceptual data model, which is essentially a set of technology-independent specifications about the data and is used to discuss initial requirements with the business stakeholders. The conceptual model is then translated into a logical data model, which documents structures of the data that can be implemented in databases. Implementation of one conceptual data model may require multiple logical data models. The last step in data modeling is transforming the logical data model to a physical data model that organizes the data into tables, and accounts for access, performance and storage details. Data modeling defines not just data elements, but also their structures and the relationships between them. Data modeling techniques and methodologies are used to model data in a standard, consistent, predictable manner in order to manage it as a resource. The use of data modeling standards is strongly recommended for all projects requiring a standard means of defining and analyzing data within an organization, e.g., using data modeling: to assist business analysts, programmers, testers, manual writers, IT package selectors, engineers, managers, related organizations and clients to understand and use an agreed-upon semi-formal model that encompasses the concepts of the organization and how they relate to one another to manage data as a resource to integrate information systems to design databases/data warehouses (aka data repositories) Data modelling may be performed during various types of projects and in multiple phases of projects. Data models are progressive; there is no such thing as the final data model for a business or application. Instead, a data model should be considered a living document that will change in response to a changing business. The data models should ideally be stored in a repository so that they can be retrieved, expanded, and edited over time. Whitten et al. (2004) identified two types of data modelling: Strategic data modelling: This is part of the creation of an information systems strategy, which defines an overall vision and architecture for information systems. Information technology engineering is a methodology that embraces this approach. Data modelling during systems analysis: In systems analysis logical data models are created as part of the development of new databases. Data modelling is also used as a technique for detailing business requirements for specific databases. It is sometimes called database modelling because a data model is eventually implemented in a database. == Topics == === Data models === Data models provide a framework for data to be used within information systems by providing specific definitions and formats.
If a data model is used consistently across systems then compatibility of data can be achieved. If the same data structures are used to store and access data then different applications can share data seamlessly. The results of this are indicated in the diagram. However, systems and interfaces are often expensive to build, operate, and maintain. They may also constrain the business rather than support it. This may occur when the quality of the data models implemented in systems and interfaces is poor. Some common problems found in data models are: Business rules, specific to how things are done in a particular place, are often fixed in the structure of a data model. This means that small changes in the way business is conducted lead to large changes in computer systems and interfaces. Business rules therefore need to be implemented in a flexible way that does not result in complicated dependencies; rather, the data model should be flexible enough that changes in the business can be implemented within the data model relatively quickly and efficiently. Entity types are often not identified, or are identified incorrectly. This can lead to replication of data, data structure and functionality, together with the attendant costs of that duplication in development and maintenance. Therefore, data definitions should be made as explicit and easy to understand as possible to minimize misinterpretation and duplication. Data models for different systems are arbitrarily different. The result of this is that complex interfaces are required between systems that share data. These interfaces can account for between 25 and 70% of the cost of current systems. Required interfaces should be considered inherently while designing a data model, as a data model on its own would not be usable without interfaces within different systems. Data cannot be shared electronically with customers and suppliers, because the structure and meaning of data have not been standardised. To obtain optimal value from an implemented data model, it is very important to define standards that will ensure that data models will both meet business needs and be consistent. === Conceptual, logical and physical schemas === In 1975 ANSI described three kinds of data-model instance: Conceptual schema: describes the semantics of a domain (the scope of the model). For example, it may be a model of the interest area of an organization or of an industry. This consists of entity classes, representing kinds of things of significance in the domain, and relationship assertions about associations between pairs of entity classes. A conceptual schema specifies the kinds of facts or propositions that can be expressed using the model. In that sense, it defines the allowed expressions in an artificial "language" with a scope that is limited by the scope of the model. Simply described, a conceptual schema is the first step in organizing the data requirements. Logical schema: describes the structure of some domain of information. This consists of descriptions of (for example) tables, columns, object-oriented classes, and XML tags. The logical schema and conceptual schema are sometimes implemented as one and the same. Physical schema: describes the physical means used to store data. This is concerned with partitions, CPUs, tablespaces, and the like. According to ANSI, this approach allows the three perspectives to be relatively independent of each other. Storage technology can change without affecting either the logical or the conceptual schema.
The table/column structure can change without (necessarily) affecting the conceptual schema. In each case, of course, the structures must remain consistent across all schemas of the same data model. === Data modelling process === In the context of business process integration (see figure), data modeling complements business process modeling, and ultimately results in database generation. The process of designing a database involves producing the previously described three types of schemas – conceptual, logical, and physical. The database design documented in these schemas is converted through a Data Definition Language, which can then be used to generate a database. A fully attributed data model contains detailed attributes (descriptions) for every entity within it. The term "database design" can describe many different parts of the design of an overall database system. Principally, and most correctly, it can be thought of as the logical design of the base data structures used to store the data. In the relational model these are the tables and views. In an object database the entities and relationships map directly to object classes and named relationships. However, the term "database design" could also be used to apply to the overall process of designing, not just the base data structures, but also the forms and queries used as part of the overall database application within the Database Management System or DBMS. In the process, system interfaces account for 25% to 70% of the development and support costs of current systems. The primary reason for this cost is that these systems do not share a common data model. If data models are developed on a system-by-system basis, then not only is the same analysis repeated in overlapping areas, but further analysis must be performed to create the interfaces between them. Most systems within an organization contain the same basic data, redeveloped for a specific purpose. Therefore, an efficiently designed basic data model can minimize rework with minimal modifications for the purposes of different systems within the organization. === Modeling methodologies === Data models represent information areas of interest. While there are many ways to create data models, according to Len Silverston (1997) only two modeling methodologies stand out, top-down and bottom-up: Bottom-up models or View Integration models are often the result of a reengineering effort. They usually start with existing data structures: forms, fields on application screens, or reports. These models are usually physical, application-specific, and incomplete from an enterprise perspective. They may not promote data sharing, especially if they are built without reference to other parts of the organization. Top-down logical data models, on the other hand, are created in an abstract way by getting information from people who know the subject area. A system may not implement all the entities in a logical model, but the model serves as a reference point or template. Sometimes models are created in a mixture of the two methods: by considering the data needs and structure of an application and by consistently referencing a subject-area model. In many environments, the distinction between a logical data model and a physical data model is blurred. In addition, some CASE tools don't make a distinction between logical and physical data models. === Entity–relationship diagrams === There are several notations for data modeling.
The actual model is frequently called "entity–relationship model", because it depicts data in terms of the entities and relationships described in the data. An entity–relationship model (ERM) is an abstract conceptual representation of structured data. Entity–relationship modeling is a relational schema database modeling method, used in software engineering to produce a type of conceptual data model (or semantic data model) of a system, often a relational database, and its requirements in a top-down fashion. These models are used in the first stage of information system design, during the requirements analysis, to describe information needs or the type of information that is to be stored in a database. The data modeling technique can be used to describe any ontology (i.e. an overview and classifications of used terms and their relationships) for a certain universe of discourse, i.e. the area of interest. Several techniques have been developed for the design of data models. While these methodologies guide data modelers in their work, two different people using the same methodology will often come up with very different results. Most notable are: Bachman diagrams Barker's notation Chen's notation Data Vault Modeling Extended Backus–Naur form IDEF1X Object-relational mapping Object-Role Modeling and Fully Communication Oriented Information Modeling Relational Model Relational Model/Tasmania === Generic data modeling === Generic data models are generalizations of conventional data models. They define standardized general relation types, together with the kinds of things that may be related by such a relation type. The definition of the generic data model is similar to the definition of a natural language. For example, a generic data model may define relation types such as a 'classification relation', being a binary relation between an individual thing and a kind of thing (a class), and a 'part-whole relation', being a binary relation between two things, one with the role of part, the other with the role of whole, regardless of the kind of things that are related. Given an extensible list of classes, this allows the classification of any individual thing and the specification of part-whole relations for any individual object. By standardization of an extensible list of relation types, a generic data model enables the expression of an unlimited number of kinds of facts and will approach the capabilities of natural languages. Conventional data models, on the other hand, have a fixed and limited domain scope, because the instantiation (usage) of such a model only allows expressions of kinds of facts that are predefined in the model. === Semantic data modeling === The logical data structure of a DBMS, whether hierarchical, network, or relational, cannot totally satisfy the requirements for a conceptual definition of data because it is limited in scope and biased toward the implementation strategy employed by the DBMS. That is, unless the semantic data model is implemented in the database on purpose, a choice which may slightly impact performance but generally vastly improves productivity. Therefore, the need to define data from a conceptual view has led to the development of semantic data modeling techniques. That is, techniques to define the meaning of data within the context of its interrelationships with other data. As illustrated in the figure, the real world, in terms of resources, ideas, events, etc., is symbolically defined by its description within physical data stores.
A semantic data model is an abstraction which defines how the stored symbols relate to the real world. Thus, the model must be a true representation of the real world. The purpose of semantic data modeling is to create a structural model of a piece of the real world, called "universe of discourse". For this, three fundamental structural relations are considered: Classification/instantiation: Objects with some structural similarity are described as instances of classes Aggregation/decomposition: Composed objects are obtained by joining their parts Generalization/specialization: Distinct classes with some common properties are reconsidered in a more generic class with the common attributes A semantic data model can be used to serve many purposes, such as: Planning of data resources Building of shareable databases Evaluation of vendor software Integration of existing databases The overall goal of semantic data models is to capture more meaning of data by integrating relational concepts with more powerful abstraction concepts known from the artificial intelligence field. The idea is to provide high-level modeling primitives as integral parts of a data model in order to facilitate the representation of real-world situations. == See also == Architectural pattern Comparison of data modeling tools Data (computer science) Data dictionary Document modeling Enterprise data modelling Entity Data Model Information management Information model Building information modeling Metadata modeling Three-schema approach Zachman Framework == References == This article incorporates public domain material from the National Institute of Standards and Technology == Further reading == ter Bekke, Johannes Hendrikus (June 4, 1991). Semantic Data Modeling in Relational Environments (PDF) (PhD thesis). Technische Universiteit Delft. Archived (PDF) from the original on April 2, 2025. Retrieved April 2, 2025. John Vincent Carlis, Joseph D. Maguire (2001). Mastering Data Modeling: A User-driven Approach. Alan Chmura, J. Mark Heumann (2005). Logical Data Modeling: What it is and how to Do it. Martin E. Modell (1992). Data Analysis, Data Modeling, and Classification. M. Papazoglou, Stefano Spaccapietra, Zahir Tari (2000). Advances in Object-oriented Data Modeling. G. Lawrence Sanders (1995). Data Modeling Graeme C. Simsion, Graham C. Witt (2005). Data Modeling Essentials' Matthew West (2011) Developing High Quality Data Models == External links == Agile/Evolutionary Data Modeling Data modeling articles Archived March 7, 2010, at the Wayback Machine Database Modelling in UML Data Modeling 101 Semantic data modeling System Development, Methodologies and Modeling Archived March 7, 2012, at the Wayback Machine Notes on by Tony Drewry Request For Proposal - Information Management Metamodel (IMM) of the Object Management Group Data Modeling is NOT just for DBMS's Part 1 Chris Bradley Data Modeling is NOT just for DBMS's Part 2 Chris Bradley
Wikipedia/Data_modeling
In systems engineering, software engineering, and computer science, a function model or functional model is a structured representation of the functions (activities, actions, processes, operations) within the modeled system or subject area. A function model, similar to the activity model or process model, is a graphical representation of an enterprise's function within a defined scope. The purposes of the function model are to describe the functions and processes, assist with discovery of information needs, help identify opportunities, and establish a basis for determining product and service costs. == History == The function model in the field of systems engineering and software engineering originates in the 1950s and 1960s, but the origin of functional modelling of organizational activity goes back to the late 19th century. In the late 19th century the first diagrams appeared that pictured business activities, actions, processes, or operations, and in the first half of the 20th century the first structured methods for documenting business process activities emerged. One of those methods was the flow process chart, introduced by Frank Gilbreth to members of the American Society of Mechanical Engineers (ASME) in 1921 with the presentation entitled "Process Charts—First Steps in Finding the One Best Way". Gilbreth's tools quickly found their way into industrial engineering curricula. The emergence of the field of systems engineering can be traced back to Bell Telephone Laboratories in the 1940s. The need to identify and manipulate the properties of a system as a whole, which in complex engineering projects may greatly differ from the sum of the parts' properties, motivated various industries to apply the discipline. One of the first to define the function model in this field was the British engineer William Gosling. In his book The design of engineering systems (1962, p. 25) he stated: A functional model must thus achieve two aims in order to be of use. It must furnish a throughput description mechanics capable of completely defining the first and last throughput states, and perhaps some of the intervening states. It must also offer some means by which any input, correctly described in terms of this mechanics, can be used to generate an output which is an equally correct description of the output which the actual system would have given for the input concerned. It may also be noted that there are two other things which a functional model may do, but which are not necessary to all functional models. Thus such a system may, but need not, describe the system throughputs other than at the input and output, and it may also contain a description of the operation which each element carries out on the throughput, but once again this is not. One of the first well-defined function models was the functional flow block diagram (FFBD), developed by the defense-related TRW Incorporated in the 1950s. In the 1960s it was used by NASA to visualize the time sequence of events in space systems and flight missions. It is further widely used in classical systems engineering to show the order of execution of system functions. == Functional modeling topics == === Functional perspective === In systems engineering and software engineering a function model is created with a functional modeling perspective. The functional perspective is one of the perspectives possible in business process modelling; other perspectives are, for example, behavioural, organisational or informational.
A functional modeling perspective concentrates on describing the dynamic process. The main concept in this modeling perspective is the process; this could be a function, transformation, activity, action, task, etc. A well-known example of a modeling language employing this perspective is data flow diagrams. The perspective uses four symbols to describe a process, these being: Process: Illustrates transformation from input to output. Store: Data-collection or some sort of material. Flow: Movement of data or material in the process. External Entity: External to the modeled system, but interacts with it. With these symbols, a process can be represented as a network of symbols. Such a decomposed process is a DFD, a data flow diagram. In Dynamic Enterprise Modeling a division is made into the Control model, Function Model, Process model and Organizational model. === Functional decomposition === Functional decomposition refers broadly to the process of resolving a functional relationship into its constituent parts in such a way that the original function can be reconstructed from those parts by function composition. In general, this process of decomposition is undertaken either for the purpose of gaining insight into the identity of the constituent components, or for the purpose of obtaining a compressed representation of the global function, a task which is feasible only when the constituent processes possess a certain level of modularity. Functional decomposition has a prominent role in computer programming, where a major goal is to modularize processes to the greatest extent possible. For example, a library management system may be broken up into an inventory module, a patron information module, and a fee assessment module. In the early decades of computer programming, this was manifested as the "art of subroutining," as it was called by some prominent practitioners. Functional decomposition of engineering systems is a method for analyzing engineered systems. The basic idea is to try to divide a system in such a way that each block of the block diagram can be described without an "and" or "or" in the description. This exercise forces each part of the system to have a pure function. When a system is composed of pure functions, they can be reused or replaced. A usual side effect is that the interfaces between blocks become simple and generic. Since the interfaces usually become simple, it is easier to replace a pure function with a related, similar function. == Functional modeling methods == The functional approach is extended in multiple diagrammatic techniques and modeling notations. This section gives an overview of the important techniques in chronological order. === Function block diagram === A functional block diagram is a block diagram that describes the functions and interrelationships of a system. The functional block diagram can picture: Functions of a system pictured by blocks Input of a block pictured with lines, and Relationships between functions Functional sequences and paths for matter and/or signals The block diagram can use additional schematic symbols to show particular properties. Specific function block diagrams are the classic functional flow block diagram, and the Function Block Diagram (FBD) used in the design of programmable logic controllers. === Functional flow block diagram === The functional flow block diagram (FFBD) is a multi-tier, time-sequenced, step-by-step flow diagram of the system's functional flow.
The diagram was developed in the 1950s and is widely used in classical systems engineering. The functional flow block diagram is also referred to as Functional Flow Diagram, functional block diagram, and functional flow. Functional flow block diagrams (FFBD) usually define the detailed, step-by-step operational and support sequences for systems, but they are also used effectively to define processes in developing and producing systems. The software development processes also use FFBDs extensively. In the system context, the functional flow steps may include combinations of hardware, software, personnel, facilities, and/or procedures. In the FFBD method, the functions are organized and depicted by their logical order of execution. Each function is shown with respect to its logical relationship to the execution and completion of other functions. A node labeled with the function name depicts each function. Arrows from left to right show the order of execution of the functions. Logic symbols represent sequential or parallel execution of functions. === HIPO and IPO === HIPO, for hierarchical input process output, is a popular 1970s systems analysis design aid and documentation technique for representing the modules of a system as a hierarchy and for documenting each module. It was used to develop requirements, construct the design, and support implementation of an expert system to demonstrate automated rendezvous. Verification was then conducted systematically because of the method of design and implementation. The overall design of the system is documented using HIPO charts or structure charts. The structure chart is similar in appearance to an organizational chart, but has been modified to show additional detail. Structure charts can be used to display several types of information, but are used most commonly to diagram either data structures or code structures. === N2 Chart === The N2 Chart is a diagram in the shape of a matrix, representing functional or physical interfaces between system elements. It is used to systematically identify, define, tabulate, design, and analyze functional and physical interfaces. It applies to system interfaces and hardware and/or software interfaces. The N2 diagram has been used extensively to develop data interfaces, primarily in the software areas. However, it can also be used to develop hardware interfaces. The basic N2 chart is shown in Figure 2. The system functions are placed on the diagonal; the remainder of the squares in the N × N matrix represent the interface inputs and outputs. === Structured Analysis and Design Technique === Structured Analysis and Design Technique (SADT) is a software engineering methodology for describing systems as a hierarchy of functions, a diagrammatic notation for constructing a sketch for a software application. It offers building blocks to represent entities and activities, and a variety of arrows to relate boxes. These boxes and arrows have an associated informal semantics. SADT can be used as a functional analysis tool of a given process, using successive levels of detail. The SADT method makes it possible to define user needs for IT developments; it is used in industrial information systems, but also to explain and present an activity's manufacturing processes and procedures. The SADT supplies a specific functional view of any enterprise by describing the functions and their relationships in a company.
These functions fulfill the objectives of a company, such as sales, order planning, product design, part manufacturing, and human resource management. The SADT can depict simple functional relationships and can reflect data and control flow relationships between different functions. The IDEF0 formalism is based on SADT, developed by Douglas T. Ross in 1985. === IDEF0 === IDEF0 is a function modeling methodology for describing manufacturing functions, which offers a functional modeling language for the analysis, development, re-engineering, and integration of information systems; business processes; or software engineering analysis. It is part of the IDEF family of modeling languages in the field of software engineering, and is built on the functional modeling language building SADT. The IDEF0 Functional Modeling method is designed to model the decisions, actions, and activities of an organization or system. It was derived from the established graphic modeling language structured analysis and design technique (SADT) developed by Douglas T. Ross and SofTech, Inc. In its original form, IDEF0 includes both a definition of a graphical modeling language (syntax and semantics) and a description of a comprehensive methodology for developing models. The US Air Force commissioned the SADT developers to develop a function model method for analyzing and communicating the functional perspective of a system. IDEF0 should assist in organizing system analysis and promote effective communication between the analyst and the customer through simplified graphical devices. === Axiomatic design === Axiomatic design is a top down hierarchical functional decomposition process used as a solution synthesis framework for the analysis, development, re-engineering, and integration of products, information systems, business processes or software engineering solutions. Its structure is suited mathematically to analyze coupling between functions in order to optimize the architectural robustness of potential functional solution models. == Related types of models == In the field of systems and software engineering numerous specific function and functional models and close related models have been defined. Here only a few general types will be explained. === Business function model === A Business Function Model (BFM) is a general description or category of operations performed routinely to carry out an organization's mission. They "provide a conceptual structure for the identification of general business functions". It can show the critical business processes in the context of the business area functions. The processes in the business function model must be consistent with the processes in the value chain models. Processes are a group of related business activities performed to produce an end product or to provide a service. Unlike business functions that are performed on a continual basis, processes are characterized by the fact that they have a specific beginning and an end point marked by the delivery of a desired output. The figure on the right depicts the relationship between the business processes, business functions, and the business area's business reference model. === Business Process Model and Notation === Business Process Model and Notation (BPMN) is a graphical representation for specifying business processes in a workflow. BPMN was developed by Business Process Management Initiative (BPMI), and is currently maintained by the Object Management Group since the two organizations merged in 2005. 
The current version of BPMN is 2.0. The Business Process Model and Notation (BPMN) specification provides a graphical notation for specifying business processes in a Business Process Diagram (BPD). The objective of BPMN is to support business process management for both technical users and business users by providing a notation that is intuitive to business users yet able to represent complex process semantics. The BPMN specification also provides a mapping between the graphics of the notation to the underlying constructs of execution languages, particularly BPEL4WS. === Business reference model === A Business reference model is a reference model, concentrating on the functional and organizational aspects of the core business of an enterprise, service organization or government agency. In enterprise engineering a business reference model is part of an Enterprise Architecture Framework or Architecture Framework, which defines how to organize the structure and views associated with an Enterprise Architecture. A reference model in general is a model of something that embodies the basic goal or idea of something and can then be looked at as a reference for various purposes. A business reference model is a means to describe the business operations of an organization, independent of the organizational structure that performs them. Other types of business reference model can also depict the relationship between the business processes, business functions, and the business area's business reference model. These reference models can be constructed in layers, and offer a foundation for the analysis of service components, technology, data, and performance. === Operator function model === The Operator Function Model (OFM) is proposed as an alternative to traditional task analysis techniques used by human factors engineers. An operator function model attempts to represent in mathematical form how an operator might decompose a complex system into simpler parts and coordinate control actions and system configurations so that acceptable overall system performance is achieved. The model represents basic issues of knowledge representation, information flow, and decision making in complex systems. Miller (1985) suggests that the network structure can be thought of as a possible representation of an operator's internal model of the system plus a control structure which specifies how the model is used to solve the decision problems that comprise operator control functions. == See also == Bus Functional Model Business process modeling Data and information visualization Data model Enterprise modeling Functional Software Architecture Multilevel Flow Modeling Polynomial function model Rational function model Scientific modeling Unified Modeling Language View model == References == This article incorporates public domain material from the National Institute of Standards and Technology This article incorporates public domain material from Operator Function Model (OFM). Federal Aviation Administration.
Wikipedia/Function_model
In computing, object model has two related but distinct meanings: The properties of objects in general in a specific computer programming language, technology, notation or methodology that uses them. Examples are the object models of Java, the Component Object Model (COM), or Object-Modeling Technique (OMT). Such object models are usually defined using concepts such as class, generic function, message, inheritance, polymorphism, and encapsulation. There is an extensive literature on formalized object models as a subset of the formal semantics of programming languages. A collection of objects or classes through which a program can examine and manipulate some specific parts of its world. In other words, the object-oriented interface to some service or system. Such an interface is said to be the object model of the represented service or system. For example, the Document Object Model (DOM) is a collection of objects that represent a page in a web browser, used by script programs to examine and dynamically change the page. There is a Microsoft Excel object model for controlling Microsoft Excel from another program, and the ASCOM Telescope Driver is an object model for controlling an astronomical telescope. An object model consists of the following important features: Object reference Objects can be accessed via object references. To invoke a method in an object, the object reference and method name are given, together with any arguments. Interfaces An interface provides a definition of the signature of a set of methods without specifying their implementation. An object will provide a particular interface if its class contains code that implements the methods of that interface. An interface also defines types that can be used to declare the types of variables or parameters and return values of methods. Actions An action in object-oriented programming (OOP) is initiated by an object invoking a method in another object. An invocation can include additional information needed to carry out the method. The receiver executes the appropriate method and then returns control to the invoking object, sometimes supplying a result. Exceptions Programs can encounter various errors and unexpected conditions of varying seriousness. During the execution of a method, many different problems may be discovered. Exceptions provide a clean way to deal with error conditions without complicating the code. A block of code may be defined to throw an exception whenever particular unexpected conditions or errors arise. This means that control passes to another block of code that catches the exception. == See also == Object-oriented programming Object-oriented analysis and design Object database Object Management Group Domain-driven design Eigenclass model == Literature == Weisfeld, Matt (2003). The Object-Oriented Thought Process (2nd ed.). Sams. ISBN 0-672-32611-6. Fowler, Martin (1996). Analysis Patterns: Reusable Object Models. Addison-Wesley. ISBN 0-201-89542-0. Fisher, K.; Honsell, F.; Mitchell, J.C. (1994). "A lambda calculus of objects and method specialization" (PDF). Proceedings Eighth Annual IEEE Symposium on Logic in Computer Science. Vol. 1. pp. 3–37. doi:10.1109/LICS.1993.287603. ISBN 0-8186-3140-6. S2CID 19578302. Archived from the original (PDF) on 2018-07-03. Marini, Joe (2002). Document Object Model: Processing Structured Documents. Osborne/McGraw-Hill. ISBN 0-07-222436-3. Lippman, Stanley (1996). Inside the C++ Object Model. Addison-Wesley.
ISBN 0-201-83454-5. == External links == Document Object Model (DOM) The official W3C definition of the DOM. "The Java Object Model" The Ruby Object Model: Data Structure in Detail Object Membership: The core structure of object-oriented programming Object Model Features Matrix A "representative sample of the design space of object models" (sense 1). ASCOM Standards web site
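The four features above can be made concrete in code. The following is a minimal Python sketch, loosely inspired by the ASCOM telescope example mentioned earlier; all class, method, and parameter names here are hypothetical illustrations, not part of any real driver's object model. It shows an interface that declares a method signature, a class that provides that interface, an action invoked through an object reference that returns a result, and an exception that transfers control to a catching block.

```python
from abc import ABC, abstractmethod

class Telescope(ABC):
    """Interface: defines the signature of a method without its implementation."""
    @abstractmethod
    def slew_to(self, ra: float, dec: float) -> str: ...

class BackyardTelescope(Telescope):
    """Provides the Telescope interface by implementing its method."""
    def slew_to(self, ra: float, dec: float) -> str:
        if not (0.0 <= ra < 24.0 and -90.0 <= dec <= 90.0):
            # Exception: signals an error condition without complicating the code.
            raise ValueError(f"coordinates out of range: ra={ra}, dec={dec}")
        return f"pointing at RA {ra}h, Dec {dec} deg"

scope: Telescope = BackyardTelescope()  # object reference, declared with the interface type
print(scope.slew_to(5.6, -5.4))         # action: invocation with arguments, returning a result
try:
    scope.slew_to(99.0, 0.0)
except ValueError as err:               # control passes to the block that catches the exception
    print("caught:", err)
```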
Wikipedia/Object_model
The philosophy of computer science is concerned with the philosophical questions that arise within the study of computer science. There is still no common understanding of the content, aims, focus, or topics of the philosophy of computer science, despite some attempts to develop a philosophy of computer science like the philosophy of physics or the philosophy of mathematics. Due to the abstract nature of computer programs and the technological ambitions of computer science, many of the conceptual questions of the philosophy of computer science are also comparable to the philosophy of science, philosophy of mathematics, and the philosophy of technology. == Overview == Many of the central philosophical questions of computer science are centered on the logical, ethical, methodological, ontological and epistemological issues that concern it. Some of these questions may include: What is computation? Does the Church–Turing thesis capture the mathematical notion of an effective method in logic and mathematics? What are the philosophical consequences of the P vs NP problem? What is information? == Church–Turing thesis == The Church–Turing thesis and its variations are central to the theory of computation. Since, as an informal notion, the concept of effective calculability does not have a formal definition, the thesis, although it has near-universal acceptance, cannot be formally proven. The implications of this thesis are also of philosophical concern. Philosophers have interpreted the Church–Turing thesis as having implications for the philosophy of mind. == P versus NP problem == The P versus NP problem is an unsolved problem in computer science and mathematics. It asks whether every problem whose solution can be verified in polynomial time (and so defined to belong to the class NP) can also be solved in polynomial time (and so defined to belong to the class P). Most computer scientists believe that P ≠ NP. Apart from the fact that, after decades of study, no one has been able to find a polynomial-time algorithm for any of the more than 3000 important known NP-complete problems, philosophical arguments about its implications may also have motivated this belief. For instance, according to Scott Aaronson, the American computer scientist then at MIT: If P = NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in "creative leaps", no fundamental gap between solving a problem and recognizing the solution once it's found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss. (A toy code illustration of this gap between finding and verifying solutions appears at the end of this entry.) == See also == Computer-assisted proof: Philosophical objections Philosophy of artificial intelligence Philosophy of information Philosophy of mathematics Philosophy of science Philosophy of technology == References == == Further reading == Matti Tedre (2014). The Science of Computing: Shaping a Discipline. Chapman Hall. Scott Aaronson. "Why Philosophers Should Care About Computational Complexity". In Computability: Gödel, Turing, Church, and beyond. Timothy Colburn. Philosophy and Computer Science. Explorations in Philosophy. M.E. Sharpe, 1999. ISBN 1-56324-991-X. A.K. Dewdney. New Turing Omnibus: 66 Excursions in Computer Science. Luciano Floridi (editor). The Blackwell Guide to the Philosophy of Computing and Information, 2004. Luciano Floridi (editor). Philosophy of Computing and Information: 5 Questions. Automatic Press, 2008. Luciano Floridi.
Philosophy and Computing: An Introduction, Routledge, 1999. Christian Jongeneel. The informatical worldview, an inquiry into the methodology of computer science. Jan van Leeuwen. "Towards a philosophy of the information and computing sciences", NIAS Newsletter 42, 2009. Moschovakis, Y. (2001). What is an algorithm? In Engquist, B. and Schmid, W., editors, Mathematics unlimited — 2001 and beyond, pages 919–936. Springer. Alexander Ollongren, Jaap van den Herik. Filosofie van de informatica. London and New York: Routledge, 1999. ISBN 0-415-19749-X Tedre, Matti (2014), The Science of Computing: Shaping a Discipline, CRC Press, ISBN 9781482217698. Ray Turner and Nicola Angius. "The Philosophy of Computer Science". Stanford Encyclopedia of Philosophy. Matti Tedre (2011). Computing as a Science: A Survey of Competing Viewpoints. Minds & Machines 21, 3, 361–387. Ray Turner. Computational Artefacts – Towards a Philosophy of Computer Science. Springer. == External links == The International Association for Computing and Philosophy Philosophy of Computing and Information at PhilPapers Philosophy of Computation at Berkeley Rapaport, William J. (2020-07-27). "Philosophy of Computer Science (draft version)" (PDF). Archived from the original (PDF) on 2021-10-26.
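The asymmetry at the heart of the P versus NP question, that solutions appear hard to find but are easy to check, can be illustrated with a toy instance of the NP-complete subset-sum problem. The sketch below is a hypothetical illustration (the instance, names, and simplifications are invented for this purpose): verifying a proposed certificate takes time polynomial in the input size, while the naive search inspects up to 2^n subsets.

```python
from itertools import combinations

def verify(numbers, target, certificate):
    """Polynomial-time check: does the claimed subset really sum to the target?
    (Simplified membership test; a careful version would track multiplicity.)"""
    return (certificate is not None
            and all(x in numbers for x in certificate)
            and sum(certificate) == target)

def solve(numbers, target):
    """Exponential-time search: try every one of the 2^n subsets."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = solve(nums, 9)               # finding a solution: brute force
print(cert, verify(nums, 9, cert))  # checking the certificate: fast
```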
Wikipedia/Philosophy_of_computer_science
Applied science is the application of the scientific method and scientific knowledge to attain practical goals. It includes a broad range of disciplines, such as engineering and medicine. Applied science is often contrasted with basic science, which is focused on advancing scientific theories and laws that explain and predict natural or other phenomena. There are applied natural sciences, as well as applied formal and social sciences. Applied science examples include genetic epidemiology, which applies statistics and probability theory, and applied psychology, including criminology. == Applied research == Applied research is the use of empirical methods to collect data for practical purposes. It accesses and uses accumulated theories, knowledge, methods, and techniques for a specific state, business, or client-driven purpose. In contrast to engineering, applied research does not include analyses or optimization of business, economics, and costs. Applied research can be better understood in any area when contrasted with basic or pure research. Basic geographical research strives to create new theories and methods that aid in explaining the processes that shape the spatial structure of physical or human environments. In contrast, applied research utilizes existing geographical theories and methods to comprehend and address particular empirical issues. Applied research usually has specific commercial objectives related to products, procedures, or services. The comparison of pure research and applied research provides a basic framework and direction for businesses to follow. Applied research deals with solving practical problems and generally employs empirical methodologies. Because applied research resides in the messy real world, strict research protocols may need to be relaxed. For example, it may be impossible to use a random sample. Thus, transparency in the methodology is crucial. Implications for the interpretation of results brought about by relaxing an otherwise strict canon of methodology should also be considered. Moreover, this type of research method applies natural sciences to human conditions: Action research: aids firms in identifying workable solutions to issues influencing them. Evaluation research: researchers examine available data to assist clients in making wise judgments. Industrial research: creates new goods/services that will satisfy the demands of a target market. (Industrial development, by contrast, is the scaling up of production of the new goods or services for mass consumption, maximizing the ratio of output rate to resource input rate, the ratio of revenue to material and energy costs, and the quality of the good or service; such development is considered engineering and falls outside the scope of applied research.) Gauging research: a type of evaluation research that uses a logic of rating to assess a process or program. It is a type of normative assessment used in accreditation, hiring decisions and process evaluation. It uses standards or the practical ideal type and is associated with deductive qualitative research. Since applied research has a provisional close-to-the-problem and close-to-the-data orientation, it may also use a more provisional conceptual framework, such as working hypotheses or pillar questions. The OECD's Frascati Manual describes applied research as one of the three forms of research, along with basic research and experimental development.
Due to its practical focus, applied research information will be found in the literature associated with individual disciplines. == Branches == Applied research is a method of problem-solving that is put to practical use in areas of science such as applied psychology. Applied psychology draws on observations of human behavior to identify areas of focus that can contribute to finding a resolution. More specifically, this study is applied in the area of criminal psychology. With the knowledge obtained from applied research, studies of criminals and their behavior are conducted to aid in apprehending them. Moreover, the research extends to criminal investigations. Under this category, research methods demonstrate an understanding of the scientific method and social research designs used in criminological research. These methods extend to further branches of the investigative procedure, alongside law, policy, and criminological theory. Engineering is the practice of using natural science, mathematics, and the engineering design process to solve technical problems, increase efficiency and productivity, and improve systems. The discipline of engineering encompasses a broad range of more specialized fields of engineering, each with a more specific emphasis on particular areas of applied mathematics, applied science, and types of application. Engineering is often characterized as having four main branches: chemical engineering, civil engineering, electrical engineering, and mechanical engineering. Some scientific subfields used by engineers include thermodynamics, heat transfer, fluid mechanics, statics, dynamics, mechanics of materials, kinematics, electromagnetism, materials science, earth sciences, and engineering physics. Medical sciences, such as medical microbiology, pharmaceutical research, and clinical virology, are applied sciences that apply biology and chemistry to medicine. == In education == In Canada, the Netherlands, and other places, the Bachelor of Applied Science (BASc) is sometimes equivalent to the Bachelor of Engineering and is classified as a professional degree. This is based on the age of the school, where applied science used to include boiler making, surveying, and engineering. There are also Bachelor of Applied Science degrees in Child Studies. The BASc tends to focus more on the application of the engineering sciences. In Australia and New Zealand, this degree is awarded in various fields of study and is considered a highly specialized professional degree. In the United Kingdom's educational system, Applied Science refers to a suite of "vocational" science qualifications that run alongside "traditional" General Certificate of Secondary Education or A-Level Sciences. Applied Science courses generally contain more coursework (also known as portfolio or internally assessed work) compared to their traditional counterparts. These are an evolution of the GNVQ qualifications offered up to 2005. These courses regularly come under scrutiny and are due for review following the Wolf Report 2011; however, their merits are argued elsewhere. In the United States, The College of William & Mary offers an undergraduate minor as well as Master of Science and Doctor of Philosophy degrees in "applied science". Courses and research cover varied fields, including neuroscience, optics, materials science and engineering, nondestructive testing, and nuclear magnetic resonance.
University of Nebraska–Lincoln offers a Bachelor of Science in applied science, an online completion Bachelor of Science in applied science, and a Master of Applied Science. Coursework is centered on science, agriculture, and natural resources with a wide range of options, including ecology, food genetics, entrepreneurship, economics, policy, animal science, and plant science. In New York City, the Bloomberg administration awarded the consortium of Cornell-Technion $100 million in City capital to construct the universities' proposed Applied Sciences campus on Roosevelt Island. == See also == Applied mathematics Basic research Exact sciences Hard and soft science Invention Secondary research == References == == External links == Media related to Applied sciences at Wikimedia Commons
Wikipedia/Applied_science
Human-centered design (HCD, also human-centred design, as used in ISO standards) is an approach to problem-solving commonly used in process, product, service and system design, management, and engineering frameworks that develops solutions to problems by involving the human perspective in all steps of the problem-solving process. Human involvement typically takes place in initially observing the problem within context, brainstorming, conceptualizing, developing concepts and implementing the solution. Human-centered design is an approach to interactive systems development that aims to make systems usable and useful by focusing on the users, their needs and requirements, and by applying human factors/ergonomics and usability knowledge and techniques. This approach enhances effectiveness and efficiency; improves human well-being, user satisfaction, accessibility and sustainability; and counteracts possible adverse effects of use on human health, safety and performance. Human-centered design builds upon participatory action research by moving beyond participants' involvement and producing solutions to problems rather than solely documenting them. Initial stages usually revolve around immersion, observation, and contextual framing, in which innovators immerse themselves in the problem and community. Subsequent stages may then focus on community brainstorming, modeling and prototyping, and implementation in community spaces. Human-centered design can be seen as a philosophy that focuses on analyzing the needs of the user through extensive research. User-oriented design is capable of driving innovation and encourages the practice of iterative design, which can create small improvements in existing products and newer products, thus giving room for the potential to transform markets. == Development == Human-centered design has its origins at the intersection of numerous fields including engineering, psychology, anthropology and the arts. As an approach to creative problem-solving in technical and business fields, its origins are often traced to the founding of the Stanford University design program in 1958 by Professor John E. Arnold, who first proposed the idea that engineering design should be human-centered. This work coincided with the rise of creativity techniques and the subsequent design methods movement in the 1960s. Since then, as creative design processes and methods have been increasingly popularized for business purposes, standardized and well-defined human-centered design has often been mistakenly equated with the more vaguely outlined "design thinking". In Architect or Bee?, Mike Cooley coined the term "human-centered systems" in the context of the transition in his profession from traditional drafting at a drawing board to computer-aided design. Human-centered systems, as used in economics, computing and design, aim to preserve or enhance human skills, in both manual and office work, in environments in which technology tends to undermine the skills that people use in their work. Human centeredness asserts, firstly, that we must always put people before machines, however complex or elegant those machines might be, and, secondly, it marvels and delights at the ability and ingenuity of human beings. The Human Centered Systems movement looks sensitively at those forms of science and technology which meet our cultural, historical and societal requirements, and seeks to develop more appropriate forms of technology to meet our long-term aspirations.
In the Human Centered System, there exists a symbiotic relation between the human and the machine, in which the human being would handle the qualitative subjective judgements and the machine the quantitative elements. It involves a radical redesign of the interface technologies, and at a philosophical level, the objective is to provide tools (in the Heidegger sense) which would support human skill and ingenuity rather than machines which would objectivise that knowledge. == User participation == The user-oriented framework relies heavily on user participation and user feedback in the planning process. Users are able to provide new perspectives and ideas, which can be considered in a new round of improvements and changes. It is said that increased user participation in the design process can garner a more comprehensive understanding of the design issues, due to more contextual and emotional transparency between researcher and participant. A key element of human-centered design is applied ethnography, which is a research method adopted from cultural anthropology. This research method requires researchers to be fully immersed in the observation so that implicit details are also recorded. == Rationale for adoption == Even after decades of thought on Human Centered Design, management and finance systems still believe that "another's liability is one's asset" could be true of porous human bodies, embedded in nature and inseparable from each other. On the contrary, our biological and ecological interconnections ensure that "another's liability is our liability". Sustainable business systems can only emerge if these biological and ecological interconnections are accepted and accounted for. Using a human-centered approach to design and development has substantial economic and social benefits for users, employers and suppliers. Highly usable systems and products tend to be more successful both technically and commercially. In some areas, such as consumer products, purchasers will pay a premium for well-designed products and systems. Support and help-desk costs are reduced when users can understand and use products without additional assistance. In most countries, employers and suppliers have legal obligations to protect users from risks to their health and safety, and human-centered methods can reduce these risks (e.g. musculoskeletal risks). Systems designed using human-centered methods improve quality, for example, by: increasing the productivity of users and the operational efficiency of organizations; being easier to understand and use, thus reducing training and support costs; increasing usability for people with a wider range of capabilities and thus increasing accessibility; improving user experience; reducing discomfort and stress; providing a competitive advantage, for example by improving brand image; and contributing towards sustainability objectives. Human-centered design may be utilized in multiple fields, including sociological sciences and technology. It has been noted for its ability to consider human dignity, access, and ability roles when developing solutions. Because of this, human-centered design may more fully incorporate culturally sound, human-informed, and appropriate solutions to problems in a variety of fields rather than solely product and technology-based fields. Because human-centered design focuses on the human experience, researchers and designers can address "issues of social justice and inclusion and encourage ethical, reflexive design."
Human-centered design arises from underlying principles of human factors. The two concepts are closely interconnected: human factors is the study of the attributes of human cognition and behavior that are important for making technology work for people, and it is what allows humans as a species to innovate over time. Human-centered design was used to discover that Blackberries have lower usability than an iPhone, and that important controls on a panel that look too similar will be easily confused and may cause an increased risk of human error. An important distinction between human-centered design and any other form of design is that human-centered design is not just about aesthetics, and is not always designing for interfaces. It could be designing for controls in the world, tasks in the world, hardware, decision-making, or cognition. For instance, if a nurse is too tired from a long shift, they might confuse the pumps through which a bag of penicillin is administered to a patient. In this case, the human-centered design would encompass a task redesign, a possible institutional policy redesign, and an equipment redesign. Typically, human-centered design is more focused on "methodologies and techniques for interacting with people in such a manner as to facilitate the detection of meanings, desires and needs, either by verbal or non-verbal means." In contrast, user-centered design is another approach and framework of processes which considers the human role in product use, but focuses largely on the production of interactive technology designed around the user's physical attributes rather than social problem-solving. == Human-centered design approach in Health == In the context of health-seeking behaviors, Human Centered Design can be used to understand why people do or do not seek out health services, even when those services are available and affordable; this understanding can then be used to develop interventions that address the barriers and promote desired behaviors, making it a powerful tool for improving health-seeking behaviors. Demand-related challenges associated with the acceptability, responsiveness, and quality of services can be addressed by working directly with users to understand their needs and perspectives. HCD can help in designing interventions that are more likely to be effective. The integration of the principles of Human Centered Design and anti-racism practices can help in addressing existing health disparities present in the healthcare system, and can help to center the needs of people who belong to marginalized communities. This type of design can create fair and equitable health outcomes for marginalized communities, who are often left out due to unmet needs. Researchers who apply Human Centered Design thoughtfully approach the needs of populations who are traditionally excluded, thereby dismantling oppressive systems which previously reinforced, or continue to reinforce, structural racism. == Critiques == Human-centered design has been both lauded and criticised for its ability to actively solve problems with affected communities. Criticisms include the inability of human-centered design to push the boundaries of available technology by solely tailoring to the demands of present-day solutions, rather than focus on possible future solutions. In addition, human-centered design often considers context, but does not offer tailored approaches for very specific groups of people.
New research on innovative approaches includes youth-centered health design, which focuses on youth as the central aspect, with particular needs and limitations not always addressed by human-centered design approaches. Nevertheless, human-centered design that does not reflect very specific groups of users and their needs is poorly executed human-centered design, since the principles of human-system interaction require the reflection of those specified needs. Whilst users are very important for some types of innovation (namely incremental innovation), focusing too much on the user may result in producing an outdated or no longer necessary product or service. This is because the insights gained from studying the user today relate to today's users and the environment they live in today. If your solution will be available only two or three years from now, your users may have developed new preferences, wants and needs by then. == Modern Advances in Human-Centered Design == === Human-Centered Design with Artificial Intelligence === Human-Centered AI (HCAI) is a methodical approach to AI system design that prioritizes human values and requirements. This method places a strong emphasis on boosting human self-efficacy, encouraging innovation, guaranteeing accountability, and promoting social interaction. By putting these human goals first, HCAI also tackles important concerns like privacy, security, environmental preservation, social justice, and human rights. This represents a dramatic change from an algorithmic approach to a human-centered system design, which has been compared to a second Copernican Revolution. HCAI introduces a two-dimensional framework that demonstrates the possibility of combining high levels of human control with high levels of automation. This framework suggests a move away from viewing AI systems as autonomous teammates, instead positioning them as powerful tools and tele-operated devices that empower users. Furthermore, HCAI proposes a three-level governance structure to enhance the reliability and trustworthiness of AI systems. At the first level, software engineering teams are encouraged to develop robust and dependable systems. At the second level, managers are urged to cultivate a safety culture across their organizations. At the third level, industry-wide certification can help establish standards that promote trustworthy HCAI systems. These concepts are designed to be dynamic, inviting challenge, refinement, and extension to accommodate new technologies. They aim to reframe design discussions for AI products and services, offering an opportunity to restart and reshape these conversations. The ultimate goal is to deliver greater benefits to individuals, families, communities, businesses, and society, ensuring that AI developments align with human values and societal goals. === Integration of Human-Centered Design and Community Based Participatory Research === By joining two people-centered approaches, Human-Centered Design (HCD) and Community-Based Participatory Research (CBPR) offer a fresh way to tackle challenging real-world issues. While CBPR has been used in academic and community partnerships to address health inequities through social action and empowerment, HCD has historically been used in the business sector to guide the creation of products and services.
Although the public sector has just started using HCD concepts to inform public policy, more research is still needed to fully understand its cycle and how it might be strategically applied to health promotion. By combining CBPR's emphasis on community trust and collaboration with HCD's emphasis on user-centric design, this integration provides a complementary approach. The potential of these approaches to improve public health outcomes is demonstrated by CBPR initiatives, such as those that try to lower the spread of STIs and improve handwashing among farmworkers. The combined strategy can result in more lasting and successful health interventions by addressing pertinent concerns, establishing partnerships, and involving community members. === Human-Centered Design in SEIPS 3.0 === In order to improve quality and safety in healthcare, Human Factors and Ergonomics (HFE) are integrated using the Systems Engineering Initiative for Patient Safety (SEIPS) models. These models are based on a human-centered design approach, which gives patients' and healthcare practitioners' wants and experiences top priority when designing systems. By extending the "process" component to handle the intricacies of contemporary healthcare delivery, SEIPS 3.0 builds upon this. The idea of the patient journey is introduced by the SEIPS 3.0 model as healthcare becomes more dispersed across different locations and times. This journey-centric approach emphasizes a comprehensive view of patients' experiences over time by mapping their contacts with various care venues. By emphasizing the patient journey, SEIPS 3.0 emphasizes how crucial it is to create systems that can adapt to patients' changing demands in order to provide seamless, secure, and encouraging care. In order to implement human-centered design in SEIPS 3.0, HFE professionals must take into account a variety of viewpoints and encourage sincere involvement from all parties involved, including patients, caregivers, and medical professionals. In order to increase interactions across various healthcare settings and capture the intricacies of patient experiences, this approach calls for creative techniques. By putting people first, SEIPS 3.0 seeks to develop healthcare systems that improve the general happiness and well-being of both patients and caregivers, in addition to preventing harm. == See also == User-centered design Design thinking Human-Centered Artificial Intelligence Humanistic economics == References == == External links == === International Standards === ISO 13407:1999 Human-centred design processes for interactive systems – Now Withdrawn ISO 9241-210:2010 Ergonomics of human-system interaction – Part 210: Human-centred design for interactive systems – Now Withdrawn ISO 9241-210:2019 Ergonomics of human-system interaction – Part 210: Human-centred design for interactive systems – UPDATED 2019
Wikipedia/Human-centered_design
The incremental build model is a method of software development where the product is designed, implemented, and tested incrementally (a little more is added each time) until the product is finished. It involves both development and maintenance. The product is defined as finished when it satisfies all of its requirements. This model combines the elements of the waterfall model with the iterative philosophy of prototyping. According to the Project Management Institute, an incremental approach is an "adaptive development approach in which the deliverable is produced successively, adding functionality until the deliverable contains the necessary and sufficient capability to be considered complete." The product is decomposed into several components, each of which is designed and built separately (termed as builds). Each component is delivered to the client when it is complete. This allows partial utilization of the product and avoids a long development time. It also avoids a large initial capital outlay and subsequent long waiting periods. This model of development also helps ease the traumatic effect of introducing a completely new system all at once. == Incremental model == The incremental model applies the waterfall model incrementally. The series of releases is referred to as "increments", with each increment providing more functionality to the customers. After the first increment, a core product is delivered, which can already be used by the customer. Based on customer feedback, a plan is developed for the next increments, and modifications are made accordingly. This process continues, with increments being delivered until the complete product is delivered. The incremental philosophy is also used in the agile process model (see agile modeling). The incremental model can also be applied to DevOps adoption, minimizing its risk and cost while building the necessary in-house skill set and momentum. Characteristics of the incremental model: The system is broken down into many mini-development projects. Partial systems are built to produce the final system. The highest-priority requirements are tackled first. The requirements of a portion are frozen once that increment is developed. Advantages: After each iteration, regression testing should be conducted. During this testing, faulty elements of the software can be quickly identified because few changes are made within any single iteration. It is generally easier to test and debug than other methods of software development because relatively smaller changes are made during each iteration. This allows for more targeted and rigorous testing of each element within the overall product. Customers can respond to features and review the product for any needed or useful changes. Initial product delivery is faster and costs less. Disadvantages: The resulting cost may exceed the organization's initial cost estimates. As additional functionality is added to the product, problems may arise related to the system architecture which were not evident in earlier prototypes. == Tasks involved == These tasks are common to all the models: Communication: helps to understand the objective. Planning: required as many people (software teams) work on the same project but with different functions at the same time. Modeling: involves business modeling, data modeling, and process modeling. Construction: involves the reuse of software components and automatic code generation.
Deployment: integration of all the increments. == See also == Iterative and incremental development Rapid application development Incremental approach == Citations == == References == Project Management Institute (2021). A guide to the project management body of knowledge (PMBOK guide). Project Management Institute (7th ed.). Newtown Square, PA. ISBN 978-1-62825-664-2. == External links == Methodology::Development Models Archived 2016-03-03 at the Wayback Machine Incremental lifecycle What is Incremental model - advantages, disadvantages and when to use it Incremental Model in Software Engineering
Wikipedia/Incremental_build_model
Computational science, also known as scientific computing, technical computing or scientific computation (SC), is a division of science, and more specifically the computer sciences, which uses advanced computing capabilities to understand and solve complex physical problems. While this typically extends into computational specializations, this field of study includes: Algorithms (numerical and non-numerical): mathematical models, computational models, and computer simulations developed to solve problems in the sciences (e.g., physical, biological, and social), engineering, and the humanities Computer hardware that develops and optimizes the advanced system hardware, firmware, networking, and data management components needed to solve computationally demanding problems The computing infrastructure that supports both the science and engineering problem solving and the developmental computer and information science In practical use, it is typically the application of computer simulation and other forms of computation from numerical analysis and theoretical computer science to solve problems in various scientific disciplines. The field is different from theory and laboratory experiments, which are the traditional forms of science and engineering. The scientific computing approach is to gain understanding through the analysis of mathematical models implemented on computers. Scientists and engineers develop computer programs and application software that model systems being studied and run these programs with various sets of input parameters. The essence of computational science is the application of numerical algorithms and computational mathematics. In some cases, these models require massive amounts of calculations (usually floating-point) and are often executed on supercomputers or distributed computing platforms. == The computational scientist == The term computational scientist is used to describe someone skilled in scientific computing. Such a person is usually a scientist, an engineer, or an applied mathematician who applies high-performance computing in different ways to advance the state-of-the-art in their respective applied disciplines in physics, chemistry, or engineering. Computational science is now commonly considered a third mode of science, complementing and adding to experimentation/observation and theory. Here, one defines a system as a potential source of data, an experiment as a process of extracting data from a system by exerting it through its inputs, and a model (M) for a system (S) and an experiment (E) as anything to which E can be applied in order to answer questions about S.
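As a deliberately simple sketch of that workflow (a hypothetical illustration, not a real research code), the snippet below implements a model M of a system S, a discrete logistic-growth equation, and treats each run with a different set of input parameters as an experiment E performed on the model.

```python
def logistic_growth(p0, rate, capacity, steps, dt=0.1):
    """Model: forward-Euler time stepping of dP/dt = r * P * (1 - P / K)."""
    p = p0
    for _ in range(steps):
        p += dt * rate * p * (1 - p / capacity)
    return p

# Experiments: probe the model through its inputs with several parameter sets.
for rate in (0.5, 1.0, 2.0):
    final = logistic_growth(p0=10.0, rate=rate, capacity=500.0, steps=100)
    print(f"growth rate {rate}: population after 100 steps = {final:.1f}")
```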
A computational scientist should be capable of: recognizing complex problems; adequately conceptualizing the system containing these problems; designing a framework of algorithms suitable for studying this system: the simulation; choosing a suitable computing infrastructure (parallel computing/grid computing/supercomputers), thereby maximizing the computational power of the simulation; assessing to what level the output of the simulation resembles the system: the model is validated; adjusting the conceptualization of the system accordingly; and repeating the cycle until a suitable level of validation is obtained: the computational scientist trusts that the simulation generates adequately realistic results for the system under the studied conditions. Substantial effort in the computational sciences has been devoted to developing algorithms, efficient implementations in programming languages, and validating computational results. A collection of problems and solutions in computational science can be found in Steeb, Hardy, Hardy, and Stoop (2004). Philosophers of science have addressed the question of the degree to which computational science qualifies as science, among them Humphreys and Gelfert. They address the general question of epistemology: how does one gain insight from such computational science approaches? Tolk uses these insights to show the epistemological constraints of computer-based simulation research. As computational science uses mathematical models representing the underlying theory in executable form, in essence, it applies modeling (theory building) and simulation (implementation and execution). While simulation and computational science are our most sophisticated ways to express our knowledge and understanding, they also come with all the constraints and limits already known for computational solutions. == Applications of computational science == Problem domains for computational science/scientific computing include: === Predictive computational science === Predictive computational science is a scientific discipline concerned with the formulation, calibration, numerical solution, and validation of mathematical models designed to predict specific aspects of physical events, given initial and boundary conditions, and a set of characterizing parameters and associated uncertainties. In typical cases, the predictive statement is formulated in terms of probabilities. For example, given a mechanical component and a periodic loading condition, "the probability is (say) 90% that the number of cycles at failure (Nf) will be in the interval N1 < Nf < N2". === Urban complex systems === Cities are massively complex systems created by humans, made up of humans, and governed by humans. Trying to predict, understand and somehow shape the development of cities in the future requires complex thinking and computational models and simulations to help mitigate challenges and possible disasters. The focus of research in urban complex systems is, through modeling and simulation, to build a greater understanding of city dynamics and help prepare for the coming urbanization. === Computational finance === In financial markets, huge volumes of interdependent assets are traded by a large number of interacting market participants in different locations and time zones. Their behavior is of unprecedented complexity, and the characterization and measurement of the risk inherent to this highly diverse set of instruments is typically based on complicated mathematical and computational models.
Solving these models exactly in closed form, even at a single instrument level, is typically not possible, and therefore we have to look for efficient numerical algorithms. This has become even more urgent and complex recently, as the credit crisis has clearly demonstrated the role of cascading effects going from single instruments through portfolios of single institutions to even the interconnected trading network. Understanding this requires a multi-scale and holistic approach where interdependent risk factors such as market, credit, and liquidity risk are modeled simultaneously and at different interconnected scales. === Computational biology === Exciting new developments in biotechnology are now revolutionizing biology and biomedical research. Examples of these techniques are high-throughput sequencing, high-throughput quantitative PCR, intra-cellular imaging, in-situ hybridization of gene expression, three-dimensional imaging techniques like Light Sheet Fluorescence Microscopy, and Optical Projection (micro)-Computer Tomography. Given the massive amounts of complicated data that is generated by these techniques, their meaningful interpretation, and even their storage, form major challenges calling for new approaches. Going beyond current bioinformatics approaches, computational biology needs to develop new methods to discover meaningful patterns in these large data sets. Model-based reconstruction of gene networks can be used to organize the gene expression data in a systematic way and to guide future data collection. A major challenge here is to understand how gene regulation is controlling fundamental biological processes like biomineralization and embryogenesis. The sub-processes like gene regulation, organic molecules interacting with the mineral deposition process, cellular processes, physiology, and other processes at the tissue and environmental levels are linked. Rather than being directed by a central control mechanism, biomineralization and embryogenesis can be viewed as an emergent behavior resulting from a complex system in which several sub-processes on very different temporal and spatial scales (ranging from nanometer and nanoseconds to meters and years) are connected into a multi-scale system. One of the few available options to understand such systems is by developing a multi-scale model of the system. === Complex systems theory === Using information theory, non-equilibrium dynamics, and explicit simulations, computational systems theory tries to uncover the true nature of complex adaptive systems. === Computational science and engineering === Computational science and engineering (CSE) is a relatively new discipline that deals with the development and application of computational models and simulations, often coupled with high-performance computing, to solve complex physical problems arising in engineering analysis and design (computational engineering) as well as natural phenomena (computational science). CSE has become accepted amongst scientists, engineers and academics as the "third mode of discovery" (next to theory and experimentation). In many fields, computer simulation is integral and therefore essential to business and research. Computer simulation provides the capability to enter fields that are either inaccessible to traditional experimentation or where carrying out traditional empirical inquiries is prohibitively expensive. 
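To give a concrete taste of what such a simulation looks like in code, here is a minimal sketch (illustrative only; the grid size, coefficients, and initial condition are invented) of the pattern described under Methods and algorithms below: a one-dimensional mesh held in memory is advanced through simulated time steps by repeatedly applying a discretized differential equation, here the heat equation.

```python
import numpy as np

alpha, dx, dt = 1.0, 0.1, 0.004   # diffusivity, grid spacing, time step (stable: alpha*dt/dx**2 <= 0.5)
u = np.zeros(50)                  # 1-D mesh: temperature in each cell of a rod
u[20:30] = 100.0                  # initial condition: a hot patch in the middle

for _ in range(200):              # explicit finite-difference time stepping of u_t = alpha * u_xx
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

print(f"peak temperature after 200 steps: {u.max():.1f}")
```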
CSE should neither be confused with pure computer science, nor with computer engineering, although a wide domain in the former is used in CSE (e.g., certain algorithms, data structures, parallel programming, high-performance computing), and some problems in the latter can be modeled and solved with CSE methods (as an application area). == Methods and algorithms == Algorithms and mathematical methods used in computational science are varied. Historically and today, Fortran remains popular for most applications of scientific computing. Other programming languages and computer algebra systems commonly used for the more mathematical aspects of scientific computing applications include GNU Octave, Haskell, Julia, Maple, Mathematica, MATLAB, Python (with third-party SciPy library), Perl (with third-party PDL library), R, Scilab, and TK Solver. The more computationally intensive aspects of scientific computing will often use some variation of C or Fortran and optimized algebra libraries such as BLAS or LAPACK. In addition, parallel computing is heavily used in scientific computing to find solutions of large problems in a reasonable amount of time. In this framework, the problem is either divided over many cores on a single CPU node (such as with OpenMP), divided over many CPU nodes networked together (such as with MPI), or is run on one or more GPUs (typically using either CUDA or OpenCL). Computational science application programs often model real-world changing conditions, such as weather, airflow around a plane, automobile body distortions in a crash, the motion of stars in a galaxy, an explosive device, etc. Such programs might create a 'logical mesh' in computer memory where each item corresponds to an area in space and contains information about that space relevant to the model. For example, in weather models, each item might be a square kilometer, with land elevation, current wind direction, humidity, temperature, pressure, etc. The program would calculate the likely next state based on the current state, in simulated time steps, solving differential equations that describe how the system operates, and then repeat the process to calculate the next state. == Conferences and journals == In 2001, the International Conference on Computational Science (ICCS) was first organized. Since then, it has been organized yearly. ICCS is an A-rank conference in the CORE ranking. The Journal of Computational Science published its first issue in May 2010. The Journal of Open Research Software was launched in 2012. The ReScience C initiative, which is dedicated to replicating computational results, was started on GitHub in 2015. == Education == At some institutions, a specialization in scientific computation can be earned as a "minor" within another program (which may be at varying levels). However, there are increasingly many bachelor's, master's, and doctoral programs in computational science. The joint master's degree program in computational science at the University of Amsterdam and the Vrije Universiteit was first offered in 2004. In this program, students: learn to build computational models from real-life observations; develop skills in turning these models into computational structures and in performing large-scale simulations; learn theories that will give a firm basis for the analysis of complex systems; learn to analyze the results of simulations in a virtual laboratory using advanced numerical algorithms.
ETH Zurich offers a bachelor's and master's degree in Computational Science and Engineering. The degree equips students with the ability to understand scientific problems and apply numerical methods to solve them. The directions of specialization include physics, chemistry, biology, and other scientific and engineering disciplines. George Mason University has offered a multidisciplinary Ph.D. program in Computational Sciences and Informatics since 1992. The School of Computational and Integrative Sciences, Jawaharlal Nehru University (erstwhile School of Information Technology) also offers a master's program in computational science with two specialties: Computational Biology and Complex Systems. == See also == Computational science and engineering Modeling and simulation Comparison of computer algebra systems Differentiable programming List of molecular modeling software List of numerical analysis software List of statistical packages Timeline of scientific computing Simulated reality Extensions for Scientific Computation (XSC) == References == == Additional sources == E. Gallopoulos and A. Sameh, "CSE: Content and Product". IEEE Computational Science and Engineering Magazine, 4(2):39–43 (1997) G. Hager and G. Wellein, Introduction to High Performance Computing for Scientists and Engineers, Chapman and Hall (2010) A.K. Hartmann, Practical Guide to Computer Simulations, World Scientific (2009) Journal Computational Methods in Science and Technology (open access), Polish Academy of Sciences Journal Computational Science and Discovery, Institute of Physics R.H. Landau, C.C. Bordeianu, and M. Jose Paez, A Survey of Computational Physics: Introductory Computational Science, Princeton University Press (2008) == External links == Journal of Computational Science The Journal of Open Research Software The National Center for Computational Science at Oak Ridge National Laboratory
Wikipedia/Computational_science
Model-driven engineering (MDE) is a software development methodology that focuses on creating and exploiting domain models, which are conceptual models of all the topics related to a specific problem. Hence, it highlights and aims at abstract representations of the knowledge and activities that govern a particular application domain, rather than the computing (i.e. algorithmic) concepts. MDE is a subfield of a software design approach referred to as round-trip engineering. The scope of MDE is much wider than that of Model-Driven Architecture. == Overview == The MDE approach is meant to increase productivity by maximizing compatibility between systems (via reuse of standardized models), simplifying the process of design (via models of recurring design patterns in the application domain), and promoting communication between individuals and teams working on the system (via a standardization of the terminology and the best practices used in the application domain). For instance, in model-driven development, technical artifacts such as source code, documentation, tests, and more are generated algorithmically from a domain model. A modeling paradigm for MDE is considered effective if its models make sense from the point of view of a user that is familiar with the domain, and if they can serve as a basis for implementing systems. The models are developed through extensive communication among product managers, designers, developers and users of the application domain. As the models approach completion, they enable the development of software and systems. Some of the better known MDE initiatives are: The Object Management Group (OMG) initiative Model-Driven Architecture (MDA), which is leveraged by several of their standards such as Meta-Object Facility, XMI, CWM, CORBA, Unified Modeling Language (to be more precise, the OMG currently promotes the use of a subset of UML called fUML together with its action language, ALF, for model-driven architecture; a former approach relied on Executable UML and OCL, instead), and QVT. The Eclipse "eco-system" of programming and modelling tools represented in general terms by the Eclipse Modeling Framework. This framework allows the creation of tools implementing the MDA standards of the OMG; but it is also possible to use it to implement other modeling-related tools. == History == The first tools to support MDE were the Computer-Aided Software Engineering (CASE) tools developed in the 1980s. Companies like Integrated Development Environments (IDE – StP), Higher Order Software (now Hamilton Technologies, Inc., HTI), Cadre Technologies, Bachman Information Systems, and Logic Works (BP-Win and ER-Win) were pioneers in the field. The US government got involved in the modeling definitions, creating the IDEF specifications. With several variations of the modeling definitions (see Booch, Rumbaugh, Jacobson, Gane and Sarson, Harel, Shlaer and Mellor, and others), they were eventually merged, creating the Unified Modeling Language (UML). Rational Rose, a product for UML implementation, was developed by Rational Corporation (Booch) in response to the demand for such modeling automation. Automation yields higher levels of abstraction in software development. This abstraction promotes simpler models with a greater focus on problem space. Combined with executable semantics, this elevates the total level of automation possible. The Object Management Group (OMG) has developed a set of standards called Model-Driven Architecture (MDA), building a foundation for this advanced architecture-focused approach.
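As a toy illustration of the generation step described in the Overview (a hypothetical sketch, not the behavior or API of any actual MDE tool), a small declarative domain model can be transformed algorithmically into source code:

```python
# A platform-independent domain model: entities and their typed fields.
domain_model = {
    "Customer": {"name": "str", "email": "str"},
    "Order": {"order_id": "int", "total": "float"},
}

def generate_class(entity, fields):
    """Model-to-text transformation: emit a Python dataclass for one entity."""
    lines = ["@dataclass", f"class {entity}:"]
    lines += [f"    {name}: {type_}" for name, type_ in fields.items()]
    return "\n".join(lines)

print("from dataclasses import dataclass\n")
for entity, fields in domain_model.items():
    print(generate_class(entity, fields), end="\n\n")
```

In a real MDE toolchain the model would live in a standardized format (for example an EMF or UML model) and the transformation would be far richer, but the principle is the same: the code is derived from the model rather than written by hand.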
== Advantages == According to Douglas C. Schmidt, model-driven engineering technologies offer a promising approach to address the inability of third-generation languages to alleviate the complexity of platforms and express domain concepts effectively. == See also == Application lifecycle management (ALM) Business Process Model and Notation (BPMN) Business-driven development (BDD) Domain-driven design (DDD) Domain-specific language (DSL) Domain-specific modeling (DSM) Domain-specific multimodeling Language-oriented programming (LOP) List of Unified Modeling Language tools Model transformation (e.g. using QVT) Model-based testing (MBT) Modeling Maturity Level (MML) Model-based systems engineering (MBSE) Service-oriented modeling Framework (SOMF) Software factory (SF) Story-driven modeling (SDM) OpenAPI, an open source specification for the description of models and operations for HTTP interoperation and REST APIs == References == == Further reading == David S. Frankel, Model Driven Architecture: Applying MDA to Enterprise Computing, John Wiley & Sons, ISBN 0-471-31920-1 Marco Brambilla, Jordi Cabot, Manuel Wimmer, Model Driven Software Engineering in Practice, foreword by Richard Soley (OMG Chairman), Morgan & Claypool, USA, 2012, Synthesis Lectures on Software Engineering #1. 182 pages. ISBN 9781608458820 (paperback), ISBN 9781608458837 (ebook). https://www.mdse-book.com da Silva, Alberto Rodrigues (2015). "Model-Driven Engineering: A Survey Supported by a Unified Conceptual Model". Computer Languages, Systems & Structures. 43 (43): 139–155. doi:10.1016/j.cl.2015.06.001. == External links == Model-Driven Architecture: Vision, Standards And Emerging Technologies at omg.org
Wikipedia/Model-driven_engineering
A cryptographic hash function (CHF) is a hash algorithm (a map of an arbitrary binary string to a binary string with a fixed size of n bits) that has special properties desirable for a cryptographic application: the probability of a particular n-bit output result (hash value) for a random input string ("message") is 2^−n (as for any good hash), so the hash value can be used as a representative of the message; finding an input string that matches a given hash value (a pre-image) is infeasible, assuming all input strings are equally likely. The resistance to such search is quantified as security strength: a cryptographic hash with n bits of hash value is expected to have a preimage resistance strength of n bits, unless the space of possible input values is significantly smaller than 2^n (a practical example can be found in § Attacks on hashed passwords); a second preimage resistance strength, with the same expectations, refers to a similar problem of finding a second message that matches the given hash value when one message is already known; finding any pair of different messages that yield the same hash value (a collision) is also infeasible: a cryptographic hash is expected to have a collision resistance strength of n/2 bits (lower due to the birthday paradox). Cryptographic hash functions have many information-security applications, notably in digital signatures, message authentication codes (MACs), and other forms of authentication. They can also be used as ordinary hash functions, to index data in hash tables, for fingerprinting, to detect duplicate data or uniquely identify files, and as checksums to detect accidental data corruption. Indeed, in information-security contexts, cryptographic hash values are sometimes called (digital) fingerprints, checksums, (message) digests, or just hash values, even though all these terms stand for more general functions with rather different properties and purposes. Non-cryptographic hash functions are used in hash tables and to detect accidental errors; their constructions frequently provide no resistance to a deliberate attack. For example, a denial-of-service attack on hash tables is possible if the collisions are easy to find, as in the case of linear cyclic redundancy check (CRC) functions. == Properties == Most cryptographic hash functions are designed to take a string of any length as input and produce a fixed-length hash value. A cryptographic hash function must be able to withstand all known types of cryptanalytic attack. In theoretical cryptography, the security level of a cryptographic hash function has been defined using the following properties: Pre-image resistance Given a hash value h, it should be difficult to find any message m such that h = hash(m). This concept is related to that of a one-way function. Functions that lack this property are vulnerable to preimage attacks. Second pre-image resistance Given an input m1, it should be difficult to find a different input m2 such that hash(m1) = hash(m2). This property is sometimes referred to as weak collision resistance. Functions that lack this property are vulnerable to second-preimage attacks. Collision resistance It should be difficult to find two different messages m1 and m2 such that hash(m1) = hash(m2). Such a pair is called a cryptographic hash collision. This property is sometimes referred to as strong collision resistance.
Collision resistance implies second pre-image resistance but does not imply pre-image resistance. The weaker assumption is always preferred in theoretical cryptography, but in practice, a hash function that is only second pre-image resistant is considered insecure and is therefore not recommended for real applications.

Informally, these properties mean that a malicious adversary cannot replace or modify the input data without changing its digest. Thus, if two strings have the same digest, one can be very confident that they are identical. Second pre-image resistance prevents an attacker from crafting a document with the same hash as a document the attacker cannot control. Collision resistance prevents an attacker from creating two distinct documents with the same hash.

A function meeting these criteria may still have undesirable properties. Currently popular cryptographic hash functions are vulnerable to length-extension attacks: given hash(m) and len(m) but not m, by choosing a suitable m′ an attacker can calculate hash(m ∥ m′), where ∥ denotes concatenation. This property can be used to break naive authentication schemes based on hash functions. The HMAC construction works around these problems.

Collision resistance alone is insufficient for many practical uses. In addition to collision resistance, it should be impossible for an adversary to find two messages with substantially similar digests, or to infer any useful information about the data given only its digest. In particular, a hash function should behave as much as possible like a random function (often called a random oracle in proofs of security) while still being deterministic and efficiently computable. This rules out functions like the SWIFFT function, which can be rigorously proven to be collision-resistant assuming that certain problems on ideal lattices are computationally difficult but, being a linear function, does not satisfy these additional properties.

Checksum algorithms, such as CRC32 and other cyclic redundancy checks, are designed to meet much weaker requirements and are generally unsuitable as cryptographic hash functions. For example, a CRC was used for message integrity in the WEP encryption standard, but an attack was readily discovered that exploited the linearity of the checksum.

=== Degree of difficulty ===

In cryptographic practice, "difficult" generally means "almost certainly beyond the reach of any adversary who must be prevented from breaking the system for as long as the security of the system is deemed important". The meaning of the term is therefore somewhat dependent on the application, since the effort that a malicious agent may put into the task is usually proportional to their expected gain. However, since the needed effort grows exponentially with the digest length, even a thousand-fold advantage in processing power can be neutralized by adding a dozen bits to the latter.

For messages selected from a limited set, such as passwords or other short messages, it can be feasible to invert a hash by trying all possible messages in the set. Because cryptographic hash functions are typically designed to be computed quickly, special key derivation functions that require greater computing resources have been developed to make such brute-force attacks more difficult.
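To make the "limited set" point concrete, the hypothetical sketch below inverts the unsalted SHA-256 hash of a four-digit PIN by simple enumeration – about 10^4 hash computations, far below any threshold that could be called difficult. (The PIN value and the scenario are invented for illustration.)

```python
import hashlib

# An unsalted hash of a 4-digit PIN, as an attacker might find it in a leaked database.
leaked = hashlib.sha256(b"4617").hexdigest()

# Enumerate the entire message space (only 10,000 candidates).
for pin in range(10_000):
    candidate = f"{pin:04d}".encode()
    if hashlib.sha256(candidate).hexdigest() == leaked:
        print("recovered PIN:", candidate.decode())
        break
```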
In some theoretical analyses "difficult" has a specific mathematical meaning, such as "not solvable in asymptotic polynomial time". Such interpretations of difficulty are important in the study of provably secure cryptographic hash functions but do not usually have a strong connection to practical security. For example, an exponential-time algorithm can sometimes still be fast enough to make a feasible attack. Conversely, a polynomial-time algorithm (e.g., one that requires n^20 steps for n-digit keys) may be too slow for any practical use.

== Illustration ==

An illustration of the potential use of a cryptographic hash is as follows: Alice poses a tough math problem to Bob and claims that she has solved it. Bob would like to try it himself, but would first like to be sure that Alice is not bluffing. Therefore, Alice writes down her solution, computes its hash, and tells Bob the hash value (whilst keeping the solution secret). Then, when Bob comes up with the solution himself a few days later, Alice can prove that she had the solution earlier by revealing it and having Bob hash it and check that it matches the hash value given to him before. (This is an example of a simple commitment scheme; in actual practice, Alice and Bob will often be computer programs, and the secret would be something less easily spoofed than a claimed puzzle solution.)

== Applications ==

=== Verifying the integrity of messages and files ===

An important application of secure hashes is the verification of message integrity. Comparing message digests (hash digests over the message) calculated before and after transmission can determine whether any changes have been made to the message or file.

MD5, SHA-1, or SHA-2 hash digests are sometimes published on websites or forums to allow verification of integrity for downloaded files, including files retrieved using file sharing such as mirroring. This practice establishes a chain of trust as long as the hashes are posted on a trusted site – usually the originating site – authenticated by HTTPS. Using a cryptographic hash and a chain of trust detects malicious changes to the file. Non-cryptographic error-detecting codes such as cyclic redundancy checks only protect against non-malicious alterations of the file, since an intentional spoof can readily be crafted to have the colliding code value.

=== Signature generation and verification ===

Almost all digital signature schemes require a cryptographic hash to be calculated over the message. This allows the signature calculation to be performed on the relatively small, fixed-size hash digest. The message is considered authentic if the signature verification succeeds given the signature and the recalculated hash digest over the message. So the message-integrity property of the cryptographic hash is used to create secure and efficient digital signature schemes.

=== Password verification ===

Password verification commonly relies on cryptographic hashes. Storing all user passwords as cleartext can result in a massive security breach if the password file is compromised. One way to reduce this danger is to store only the hash digest of each password. To authenticate a user, the password presented by the user is hashed and compared with the stored hash. A password reset method is required when password hashing is performed; original passwords cannot be recalculated from the stored hash value.
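A minimal sketch of this store-then-compare flow is shown below, assuming hypothetical enroll and verify helpers. It already anticipates the salting and key-stretching refinements discussed in the following paragraphs by using PBKDF2-HMAC-SHA256; the iteration count is an illustrative choice, not a recommendation.

```python
import hashlib
import hmac
import os

def enroll(password: str) -> tuple[bytes, bytes]:
    """Store a salted, stretched digest instead of the cleartext password."""
    salt = os.urandom(16)  # random and non-secret; stored alongside the digest
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    """Re-hash the presented password and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(digest, stored)

salt, stored = enroll("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("wrong guess", salt, stored))                   # False
```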
However, use of standard cryptographic hash functions, such as the SHA series, is no longer considered safe for password storage. These algorithms are designed to be computed quickly, so if the hashed values are compromised, it is possible to try guessed passwords at high rates; common graphics processing units can try billions of possible passwords each second. Password hash functions that perform key stretching – such as PBKDF2, scrypt or Argon2 – commonly use repeated invocations of a cryptographic hash to increase the time (and in some cases computer memory) required to perform brute-force attacks on stored password hash digests. For details, see § Attacks on hashed passwords.

A password hash also requires the use of a large random, non-secret salt value that can be stored with the password hash. The salt is hashed with the password, altering the password hash mapping for each password, thereby making it infeasible for an adversary to store tables of precomputed hash values to which the password hash digest can be compared, or to test a large number of purloined hash values in parallel.

=== Proof-of-work ===

A proof-of-work system (or protocol, or function) is an economic measure to deter denial-of-service attacks and other service abuses such as spam on a network by requiring some work from the service requester, usually meaning processing time by a computer. A key feature of these schemes is their asymmetry: the work must be moderately hard (but feasible) on the requester side but easy to check for the service provider. One popular system – used in Bitcoin mining and Hashcash – uses partial hash inversions to prove that work was done, to unlock a mining reward in Bitcoin, and as a good-will token to send an e-mail in Hashcash. The sender is required to find a message whose hash value begins with a number of zero bits. The average work that the sender needs to perform in order to find a valid message is exponential in the number of zero bits required in the hash value, while the recipient can verify the validity of the message by executing a single hash function. For instance, in Hashcash, a sender is asked to generate a header whose 160-bit SHA-1 hash value has the first 20 bits as zeros. The sender will, on average, have to try 2^19 times to find a valid header.

=== File or data identifier ===

A message digest can also serve as a means of reliably identifying a file; several source code management systems, including Git, Mercurial and Monotone, use the sha1sum of various types of content (file content, directory trees, ancestry information, etc.) to uniquely identify them. Hashes are used to identify files on peer-to-peer filesharing networks. For example, in an ed2k link, an MD4-variant hash is combined with the file size, providing sufficient information for locating file sources, downloading the file, and verifying its contents. Magnet links are another example. Such file hashes are often the top hash of a hash list or a hash tree, which allows for additional benefits.

One of the main applications of a hash function is to allow the fast look-up of data in a hash table. Being hash functions of a particular kind, cryptographic hash functions lend themselves well to this application too. However, compared with standard hash functions, cryptographic hash functions tend to be much more expensive computationally. For this reason, they tend to be used in contexts where users must protect themselves against the possibility of forgery (the creation of data with the same digest as the expected data) by potentially malicious participants – for example, open-source applications offered for download from multiple mirrors, where a malicious file could be substituted with the same appearance to the user, or an authentic file could be modified to contain malicious data.
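As a concrete instance of content identification, Git derives a file's object name by hashing a short header followed by the file's bytes. The sketch below reproduces the documented "blob <length>\0" layout of Git's original SHA-1 object format; the expected outputs in the comments are the well-known identifiers for these inputs.

```python
import hashlib

def git_blob_id(data: bytes) -> str:
    """Git-style content identifier: SHA-1 over 'blob <size>\\0' + data."""
    header = b"blob %d\x00" % len(data)
    return hashlib.sha1(header + data).hexdigest()

print(git_blob_id(b""))               # e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
print(git_blob_id(b"hello world\n"))  # 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
```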
== Hash functions based on block ciphers ==

There are several methods to use a block cipher to build a cryptographic hash function, specifically a one-way compression function. The methods resemble the block cipher modes of operation usually used for encryption. Many well-known hash functions, including MD4, MD5, SHA-1 and SHA-2, are built from block-cipher-like components designed for the purpose, with feedback to ensure that the resulting function is not invertible. SHA-3 finalists included functions with block-cipher-like components (e.g., Skein, BLAKE), though the function finally selected, Keccak, was built on a cryptographic sponge instead.

A standard block cipher such as AES can be used in place of these custom block ciphers; that might be useful when an embedded system needs to implement both encryption and hashing with minimal code size or hardware area. However, that approach can have costs in efficiency and security. The ciphers in hash functions are built for hashing: they use large keys and blocks, can efficiently change keys every block, and have been designed and vetted for resistance to related-key attacks. General-purpose ciphers tend to have different design goals. In particular, AES has key and block sizes that make it nontrivial to use to generate long hash values; AES encryption becomes less efficient when the key changes each block; and related-key attacks make it potentially less secure for use in a hash function than for encryption.

== Hash function design ==

=== Merkle–Damgård construction ===

A hash function must be able to process an arbitrary-length message into a fixed-length output. This can be achieved by breaking the input up into a series of equally sized blocks and operating on them in sequence using a one-way compression function; this is called the Merkle–Damgård construction (a toy sketch appears at the end of this section). The compression function can either be specially designed for hashing or be built from a block cipher. The last block processed must also be unambiguously length-padded; this is crucial to the security of the construction. A hash function built this way is as resistant to collisions as its compression function: any collision for the full hash function can be traced back to a collision in the compression function. Most common classical hash functions, including SHA-1 and MD5, take this form.

=== Wide pipe versus narrow pipe ===

A straightforward application of the Merkle–Damgård construction, where the size of the hash output is equal to the internal state size (between each compression step), results in a narrow-pipe hash design. This design has several inherent flaws – including length-extension attacks, multicollisions, long-message attacks, and generate-and-paste attacks – and cannot be parallelized. As a result, modern hash functions are built on wide-pipe constructions that have a larger internal state size, ranging from tweaks of the Merkle–Damgård construction to new constructions such as the sponge construction and the HAIFA construction.
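To make the Merkle–Damgård iteration described above concrete, here is a deliberately toy sketch. It illustrates only the structure – padding, a fixed initialization value, and chained compression – and is in no way a secure hash: the "compression function" is an arbitrary mixing step invented for this example.

```python
def toy_compress(state: int, block: bytes) -> int:
    """Toy one-way compression step (FNV-like mixing; illustrative, not secure)."""
    x = int.from_bytes(block, "big")
    return (state * 0x100000001B3 ^ x) % (1 << 64)

def toy_md_hash(message: bytes, block_size: int = 8) -> str:
    # Merkle-Damgard strengthening: append 0x80, pad with zeros,
    # then append the original message length as a 64-bit integer.
    padded = message + b"\x80"
    padded += b"\x00" * (-(len(padded) + 8) % block_size)
    padded += len(message).to_bytes(8, "big")

    state = 0xCBF29CE484222325  # fixed initialization vector
    for i in range(0, len(padded), block_size):
        state = toy_compress(state, padded[i:i + block_size])
    return state.to_bytes(8, "big").hex()

print(toy_md_hash(b"abc"))
print(toy_md_hash(b"abd"))  # a one-byte change yields an unrelated digest
```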
None of the entrants in the NIST hash function competition used a classical Merkle–Damgård construction. Meanwhile, truncating the output of a longer hash, as is done in SHA-512/256, also defeats many of these attacks.

== Use in building other cryptographic primitives ==

Hash functions can be used to build other cryptographic primitives. For these other primitives to be cryptographically secure, care must be taken to build them correctly.

Message authentication codes (MACs), also called keyed hash functions, are often built from hash functions. HMAC is such a MAC.

Just as block ciphers can be used to build hash functions, hash functions can be used to build block ciphers. Luby–Rackoff constructions using hash functions can be provably secure if the underlying hash function is secure. Also, many hash functions (including SHA-1 and SHA-2) are built by using a special-purpose block cipher in a Davies–Meyer or other construction. That cipher can also be used in a conventional mode of operation, without the same security guarantees; examples include SHACAL, BEAR and LION.

Pseudorandom number generators (PRNGs) can be built using hash functions. This is done by combining a (secret) random seed with a counter and hashing it.

Some hash functions, such as Skein, Keccak, and RadioGatún, output an arbitrarily long stream and can be used as a stream cipher, and stream ciphers can also be built from fixed-length-digest hash functions. Often this is done by first building a cryptographically secure pseudorandom number generator and then using its stream of random bytes as keystream. SEAL is a stream cipher that uses SHA-1 to generate internal tables, which are then used in a keystream generator more or less unrelated to the hash algorithm. SEAL is not guaranteed to be as strong (or weak) as SHA-1. Similarly, the key expansion of the HC-128 and HC-256 stream ciphers makes heavy use of the SHA-256 hash function.

== Concatenation ==

Concatenating outputs from multiple hash functions provides collision resistance as good as that of the strongest of the algorithms included in the concatenated result. For example, older versions of Transport Layer Security (TLS) and Secure Sockets Layer (SSL) used concatenated MD5 and SHA-1 sums. This ensures that a method to find collisions in one of the hash functions does not defeat data protected by both hash functions.

For Merkle–Damgård construction hash functions, the concatenated function is as collision-resistant as its strongest component, but not more collision-resistant. Antoine Joux observed that 2-collisions lead to n-collisions: if it is feasible for an attacker to find two messages with the same MD5 hash, then they can find as many additional messages with that same MD5 hash as they desire, with no greater difficulty. Among those n messages with the same MD5 hash, there is likely to be a collision in SHA-1. The additional work needed to find the SHA-1 collision (beyond the exponential birthday search) requires only polynomial time.
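The mechanics of the concatenation approach are simple; the sketch below joins the MD5 and SHA-1 digests of the same message into a single 36-byte value, purely as an illustration of the construction (both algorithms are broken and appear here only because the surrounding text discusses them).

```python
import hashlib

def concat_digest(message: bytes) -> str:
    """Concatenated MD5 || SHA-1 digest (16 + 20 = 36 bytes)."""
    return (hashlib.md5(message).digest() + hashlib.sha1(message).digest()).hex()

print(concat_digest(b"The quick brown fox jumps over the lazy dog"))
```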
== Cryptographic hash algorithms ==

There are many cryptographic hash algorithms; this section lists a few algorithms that are referenced relatively often. A more extensive list can be found on the page containing a comparison of cryptographic hash functions.

=== MD5 ===

MD5 was designed by Ronald Rivest in 1991 to replace an earlier hash function, MD4, and was specified in 1992 as RFC 1321. MD5 produces a digest of 128 bits (16 bytes). Collisions against MD5 can be calculated within seconds, which makes the algorithm unsuitable for most use cases where a cryptographic hash is required.

=== SHA-1 ===

SHA-1 was developed as part of the U.S. Government's Capstone project. The original specification – now commonly called SHA-0 – was published in 1993 under the title Secure Hash Standard, FIPS PUB 180, by the U.S. government standards agency NIST (National Institute of Standards and Technology). It was withdrawn by the NSA shortly after publication and was superseded by the revised version, published in 1995 in FIPS PUB 180-1 and commonly designated SHA-1. SHA-1 produces a hash digest of 160 bits (20 bytes). Collisions against the full SHA-1 algorithm can be produced using the SHAttered attack, and the hash function should be considered broken. Documents may refer to SHA-1 as just "SHA", even though this may conflict with the other Secure Hash Algorithms such as SHA-0, SHA-2, and SHA-3.

=== RIPEMD-160 ===

RIPEMD (RACE Integrity Primitives Evaluation Message Digest) is a family of cryptographic hash functions developed in Leuven, Belgium, by Hans Dobbertin, Antoon Bosselaers, and Bart Preneel at the COSIC research group at the Katholieke Universiteit Leuven, and first published in 1996. RIPEMD was based upon the design principles used in MD4 and is similar in performance to the more popular SHA-1. RIPEMD-160 has, however, not been broken. As the name implies, RIPEMD-160 produces a hash digest of 160 bits (20 bytes).

=== Whirlpool ===

Whirlpool is a cryptographic hash function designed by Vincent Rijmen and Paulo S. L. M. Barreto, who first described it in 2000. Whirlpool is based on a substantially modified version of the Advanced Encryption Standard (AES). Whirlpool produces a hash digest of 512 bits (64 bytes).

=== SHA-2 ===

SHA-2 (Secure Hash Algorithm 2) is a set of cryptographic hash functions designed by the United States National Security Agency (NSA), first published in 2001. They are built using the Merkle–Damgård structure, from a one-way compression function itself built using the Davies–Meyer structure from a (classified) specialized block cipher.

SHA-2 basically consists of two hash algorithms: SHA-256 and SHA-512. SHA-224 is a variant of SHA-256 with different starting values and truncated output. SHA-384 and the lesser-known SHA-512/224 and SHA-512/256 are all variants of SHA-512. SHA-512 is more secure than SHA-256 and is commonly faster than SHA-256 on 64-bit machines such as AMD64. The output size in bits is given by the extension to the "SHA" name, so SHA-224 has an output size of 224 bits (28 bytes); SHA-256, 32 bytes; SHA-384, 48 bytes; and SHA-512, 64 bytes.

=== SHA-3 ===

SHA-3 (Secure Hash Algorithm 3) was released by NIST on August 5, 2015. SHA-3 is a subset of the broader cryptographic primitive family Keccak. The Keccak algorithm is the work of Guido Bertoni, Joan Daemen, Michael Peeters, and Gilles Van Assche. Keccak is based on a sponge construction, which can also be used to build other cryptographic primitives such as a stream cipher. SHA-3 provides the same output sizes as SHA-2: 224, 256, 384, and 512 bits. Configurable output sizes can also be obtained using the SHAKE-128 and SHAKE-256 functions. Here the -128 and -256 extensions to the name imply the security strength of the function rather than the output size in bits.

=== BLAKE2 ===

BLAKE2, an improved version of BLAKE, was announced on December 21, 2012. It was created by Jean-Philippe Aumasson, Samuel Neves, Zooko Wilcox-O'Hearn, and Christian Winnerlein with the goal of replacing the widely used but broken MD5 and SHA-1 algorithms. When run on 64-bit x64 and ARM architectures, BLAKE2b is faster than SHA-3, SHA-2, SHA-1, and MD5. Although BLAKE and BLAKE2 have not been standardized as SHA-3 has, BLAKE2 has been used in many protocols, including the Argon2 password hash, for the high efficiency it offers on modern CPUs. As BLAKE was a candidate for SHA-3, BLAKE and BLAKE2 both offer the same output sizes as SHA-3 – including a configurable output size.
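Most of the fixed-output algorithms above, and the extendable-output SHAKE functions, are exposed by Python's hashlib module, which makes their digest sizes easy to inspect; a short sketch:

```python
import hashlib

msg = b"abc"
for name in ("md5", "sha1", "sha256", "sha512", "sha3_256", "blake2b"):
    digest = hashlib.new(name, msg).digest()
    print(f"{name:9s} {len(digest) * 8:4d} bits  {digest.hex()[:16]}...")

# SHAKE-128 is an extendable-output function: the caller chooses the length.
print("shake_128", hashlib.shake_128(msg).hexdigest(32))  # 32 bytes requested
```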
=== BLAKE3 ===

BLAKE3, an improved version of BLAKE2, was announced on January 9, 2020. It was created by Jack O'Connor, Jean-Philippe Aumasson, Samuel Neves, and Zooko Wilcox-O'Hearn. BLAKE3 is a single algorithm, in contrast to BLAKE and BLAKE2, which are algorithm families with multiple variants. The BLAKE3 compression function is closely based on that of BLAKE2s, with the biggest difference being that the number of rounds is reduced from 10 to 7. Internally, BLAKE3 is a Merkle tree, and it supports higher degrees of parallelism than BLAKE2.

== Attacks on cryptographic hash algorithms ==

There is a long list of cryptographic hash functions, but many have been found to be vulnerable and should not be used. For instance, NIST selected 51 hash functions as candidates for round 1 of the SHA-3 hash competition, of which 10 were considered broken and 16 showed significant weaknesses and therefore did not make it to the next round; more information can be found on the main article about the NIST hash function competition.

Even if a hash function has never been broken, a successful attack against a weakened variant may undermine the experts' confidence. For instance, in August 2004 collisions were found in several then-popular hash functions, including MD5. These weaknesses called into question the security of stronger algorithms derived from the weak hash functions – in particular, SHA-1 (a strengthened version of SHA-0), RIPEMD-128, and RIPEMD-160 (both strengthened versions of RIPEMD).

On August 12, 2004, Joux, Carribault, Lemuel, and Jalby announced a collision for the full SHA-0 algorithm. Joux et al. accomplished this using a generalization of the Chabaud and Joux attack. They found that the collision had complexity 2^51 and took about 80,000 CPU hours on a supercomputer with 256 Itanium 2 processors – equivalent to 13 days of full-time use of the supercomputer.

In February 2005, an attack on SHA-1 was reported that would find collisions in about 2^69 hashing operations, rather than the 2^80 expected for a 160-bit hash function. In August 2005, another attack on SHA-1 was reported that would find collisions in 2^63 operations. Other theoretical weaknesses of SHA-1 have been known, and in February 2017 Google announced a collision in SHA-1. Security researchers recommend that new applications avoid these problems by using later members of the SHA family, such as SHA-2, or by using techniques such as randomized hashing that do not require collision resistance.

A successful, practical attack broke MD5 (as used within certificates for Transport Layer Security) in 2008.

Many cryptographic hashes are based on the Merkle–Damgård construction. All cryptographic hashes that directly use the full output of a Merkle–Damgård construction are vulnerable to length-extension attacks. This makes the MD5, SHA-1, RIPEMD-160, Whirlpool, and SHA-256 / SHA-512 hash algorithms all vulnerable to this specific attack. SHA-3, BLAKE2, BLAKE3, and the truncated SHA-2 variants are not vulnerable to this type of attack.
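As noted in § Properties, the usual way to obtain a message authentication code that does not inherit the length-extension weakness of raw Merkle–Damgård hashes is the HMAC construction. A minimal sketch, with a key invented purely for illustration:

```python
import hashlib
import hmac

key = b"hypothetical-shared-secret"  # illustration only; use a random key in practice
message = b"amount=100&to=alice"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()
print(tag)

# Verification recomputes the tag and compares in constant time.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))  # True
```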
== Attacks on hashed passwords ==

Rather than store plain user passwords, controlled-access systems frequently store the hash of each user's password in a file or database. When someone requests access, the password they submit is hashed and compared with the stored value. If the database is stolen (an all-too-frequent occurrence), the thief will have only the hash values, not the passwords.

However, passwords may still be retrieved by an attacker from the hashes, because most people choose passwords in predictable ways. Lists of common passwords are widely circulated, and many passwords are short enough that all possible combinations can be tested if computing the hash does not take too much time. The use of cryptographic salt prevents some attacks, such as building files of precomputed hash values, e.g. rainbow tables. But searches on the order of 100 billion tests per second are possible with high-end graphics processors, making direct attacks possible even with salt.

The United States National Institute of Standards and Technology recommends storing passwords using special hashes called key derivation functions (KDFs) that have been created to slow brute-force searches. Slow hashes include PBKDF2, bcrypt, scrypt, Argon2, Balloon and some recent modes of Unix crypt. For KDFs that perform multiple hashes to slow execution, NIST recommends an iteration count of 10,000 or more.
Wikipedia/Cryptographic_hash_function